AI Guidance at FHS UK

Recommendations for the use of generative artificial intelligence in education at the CU Faculty of Humanities (FHS UK)

The following material builds on the recommendations formulated by Charles University and complements them with the specific circumstances of our faculty (the document contains a general outline, together with separate information for teachers and students). It also takes into account practice at other universities. It is intended primarily as a starting point for discussion between teachers and students once they have read it – preferably together.


In our opinion, one's academic goals should be borne in mind at all times when working with artificial intelligence (AI). At FHS UK, these goals are the emphasis on critical reading, joint discussion, and one's own research and creative work. The faculty also emphasises personal initiative and the search for an individual study path towards understanding today's turbulent world. While we support experimentation, we also place higher demands on students' personal responsibility: the exploration of AI's potential should be beneficial for education, safe, and in accordance with the principles of academic integrity. We recommend the following when working with AI:


  1. Get to know AI technologies on a practical level and test their strengths and weaknesses when working on various tasks – translations, paraphrases, text summarisation, explanation of concepts, preparation of materials for presentations, etc. Don't be afraid to experiment.

  2. Consider specific learning goals that AI could aid – or hinder – in relation to specific courses. Given the universality, diversity and rapid transformations of generative AI, there is no “correct” solution for its role in teaching. Users should always think within the context of a given discipline, subject, and especially the objectives of the course.

  3. At the beginning of each course, clarify the requirements for the use of AI. Teachers and students have the opportunity to learn from each other and discuss the possibilities, limitations and risks of AI technology together, as well as to seek rules of use that are comprehensible and acceptable to both parties. Ultimately, however, the instructions of the teacher of the particular course must always be respected.

  4. When using AI, ensure the protection of personal and sensitive data. All instructions and information you provide to the AI are recorded by its provider. Therefore, enter only such information that could be published without objection, or anonymise it.

  5. Approach AI outputs with caution and verify their correctness. Current systems – especially large language models such as ChatGPT – have a significant error rate: they can express themselves authoritatively and convincingly without in any way indicating whether the information they provide is speculative or entirely fictitious.

  6. Bear in mind that only a human author can assume responsibility for a published or submitted text. AI cannot be credited as an author. You bear sole responsibility at all times for any errors or inaccuracies in the text that you authorise. AI is merely a tool that can make our own work easier and better – but it cannot replace it.

  7. Be transparent when using AI. If you are relying on a tool for your work, declare it and provide a description of how it was used. This will both ensure the integrity of your work and help others form an idea of what AI tools can do.

This document was created in cooperation between teachers and students of FHS UK.

Recommendation on the Use of Generative Artificial Intelligence (AI) Tools in Final Theses at the CU Faculty of Humanities (FHS UK)

Based on the documents “Recommendation on Generative Artificial Intelligence in Final Theses” and “Citing the Outputs of Generative Artificial Intelligence”, issued by the Generative AI Team at the Central Library of Charles University, the following Recommendation on the Use of Generative Artificial Intelligence (AI) Tools in Final Theses at the Faculty of Humanities, Charles University (hereinafter the “Recommendation”) is hereby issued. This Recommendation applies to all students of the Faculty of Humanities, Charles University, in all forms of study.

Generative artificial intelligence (hereinafter “AI”) means a tool capable of producing structured text or other data by means of generative algorithms, usually on the basis of multimodal input (text, voice, etc.). A final thesis means a bachelor’s thesis, master’s thesis or doctoral thesis, the defence of which is required for completion of the relevant accredited degree programme.


The Faculty of Humanities, Charles University supports the use of AI tools and education in the field of AI by students, academic staff and other employees. Such use must, however, comply with the applicable legislation, the legal order of the Czech Republic, the general ethical principles of academic integrity, and the higher-level internal regulations of Charles University and its constituent parts.


Appendix for download: Specification of the Use of Generative AI in a Final Thesis


Basic principles for the use of AI in the preparation of final theses

  • Students are responsible for their final theses and should always review AI outputs and use AI in a manner consistent with academic ethics and with the requirements of the thesis assignment. Students are always the authors of their final theses. If a final thesis contains errors caused by AI, responsibility cannot subsequently be shifted to AI. AI is a tool which modifies or generates content on the basis of interaction with the user; it is not a reliable source of information. Reliable sources of information are, above all, primary and secondary literature.


  • The use of AI tools must be safe. Most AI tools are trained on the data entered into them and generally do not ensure confidentiality or data protection. It is therefore unacceptable to upload personal, sensitive or otherwise protected data into AI tools, and students bear responsibility for any breach of data protection obligations, including those arising under applicable legislation (such as the GDPR). An exception may be made for AI tools operated locally, provided that they have been assessed by Charles University as safe. Students also undertake not to use AI tools that have been identified by the National Cyber and Information Security Agency or by Charles University as potentially unsafe (for example, DeepSeek).


  • The use of AI tools must be ethical, transparent and properly declared; see Dean’s Directive of the Faculty of Humanities, Charles University No. 3/2026, Article 5(2)(d). A finding that AI has been used without declaration, or that it has been used to an impermissible extent, may result in a negative assessment of the defence of the final thesis, the initiation of disciplinary proceedings, or the commencement of proceedings to declare invalid the state examination or part thereof, if the misconduct is discovered only after the relevant part of the state final examination has been completed or after graduation. In the event of any doubt, students are strongly encouraged to consult their supervisors regarding the use of AI tools.


  • The use of AI tools is a matter of personal choice for students, and students must not be compelled to use AI tools in any context. Some students will always prefer to rely on their own independent work and may feel that the use of AI would negatively affect the development of their competences and critical thinking. Other students may have serious objections, for example on moral, psychosocial, environmental or similar grounds.


  • Outputs generated by AI tools must be cited properly and accurately in accordance with an appropriate citation style, such as that of the Modern Language Association (MLA Handbook) or The Chicago Manual of Style. A citation in MLA style may, for example, take the following form:


    • “Describe the symbolism of the green light in The Great Gatsby by F. Scott Fitzgerald” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.


    • “A pointillist painting of a sheep in a sunlit field of blue flowers” prompt. DALL-E, version 2, OpenAI, 8 Mar. 2023, labs.openai.com/.

Ways in which AI tools may be used (the “traffic-light system”)

The ways in which AI tools may be used are divided into three levels according to the extent to which they affect the final thesis – from the least significant first level to the critical third level, in which the use of AI is contrary to applicable legislation, higher-level regulations, or the declaration of honour attached to the final thesis. These levels are symbolically marked using the colours of a traffic light.


  • The first (“green”) level is regarded as unproblematic.


  • The second (“orange”) level should always be discussed with the thesis supervisor, who may determine that certain uses of AI tools described at this level are not permissible.


  • The third (“red”) level is impermissible in all circumstances.

TEXT CREATION AND COMPOSITION

  • Level 2 (“orange”): consultation on the basic idea of the text; discussion of the main arguments of the text; generation of a draft outline; obtaining feedback on parts of the text during the writing process or after completion – in all such cases, the generated text is not inserted into the final thesis itself. Generation of text from written, audio or other notes, and verbatim transcription of audio notes. Reformulation of one’s own text. Comprehensive use of tools such as Microsoft 365 Copilot within Microsoft Office applications (Excel, Word, etc.).


  • Level 3 (“red”): producing the entire thesis by means of prompts, with the student acting only as editor. Generating entire sections of text, or fabricating sources or data.

LINGUISTIC AND STYLISTIC EDITING OF THE THESIS

  • Level 1 (“green”): grammatical correction of individual words or short passages; correction of syntax.


  • Level 2 (“orange”): stylistic editing of short passages of the thesis.


  • Level 3 (“red”): uploading long passages or the entire thesis for stylistic rewriting by AI tools.

TRANSLATION

  • Level 1 (“green”): consultation on one’s own translation with AI tools.


  • Level 2 (“orange”): correction of one’s own translation by AI tools. Translation of selected parts of the thesis by AI tools.


  • Level 3 (“red”): uploading the entire final thesis, or substantial parts of it, for translation by AI tools.

SEARCHING FOR, READING AND INTERPRETING SOURCES

  • Level 2 (“orange”): carrying out a basic search for available sources; consultation on search strategies; consultation on search queries; consultations supporting understanding. Analysis of specific literature, including prioritisation. Generation of search queries. Preparation of a literature search.


  • Level 3 (“red”): writing the Literature Review section or a similar part of the thesis.

CREATION OF DERIVED CONTENT

  • Level 1 (“green”): consultation on the form of one’s own derived content (abstract, bibliography, index). Generation of media content such as accompanying images, graphs or sketches.


  • Level 2 (“orange”): generation of derived content (abstract, bibliography, index).


  • Level 3 (“red”): generation of the introduction and conclusion of the thesis.

ANALYSIS AND INTERPRETATION OF SOURCES AND DATA

  • Level 2 (“orange”): consultation on strategies and methods for the analysis and interpretation of sources and data.


  • Level 3 (“red”): generation of the analysis and/or interpretation of data or sources.


Last change: April 23, 2026 18:13 