Preparing to Teach

We encourage faculty to experiment with generative AI (GAI) tools, which can be used to generate ideas, summarize articles, develop computer code, create images, and compose music. These tools are increasingly sophisticated and powerful. We take seriously the risks that they present, including inaccuracies, fabrications ("hallucinations"), and amplified biases. We also note that the use of GAI may "short-circuit" learning. Students learn by actively engaging with course material, including through the very activities that GAI tools "automate," like summarizing an argument, synthesizing information, or drafting an outline.

We recommend that departments and faculty consider the ways that GAI may affect their fields. What does GAI mean for your discipline, both for how you perform scholarship and for what you teach your students? Does your curriculum need to shift in light of what GAI can do? Should you have new or different expectations for student work? Do your teaching methods need to change, and how? (You are always welcome to consult with the McGraw Center on these questions.)

Syllabus Language

We encourage faculty to articulate a clear policy on GAI use in your syllabus and to talk to students about that policy. (See Jill Dolan's August 2023 memo for further guidance.) Students are likely to encounter different GAI policies in different courses, so we encourage you to set clear guidelines. Section 2.4.6 of Rights, Rules, and Responsibilities explains that if GAI use is permitted by the instructor, "students must disclose its use rather than cite or acknowledge the use, since it is an algorithm rather than a source."

On your syllabus, make clear:

- Whether GAI tools are prohibited or permitted in your course, and which ones
- If GAI tools are permitted, for what kinds of work
- How students should disclose their use of GAI

Below you will find template language about GAI use that you may adapt for your own syllabus and assignment guidelines, as well as Princeton-specific examples of GAI policies. We also recommend this article in The Chronicle for two examples of GAI use "disclosure."

Template language (GAI prohibited):
Intellectual honesty is vital to an academic community and for my fair evaluation of your work. All work submitted in this course must be your own, completed in accordance with the University's academic regulations. You may not make use of ChatGPT or other AI tools. Unauthorized use of these tools is subject to disciplinary action.

Template language (GAI permitted with prior approval):
Students must obtain permission from me before using generative AI tools (like ChatGPT) for any part of any assignment in this course. After receiving permission, you are responsible for disclosing when and how you have used the tools in each assignment you turn in. Representing output generated by or derived from GAI as your own work is a violation of the University's academic regulations. (See Section 2.4.6.)

Template language (GAI permitted with disclosure):
If you use GAI tools (such as GitHub Copilot) on an assignment, you must describe how you used the tool and include both the prompt and the relevant output. Using these tools without disclosing when and how you used them is a violation of the University's academic regulations. (See Section 2.4.6.)
Example #1: AI and Your Writing Process (Princeton Writing Program)

Given the importance of producing original intellectual work for our seminar, generative AI tools (like ChatGPT) should not be used in any way or at any time unless I as the instructor give the entire class explicit permission to use this technology under certain parameters (e.g., as part of a specific lesson or writing exercise, or as a potential topic to investigate for research). Using generative AI tools outside the parameters we discuss in class puts you at risk of becoming a passive participant in your writing process and of compromising your academic integrity.

Academic best practices require transparency, and University policy requires that any use of generative AI be accompanied by an explicit disclosure. If I give the class permission to use ChatGPT or similar technology in this seminar, you must include a description of how and why this tool was used in your work, and you must keep complete records of your engagement for possible review (e.g., the log generated by the app). Please remember that suspicions of unacceptable source use will be referred to the Committee on Discipline and may have serious consequences.

Example #2: Generative AI (MAE 345/549, ECE 345, COS 346 - Anirudha Majumdar)

Generative AI models such as ChatGPT and GitHub Copilot hold great potential for education. However, using them indiscriminately can also hinder our learning goals. As such, we will try to strike a balance (imperfectly, no doubt, since we are all trying to figure out the long-term ramifications of this powerful technology) between AI-augmented learning and independent learning.

In particular, you are welcome to use language models (e.g., ChatGPT, Bard, etc.) in the following three ways. First, you may use them to analyze past assignments (i.e., assignments you have already submitted for grading). For example, you could use them to explore different solutions to problems on assignments that have been submitted, or to debug previously submitted code. Second, you may use language models to explain concepts; specifically, you can use "explain <topic>" as a prompt, but without any further prompting. Third, you can ask a language model about Python syntax (e.g., "explain how to write a for loop in Python"). If you use a language model in the second or third way, you must submit the prompt and output from the language model as part of your assignment submission. Any use of language models beyond these three will not be allowed in this course. The use of GitHub Copilot is also not allowed, nor is any use of generative AI during the midterm. As always, remember that you are bound by the Princeton honor code, and violations can have serious consequences.

Example #3: Preliminary GEO 425 Policy Regarding "Large Language Models" (e.g., ChatGPT) (Gabriel A. Vecchi)

This is a preliminary effort to develop a policy regarding the use of Large Language Models (LLMs) in this course, which will be subject to revision based on experience and any policy implementations at a departmental or University-wide level.
Since we do not yet have much (if any) experience with how Large Language Models will be used in this class by students, and since these tools are very new, it is unlikely that we will get these policies completely "right" the first time. We welcome suggestions and ideas as we learn to live and thrive with these new tools.

Overarching principles of this preliminary policy:

- The fundamental goal of this class is for the students to learn, and we assume that the students share that fundamental goal (so that grades and credit for the class are viewed as subordinate goals).
- There are many computerized tools that may be applied in this class, such as calculators, Wolfram, Matlab, Python with its libraries, spell checkers, etc. These tools reflect the range of tools available in the real world, so we welcome tools that can enhance or complement the learning in this class. We currently view LLMs as potentially equivalent tools if they are used to advance the fundamental goal of the class (learning).

Based on these principles, students may use LLMs in class assignments subject to the following constraints:

- Students must acknowledge/cite the use of the LLM tool whenever it is used.
- Students' answers should include a reflection, expansion, condensation, etc. based on the LLM output, not a verbatim quotation from the LLM tool.

Evaluation of assignments using LLMs:

- Any factual errors arising from the LLM that are not identified and corrected by the student will result in (at least partial, but potentially full) loss of points. You should work to understand the material sufficiently well to identify these errors.

Ethical and Other Risks

We recommend that faculty consider, and discuss with students, the significant ethical considerations and risks of using generative AI. The most important concerns are:

Equity and Access

Students' varying levels of AI literacy, coupled with unequal access to technology and uneven exposure to AI tools, exacerbate existing digital divides in education. Safer, more accurate AI tools are often locked behind paywalls, raising concerns about affordability and equitable access. Even though today's student population is often well-versed in digital technology, disparities in digital literacy education and skill acquisition can affect students' performance in college.

Student data and privacy

When students create an account with a GAI service, they share personally identifiable information like their email address and phone number. Large language models such as ChatGPT or Bard can store conversations and uploaded content, which they might repurpose as training data. Princeton's Information Security Office has written a position paper on the Prohibition of University Data in Artificial Intelligence (AI) Solutions. If you elect to use AI tools that require students to create accounts, we suggest that you highlight these risks and review the data usage policies with your students. Consider, in fact, making this review a classroom exercise. You might also offer alternative options for students who are not comfortable creating their own accounts.

Inaccuracies and fabrication

Generative AI fabricates data, invents facts, and produces persuasive but completely inaccurate arguments, according to researchers at Stanford. When used as a research aid, these programs can concoct citations. ChatGPT, for instance, incorrectly stated that Princeton's Hal Foster had written an article called "The Case Against Art History" in the journal October.
The citation included volume number, year, and page references, all fabricated. Making students aware of this tendency toward inaccuracy might help deter them from relying on these tools.

Cognitive Offloading

Cognitive offloading involves delegating the mental demands of a task to a technology or tool, such as relying on a calculator or smartphone reminders instead of one's own knowledge and abilities. People tend to offload a task when they think the technology is more capable than they are, when they have a high degree of trust in the tool, and when the tool is easily accessible. Offloading may improve a student's short-term performance (i.e., getting good grades on an assignment) but diminish their long-term learning and cognition. We suggest that faculty encourage students to use AI to enhance their learning, not as a replacement for their own cognition.

Bias and stereotypes

Generative AI is trained on data that can be biased, inaccurate, or geographically and racially skewed, and it has a tendency to reproduce stereotypes. If prompted to depict a "Native American," for instance, image-making software like DALL-E 2 and Stable Diffusion tends to produce images of people wearing traditional headdresses. If asked to illustrate a profession using an adjective like "emotional" or "sensitive," such programs are more likely to produce an image of a woman, as this article in MIT Technology Review demonstrates.

Labor concerns with how AI tools are trained

Companies like OpenAI have relied on labor from the Global South to train their models, requiring workers to read and categorize graphic texts to identify hate speech, violence, and sexual abuse. This source offers a fuller account.

Environmental Impact

The computational requirements of large language models like ChatGPT contribute to high rates of energy consumption, carbon emissions, and electronic waste. Researchers at the University of Massachusetts found that training large AI models can produce nearly five times the lifetime emissions of an average car (including fuel). As AI datasets and models grow in complexity, so does their environmental impact.

Implications for Assignments

Generative AI requires us to be very intentional about assignment design: we want to maximize students' opportunities to engage critically with course material and minimize their risk of overusing GAI. Regardless of whether you permit the use of GAI tools, we encourage you to:

- Define your course learning goals, and share them with students. Explain to students what they will learn by completing your assignments. In what ways will the assignments help them develop the skills or master the content of your discipline?
- Include a generative AI policy on your syllabus. You might also work with students to set a class policy, as Associate Professor Molly Crockett does in their Psychology courses. See McGraw's Faculty Resource Library for guidance on Creating Community Agreements.
- Test your prompts. To understand more about the strengths and limitations of GAI tools, experiment with your assignment prompts and evaluate the results (a minimal sketch of one way to do this follows this list). For guidance on how to prompt ChatGPT effectively, see OpenAI's resource on Prompt Engineering.
- Scaffold assignments. Structure students' work with draft and revision deadlines that offer you opportunities to give feedback.
- Incorporate reflection into assignments. Ask students to demonstrate their thought processes and reflect on their work. For example, they might annotate their solution to a problem, write an artist's statement to accompany a submission, or write a cover letter for an essay.
- Assign "creative critical" assignments. Design assignments that ask students to engage creatively as well as critically with course material. This might take multiple forms, including digital assignments like digital exhibitions, podcasts, or story maps. Even without the use of digital technology, consider assignments that ask students to riff on, mix up, or playfully and purposefully engage course material. We have many ideas to share with you; feel free to consult with us.
- Try oral assignments, especially if you do not permit the use of GAI. Devise oral assignments such as presentations, simulations, or role plays. These can be low-stakes activities (for example, asking a student to talk through their response to a problem or share ideas as part of a "fishbowl" discussion) or higher-stakes activities that require advance planning and preparation.
- Make an appointment for a consultation with us. We're very happy to help you think through how generative AI may affect your teaching. We offer consultations in person and over Zoom; be in touch with us at [email protected].
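As a concrete starting point for testing your prompts, here is a minimal sketch of how you might run an assignment prompt through a model programmatically and inspect the output. It assumes the OpenAI Python SDK (v1 or later) with an OPENAI_API_KEY set in your environment; the model name and the sample prompt are illustrative placeholders, not recommendations.

```python
# A minimal sketch for trying an assignment prompt against a model.
# Assumes: `pip install openai` (v1+) and OPENAI_API_KEY in the environment.
# The model name and prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assignment_prompt = (
    "In 300 words, compare the narrators of two novels from our syllabus."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # substitute whatever model you have access to
    messages=[{"role": "user", "content": assignment_prompt}],
)

# Read the output as a grader would: is it plausible? accurate? generic?
print(response.choices[0].message.content)
```

Running the same prompt a few times (output varies between runs) gives a realistic sense of what a student could produce with little effort, which is useful when deciding how to scaffold or revise an assignment.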
Assigning Generative AI

If navigating AI is a skill you think is important for students to develop, you might design activities and assignments that embrace it. If you do ask students to use GAI tools, be mindful of the ethical concerns and other risks that they present. Remember that some students may have access to subscription-based tools like ChatGPT Plus, while others will only have access to the less powerful free versions. Be prepared to offer alternative assignments or other workarounds for students who don't feel comfortable using these tools themselves, since the tools often require students to create an account.

- Ask students to analyze GAI output. For example, after they complete their own drafts of an assigned essay, you might ask students to request a draft of the assignment from a generative AI tool and analyze and/or critique the work it produces. Jacob Shapiro, Professor of Politics and International Affairs, requires students to prompt ChatGPT and then share the responses with classmates to revise them. Associate Professor Alexander Glaser of Mechanical and Aerospace Engineering asks students to compare their answers to those composed by ChatGPT and reflect on the differences between responses from a human and those from a machine. Steven Strauss, Visiting Professor in SPIA, asks graduate students to grade ChatGPT's response to a prompt and then reverses the process, asking students to submit their draft answers to ChatGPT (with the appropriate context) so it can give them feedback.
- Allow students to use the tool for one part of a larger assignment. For example, Heather Thieringer, University Lecturer in Molecular Biology, allows students to use ChatGPT to create a potential introduction to their lab report, which they include as an appendix. The students critique and correct the response as part of the assignment.
- Emphasize the skill of prompt engineering. Assign students to use the tool and to turn in the prompts they use to get their responses. Ask them to write a short paper reflecting on how altering their prompt changed the output. (A sketch of a simple way to capture prompt-output pairs follows this list.)
- Use the tools to enhance students' creativity. In his Storytelling course, Professor of Slavic Languages and Literatures Yuri Leving asks students to illustrate their writing projects with images produced by an AI generator. He also asks students to write stories inspired by images he has generated using the tool, giving them experience both creating images from text and generating text from images.
- Ask students to analyze the benefits and drawbacks of generative AI for certain tasks in class discussions, debates, or written assignments. For example, Steven Strauss, Visiting Professor in SPIA, devotes class time to what he calls "GAI housekeeping" before requiring students to use the tools, addressing topics such as student accountability, algorithmic bias, the potential for hallucinations, ethical dilemmas, and replicability concerns. Once students understand the challenges and limitations inherent in GAI technology, Strauss asks them to make and support an argument about how ChatGPT and similar tools might be used to improve productivity on an everyday task.
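For prompt-engineering assignments, students can capture prompt-output pairs for their reflection papers with a small variation loop like the sketch below. It makes the same assumptions as the earlier sketch (OpenAI Python SDK, placeholder model name), and the prompt variants are hypothetical examples.

```python
# Sketch: run several phrasings of the same request and print the pairs,
# so students can turn in prompts, outputs, and a reflection on the differences.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

variants = [
    "Summarize this week's reading.",
    "Summarize this week's reading in 100 words for a first-year student.",
    "Summarize this week's reading, then list two claims worth challenging.",
]

for prompt in variants:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    print("PROMPT:", prompt)
    print("OUTPUT:", response.choices[0].message.content)
    print("-" * 72)
```

A transcript of these prompt-output pairs can also double as the disclosure record that the syllabus language above asks students to submit.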
Faculty have expressed interest in hearing about how colleagues are using AI tools in their courses. If you are assigning GAI in your course and would be willing to share your assignment, please reach out to us at [email protected].

Detection Software and Academic Integrity

Though companies like Turnitin, ZeroGPT, and OpenAI have all developed AI detection capabilities, we do not recommend using such software to attempt to determine whether student work is AI-generated. Our recommendation against these tools is based both on Princeton's standards for academic integrity and on the tools' practical limits: they seem unreliable at best and biased at worst. The creators of these tools have warned against using them to make decisions about academic honesty, and research has demonstrated that the software consistently misclassifies writing samples by non-native English writers as AI-generated.

Instead, we encourage you to emphasize your learning goals, consider our guidance on assignment design, and include a clearly stated GAI policy on your syllabus. If you suspect a student has used an unauthorized GAI tool in an assignment, please contact Joyce Chen at [email protected] or 609-258-3054. If you suspect a student has used an unauthorized GAI tool in an exam, please contact the Honor Committee at [email protected].

Resources and Readings

Teaching guidance and case reports on teaching with GAI:

- Assigning AI: Seven Approaches for Students, with Prompts (Social Science Research Network)
- ChatGPT and the Rise of Generative AI: Threat to Academic Integrity? (ScienceDirect)
- Artificial Intelligence and the Future of Teaching and Learning (U.S. Department of Education)
- Artificial Intelligence and Education: A Reading List (JSTOR Daily)
- Reactions: Princeton faculty discuss ChatGPT in the classroom (The Daily Princetonian)
- The AI Pedagogy Project (Harvard University)
- Learn With AI Toolkit (University of Maine)
- Teaching in the Age of AI (Vanderbilt University)
- Artificial Intelligence Teaching Guide (Stanford University)
- Intentional Pedagogy with AI Technology (Brown University)
- Why I'm Encouraging my Students to use Generative AI (e.g., ChatGPT) When Writing Their Assignments (Medium)
- Embracing Generative AI (GAI) in Education: Some Personal Reflections (Medium)
- Teaching CS50 with AI (SIGCSE)
- Developing AI Standards of Conduct as a Class (Exploring AI Pedagogy)
- What I Learned From an Experiment to Apply Generative AI to My Data Course (EdSurge)

Regularly updated sources on GAI (channels, podcasts, listservs, Substacks):

- Arvind Narayanan
- Dr Philippa Hardman
- One Useful Thing
- Exploring AI Pedagogy
- AI+EDU=Simplified
- AutomatED: Teaching Better With Tech
- AI in Education

Links to PU faculty research and initiatives on GAI:

- Princeton Language and Intelligence
- Future Values Initiative
- Princeton Center for Information Technology Policy
- Princeton Precision Health
- Princeton Dialogues on AI and Ethics

University Report and Resources:

- Generative AI Working Group Report (Office of the Dean of the College)
- Generative AI (Princeton University Library)