Generative AI


We encourage faculty to experiment with generative AI (GAI) tools, which can be used to generate ideas, summarize articles, develop computer code, create images, and compose music.

We also encourage faculty to have explicit conversations with students about whether these tools are permitted in your courses and in students’ independent work. If you allow students to use GAI tools, we suggest that you set clear and explicit guidelines and help your students understand the risks associated with these programs. In large language models, such as ChatGPT, risks include inaccuracies, fabrications (“hallucinations”), and amplified biases (see more below). We also take seriously the risk that use of these tools may short-circuit learning.

Syllabus Language

Faculty have the discretion to set their own GAI policies (see Jill Dolan’s August 2023 memo). This means that students likely will encounter different rules about AI use in different courses. For this reason, we strongly encourage faculty to articulate a clear policy on AI use in your syllabus. 

Make clear: 

  • Whether AI tools are prohibited or permitted in your courses
  • Whether AI tools are permitted for certain tasks
  • Whether AI tools are permitted for assignments or for certain stages of an assignment 
  • Which AI tools are permitted in your courses
    • AI tools include ChatGPT, Bard, Grammarly, GitHub Copilot, Google Translate, Adobe Firefly, etc. 
  • That the use of generative AI should be acknowledged; see citation guidance from MLA, APA, and Chicago

Keep in mind that a course policy that allows some use of GAI may introduce complexity and open up the possibility of students using GAI in ways you don’t intend.

Below you will find Princeton-specific examples of syllabus language, grouped by category. You may also find this Chronicle article useful as you develop a policy statement. We also highly recommend the guidance and examples our colleagues at Georgetown have curated. 

Generative AI Use Not Permitted (two examples)

Example #1

Intellectual honesty is vital to an academic community and for my fair evaluation of your work.  All work submitted in this course must be your own, completed in accordance with the University’s academic regulations. You may not make use of ChatGPT or other AI composition software.

Example #2 - Academic honesty (LIN 306 - Laura Kalin)

Before submitting the first assignment, students should review the section in the University publication “Rights, Rules, and Responsibilities” pertaining to proper acknowledgment of sources. Note that the principles involved apply equally to electronic and print sources.

All work submitted in this course must be your own, completed in accordance with the University’s academic regulations. You may not engage in unauthorized collaboration or make use of ChatGPT or other AI composition software.

Each submission for the course should contain, at the end, the words “This assignment represents my own work in accordance with University regulations” followed by the student’s signature (which may be typed for electronic submissions). Work found by the Committee on Discipline to involve an academic infraction of any kind will receive a grade of zero.

Generative AI Use Permitted with Permission and Citation (two examples)

Example #1

Students must obtain permission from me before using AI composition software (like ChatGPT) for any assignments in this course. Using these tools without my permission puts your academic integrity at risk.

Example #2 - AI and Your Writing Process (Princeton Writing Program)

Given the importance of producing original intellectual work for our seminar, generative AI tools (like ChatGPT) should not be used in any way or at any time unless I as the instructor give the entire class explicit permission to use this technology under certain parameters (e.g., as part of a specific lesson or writing exercise, or as a potential topic to investigate for research). Using generative AI tools outside the parameters we discuss in class puts you at risk for becoming a passive participant in your writing process and compromising your academic integrity. 

Keep in mind that academic writers are expected not only to cite but also to verify their sources; verification is not always possible with tools like ChatGPT, because this technology often draws upon source materials that are inaccessible or invisible to users, generating output through proprietary algorithms that should not be mistaken for authoritative analysis. The output generated by these tools cannot be accepted uncritically or at face value.  

For these reasons, it’s necessary to be transparent about how and when you use generative AI. Any use of technology like ChatGPT must be accompanied by an explicit acknowledgment and brief description of how this tool was used in your work, and you must keep complete records of your engagement for possible review (e.g., the log generated by the app).

Please remember that suspicions of plagiarism will be reported to the Committee on Discipline and may have serious consequences.

Generative AI Use Permitted Under Specific Circumstances (four examples)

Example #1 - Expectations regarding AI (ART491/SPA491 - Rachel Price & Irene Small)

Intellectual honesty is vital to an academic community and for fair evaluation of your work. All written work submitted in this course (including Canvas posts) must be your own, completed in accordance with the University’s academic regulations. You may use ChatGPT for circumscribed research needs if you find it helpful. However, please note that such tools often provide skewed and inaccurate accounts of scholarship, and cannot replace the rigor of academic research and study. Research skills are a vital component of graduate and undergraduate education. Should you be uncertain as to where you might start to research a given topic, the librarians at Firestone will be happy to help, as are we. Inevitably, short-circuiting the research process is a loss to your own intellectual development and skill sets.

Example #2 - A note about using AI language models like ChatGPT (NEU 490 - Elizabeth Gould)

You are permitted to use these tools to generate outlines or first drafts of your writing assignments. Please remember that (as of now) ChatGPT is not up on the very latest neuroscience literature, does not provide citations, and is sometimes incorrect. It also does not do a great job with critical analyses of data, so if you use it, you will need to heavily fact-check and edit the product. 

Example #3 - Generative AI (MAE 345/549, ECE 345, COS 346 - Anirudha Majumdar)

Generative AI models such as ChatGPT and GitHub Copilot hold great potential for education. However, using them indiscriminately can also hinder our learning goals. As such, we will try to strike a balance between AI-augmented learning and independent learning, imperfectly, no doubt, since we are all trying to figure out the long-term ramifications of this powerful technology.

In particular, you are welcome to use language models (e.g., ChatGPT, Bard, etc.) in the following three ways. First, you may use it to analyze past assignments (i.e., assignments you have already submitted for grading). For example, you could use it to explore different solutions to problems on assignments that have been submitted, or debug previously submitted code. Second, you may also use language models to explain concepts; specifically, you can use “explain <topic>” as a prompt but without any further prompting. Third, you can ask a language model about Python syntax (e.g., “explain how to write a for loop in Python.”). In case you do use a language model in the second or third ways, you must submit the prompt and output from the language model as part of your assignment submission. Any other use of language models beyond these three uses will not be allowed for this course. The use of GitHub Copilot is also not allowed. Any use of generative AI during the midterm will also not be allowed. As always, remember that you are bound by the Princeton honor code, and violations can have serious consequences.

Example #4 - ChatGPT and other AI tools (SOC 500 - Matthew J. Salganik)

ChatGPT and other AI tools are potentially quite helpful for your research. It is not yet clear, however, how best to integrate them into learning, if at all. For now, you are welcome to use ChatGPT (or other automated assistance tools) in this class if (1) you do not directly put the assignment question into ChatGPT and (2) you acknowledge their contribution and add a written description of how you are using them. Over the course of the semester, we will work together to try to figure out how ChatGPT and related tools are helping or hurting learning, and we will refine this policy as we go.

A few things to keep in mind:

  • This class is designed to prepare you to conduct and evaluate research. At this point, it is not yet possible to put real research problems into ChatGPT and get reliable high-quality answers. Therefore, ChatGPT will not be able to do your research for you. That said, it is possible to take a research problem and break it into smaller parts and then use ChatGPT as a tool to help with some of those parts. Therefore, this is the skill we will allow you to explore in this class. You cannot directly put your assignments into ChatGPT, but you can break them down into smaller parts and use ChatGPT as a tool to help you with some of those parts. Naturally, you—and not ChatGPT—are responsible for anything that you submit. Also, it is not yet clear if using ChatGPT in this way will help or hinder your learning.
  • The assignments you will do in this class are a means to an end, not an end in themselves. They are designed to push you in ways that enable you to build new skills. Doing the assignment without the effort is not likely to build skills and will therefore have little value to you.
  • ChatGPT is not perfect; far from it. You should expect that what you get from ChatGPT will be wrong occasionally. Programming is a setting where this might be OK, as described more in this post by Arvind Narayanan and Sayash Kapoor.
  • If you are going to use ChatGPT extensively, it is important to understand how it works. I’d recommend this post “What Is ChatGPT Doing . . . and Why Does It Work?” by Stephen Wolfram as a starting point.

Generative AI Use Permitted with Citation (four examples)

Example #1 - ChatGPT Policies (PSY 337 - Uri Hasson)

ChatGPT generates text patterns probabilistically (similar to “autofill” in your email client, though more powerful). For this reason, it should not be listed as a co-author on scholarly work; we see it as a compiler rather than a writer. You are invited to work with ChatGPT, ask questions about any topic related to any lecture, assess its responses, and critique its answers. With appropriate reference, you can cite any answer you get from ChatGPT. However, you should not relegate your coursework or your obligation to think, learn and synthesize knowledge to ChatGPT at any point during the course.

Example #2 - Artificial Intelligence (ART 248 - Monica Bravo)

You may not use AI software on any of the three required quizzes. Additionally, you may not use AI software to compose or write the reading response or final paper. If you use AI software (like ChatGPT) for research, then it must be cited in these submissions. Be warned, however, that I will hold you responsible for any misperceptions, inaccuracies, and outright errors that the software generates and deduct your grade accordingly. You should fact-check thoroughly.

Example #3 - Preliminary GEO-425 Policy Regarding “Large Language Models” (e.g., ChatGPT) (GEO 425 -  Gabriel A. Vecchi)

This is a preliminary effort to develop a policy regarding the use of Large Language Models (LLMs) in this course, which will be subject to revision based on experience and any policy implementations at a Departmental or University-wide level. Since we do not yet have much (if any) experience on how Large Language Models will be used in this class by students and, since these tools are very new, it is unlikely that we will get these policies completely “right” the first time – we welcome suggestions and ideas as we learn to live and thrive with these new tools.

Overarching Principles of this Preliminary Policy:

  1. The fundamental goal of this class is for the students to learn, and we assume that the students share that as the fundamental goal (so that grades and credit for the class are viewed as subordinate goals).
  2. There are many computerized tools that may be applied in this class, such as calculators, Wolfram, Matlab, Python with its libraries, spell checkers, etc. These tools reflect the range of tools available in the real world, so we welcome tools that can act to enhance or complement the learning in this class. We currently view LLMs as potentially equivalent tools if they are used to advance the fundamental goal of the class (learning).

Based on these Principles, students may use LLMs in class assignments subject to the following constraints:

  1. Students need to acknowledge/cite the use of the LLM tool when it is used.
  2. Students’ answers should include a reflection, expansion, condensation, etc. based on the LLM output, not a verbatim quotation from the LLM tool.

Evaluation of Assignments using LLMs:

Any factual errors arising from the LLM that are not identified and corrected by the student will result in (at least partial, but potentially full) loss of points. You should work to understand the material sufficiently well to identify these errors.

Example #4 - ChatGPT/ generative AI guidance (COM 302/ NES 320/ JDS 308 - Lital Levy)

I permit use of ChatGPT and other generative AI tools for your writing in this course, as it can be helpful for students who have a hard time getting started on writing; but you must include an acknowledgement of how you used it, either in a footnote or at the end of the writing assignment. Using such tools without acknowledgement puts your academic integrity at risk. Please keep in mind that ChatGPT does not provide citations, is often incorrect, and tends to generate smooth but generic prose. In other words, use it with caution. It should be an assist, not a substitution for real thinking and writing.

Ethical and Other Risks

We recommend that faculty consider and discuss with students the significant ethical considerations and risks of using generative AI. The most important concerns are:

Equity and Access

Students’ varying levels of AI literacy, coupled with unequal access to technology and lack of exposure to AI tools, exacerbate existing digital divides in education. Safer, more accurate AI tools are often locked behind paywalls, giving rise to concerns regarding affordability and equitable access. Even though today’s student population is often well-versed in digital technology, disparities in digital literacy education and skill acquisition can affect students’ performance in college.

Student data and privacy 

When students create an account in a program, they share personally identifiable information like their email address and phone number. Large language models such as ChatGPT or Bard can store conversations and uploaded content, which they might repurpose as training data. Princeton's Information Security Office has written the following position paper on the Prohibition of University Data in Artificial Intelligence (AI) Solutions.

If you elect to use AI tools that require students to create accounts, we suggest that you highlight these risks and review the data usage policies with your students. Consider, in fact, making this a classroom exercise. You might also offer alternative options for students who are not comfortable creating their own accounts. 

Inaccuracies and fabrication

Generative AI fabricates data, invents facts, and produces persuasive but completely inaccurate arguments, according to researchers at Stanford. When used as a research aid, these programs can concoct citations. ChatGPT, for instance, incorrectly stated that Princeton’s Hal Foster had written an article called “The Case Against Art History” in the journal October. The citation included volume number, year, and page references—all a fabrication. Making students aware of this tendency toward inaccuracy might help to deter them from relying on these tools. 

Cognitive Offloading

Cognitive offloading involves delegating the mental demands of a task to a technology or tool, such as relying on a calculator or smartphone reminders instead of one’s own knowledge and abilities. People may offload a task when they think the technology is more capable, they have a high degree of trust in the tools, and the tools are easily accessible. Offloading may improve a student’s short-term performance (i.e., getting good grades on an assignment) but diminish their long-term learning and cognition. We suggest that faculty encourage students to use AI to enhance their learning, not as a replacement for their own cognition.

Bias and stereotypes 

Generative AI is fed and trained on data that can be biased and inaccurate, or geographically and racially skewed. It has a tendency to reproduce stereotypes. If prompted to depict a “Native American,” for instance, image-making software like DALL-E 2 and Stable Diffusion tend to produce images of people with traditional headdresses. Or, if asked to illustrate a profession using an adjective like “emotional” or “sensitive,” the program is more likely to produce an image of a woman as this article by the MIT Technology Review demonstrates.  

Labor concerns with how AI tools are trained

Companies like OpenAI have relied on labor from the Global South to train their models, requiring workers to read and categorize graphic texts to identify hate speech, violence, and sexual abuse. This source offers a fuller account.

Environmental Impact

The computational requirements associated with large language models like ChatGPT contribute to high rates of energy consumption, carbon emissions, and electronic waste. Researchers at the University of Massachusetts found that training large AI models can produce nearly five times the lifetime emissions of an average car (including fuel). As AI datasets and models grow in complexity, so does their environmental impact.

Implications for Assignments

Generative AI requires us to be very intentional about assignment design: we want to maximize students’ opportunity to engage critically with course material and minimize their risk of overusing GAI. 

Regardless of whether you permit the use of GAI tools, we encourage you to:

  • Define your course learning goals, and share them with students.
    • Explain to students what they will learn by completing your assignments. In what ways will it help them develop the skills or master the content of your discipline? 
  • Include a generative AI policy on your syllabus.
  • Test your prompts.
    • To understand more about the strengths and limitations of GAI tools, experiment with your assignment prompts and evaluate the results. For guidance on how to effectively prompt ChatGPT, see OpenAI’s resource on Prompt Engineering.
  • Scaffold assignments.
    • Scaffold students’ work with draft and revision deadlines that offer you opportunities to give feedback. 
  • Incorporate reflection into assignments.
    • Ask students to demonstrate their thought processes and reflect on their work. For example, they might annotate their solution to a problem, write an artist’s statement to accompany a submission, or write a cover letter for an essay. 
  • Assign “creative critical” assignments.
    • Design assignments that ask students to engage creatively as well as critically with course material. This might take multiple forms, including digital assignments like digital exhibitions, podcasts, or story maps. Even without the use of digital technology, consider assignments that ask students to riff on, mix up, or playfully and purposefully engage course material. We have many ideas to share with you; feel free to consult with us.
  • Try oral assignments, especially if you do not permit the use of GAI.
    • Devise oral assignments such as presentations, simulations, or role plays. These can be low-stakes activities—for example, asking a student to talk through their response to a problem or share ideas as part of a “fishbowl” discussion—or higher-stakes activities that require advanced planning and preparation. 
  • Make an appointment for a consultation with us.
    • We’re very happy to help you think through how generative AI may affect your teaching. We offer consultations in person and over Zoom; be in touch with us at [email protected].

Assigning Generative AI:

If navigating AI is a skill you think is important for students to develop, you might design activities and assignments that embrace it. If you do ask students to use GAI tools, be mindful of the ethical concerns and other risks that they present. Remember that some students may have access to subscription-based tools like ChatGPT Plus, while others will only have access to the less powerful free versions. Be prepared to offer alternative assignments or other workarounds for students who don’t feel comfortable using these tools themselves, since the tools often require students to create an account.

  • Ask students to analyze its output. For example, after they complete their own drafts of an assigned essay, you might ask students to request a draft of the assignment from a generative AI tool and analyze and/or critique the work it produces. Jacob Shapiro, Professor of Politics and International Affairs, requires students to prompt ChatGPT and then share the responses with classmates to revise them. Associate Professor Alexander Glaser from Mechanical and Aerospace Engineering asks students to compare their answers to those composed by ChatGPT and reflect on the differences between responses from a human and those from a machine. Steven Strauss, Visiting Professor in SPIA, asks graduate students to grade ChatGPT’s response to a prompt and then reverses the process, asking students to submit their draft answers to ChatGPT (with the appropriate context) so it can give them feedback.
  • Allow students to use the tool for one part of a larger assignment. For example, Heather Thieringer, University Lecturer in Molecular Biology, allows students to use ChatGPT to create a potential introduction to their lab report, which they include as an appendix. The students critique and correct the response as part of the assignment.
  • Emphasize the skill of prompt engineering. Assign students to use the tool and to turn in the prompts they use to get their responses. Ask them to write a short paper reflecting on how altering their prompt changed the output. 
  • Use the tools to enhance students' creativity. In his Storytelling course, Professor of Slavic Languages and Literatures Yuri Leving asks students to illustrate their writing projects with images produced by an AI generator. He also asks students to write stories inspired by images he has generated using the tool, giving them experience both creating images from text and generating text from images.
  • Ask students to analyze the benefits and drawbacks of generative AI for certain tasks in class discussions, debates, or written assignments. For example, Steven Strauss, Visiting Professor in SPIA, devotes class time to what he calls “GAI housekeeping” before requiring students to use the tools, addressing topics such as student accountability, algorithmic bias, the potential for hallucinations, ethical dilemmas, and replicability concerns. Once students understand the challenges and limitations inherent in GAI technology, Strauss asks them to make and support an argument about how ChatGPT and similar tools might be used to improve productivity on an everyday task. 

Faculty have expressed interest in hearing about how colleagues are using AI tools in their courses. If you are assigning GAI in your course and would be willing to share your assignment, please reach out to us at [email protected].

Detection Software and Academic Integrity

Though companies like Turnitin, ZeroGPT, and OpenAI have all developed AI detection capabilities, we do not recommend you use such software to attempt to determine if student work is AI-generated. Our recommendation against using these tools is based both on Princeton’s standards for academic integrity and the practical limits of these tools. Detection tools seem unreliable at best and biased at worst. The creators of these tools have warned against using them to make decisions about academic honesty. Research has also demonstrated that the software consistently misclassifies writing samples by non-native English writers as AI-generated. 

Instead, we encourage you to emphasize your learning goals, consider our guidance on assignment design, and include a clearly stated GAI policy on your syllabus.

If you suspect a student has used an unauthorized GAI tool in an assignment, please contact Joyce Chen at [email protected] or 609-258-3054. If you suspect a student has used an unauthorized GAI tool in an exam, please contact the Honor Committee at [email protected].


Resources and Readings