
Balancing Technology and Integrity, ETEducation
This fall, Harvard students returned to a campus that felt part digital, part retro: in-person exams, handwritten assignments, and no-laptop policies in certain classrooms. These changes are not nostalgic quirks—they are part of Harvard’s broader effort to confront a new challenge reshaping higher education: the rise of artificial intelligence (AI) in academic work. Tools like ChatGPT have made it possible to generate essays, summarize readings, write code, and even produce research papers on demand, leaving faculty to rethink how students learn, complete assignments, and demonstrate mastery.
AI everywhere: A campus transformed
AI has quickly become ubiquitous at Harvard. According to The Crimson’s 2025 Faculty of Arts and Sciences survey, nearly 80% of instructors reported encountering student work they suspected was AI-generated—a dramatic jump from just two years ago.
Despite this, faculty confidence in identifying AI output remains low. Only 14% of respondents felt “very confident” in their ability to distinguish human from AI work. Research from Pennsylvania State University underscores this challenge: humans can correctly detect AI-generated text roughly 53% of the time, only slightly better than flipping a coin.
The rapid proliferation of AI has forced faculty to rethink not just assignments, but the very methods of teaching. Harvard instructors now face a delicate balancing act: integrate AI in ways that enhance learning, while safeguarding academic integrity and critical thinking.
A flexible, faculty-led approach
Unlike some institutions that have imposed strict AI bans, Harvard has deliberately avoided a blanket policy. Submitting AI-generated work without attribution violates the University’s Honor Code, but faculty retain broad discretion over enforcement.
In 2023, the Faculty of Arts and Sciences introduced three draft AI policies—maximally restrictive, fully permissive, and a middle-ground approach—allowing instructors to decide how much AI use is appropriate in their courses.
By fall 2025, nearly all of the 20 most popular undergraduate courses had explicit AI policies in place, compared with none in 2022, according to The Crimson.
Dean of Undergraduate Education Amanda Claybaugh explained the philosophy behind this approach: “AI is a powerful tool in the hands of someone who knows how to evaluate its work—and that means someone who knows how to do that work themselves. We need to make sure that students are learning that.”
Diverging approaches: Restricting vs. embracing AI
Faculty responses to AI vary widely across disciplines.
AI-proofing assignments
Some professors have opted to reduce AI’s influence entirely. History professor Jesse Hoffnung-Garskof replaced final research papers with oral exams, citing the ease with which large language models can generate written work. Physics professor Matthew Schwartz similarly moved from take-home finals to in-person exams, prioritizing memorization, problem-solving, and timed assessment.
In humanities courses, some faculty worry that overreliance on AI could dilute the intellectual rigor that defines their fields. English professor Deidre Lynch cautioned, “Giving AI a central role in education, especially in the humanities, seems like a denial of everything that makes human beings human.”
Harnessing AI
Other instructors encourage students to use AI as a learning companion. Computer Science 50, Harvard’s popular introductory course, offers a custom chatbot to answer coding questions. Economics 1010a introduced a course-specific AI assistant, and East Asian studies students use AI to translate centuries-old texts, then engage in class discussions to deepen understanding.
Statistics lecturer James Xenakis noted that AI accelerates research by quickly processing complex datasets, but stressed that students must still grasp the underlying concepts themselves.
Peter K. Bol, a professor of East Asian Languages and Civilizations, assigns weekly AI exercises that involve translation and follow-up questions. “Everyone is going off and doing something slightly different, and they get exposed to each other’s ideas,” Bol said, highlighting AI’s potential to foster collaborative learning.
Preparing students for an AI-driven world
Harvard’s leadership emphasizes that learning to wield AI responsibly is a critical skill for the future. Dean David J. Deming, speaking at Convocation, reminded freshmen that young, educated people are already among the heaviest AI users. “You are creative and open-minded enough to figure out the best ways to use it,” he said, underscoring the need to harness AI thoughtfully.
The Bok Center for Teaching and Learning has supported faculty by developing course-specific AI chatbots, designing AI-resilient assignments, and running workshops on integrating AI into pedagogy. Faculty increasingly request specialized tools, such as AI to debug code or transcribe oral exams, rather than generic, all-purpose assistants.
Balancing ethics, learning, and time pressures
While AI use raises concerns about cheating, many faculty attribute its adoption to student workload pressures rather than a lack of diligence. Hoffnung-Garskof noted that most Harvard students “don’t rely on AI to write better papers than they could themselves—they are too committed to their own excellence.”
AI has also prompted reflection on teaching goals. In-person exams, oral assessments, and AI-resilient assignments aim to test not only content knowledge, but also critical thinking, creativity, and problem-solving—the skills students will need in an AI-driven world.
Harvard’s AI-resilient future
Three years after ChatGPT’s debut, Harvard’s approach to AI balances caution with opportunity. By combining AI integration with carefully structured, AI-resilient assessments, the university is equipping students to think critically, adapt creatively, and leverage technology effectively. The goal is clear: in the age of AI, mastery now means understanding both the content and the tools that can shape it.
This article is based on reporting by The Harvard Crimson.