
College Students’ Views on AI

Faculty and administrators’ opinions about generative artificial intelligence abound. But students—path breakers in their own right in this new era of learning and teaching—have opinions, too. That’s why Inside Higher Ed is dedicating the second installment of its 2025–26 Student Voice survey series to generative AI.
About the Survey
Student Voice is an ongoing survey and reporting series that seeks to elevate the student perspective in institutional student success efforts and in broader conversations about college.
Some 1,047 students from 166 two- and four-year institutions, public and private nonprofit, responded to this flash survey about generative artificial intelligence and higher education, conducted in July. Explore the data, captured by our survey partner Generation Lab, at this link. The margin of error is plus or minus three percentage points.
See what students have to say about trust in colleges and universities here, and look out for future student polls and reporting from our 2025–26 survey cycle, Student Voice: Amplified.
Some of the results are perhaps surprising: Relatively few students say that generative AI has diminished the value of college, in their view, and nearly all of them want their institutions to address academic integrity concerns—albeit via a proactive approach rather than a punitive one. Another standout: Half of students who use AI for coursework say it’s having mixed effects on their critical thinking abilities, while a quarter report it’s helping them learn better.
Here are seven things to know from the survey, plus some expert takes on what it all means, as higher education enters its fourth year of this new era and continues to struggle to lead on AI.
- Most students are using generative AI for coursework, but many are doing so in ways that can support, not outsource, their learning.
The majority of students, some 85 percent, indicate they’ve used generative AI for coursework in the last year. The top three uses from a long list of options are: brainstorming ideas (55 percent), asking it questions like a tutor (50 percent) and studying for exams or quizzes (46 percent). Treating it like an advanced search engine also ranks high. Some other options present more of a gray area for supporting authentic learning, such as editing work and generating summaries. (Questions for educators include: Did the student first read what was summarized? How substantial were the edits?)
Fewer students report using generative AI to complete assignments for them (25 percent) or write full essays (19 percent). But elsewhere in the survey, students who report using AI to write essays are somewhat more likely than those using it to study to say AI has negatively impacted their critical thinking (12 percent versus 6 percent, respectively). Still, the responses taken as a whole add nuance to ongoing discussions about the potential rewards, not just risks, of AI. One difference: Community college students are less likely to report using AI for coursework, for specific use cases and over all. Twenty-one percent of two-year students say they haven’t used it in the last year, compared to 14 percent of four-year students.
- Performance pressures, among other factors, are driving cheating.
The top reason students say some of their peers use generative AI in ways that violate academic integrity policies is pressure to get good grades (37 percent over all). Being pressed for time (27 percent) and not really caring about academic integrity policies (26 percent) are other reasons students chose. There are some differences across student subgroups, including by age: Adult learners over 25 are more likely than younger peers to cite lack of time due to work, family or other obligations, as well as lack of confidence in their abilities, for example. Younger students, meanwhile, are more likely to say that peers don’t really care about such policies, or don’t connect with course content. Despite the patchwork of academic integrity policies within and across institutions, few students—just 6 percent over all—blame unclear policies or expectations from professors about what constitutes cheating with AI.
- Nearly all students want action on academic integrity, but most reject policing.
Some 97 percent believe that institutions should respond to academic integrity threats in the age of generative AI. Yet approaches such as AI-detection software and limiting technology use in classrooms are relatively unpopular options, selected by 21 percent and 18 percent of students, respectively. Instead, more students want education on ethical AI use (53 percent) and—somewhat contradicting the prior set of responses about what’s driving cheating—clearer, standardized policies on when and how AI tools can be used. Transparency seems to be a value: Nearly half of students want their institutions to allow more flexibility in using AI tools, as long as students are transparent about it.
Fewer support a return to handwritten tests or bluebooks for some courses, though this option is more popular among students at private nonprofit institutions than among their public institution peers, at 33 percent versus 22 percent. Those at private nonprofit institutions are also much more in favor of assessments that are generally harder to complete with AI, such as oral exams and in-class essays.
- Students have mixed views on faculty use of generative AI for teaching.
A slight plurality of students (29 percent) are somewhat positive about faculty use of AI for creating assignments and other tasks, as long as it’s used thoughtfully and transparently. This of course parallels the stance that many students want from their institutions on student AI use: flexibility underpinned by transparency.
Another 14 percent are very positive about faculty use of AI, saying it could make instruction more relevant or efficient. But 39 percent of students feel somewhat or very negatively about it, raising concerns about quality and overreliance—the same concerns faculty members and administrators tend to have about student use. The remainder, 15 percent, are neutral on this point.
- Generative AI is influencing students’ learning and critical thinking abilities.
More than half of students (55 percent) who have used AI for coursework in the last year say it’s had mixed effects on their learning and critical thinking skills: It helps sometimes but can also make them think less deeply. Another 27 percent say that the effects have actually been positive. Fewer, 7 percent, estimate that the net effect has been negative, and they’re concerned about overreliance. Men—who also report using generative AI for things like brainstorming ideas and completing assignments at higher rates than their women and nonbinary peers—are also more likely to indicate that the net effect has been positive: More than a third of men say generative AI is improving their thinking, compared to closer to one in five women.
- Students want information and support in preparing for a world shaped by AI.
When thinking about their futures, not just academic integrity in the present, students again say they want their institutions to offer—but not necessarily require—training on how to use AI tools professionally and ethically, and to provide clearer guidance on ethical versus misuse of AI tools. Many students also say they want space to openly discuss AI’s risks and benefits. Just 16 percent say preparing them for a future shaped by generative AI should be left up to individual professors or departments, underscoring the importance of an institutional response. And just 5 percent say colleges don’t need to take any specific action at all here. Adult students—many of whom are already working—are most likely to say that institutions should offer training on how to use AI tools professionally and ethically, at 57 percent.
Less popular options from the full list:
- Integrate AI-related content into courses across majors: 18 percent
- Leave it up to individual professors or departments: 16 percent
- Create new majors or academic programs focused on AI: 11 percent
- Connect students with employers or internships that involve AI: 9 percent
- Colleges don’t need to take any specific actions around AI: 5 percent
- On the whole, generative AI isn’t devaluing college for students—and it’s increasing its value for some.
Students have mixed views on whether generative AI has influenced how they think of the value of college. But 35 percent say there’s been no change, and 23 percent say it’s more valuable now. Fewer, 18 percent, say they now question the value of college more than they used to. Roughly another quarter of students say it has changed how they think about college value, though they’re not sure in what way. So college value hasn’t plummeted in students’ eyes due to generative AI—but the technology is influencing how they think about it.
‘There Is No Instruction Manual’
Student Voice poll respondent Daisy Partey, 22, agreed with her peers that institutions should take action on student use of generative AI—and said that faculty members and other leaders need to understand how accessible and potent it is.
“I’d stress that it’s super easy to use,” she said in an interview. “It’s just so simple to get what you need from it.”
Partey, who graduated from the University of Nevada at Reno in May with a major in communications and minor in public health, said using generative AI became the default for some peers—even for something as simple as a personal introduction statement. That dynamic, coupled with fear of false positives from AI-detection tools, generally chilled her own use of AI throughout college.
She did sometimes use ChatGPT as a study partner or search tool, but tried to limit her use: “Sometimes I’d find myself thinking, ‘Well, I could just ChatGPT it.’ But in reality, figuring it out on my own or talking to another physical human being—that’s good for you,” she said.
As for how institutions should address generative AI, Partey—like many Student Voice respondents—advocated a consistent, education-based approach, versus contradictory policies from class to class and policing student use. Similarly, Partey said, students need to know how and when to use AI responsibly for work, even as it’s still unknown how the technology will impact fields she’s interested in, such as social media marketing. (As for AI’s impact on the job market for new graduates, the picture is starting to form.)
“Provide training so that students know what they’re going into and the expectations for AI use in the workplace,” she emphasized.
Another Student Voice respondent at a community college in Texas, who asked to remain anonymous to speak about AI, said she uses generative AI to stay organized with tasks, create flash cards for tests and exams, and come up with new ideas.
“AI isn’t just about cheating,” she said. “For some students, it’s like having a 24-7 tutor.”
Jason Gulya, a professor of English and media communications at Berkeley College who reviewed the survey results, said they challenge what he called the “AI is going to kill college and democratize all knowledge” messaging pervading social media.
That the majority of students say AI has made their degree equally or more valuable means that this topic is “extremely nuanced” and “AI might not change the perceived value of a college degree in the ways we expect,” he added.
Relatedly, Gulya called the link between pressure to get good grades and overreliance on AI “essential.” AI tools that have been “marketed to students as quick and efficient ways to get the highest grades” play into a “model of education that places point-getting and grade-earning over learning,” he said. One possible implication for faculty? Using alternative assessment practices “that take pressure away from earning a grade and that instead recenter learning.”
Jill Abney, associate director of the Center for the Enhancement of Learning and Teaching at the University of Kentucky, said it makes “total sense” that students also report that time constraints are fueling academic dishonesty, since many are “stretched to the limits with jobs and other responsibilities on top of schoolwork.” To this point, one of the main interventions she and colleagues recommend to concerned instructors is “scaffolding assignments so students are making gradual progress and not waiting until the last minute.”
On clarity of guidelines around AI use, Abney said that most instructors she works with have, in fact, “put a lot of time into crafting clear AI policies.” Some have even moved beyond course-level policies toward an assignment-by-assignment labeling approach, “to ensure clear communication with students.” Tools to this end include the university’s own Student AI Use Scale.
Mark Watkins, assistant director of academic innovation and lecturer of writing and rhetoric at the University of Mississippi, underscored that both faculty-set policies for student use of AI and expectations for faculty use of AI have implications for faculty academic freedom, which “should be respected.”
At the same time, he said, “there needs to be leadership and a sense of direction from institutions about AI integration that is guided. To me, that means institutions should invest in consensus-building around what use cases are appropriate and publish frameworks for all stakeholders,” including faculty, staff and administrators. Watkins has proposed his own “VALUES” framework for faculty use of AI in education, which addresses such topics as validating and assessing student learning.
Ultimately, Abney said, it’s a good thing students are thinking about how AI is impacting their cognition—a developing area of research—adding that students tend to “crave shared spaces of conversation where they can have open dialogues about AI with their instructors and peers.”
That’s what learning about generative AI and establishing effective approaches requires, she said, “since there is no instruction manual.”
This independent editorial project is produced with the Generation Lab and supported by the Gates Foundation.