
Faculty Often Missing From University Decisions on AI
More colleges and universities are adopting artificial intelligence tools, yet faculty are rarely part of their institution’s decision-making process, according to a survey on AI and academic professions the American Association of University Professors published Tuesday.
And, as the report shows, that breakdown of shared governance surrounding the implementation of AI and other education technology services has implications for the future of teaching, learning and job security.
The findings come more than a year after a group of practitioners and researchers wrote the AI Bill of Rights for Educators, which included a call for faculty agency in making choices about “whether, when, and how to use AI based on learning goals, student populations/learning contexts and pedagogical judgment.” But the AAUP’s survey suggests that most colleges and universities aren’t upholding that tenet of AI governance.
“Many colleges and universities currently have no meaningful shared governance mechanisms around technology,” read the AAUP’s report. “[T]he explosion of AI has highlighted the need for such mechanisms among faculty members at individual institutions and across the higher education workforce.”
Although 90 percent of the 500 AAUP members who responded to the survey last December said their institutions are integrating AI into teaching and research, 71 percent said administrators “overwhelmingly” lead conversations about introducing AI into research, teaching, policy and professional development while gathering “little meaningful input” from faculty members, staff or students. According to the report, that has contributed to a deficit of clear policies on AI implementation and use, a finding that aligns with Inside Higher Ed’s own 2024 survey of chief academic officers, which found that only 20 percent of colleges and universities have published a policy or policies governing the use of AI.
Yet many professors may be using AI without even realizing it.
While only 15 percent of faculty respondents said their college or university mandates the use of AI, 81 percent said they are required to use education technology systems, such as the learning management platform Canvas and Google Suite. But AI-powered predictive analytics are now embedded in both of those systems, even when users turn off AI features. In response, the AAUP report suggests that colleges and universities should offer “better and more critically informed, holistic professional development around AI, including what it is and is not and how it has been incorporated already.”
Of the faculty who knowingly use AI, one quarter said they employ it to help with the “undervalued aspects of academic labor,” such as writing emails, letters of recommendation, internal reports and reviews of grant applications and manuscripts. While many also use it to assist in detecting student plagiarism—91 percent said they had concerns about preventing academic dishonesty—some said they were more concerned about how generative AI tools may be devaluing critical thinking.
“It is now more difficult for [students] to develop their thoughts on a topic because they don’t have to spend time with it while they work through writing about it,” one respondent wrote. “I am worried that they will never again get the chance to change their opinion as they expose themselves to ideas over the long term.”
AI Leading to Worse Outcomes
Despite claims by tech companies that AI has the power to improve education, the AAUP’s report captures some faculty’s fears about the motives and consequences of embracing the new technology.
“People are terrified of the onslaught of uncritical AI narratives and partnerships across many sectors, and what it means for the future,” Britt Paris, co-author of the report and associate professor of library and information science at Rutgers University, said in an email to Inside Higher Ed. “But in talking with higher education workers across the country, we on the committee have seen that AI in higher education is barely even functional and tech companies view higher education as a cash cow to exploit.”
Overall, faculty don’t believe AI is making their jobs any easier or reducing longstanding inequities, according to the survey. In fact, 76 percent of respondents said it’s deflating job enthusiasm; 69 percent said it’s hurting student success; 62 percent said it has created worse outcomes in the teaching environment; 40 percent said it’s eroding academic freedom; and 30 percent said it has weakened pay equity.
With colleges and universities ramping up AI uses, at least 95 percent of faculty stressed the importance of each of the following: protecting intellectual property rights and academic freedom, implementing meaningful opt-out policies, maintaining data privacy, improving job security and wages, preserving workplace autonomy, and supporting accessibility.
“There is ample evidence for the damage done to individuals and to society by many tech products, including generative AI, but not limited to it,” one respondent wrote. “However, it is treated as an unqualified good in almost all circumstances and one is required to learn and use certain technologies, even when non-tech options would be better for the workplace environment, student learning, and personal quality of life.”
To avoid some of those potential downsides, the report recommended that universities gather input from faculty, staff and students before making deals with tech companies—and write liability clauses into the contracts they do procure. It also called for institutions to give faculty members, staff and students the right to opt out of technology use without negatively affecting their working or learning conditions, and to allow faculty to challenge the implementation of certain ed-tech products if the benefit isn’t clear.
But even if some faculty want to opt out of integrating technology such as generative AI into their curricula, they shouldn’t ignore the rise of AI altogether, said Marc Watkins, director of the AI Institute for Teachers and assistant director of academic innovation at the University of Mississippi.
“People have to learn about how this technology works and what the effect is on society,” he said, echoing one of the report’s recommendations: that institutions protect faculty and staff from technology’s potential to both intensify their workloads and justify reducing pay—or jobs. “That’s something we need to be aware of and how this is going to start affecting our day-to-day lives.”
The threat that widespread and uncritical adoption of AI poses to the job security of higher education professionals is more profound than ever as the Trump administration and its allies continue their financial and ideological attacks on colleges and universities.
“A lot of faculty are aware that if you start letting a technology like AI dictate the material conditions of your work, you can then have that technology, administrator or state legislature decide to pay you more, less or assign you more work on top of it,” Watkins said. “Or potentially replace us.”