
AI Replaces Expertise With Synthetic Authority (opinion)
The scene is familiar to anyone working in a contemporary university: A department chair sits in front of a glowing screen, tasked with drafting a strategic plan, a tenure evaluation or a grant proposal. The cursor blinks. The exhaustion is palpable. It is not physical fatigue but a particular kind of epistemic weariness. After a moment's hesitation, the chair opens a generative AI tool, pastes in a handful of bullet points and asks for a draft aligned with the institution's core values and strategic priorities.
Seconds later, the text appears. It is fluent, coherent and perfectly calibrated to the administrative register. The chair makes a few cosmetic edits and submits the document. The task is complete. The institution is satisfied.
Yet something fundamental has been lost.
Current anxieties about artificial intelligence in higher education focus overwhelmingly on students. Faculty worry that AI tools will allow undergraduates to bypass the struggle of learning by producing essays without understanding. This concern is not misplaced, but it obscures a more consequential transformation occurring on the other side of the classroom. The deeper risk is not that students will fake their way to degrees. The risk is that faculty and administrators are increasingly adopting a form of synthetic authority that preserves institutional power while hollowing out the intellectual substance that once justified it.
Historically, academic expertise was defined by asymmetry and risk. As sociologist Andrew Abbott has shown, professions established authority by claiming jurisdiction over domains of uncertain knowledge. To be an expert was to exercise judgment under conditions where outcomes were not guaranteed and error was possible. Academic authority rested on the willingness to stake one’s reputation on a particular interpretation, argument or decision. Whether defending a controversial thesis, publishing an unpopular finding or denying tenure to a colleague, judgment was personal and accountable.
That linkage between authority and epistemic risk is now eroding. In the contemporary university, authority is migrating away from individual scholars and toward the infrastructural systems that mediate academic life. Metrics, rankings, assessment frameworks and compliance protocols increasingly determine what counts as legitimate knowledge and successful performance. Generative AI accelerates this shift by offering a new form of fluency that satisfies institutional demands without requiring deep engagement with substance.
When faculty use AI tools to generate syllabi, summarize literature or draft administrative language, they are not merely saving time. They are participating in a regime of synthetic fluency, producing outputs that conform to procedural expectations of coherence, tone and completeness. The resulting documents look authoritative, but their authority derives from stylistic alignment rather than epistemic depth. The expert becomes a relay point through which institutional legitimacy flows, rather than a source from which it originates.
This transformation is inseparable from what Michael Power famously described as the “audit society.” In audit-driven systems, organizations prioritize the production of evidence that proper processes have been followed over the substantive quality of outcomes. The verification of procedure replaces the verification of truth. What matters is not whether something is well understood, but whether it is demonstrably compliant.
Artificial intelligence is uniquely suited to this environment. It excels at producing legible artifacts. It can generate learning outcomes, diversity statements, policy rationales and strategic narratives that meet every formal requirement. As a result, universities now operate under a paradox of plausibility. Their documents have never been more polished, their policies never more comprehensive and their visions never more internally consistent. At the same time, the collective epistemic clarity of the institution is weakening.
Consider the contemporary grant application. Once framed as an opportunity to advance a distinctive hypothesis, it increasingly functions as a test of one’s ability to navigate highly specific stylistic, conceptual and rhetorical constraints imposed by funding bodies. Success depends less on the originality of an idea than on its alignment with predefined categories, keywords and evaluative rubrics. AI tools can optimize this alignment with remarkable efficiency. Authority flows to those who master the infrastructure, not necessarily to those who deepen understanding.
The consequences of this shift are not merely institutional. They are deeply personal. Across higher education, faculty report unprecedented levels of burnout, cynicism and disengagement. These symptoms cannot be explained solely by workload, funding cuts or administrative bloat. There is a moral and epistemic dimension to this fatigue.
Philosopher Byung-Chul Han has described the modern individual as an “achievement-subject,” compelled to constant self-optimization and performance. In academia, this pressure manifests as the demand to be perpetually productive, visible and impactful. When faculty meet these demands through synthetic fluency, allowing algorithms to smooth their prose, organize their thinking and generate compliant outputs, a subtle estrangement sets in. One continues to perform authority without fully inhabiting it.
The professor who relies on AI-generated lesson plans may feel detached from the classroom. The administrator who delegates policy drafting to language models may feel disconnected from the governance they oversee. Titles, publications and decisions remain, but the lived experience of judgment and responsibility thins. Authority persists outwardly while eroding inwardly.
If the academic profession is to survive as more than an interface layer for algorithmic systems, it must confront this transformation directly. Policing student plagiarism will not address the deeper problem. The challenge lies in faculty's own practices and incentives.
Synthetic authority is seductive because it promises efficiency. It offers relief from administrative overload and the anxiety of the blank page. Yet the friction it removes was often where genuine thinking occurred. The difficulty of articulating a complex argument, the discomfort of making a defensible but contestable judgment and the slowness of writing were not incidental burdens. They were constitutive of expertise.
Resisting the hollowing out of academic authority requires a renewed commitment to friction. Universities must defend spaces where inefficiency is not a failure, but a condition for judgment. This means questioning metrics that demand constant output, valuing intellectual risk over procedural smoothness and tolerating forms of work that resist easy audit.
The danger is not that artificial intelligence will replace professors. The danger is that it will enable universities to function without anyone needing to understand, judge or take responsibility. Authority has become increasingly synthetic. Faculty must now decide whether they are content to serve as its relay, or whether they are willing to reclaim the difficult, imperfect work of being experts again.