
We Don’t Need to Retreat From the Challenge of AI in Schools
One of the chief pleasures of traveling to schools and campuses to talk about More Than Words: How to Think About Writing in the Age of AI and my thinking about how we should approach the teaching of writing is getting the chance to see what other places are doing with the challenge of working in a world of generative AI technology.
My travels so far this semester have been very encouraging. It seems clear that we are in a new phase of reasoned consideration following on an earlier period of worry and uncertainty. I never saw outright panic, but there was a whiff of doom in the air.
There may be a selection bias in terms of the institutions that would invite someone like me to come work with them, but there is a clear impulse to figure out how to move forward according to institutional values, rather than being stuck in a defensive posture.
As I declared way back in December 2022, “ChatGPT can’t kill anything worth preserving.” The work of determining what must be preserved, and how, is definitely underway.
I want to share some impressions of what I think is working well at the institutions that are moving forward, so others may consider how they might want to do this work on their own campuses.
Going on Offense by Living Your Values
One clear commonality among institutions successfully addressing the current challenges is identifying core institutional values and then making them central to the ongoing discussions about how instruction and institutional operations must evolve.
As one example, during my recent visit to Iona University, I was introduced to their framework of agency, expression and responsibility.
“Agency” is one of my favorite words when talking about learning, period, and in this case it means communicating to students that it is ultimately the students themselves who must choose the path of their own educations, including the use of AI technology. I’ve recently been speaking more and more about AI in education as a demand-side issue, where students need to see the pitfalls of outsourcing their learning. Agency puts the responsibility where it belongs: on students themselves.
Expression represents a belief that the ultimate goal of one’s education is to develop a unique voice as part of the larger world in which we work and live. Writing isn’t just producing text but using the tools of expression, including text, to convey our points of view to the world. Where LLMs substitute for or obscure our personal expression, they should be avoided.
Responsibility is related to agency in the “with great power comes great responsibility” sense. Students are encouraged to consider the practical and ethical dimensions of using the technology.
At other stops I’ve seen similar orientations, though often with wrinkles unique to local contexts. One common theme: rather than retreating to assessments that can be monitored in order to prevent cheating, the goal is to figure out how to give life to the kinds of educational experiences we know to be meaningful to learning.
If you start with the values, things like policy can be evaluated against something meaningful and enduring. The conversations become more productive because everyone is working from a shared base.
I know this can be done, because I’ve been visiting institutions working on this problem for more than 18 months, and the progress is real.
Collective Spirit and Collaborative Action
Another common sign of progress is institutional leadership that communicates a desire to take a collective approach to tackle the issues and then puts specific, tangible resources behind this call to make collaborative action more possible and effective.
Several institutions I’ve visited have carved out spots for some version of AI faculty fellows, who are given the freedom to explore the technology and its specific implications for their disciplines before coming back to a group and institutional setting where this learning is shared.
To work, these must be more than groups tasked with figuring out how to integrate AI technology into the university. I have not visited any institution that has taken that narrow mandate (they are unlikely to invite someone like me), but I have been corresponding with people at institutions that are doing this who are looking for advice, and it seems like a sure route to a divided institution.
On my Iona visit, they took this approach to the next level by putting on a one-day conference and inviting community educators from all walks to hear not just yours truly, but also the AI fellows and other faculty discuss a variety of issues.
These conferences don’t solve every problem in a day, but simply demonstrating to the broader public that you’re working the problem is deeply encouraging.
Room and Respect for Difference
One of my favorite parts of my visits is the chance to talk with the faculty on a campus who have been wrestling with the same challenges I’m spending my time on. At the base level, we share the same values when it comes to what learning looks like and the importance of things like agency and transparency to achieving those things.
But when it comes to the application and use of generative AI technology to achieve these outcomes, there are often significant differences. I share my perspective, they share theirs, and while I don’t think we necessarily change each other’s minds, we each come away with a greater appreciation for a different perspective.
It’s a model of what I have always based my courses on, the academic conversation, where the goal of writing and speaking is to gradually increase the amount of illumination on the subject at hand. We’re having a discussion, not a “debate.”
I am far more skeptical and circumspect than many about the utility of generative AI when it comes to teaching and learning. I often point out that anyone who is using the technology productively today established a whole host of capacities (or what I call a “practice”) in the absence of this technology, so it stands to reason that students should still be educated primarily without interacting with or using the technology.
But I’ve also seen tangible demonstrations of integrating the capacities of generative AI tools in ways that seem to genuinely open potential new avenues. These people need to keep experimenting, just as those of us who want to find ways to do our work in the absence of AI should be empowered to do so.
Do More Than ‘Doing School’
Maybe this belongs as part of the first point of “going on offense,” but the successes I’ve seen have come from a willingness to fundamentally question the system of schooling that has resulted in students primarily viewing their educations through a transactional lens.
In many cases, generative AI outputs satisfy the transaction of school in ways that mean students learn literally nothing. We’ve all read the viral articles about students using AI for everything they do.
But I can report from my visits to many different institutions and talking to people working at many more that this is not universally true. Many students are eager to engage in activities that help them learn. It then becomes the responsibility of schools and instructors to give students something worth doing.
Retreating to analog forms simply because they can be policed is a missed opportunity to rethink and redo things we know were not working particularly well.
There is no endpoint to this rethinking. Frankly, I find this energizing, and it’s clear lots of others do, too. This energy is something we can use to help students.