
Q&A With Jeff Young on His “Learning Curve” Podcast
One of my consistent themes about figuring out how to adapt the work of higher education in a world where AI exists is that you have to be prepared to outsource some of the information gathering and fact-finding to others. There is simply too much for any individual to sort through.
One of my recent go-tos is Jeff Young’s Learning Curve podcast. Jeff is a journalist in both experience and disposition, and by bringing a journalist’s sensibility to the task, he surfaces insights different from what I get from people like myself, who come at these questions from a practitioner mindset.
Previously, I’ve been on the receiving end of Jeff Young’s questions when he was host and producer of the EdSurge podcast. This time I got to turn the tables.
Before EdSurge and Learning Curve, Jeff was a reporter and senior editor at The Chronicle of Higher Education for long enough to cover the rise of the web at colleges. He has also taught classes and workshops on digital journalism, including serving as an adjunct lecturer at the University of Maryland at College Park for seven years and as an adjunct in the journalism school at the University of Minnesota.
In 2014 he spent a year as a Nieman Fellow at Harvard University, where he was also a fellow at the Berkman Klein Center for Internet and Society. Young has written for national publications including The New York Times, USA Today, Fast Company, New Scientist, Slate and The Wall Street Journal.
Q: You have eight episodes of Learning Curve, mostly exploring the intersection of AI and higher ed, in the can. I’m going to give you an impossible task before we get into more details, but give me one sentence (or two at most) that captures what you’ve learned.
A: My biggest question coming into this podcast is: Can AI really free up time somewhere for professors and students so that more time can be spent on the most human parts of education, or do these generative AI tools do more to get in the way of those human connections? I’m learning that the lure of hitting the “easy button” and just asking the bot is pretty strong, and it seems to take a huge amount of effort to design a use of AI that brings students together or opens space that is truly used for teaching and mentoring.
AI may seem like a shortcut (and it’s often sold that way), but in the effective uses I’m seeing in education, dedicated professors or students have to put in more time to get real value from the AI tool. One promise of investing that extra time is that projects can be more ambitious and engaging with the AI assist.
Q: For a recent episode, you enlisted help from journalists at the University of Minnesota and went straight to students to ask them directly about how they view and use AI tools in school. What did you find?
A: Many students seem genuinely stressed and anxious about AI when it comes to using it as a tool for classwork, even if they aren’t taking shortcuts with the tools. They’re hearing about students getting falsely accused of AI use, and they worry that could happen to them. Some worry that maybe they should be using AI more to prepare for a future job, since the tools are coming to the workplace. And some students worry that if they do use AI as a study tool, maybe it is too much of a crutch and they aren’t getting their money’s worth out of college.
Meanwhile, AI seems to be giving students a way to essentially “skip” any homework they don’t believe is valuable, especially if they perceive the assignment as busywork or if they aren’t convinced the course is relevant to their lives. Many students seem to be acting like they are auditing all their courses, even though they are taking them for credit and getting high grades thanks to AI. They lock in for what seems interesting but then tune out the rest so they can focus on other things in their lives—extracurriculars, sports, friends, family and jobs that they need to work to pay for it all.
Q: My sense in my travels speaking to faculty and students is that students are actually eager to dive into the challenges, and would like to do so in some kind of partnership with their instructors, but neither side really knows how. I have ideas, but I’m doing the interviewing here, so I’ll let you answer. What would you recommend at this point to bring these two parties together?
A: I’m hearing that some professors are involving students in crafting the AI policies for their classes at the outset rather than just sticking something on the syllabus and never talking about AI. Students seem to respond to being invited into the process of working through what to do about this new tech, and it seems to help when professors admit they don’t have all the answers here. And sometimes that is leading to stricter policies on AI than professors would have adopted otherwise.
In many cases, students have spent more time using AI tools and thinking about these issues than faculty have, and they have strong feelings about it.
There are interesting examples of students being invited into broader campus efforts on AI as well, such as an interdisciplinary AI center at Babson College (a very business-focused institution), where students have been invited to help co-create the program and in some cases advise faculty about AI.
Q: I think one of the fundamentally interesting and enjoyable parts about writing, particularly journalism, is that each step of the way may deliver some answers, but at the same time, that work is always generating more questions. What questions do you have about AI and education right now?
A: It’s true that each interview seems to unlock new questions, which I love. One of my biggest curiosities right now is why so many teenagers spend time with AI “friends.” What do they get out of this, and how does it fit into their attitudes toward AI in education? Also, what are researchers finding works in using generative AI in education?
Q: I want to step back and ask about your experience as a professional who has been subject to significant shifts in the underlying conditions for people with your education and experience. You had many years of working as a journalist for established outlets (the Chronicle, EdSurge, etc.), while now you’re (I think) self-employed, cobbling together a profession out of an array of activities. Students are entering a world where those institutional jobs may never be available to them. This is a long way of asking: Based on your experience, what should students focus on learning in school to be able to thrive in today’s world?
A: My advice would be to look for any opportunity to practice collaborating with others—take on class projects or work with classmates and professors, or start a student organization or project. Go to office hours, raise your hand to lead a group project, look for a way to enter that class project into some national or collegewide contest or fellowship. That will give you something to talk about when seeking any job, and it helps you start building a professional network. Plus, it just makes it all more interesting.
Q: This is good practical advice for networking and uncovering possible opportunities. I would sometimes tell students about how random the opportunities that led to paying work were, entirely unplanned and without any sense of where they might lead. But I’m also interested in the less tangible or practical stuff: how you keep yourself engaged and interested in this work, even as the structures that make the work possible have eroded over time. What allows you to keep going, to succeed in doing work that seems fundamentally interesting to you?
A: I think a lot of it is curiosity, and a mix of wanting to find out what’s going to happen next and wanting to find new ways to tell stories.
Q: I’m curious—are you using any AI tools in your day-to-day work?
A: My approach is to experiment with using AI for various parts of making the podcast and to talk about the experience as part of the story.
For a recent episode, for instance, I tried NotebookLM for the first time to help prepare for an interview. I talked to Bryan Alexander about his forthcoming book, and he sent me the PDF of a review copy. I sat down and read the relevant chapters about AI in full (the old-fashioned way), since those were the sections my interview focused on. I didn’t have time to read the whole book before the interview, though, so I uploaded it to NotebookLM and had it generate a 12-minute podcast summary voiced by those cheery robot podcasters, and I listened to that to get the highlights of the book as a whole.
During the interview I confessed all this to Bryan (I felt a bit bad that I hadn’t read more of his book), and it turned out he had done the same thing: He ran a draft through NotebookLM to get a podcast summary and treated it like a first reader giving him feedback on what struck them. He said he then tweaked his text to make sure the points he cared about most were coming through for the AI robot. I found that fascinating, and I wouldn’t have learned about it if I hadn’t shared my own experience.
Also, I have AI make the artwork for each episode. The experience is sometimes rewarding, sometimes frustrating. Often I feel like I’m arguing with the bot, trying to convince it to add or remove certain details, and I would much rather work with a human. But these tools—I’ve tried Midjourney, Craiyon and the Nano-Banana tool in Gemini, and will keep trying others—can spit out new images in seconds, and they cost little or nothing. And I keep reflecting on what having these new bots could mean for creative professionals and for human expression.
Q: Lastly, I imagine you’re reading a lot of stuff related to these subjects. Is there a book or newsletter or writer or podcast or anything you’d recommend to help others be more thoughtful about today’s challenges around education and AI?
A: I definitely recommend the podcast Shell Game by Evan Ratliff, which just started its second season. This longtime journalist explores what it means to live and work among chatbots by diving in and trying the things that AI enthusiasts describe, with a magazine writer’s gift for letting us see the good and the bad of what that looks like in practice. In the new season he built a couple of AI agents that are helping him start a company, and you get to hear what these bots come up with during brainstorming sessions and what it is like to bring in these AI collaborators.
I’d say my favorite book on AI isn’t even new; it dates back to 2011. It’s The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive, by Brian Christian. I think the fact that it was written before the generative AI hype lets it present the ideas we’re now grappling with more directly.