Colby College Goes All In on AI
Since 2022, there’s been a surge in the number and types of applications using generative AI, but not all tools are the same. So how can faculty, staff and students learn to identify the differences and determine when it’s appropriate to leverage these tools?
Colby College developed a platform, called Mule Chat, that allows users to explore several large language models, including ChatGPT, Gemini, Claude and LLaMA. The platform provides a safe on-ramp into generative AI usage and relies on student tutors to disseminate information to peers.
In the latest episode of Voices of Student Success, host Ashley Mowreader speaks with David Watts, the director of Colby College’s Davis Institute for Artificial Intelligence, and Michael Yankoski, Davis AI research and teaching scientist, to learn about the college’s AI institute and how Mule Chat works.
An edited version of the podcast appears below.
Inside Higher Ed: Can we start the conversation by talking a bit about what AI at Colby College looks like? What is the landscape you’re working with and how are you thinking about AI when it comes to teaching and learning?
David Watts: I am new to Davis AI, as we call it at Colby, but the [Davis AI] Institute has actually been around since before ChatGPT, so Colby kind of had a pioneering approach.

David Watts, director of the Davis Institute for AI at Colby College
Colby is a small liberal arts college, and they had the vision that this was going to be around for a while. And rather than, as most institutions were doing, sort of keep it at bay or ban it from campus, Colby dove in and wanted to engage with it and understand how it is going to impact education.
I spent most of my career in industry, mostly in research and development, and when I wanted to make the jump over to academia, I wasn’t expecting to find that small liberal arts colleges had done this. When I saw what Colby had done, I was really drawn to it and came over. So I’ve really loved what has been going on and what continues to go on at Colby with the Davis Institute for Artificial Intelligence.
Inside Higher Ed: Michael, your role puts you directly in connection with faculty when it comes to integrating AI into their classrooms or into their programs. Can you talk about what that looks like and how maybe that looks different at a liberal arts institution?

Michael Yankoski, research and teaching scientist, Davis Institute for AI at Colby College
Michael Yankoski: One of the most amazing aspects of the Davis Institute for Artificial Intelligence here at a place like Colby is the liberal arts approach that the institution as a whole is able to engage with.
That means that we’re able to facilitate conversations from a multiplicity of disciplines and bring faculty together from different approaches across the divisions of the college, from the STEM fields to the humanities to the social sciences, and have really productive, very generative conversations about ways to engage with artificial intelligence, drawing on the shared learning and shared knowledge of people who have been really pioneering in the area. To be able to say, “How can I integrate generative artificial intelligence with my pedagogy? How can I help students think about how to engage these technologies in a way that is beneficial for their education and empowers them?” And then there’s the research side.
Many faculty with whom we work at the Davis Institute are exploring ways to integrate artificial intelligence in their research program, and to say, “Is there a way that artificial intelligence can help me accelerate my research or take my research in new directions?” The opportunity to bring people together to discuss that and to facilitate those conversations across the disciplines is one of the best aspects of the liberal arts approach to artificial intelligence.
Inside Higher Ed: Does Colby have an institutional policy for AI use, or what appropriate AI use looks like?
Watts: It’s a moving target, and anyone who tells you they have it all figured out is probably embellishing. But one of the things we did was make sure we engaged faculty; in fact, we started with faculty, then we engaged administrators, students and general counsel, and evaluated what the challenges and downsides are. And we made sure that we built what we call guidelines rather than policy.
The guidelines talk through the dos and don’ts but also leave enough flexibility for our faculty to think through how they want to engage with AI, especially since AI is a moving target, too. As we grow and learn with our faculty, we adapt and adjust our guidelines and so they’re out there for everyone to see, and we will continue to evolve them as we move forward.
Inside Higher Ed: Can you introduce our listeners to Mule Chat? What is it and how does it work on campus?
Watts: Michael has been here and was one of the originators of creating Mule Chat on campus. And so he can tell you a lot of the details and how it’s been working.
But what I loved about what Michael and the team did, and it was a collaborative effort, was to create, I’ll call it an on-ramp. We were working towards moving the needle from banning AI, as one extreme, to engaging with AI and creating a tool that allowed faculty, students and staff to all easily engage with multiple tools through Mule Chat.
It lowered the barrier to entry to AI and gave us an on-ramp for people to come in and start seeing what the possibilities are, and it has worked brilliantly.
Yankoski: The idea behind Mule Chat originally was to provide a place for students, faculty and staff to begin to get experience with and understanding around generative AI. To provide a space where folks could come and understand a bit more about, what are these tools? How do they work? What are they capable of? What are some of the areas we need to be aware of, the risks and the best practices, and how can we provide this on-ramp, as David described, for people to be able to engage with generative artificial intelligence?
This is about student success, empowering students to understand what these technologies are, what they’re good at, what they’re not good at. And then also, one of the key principles here was equity of access. We wanted to ensure that anybody on Colby’s campus, regardless of whether they could afford one of the premium subscription services, was able to get access to these frontier models and to understand how to then do the prompt engineering work, and to then compare the kinds of outputs and capabilities of some of the frontier models. And so really, the core sort of genesis and driving desire for the creation of Mule Chat was to provide this on-ramp that would empower student success, allow equity of access, and also would provide a safe and secure place for people to be able to engage these technologies and to learn.
Inside Higher Ed: Can you describe the functionality of Mule Chat? For someone who has never experimented with LLMs, what does it look like or feel like to engage with Mule Chat?
Watts: You touched on something really great there, because that was part of the idea. We introduced multiple models into Mule Chat so that people could compare and get an idea of what it’s capable of and what it’s not capable of.
I’ll give an example of a faculty member we’re working with right now, a professor of East Asian studies, who started with Mule Chat, engaged with it in preparing their classes, realized what the capabilities were and started doing more with it with their students. The students then brought interesting ideas about what else could be done and pushed beyond even the limits of Mule Chat. Davis AI could then help them bring in capabilities above and beyond Mule Chat, for example, not only using old archives in the teaching of East Asian studies but also adding video capability, and in fact even creating new videos for some of the research they’re doing now. So it is exactly what Michael was saying: an on-ramp that then opens up the possibilities of what we can do with AI in higher education.
Yankoski: I think the real value of the Mule Chat interface is that it allows people to compare the different models.
Folks can use prompt engineering to compare the outputs of one model and then put that alongside the outputs of another model and be able to observe the way that different models might reason or might do their inference in different kinds of ways.
That side-by-side comparison is a really powerful opportunity for people to engage with the different models and to experience the different kinds of outputs that they create. To build on what David was saying, the ability to put other tools [like videos] inside the Mule Chat platform allows for deeper research into particular areas. For example, we have a tool that we built called Echo Bot.
The Colby student newspaper is called the Colby Echo, so we’ve been able to bring all the archives of the Echo into a tool that allows students and faculty researchers to engage with those archives and chat with the entire archive of the Colby Echo. We’ve been working closely—and this goes back to the liberal arts approach—with different faculty across campus, as well as the college libraries, to bring this tool online and make it available within the Mule Chat system.
Inside Higher Ed: Let me know if you can build me an IHE bot, because I can never find anything in our archives. I could really benefit from something.
Watts: We can brainstorm on that.
Inside Higher Ed: Great, we’ll talk about licensing later.
I wanted to ask: it seems there’s a new AI tool that pops up every other day. So when you’re talking about comparing different tools and thinking about what might be most relevant for students, how often are you scouting the landscape to understand what’s out there and relevant?
Watts: That’s a great question, and actually extremely important that we do that.
Not only are we reaching out, finding, reading, learning, attending conferences and helping to create conferences ourselves that bring in people and experts with different perspectives, but we also have lots of people on campus who have their own ideas. People come to us regularly with, “Oh, look at this cool tool. We should use it for this thing on campus.”
And that’s when we educate people about some of the potential pitfalls we have to watch out for, talking about guardrails when you’re bringing in a new capability, just as you had to think about when bringing in new software. But I think it’s even more imperative that we’re very careful about what AI tools we bring onto campus. You’re absolutely right that there are tons of them, all with different capabilities. But one of the things we try to teach is that there’s a full spectrum: the great, the good, the bad and the ugly. You have to think about that entire spectrum. And that’s one of the things I loved about coming to a liberal arts college: you have multiple perspectives, coming from all forms of disciplines in the humanities, the arts, the natural sciences and the social sciences, and all can be engaged across AI.
Yankoski: I think that’s what’s so unique and really powerful about the Davis Institute for Artificial Intelligence approach. When we work with faculty and students, if a faculty member or student has an idea they want to explore, we have structures that allow for technology grants, so faculty can come and propose the use of a new tool to advance their teaching or their research.
Then that’s a great opportunity to engage with that faculty member and perhaps their research assistants, and work with those students and that faculty member to explore the possibility of using that tool. Each faculty member knows their domain so much better than we do. As the core Davis AI team, we’re able to work with that faculty and those students to better understand the use case, better understand the tools that they want to engage, and then work with them to consult and to create a pathway forward. That’s an incredible opportunity as well for the students to understand, how do we think about the security of the data? How do we think about the processing pipeline? How do we think about the best practices with regards to utilizing artificial intelligence in this particular domain?
Really that’s about student empowerment and student success as they get ready to transition out of college into an economy where increasingly expectations around knowledge and the ability to utilize and to vet artificial intelligence are only going to increase.
Inside Higher Ed: How are students engaged in this work?
Yankoski: One of the most intriguing aspects of Mule Chat has been that students have been really leading in teaching and empowering other students to utilize the tool, to understand the prompt engineering aspects and to understand the different models.
The student leaders have been working with Mule Chat and then actually teaching other students, teaching faculty and helping lead the sessions, as well as working on their own projects within Mule Chat. So it’s been a really strong and quite incredible platform for student engagement and student empowerment as students learn from one another and then learn how to teach these tools to their peers.
Watts: That’s absolutely a huge part of what we did. Even though students come first, we started working to move the needle with faculty first on purpose, with students in mind. Once you have enough buy-in from faculty, you can start engaging the students, and we’ve been doing a lot of that.
Then what’s beautiful, the magic happens when the students start coming up with thoughts and ideas that grow in ways that faculty haven’t thought of. Because remember that a lot of this is new to faculty as well.
So we actually then will identify key students that we have been working with and actually hire them on board as Davis AI research associates that then help us continue to move the needle, because there’s nothing better for students than to hear from other students about what’s possible. And the same goes for faculty, by the way. So, you know, Michael was mentioning a little bit about our strategy with faculty and how we engage them. But a part of what we do is faculty sessions. We give them creative names like “Bagels and bots,” and we include food and then we have those sessions where faculty talk to faculty. We do the same with the students, so students can talk to students. And it’s just wonderful to see the magic that happens when that begins to grow organically.
Inside Higher Ed: What has the reception been to Mule Chat?
Watts: Most people were skeptical [of AI] early on; most were in the mode of “push it away.” I think that drove some interesting behaviors in faculty and students.
So a big part of what we’ve been trying to do is essentially drive towards AI literacy for all. And when I say all, it’s an interdisciplinary approach. We’re looking across the entire campus, and so all students in all departments are what we’re driving towards. Now, you correctly point out that there will always be skeptics. I will strive for 100 percent, but if we asymptotically approach that into the future, I’ll live with that.
The goal is to prepare students for the lives they’re going to lead, lives that have been transformed by AI; that touches everybody. One of the cool things is that we’re giving out grants, on multiple levels, for faculty to engage with AI and come up with ideas, and those faculty are now coming from everywhere. We have art professors, writing professors, East Asian studies professors, professors from government, all of them engaging. So we’ve been able to move the needle quite a bit, and a lot more people are a lot more receptive and open to it on campus, which is great.
Inside Higher Ed: You mentioned that Colby has a faculty-led approach, but sometimes that means that students from specific majors or disciplines might be less exposed to AI than others, depending on who their faculty are. It seems like you all are taking a balanced approach, not only encouraging enthusiastic AI entrepreneurs but also working with the skeptics.
Watts: It’s absolutely critical that we work on both ends of that spectrum. We’re driving great innovation, and there are great examples of research right here on campus doing wonderful things in an interdisciplinary way.
We just won an NSF grant for ARIA, an NSF institute looking at AI assistance in mental health, because that’s one of the most challenging spaces for how the models interact with people with mental and behavioral health challenges. It’s a perfect example of our interdisciplinary approach, with a professor from psychology working with a professor from computer science to go tackle these challenging areas. And I think that’s one of the things that Colby has done well, is to take that broader, interdisciplinary approach. Many people say that word now, but I think the liberal arts are primed for leading the charge on what that’s going to look like, because AI, by its nature, is interdisciplinary.
Inside Higher Ed: What’s next on campus? Is there any area that you’re all exploring or looking to do some more research in, or new tools and initiatives that our listeners should know about for the future?
Watts: We’re consistently evaluating new tools and bringing them in. What we’re trying to do is let it grow based on need, as people explore and come up with ideas.
I mentioned the video; we’re now enabling video capability so we can do some of that research. It also opens up more multimodal approaches.
One of the approaches in the ARIA research, for example, is that we want to be able to detect and therefore build context-aware assistance that produces better results for everyone. Mental and behavioral health is probably one of the most difficult challenges, and if we can solve that, we can also address some other areas, for example, underrepresented groups who are left out of training, which can lead to challenging behaviors.
I’m really excited about all of those possibilities and the areas they enable. We talked about access; we can also talk about accessibility.
We have the Colby College Museum of Art on campus, and one of the faculty in computer science is exploring accessibility options using AI with a robotic seeing-eye dog. If someone who is blind or visually impaired wanted to visit the museum, they could interact with a seeing-eye dog like the ones they’re used to, but this seeing-eye dog might have more capability to communicate about what it’s seeing in a museum setting, for example.
So I’m really excited about that type of research: How do we really benefit humanity with these types of tools?
Inside Higher Ed: One thing I wanted to ask about is resources allocated from the university to be able to access all these tools. What investment is the college making to ensure that students are able to stay on the cutting edge of AI initiatives?
Watts: That’s absolutely critical. We want to make it no cost and accessible to our students, but it still costs. So [it’s vital to] make sure that we have funding.
We were very lucky to receive a Davis endowment that enabled us to build the Davis Institute. That was huge; you can think about some of the challenges with federal funding, so having an endowment we could draw on to really build strong capabilities at Colby College was critical. But you’re touching on the fact that we’re going to need to continue to do that. That’s where, for example, the NSF grant and other grants we will continue to pursue will help us keep growing our impact and our value as we head into the future.