Reflections on the Value of an AI-assisted Textbook
When a literature professor debuted an AI-assisted textbook last year, she hoped it would free up more time for students in her medieval literature survey course to engage in meaningful discussion about the material.
“I was convinced that I could do a much better job of [teaching] this required course, where students are not always interested and motivated,” Zrinka Stahuljak, a comparative literature professor at the University of California at Los Angeles, said. “It was a no-brainer that I could use [the AI-assisted textbook] to free up my own time and be a much more effective and approachable and accessible teacher.”
The textbook, developed in partnership with the learning tool company Kudu, was produced solely from course materials—including some rare primary sources—provided by Stahuljak, who edited the volume and has the ability to update it. Students can interact with the $25 textbook’s built-in chatbot and ask it for clarifications and summaries, though it’s programmed to prevent students from using it to write their papers and other assignments.
But some of Stahuljak’s colleagues—at UCLA and elsewhere—were skeptical of AI’s ability to enhance the course.
“This is truly bad and makes me wonder if we aren’t participating in creating our own replacements at the expense of, well, everyone who cares about teaching and learning,” one English professor wrote on social media at the time. Others characterized the move as “flat out stupid,” “absolute nonsense” and an idea that takes “the human out of humanities.”
Despite the criticism, Stahuljak used the textbook to teach the course last spring.
In an interview with Inside Higher Ed last month, she reflected on the backlash and benefits of using the textbook—and what may be holding other faculty back from embracing AI’s power to enhance teaching and learning.
(This interview has been edited for length and clarity.)

Q: Did you expect the backlash to the textbook, and what do you think drove it?
A: Knowing my colleagues in the humanities and the humanistic social sciences, I’m not surprised that they were [skeptical]—they can be the least experimental bunch. But I was surprised that some of my colleagues at UCLA were so skeptical because they’ve known me for years and don’t think of me as a person who follows fads. I was really shocked that they couldn’t see that this textbook was my creation; it was carefully edited, just as if it had been printed.
I don’t see how a traditional textbook that costs $250 and is out of date within two or three years would be in some way better than a custom $25 AI-facilitated textbook based on my material.
But I think some of that hesitation comes from an enormous fear many faculty have about AI. There’s a divide between people who are looking into the uses of AI and [those who are] philosophizing about the ethics of AI.
Universities are giving everyone access to commercial [AI tools], but most people don’t really know how to use them, and we’re not getting any guidance. And that’s playing into faculty fears that we’re losing control to these mega companies.
Q: What’s the benefit of using a custom AI-assisted textbook to aid teaching and learning as opposed to faculty and students using an array of available commercial AI-powered products?
A: All of these AI tutors are popping up. Students can learn on their own when it’s good for them. They can watch tutorials that all these AI tutors are providing. That’s great, but are the AI tutors based on a textbook and the material the professor has approved? So, do we not want to have a [custom] AI tutor as part of our classes and textbooks? I think we do. It’s better than some commercial version that has nothing to do with what you’re teaching or is pulling the information from the internet.
We’re losing that control when we are indiscriminately given ChatGPT or other commercial generative AI-powered tools.
Q: How did the students who took your class last year react to using the textbook?
A: When they first learned about it, a third were amused, a third were surprised and a third were indifferent.
[Compared to teaching the class without the AI-assisted textbook], engagement went up. I always have that front row of students that’s engaged, but I had several front rows that were engaged. Students started showing up for office hours, wanting to discuss their paper with me. I was shocked.
It also increased accessibility because the textbook has audio and video versions of the chapters; a number of students told us they were listening to it on their way to class or at the gym.
One student who had never used AI said he learned a lot from the chatbot built into the textbook. The chatbot was designed not to give students the answers. And the questions it asks aren’t about the date something happened, but about understanding arguments, logic, or cause and effect, so students had to really think those through.
Q: Did using the custom AI-assisted textbook free up more time for in-depth discussion like you had hoped?
A: Yes, because I didn’t have to teach the material that was in the chapters. Instead, I could summarize or bring up just the relevant points. Then I could discuss the primary sources with the students, which I’d never had time for before; that work would always fall to the teaching assistants.
The textbook also freed up time for the TAs. Instead of having to cover all of the primary sources during the two-hour discussion section, they could focus on one primary source for the in-class critical analysis and writing exercise. During the writing session, students get live feedback from the TAs. And we could see what students were doing and thinking, and we noticed they weren’t using ChatGPT to write their papers; they were actually thinking and writing and working.
Q: Despite the success of the textbook for your course, only a couple of other humanities courses at UCLA are utilizing it. How has the higher education sector’s approach to AI integration contributed to this reluctance to use AI?
A: Universities are going to look for a fast solution [to adopting AI]. Big tech companies can provide one at scale, but it probably won’t work for everyone. And when universities do these huge tech partnerships, they’re deciding for us—we can use AI, but within the parameters of what this tech company has decided.
The university is making faculty do the work [of coming up with innovative uses for AI], but we’re currently given only one option: ChatGPT. [In 2024, UCLA announced that it was incorporating OpenAI’s ChatGPT Enterprise into its operations, granting access to all students, staff and faculty and calling for project proposals utilizing the technology.]
We’re not given options from smaller companies. Why don’t we have an array of choices? Why are we not encouraged to explore different possibilities? We’re still a democracy, but democracy is not having just ChatGPT. Democracy is having more options.