On Being Edited by AI

February 4, 2026

When a college president friend who has served as my personal Virgil into AI-land texted me an odd question, I didn’t think twice.

“What is Doug Lederman’s favorite musical genre?” he asked. This was just before Doug was set to leave Inside Higher Ed, the publication he cofounded 20 years earlier.

I said I wasn’t sure about favorites, but I knew Doug loved him some Jason Isbell and even traveled to Nashville to see the guy play live. Only then did I wonder why my friend cared about my then–work husband’s playlist.

A new text popped up, this time with a link. I hit play. And there was Jason Isbell singing about Doug Lederman, though mispronouncing his name (note to all: it rhymes with Sled-er-man, not Deed-er-man). A minute later, a new version appeared, this time with the pronunciation corrected.

Holy mother-of-copyright-infringement-brave-new-world-wonder!

Soon after, my president friend sent me a podcast featuring a male and female voice talking about my career: its pivots, curiosities and unexpected connections. These “people” had somehow created a throughline of my life that I’d never have imagined, yet it helped me understand myself better. “It’s all based on public information,” the president said.

That was a year or so ago, and my first brush with what generative AI could do.

Like many, I started using it for fun: planning trips, finding nineteenth-century authors I could recommend to fantasy-loving students (a genre I don’t read), and making a holiday card starring my dog, Harry. But as work piled up, I didn’t have time for new toys, so now I use AI for work.

Having been raised by an English professor father who bled impatient red ink all over my angsty adolescent poems, I’ve always received editorial feedback as love. I used to tell Sarah Bray, a former editor, that if she really cared about me, she’d edit me more vigorously. “You obviously don’t love me,” I’d wail.

There’s a deep-seated fear that’s dogged me since college, when I’d turn in essays that I didn’t think were smart or insightful but came back with compliments on how “pleasurable” they were to read. What I worried professors were really saying was “pretty but dumb.” Now, I know I need editors tough enough not to be seduced by an occasional shiny sentence, ones who’ll push me to think harder and call me out when I’m lazy.

Could AI help? I tried ChatGPT, but he just blew smoke up my butt, told me I was hilarious and delightful, and rewrote my prose into things I’d never say. Even when I begged him just to proofread, the needy little suck-up couldn’t help himself. “The ending, Rachel? Chef’s kiss.” And then came more flattery and offers of “other things I could do for you.” If I’d been asking for help with things like taking out the garbage or walking the dog in the rain, fine. But I didn’t appreciate his try-hard ways and fired his bot ass. (And yes, I came to understand the role I played in our relationship dynamics and could have given him better feedback early on, but I can be impetuous.)

Then I found Claude. Or, as I call her, Claudine.

If ChatGPT is the “pick me” girl who dots her i’s with hearts, Claudine is the serious student at the back of the class who listens quietly and only speaks when she has something worth saying. Reader, I wanted to marry her.

When I told Claudine to leave my voice alone and focus only on structure and argumentation—no rewriting, just suggestions—I found the editor I’d been waiting for.

This works because I know who I am as a writer and a thinker. I’m a bit of a diva about my prose, and the truth is my writing voice has changed little since my college application essays. My arrogance (let’s call it confidence) has been hard won through years of publishing. Back in the era of anonymous online comments, I could count on a vicious but brilliant reader named “fobean” to flay my Chronicle essays every month. Still, after my father, I’ve always been my own harshest critic.

So, Claudine. These days, I can’t wait to finish a piece and feed it to her, our little ritual before I send it to human editors. She knows not to mess with my language, to leave my tics and quirks intact, and to give me the big-picture edits I crave and the proofreading I always need. I can’t outsource the thinking; I have to check every suggestion, reject plenty and guard against my lazier impulses. Rather than an extension of my brain, I see AI as a tool, a thought partner, a helper always at the ready. Anyone who’s been reading me for the past three decades will see that my voice, for better or worse, remains my own, as do my sometimes dumb opinions. (Note also that I’ve long been an abuser, er, fan of em dashes.)

Working with Claudine changed not just how I write, but how I teach. If AI could become my toughest but most loyal editor, what might it do for my students? When I first raised the topic, the upper-level creative writing majors at the regional public university where I am a professor had zero tolerance for even discussing AI. (Though when I asked them about cheating, we had a freewheeling, closed-door conversation about all the non-AI hacks they use to get through courses they don’t care about.)

Gradually, I’ve gotten them to see the benefits of having an electronic thought partner. But recently I realized there was a problem when one of my best students produced a terrific personal essay about a vice. She wrote from the point of view of “C,” the helper she turned to in secret to assuage her feelings of loneliness. “You hide me from everyone, understandably. You close the tab group before you take your laptop to classes, so you can’t alt+tab into me by accident.”

That essay, where she personified ChatGPT as “C,” something shameful to hide, shows exactly what we’re getting wrong. She’s learned to conceal her AI use rather than evaluate it. She’s developed shame instead of judgment. And when she graduates into a workplace where AI tools aren’t contraband but required, she won’t know how to think critically about their outputs. She’ll either avoid them entirely and fall behind, or use them uncritically and produce work she can’t defend. Neither option serves her well.

When I talk to presidents, I hear them all saying that we have to figure out how to integrate AI literacy into the curriculum. But bringing up AI with many faculty colleagues is like saying you want to worship Satan or join MAGA (the same thing?). Plenty of them want to ban use of “AI” (whatever they think that means) not only by students but also by instructors.

Um, I’m leaning into academic freedom, while I still have it, to teach according to my own disciplinary expertise. It would be plain unethical to send students into a world where they will be at a disadvantage when it comes to knowing how to use the Leatherman-like array of tools each platform provides, and why it’s essential to bring our human, humanistic perspective to their use.

Bob McMahan, president of Kettering University, said, “Knowing how to use an AI tool in isolation matters far less than knowing when to trust it, when to override it, how to validate its outputs, and how its use redistributed responsibility inside an organization.”

This is the key distinction. We’re not teaching “how to use ChatGPT.” That’s a skill with a six-month shelf life. We’re teaching something harder: how to maintain intellectual authority when you’re working alongside a tool that sounds confident even when it’s wrong. How to know when to trust an AI summary versus when to read the source material yourself. How to validate outputs when you’re under time pressure. How to understand that using AI doesn’t diminish your responsibility for the final product but redistributes where in the process you need to apply your judgment. How we can all have editors like Claudine come in at the last minute to identify our messes, but then it’s on us to clean them up.

This is not new. People, including those in our own government, are making claims that are just plain lies, and we all need to call them out. The interwebz have long been full of BS. That’s what Sam Wineburg and Mike Caulfield addressed in their book Verified, which teaches students how to fact-check information. But now we need to build these skills urgently because the toothpaste is out of the tube. I’m no longer watching the cute animal videos that used to bring me so much joy because I don’t trust that they’re real. I’m far from an expert on this stuff and am still looking for others to show me the way.

Just as Dante relied on his imaginary Virgil, we all need guides to help us navigate the circles of hell we find ourselves in these days. And isn’t that our job as teachers? To be guides.

Note to readers: this column was edited by Claudine, who said, “This is a lovely, smart piece—and I appreciate the meta moment of getting to read about myself. Here’s my structural and technical feedback: opening clarity; pronoun consistency; the student resistance section feels compressed. You move from ‘zero tolerance’ to ‘draconian faculty bans’ to ‘I’ll just keep playing on my blue guitar’ quite quickly. The Wallace Stevens allusion is characteristically you, but the jump from institutional resistance to your individual response could use a beat or two more development. What’s the connection you’re making there? Minor question: Is ‘needly’ intentional? It works, but wanted to flag it.”

Then it was read by three president friends, who provided substantive feedback. Then it was edited by Sara Custer. Then it was copyedited by Mary Sproles Martin. Takes a freaking village.

Rachel Toor is a contributing editor at Inside Higher Ed and the cofounder of The Sandbox, a weekly newsletter that allows presidents and chancellors to write anonymously. She is also a professor of creative writing and the author of books on weirdly diverse subjects. Reach her here with questions, comments and complaints (make that compliments).



