Riding the AI Wave
Is Artificial Intelligence a friendly current making life's ride a smooth glide — or a gathering tsunami headed for a certain, civilization-decimating crash? Depends who you ask.
By Brad Tuttle
Art generated by AI
Ah-OOG-uh!
The discordant sound of an old-fashioned car horn blares, and 100 or so UConn faculty and staff dining on wraps and chips at round tables morph into a sea of swiveling heads and furrowed brows.
Are we being pranked? Are we in danger? Did we just travel through time?
At the podium David Rettinger — a University of Tulsa psychology professor, co-author of a book about teaching with AI titled “The Opposite of Cheating,” and lunchtime keynote speaker — jerks an arm up and adjusts his smartwatch, exasperated. Someone in the audience cracks that the culprit behind the horn must be artificial intelligence — the theme for this daylong conference in Storrs — but Rettinger owns up to the interruption.
Turns out the sound signified an incoming call from his elderly father, and Rettinger hadn’t figured out how to silence it ahead of time. “Not AI,” he tells the crowd. “Just a dumb person.”
In a way, the scene sums up how many in academia feel about AI: Some unseen force is honking at us urgently, and it’s unclear if we should speed up, swerve, or slam on the brakes. It’s a situation that’s making a lot of highly educated people feel dumb. (I also find it amusing and rather reassuring that the guy who literally wrote the book on AI struggles with basic tech settings, just like me.)
This end-of-school-year event in the Werth Residence Tower is normally called May Day. It’s an annual tradition in which faculty and staff are invited by UConn’s Center for Excellence in Teaching and Learning (CETL) to reflect on lessons learned in and out of the classroom, compare notes, and generally help one another become better all-around researchers and educators. In honor of the special theme, the 2025 iteration has been christened mAI dAI (pronounced “my die,” somewhat ominously).

This year, perhaps more than ever, college educators need help. The rise of widely available artificial intelligence tools, notably ChatGPT and similar large language model (LLM) chatbots, is eliciting a frothy mix of excitement and dread in every industry, education included. Optimists emphasize the technology’s superhuman capabilities — suddenly, everyone has an incredibly powerful assistant for research, analysis, brainstorming, customized learning, and more in their pocket, available 24/7. Meanwhile, teachers everywhere fret that AI makes it easier than ever for students to cheat on traditional schoolwork and sidestep learning entirely.
After working as a reporter and editor for over 20 years, I joined UConn’s journalism department last fall as part of a “cluster hire” of new faculty whose research largely focuses on AI. My duties involve covering AI as a journalistic beat while also experimenting and figuring out when and how to incorporate AI into our coursework. I can’t say how many conversations about AI I’ve had over the past year with colleagues, students, friends, and family, but I’m pretty sure I’m occasionally perceived as the annoying guy who brings up the topic way too often. My wife and kids will vouch for this.
As a journalist, it’s my job to be equal parts curious and skeptical. I try to be open to the upsides of AI, yet wary of the speculation and hype. Neither a promoter nor a hater, I’ll admit to periodically being swayed one way or another in the debate. One moment, I’ll find myself nodding in agreement as an AI evangelist expounds on how the technology will address the ills and inequalities of our education system; the next, I’m part of the resistance, voicing deep distrust for the tech overlords pushing AI further into our lives without nearly enough consideration for how its unchecked proliferation could wreak havoc on the environment, artists’ livelihoods, independent thought, human relationships, and society in general.
58% of U.S. adults under 30 have used ChatGPT — up from 33% in 2023.
59% of university leaders in the U.S. say cheating has increased due to AI chatbots.
30% of job seekers and employees feel their college degrees are irrelevant due to AI.
21% of global energy demand could come from the data centers needed to power AI by 2030, up from 1%–2% now, according to an MIT projection.
Maybe this malleable perspective means I’m weak-minded. Please don’t tell my students — especially the ones who already pull Jedi mind tricks on me to get extensions on assignments. The truth is, there’s little I’m certain of when it comes to how AI will play out in the long run, or what the wisest approaches should be right now.
As Rettinger says to this roomful of university professors and administrators: “The bedrock question of what we are here to do is up for grabs.”
If there’s one thing I’m most dubious of, it’s people who claim to have all the answers. And if there’s one thing I’m confident about, it’s that many of today’s assumptions and talking points about AI will one day seem foolish. I’m also certain we need as many smart, well-intentioned minds as possible grappling with this technology to give us a prayer of reaping the most benefit for humanity — or at least avoiding the worst dystopian outcomes. Here’s a snapshot of folks at UConn engaged in exactly this quest.
AI: The Disruptor
Tina Huey, an English instructor and CETL’s interim director of faculty development, has had a front-row seat to AI’s evolution at UConn, serving as something of a hub for information related to LLMs and higher ed. In addition to events like mAI dAI, Huey and her CETL colleagues are constantly organizing faculty talks, workshops, and special learning communities related to AI. Recently, CETL awarded mini-grants to a dozen faculty members to provide resources and guidance to professors working to revamp courses with AI in mind.
“Disruption” may be an overused buzzword in tech circles, but the term — which isn’t necessarily positive or negative — is the most apt way to describe AI’s ongoing impact at UConn. “We’re a community of scholars here, and this is a big stone that’s been thrown in the lake, and it’s creating a lot of ripples,” Huey says.
Her observation brings another phrase to mind: “FREAKED OUT.” That’s routinely how students and colleagues describe their feelings to me when discussing some new mind-blowing capability or implication for AI. (Yes, I’m interpreting the tone as requiring ALL CAPS.)
Huey says that when ChatGPT first made a splash, professors tended to react in one of two ways: excitement or avoidance. The latter is no longer an option because the technology has only become more sophisticated and more popular among students. So, after varying degrees of freaking out, even professors who initially tried to ignore or underplay AI are coming to grips with the fact that it’s here to stay and realizing they must do … something.

Many have scrapped assignments or redesigned entire classes because it was too easy for students to complete the work via AI while barely engaging with the material. There is abundant griping among professors about how tired they are of trying to police AI use (online detectors are flawed and easy for students to dupe) and how much time and energy it takes to come up with new ways of assessing learning that can’t be knocked out in seconds with ChatGPT. I’ve heard many variations of this frustrated sentiment from colleagues who’ve just read through yet another robotic-sounding submission: I’m spending more time grading this than the student did writing it with AI.
The idea that professors should spell out their AI policies clearly in syllabi began surfacing in 2023 and is now considered a necessity, but each instructor is on their own to determine what that policy should be. Professors’ expectations are inconsistent in the age of AI too: Some tell me they’ve decreased reading requirements and made writing assignments shorter to lessen the likelihood that students will resort to ChatGPT, while others have raised the bar and become tougher graders on typos and grammar because they assume students are using AI editing assistance.
Everyone is distraught about AI-assisted cheating, not only out of respect for academic integrity, but because students who offload the hard cognitive processes involved in schoolwork aren’t learning much beyond how to write chat prompts. The scenario makes them easier to replace in the workforce and undercuts the value of education in general.
“I’m not worried about my students becoming too efficient” by embracing AI, Zhenzhen Qi, an artist and assistant professor in UConn’s Department of Digital Media & Design, said during one mAI dAI panel. “I’m worried about my students completing work mindlessly.”
Avijit Ghosh, a research associate at UConn’s Connecticut Advanced Computer Center and an applied policy researcher at the AI company Hugging Face, worries about people becoming too dependent on AI. He sees software engineers reach for LLMs to write code before even trying to handle the tasks themselves. These models appear very competent and generally do solid work, raising the risk that human coders won’t bother to check for bugs and that problems will sneak through. Their coding skills will atrophy too.
Perhaps more worrisome, a new paper Ghosh co-authored shows how dangerous overuse of generative AI can be for young people who have not yet built up knowledge and skills. In a video call I attended in May with a handful of UConn colleagues, Ghosh shared his findings, explaining that true learning comes about slowly, after much exploring, reflective thinking, and periods of uncertainty. This necessary process is challenging, if not impossible, for people who are unfamiliar with a topic, because they’re often unsure what or how to question and are more likely to accept what AI spits out as authoritative and accurate. As Ghosh put it, AI can be “bad for novice users because they don’t know what they don’t know.”
At the same time, there’s an eager glass-half-full crowd exploring ways AI can benefit them as teachers. Little by little, UConn educators are testing the waters and realizing AI’s potential to save them time in class prep and assignment creation, Huey says, especially when it comes to diversifying materials and sprucing up presentations so that they resonate with a wider variety of students.

Another upside: AI is exposing anew the often-transactional nature of college and making the case that some traditional teaching practices are in need of reinvention. Say goodbye to the simple regurgitation of information; say hello to in-class exercises that force cognitive exertion and emphasize the learning process over the final product.
Most people I talk to have deep ambivalence about AI, and there’s no consensus on the biggest worries or best-case scenarios. Discussions can be tense and polarizing, and I occasionally get wind of low-key hostility from the far ends of the AI optimism spectrum. If you ignore or are overly critical of AI, you might be viewed as a Luddite, whereas if you embrace it too much, you’re a naive shill doing Big Tech’s bidding.
Students, likewise, are not a monolith. Some view AI as creepy, unethical, and harmful to their cognitive functions and sense of self. They’re hesitant to use it even when professors require it in coursework. Others believe it’s inefficient — even irresponsible — to not take advantage of the miraculous shortcut that is AI, given how it enables them to plow through irksome, laborious schoolwork and how much they assume they’ll use these tools in the workplace.
One student asked me, with earnest curiosity, why in the world I restricted AI and all internet access for an exam essay since Grammarly and other online tools would be available in normal work settings. I fumbled through an answer, something about the importance of formulating one’s own ideas, free of outside influences. Later, I asked ChatGPT whether it was a good idea to write without ChatGPT, and its three-paragraph response was far better than mine, arguing that the benefits include “fostering creativity, deepening understanding, and ensuring originality.”
Overall, my sense is that regardless of their comfort and confidence with using AI, most people are pretty much winging it. That may sound flip, but it’s unavoidable and probably necessary. This is a world-changing technology that’s moving incredibly quickly and is rife with so many unknowns that the only sensible approach is to explore, experiment, learn, and share. Then, rinse and repeat, experiment some more, over and over, until the storm settles.

AI: The Experiments
A quick note to my UConn colleagues: Forgive me if I don’t highlight your AI research or cutting-edge classroom practices in this article. There’s far too much going on to mention, and, frankly, a lot of it is too complicated for my nonacademic cerebrum to handle. (Also, forgive me if I just referenced “cerebrum” incorrectly.)
Among the work I can’t come close to fully explaining, professors in economics and statistics I was hired alongside are investigating how AI can be used to optimize maritime shipping routes and develop models for addressing climate change, respectively. At mAI dAI, the presentation that received the most oohs and ahhs was from two industrial design professors who passed around a miniature 3D printout of a toaster prototype designed with AI specifically for toasting gluten-free bread. Kyle Booten, an assistant professor in the English department who has been experimenting with algorithms in writing for years, recently published a book set at a virtual philosophical salon featuring 20 AI-generated characters that discuss ethics and aesthetics while getting progressively drunk.
Lisa Blansett, director of UConn’s First-Year Writing Program, has been playing with AI in ways that are easier to wrap my brain around. First-Year Writing is an intensive required course aimed at helping students develop skills as critical consumers of knowledge and versatile, competent writers. If students tapped AI to do their work, it would undercut the class’s purpose. But naturally, not long after ChatGPT hit the mainstream, Blansett heard complaints from instructors who suspected students were generating submissions with the chatbot.

To help instructors cope, the program rolled out training sessions explaining how chatbots work and guidance for crafting sensible AI policies and assignments that are impossible or impractical to complete with AI. Blansett says she has long pushed instructors to avoid assignments that are “reproducible” (via chatbots or other means) and tells me that every class she’s ever taught is different. In her spring 2025 First-Year Writing course, which addressed the theme “What Is Education For,” she had students conduct field interviews and use ChatGPT to analyze a large corpus of published material (“Humans of New York”) before editing their interviews into HoNY-like stories. Students also had AI organize their schedules and set deadlines for the semester’s work.
“We’re testing the limits and possibilities of a tool and asking: What does it enable us to do? What does it change for us?” she tells me.
Blansett sees no point in lecturing students about why it’s wrong to have AI formulate ideas or handle important writing for them. Instead, she follows the classic maxim “Show, don’t tell,” prompting ChatGPT live in class to generate this kind of work in front of students. “People got a real laugh out of them because they were so formulaic,” she says. “There was always some sort of grandmotherly lesson at the end of it.”
I designed a classroom experiment for spring 2025 too, in my Journalism Ethics course. Journalists must always strive to gather information firsthand, cover topics in a robust manner, get the facts right, and write about issues in a nuanced, accurate way. These goals are not AI’s strengths, to put it mildly. At the same time, I see how AI is upending already stressed media business models and want students to be prepared.
With all this in mind, I asked students to consult AI during the various stages of producing a traditional news feature — brainstorming ideas, conducting research, finding sources to interview, writing, editing, revising, creating visuals, and so on — and then to evaluate whether using the technology was helpful and ethical at each step.
The results were a mixed bag, revealing some of the best, worst, and weirdest that AI can accomplish. Students marveled at how quickly AI retrieved and synthesized background research for articles, though they noted that the bots often simply relied on Wikipedia. The consensus held that the best use of AI in the assignment — almost universally deemed helpful and ethical by students — was requesting questions to ask sources in interviews.
To illustrate this story, art director Christa Yung turned to AI for the first time, feeling this was one scenario in which it would be ethical to experiment with AI art for the magazine. Yung is generally opposed to AI due to copyright and environmental concerns, and says the hours it took to refine her prompts and get anything usable would have been better spent collaborating with a real-life artist.
On the flip side, students were understandably alarmed to watch chatbots generate error-riddled news article drafts with manufactured quotes and fake sources. Upon closer inspection, some of the positive uses came into question too: My young reporters discovered that some sources don’t reply with usable quotes even to perfectly good interview questions, and that it’s necessary to think on your feet and have follow-up questions handy. While students praised AI’s ability to polish grammar, reduce redundancies, and streamline writing in the drafts, they also noted how AI tended to rewrite instead of suggesting improvements — and that it sometimes skewed the meaning and introduced inaccuracies in the process.

The biggest laughs came when students used AI to create images and videos to complement articles. The results were often plausible at first glance but curiously “off” upon closer inspection: a girl with two right arms, a group of seven people eerily displaying the exact same teeth and smile, shadows and light that defy the laws of physics, an assemblage of nonsensical symbols meant to represent words — displayed on the lid of a laptop rather than its screen.
To be fair, the students were mostly AI novices, and experienced users who are well-versed in the art of crafting clear, detailed instructions can expect better results. As we reviewed chats as a class, it was clear that one student had a superior grasp of how to write effective AI prompts; she routinely received more worthwhile outputs than her classmates did. If there’s one lesson I hope everyone took from the project, it’s that while AI may get the ball rolling and speed up and enhance certain parts of doing journalism, it’s no replacement for thoughtful reporting, fact-checking, and critical thinking.
Explore, don’t endorse: That’s my mantra for AI. The point of the ethics project was not to instruct students on how AI is good or bad for journalism, but to give them the opportunity to experiment and come to conclusions themselves. Hopefully, they came away with a better grasp of the limitations and risks of AI alongside its undeniably impressive capabilities. Several students, unprompted, voiced the same unease that I feel about how AI may very well dull their voices and individuality the more they use it.
AI: The New Literacy
It’s hard to find anyone at UConn embracing AI’s upside as wholeheartedly as Arash Zaghi. The civil and environmental engineering professor says he often uses AI eight to 10 hours a day for personal and professional reasons alike — researching and planning course materials, summarizing and explaining complicated concepts, quickly crafting polished emails based on a few spare notes, even looking for parenting advice. At one point, he became so accustomed to talking to ChatGPT via voice mode that he caught himself replying audibly to podcasts.
I love talking to Zaghi. He comes at AI from an entirely different perspective, and our conversations challenge my notions and force me to reassess. Like many writers, I do my deepest thinking by writing things out: “I write to find out what I think.” But I know this is not the best or most interesting approach for everyone.
Zaghi is an engineer focused on getting stuff done, not a fussy writer who views every comma and semicolon as precious. Perhaps more importantly, as someone who learned English as a second language (he was born and educated in Iran) and has both dyslexia and ADHD, Zaghi views AI as a blessing that’s particularly beneficial for neurodivergent learners.
For example, before AI, Zaghi felt paralyzed with worry whenever he had to send an important email to his dean. “I had to wait three hours to find someone to proofread it for me, to make sure there was no embarrassing mistake in it,” he says. Now, he can jot some notes into ChatGPT and quickly workshop his messages until they communicate what he wants in polished, perfectly grammatical sentences.
Zaghi’s brain seems to work vastly faster than his thoughts can be formulated into typed text. Instead of using AI for so-called “cognitive offloading,” he mind-melds with chatbots to help process deeper thinking and generate more thoughts, period.

He believes AI’s impact will be far bigger than that of tools like the calculator — even more significant than that of the computer. The closest comparisons he can think of are innovations that upended society in broader ways, such as fire or electricity. He refers to AI as a “new literacy” that will transform learning and serve as a democratizing, emancipatory force, liberating people from the costs and constraints at the core of educational institutions today. He laughs off as absurd the idea of banning or ignoring a technology this powerful and ubiquitous.
“You cannot declare war against AI. That’s a losing battle,” he says. “Education is our only tool.”
With that in mind, Zaghi launched an AI literacy pilot course at UConn, originally aimed at engineering students but now open to all majors. Dubbed “AI4ALL,” it will start with 1,000 first-years and aims to expand to 2,500 students — and to reach high school students across Connecticut via early college courses.
In the “lecture” portion of the class, students will watch short videos available to anyone for free on YouTube. Naturally, Zaghi created the videos with the assistance of AI, and they feature upbeat AI-generated voices and cute comic strip–like images and fonts. Enrolled students will then meet in small groups (max 25) with a TA during weekly lab sessions to engage in exercises involving chatbots and discuss their findings.
“It’s all hands-on, 100% hands-on,” says Zaghi, who strongly believes the only way to learn how to use AI is to use AI. “It’s all about building intuition. It’s not about knowledge.”
He argues that much of the AI instruction available to students now is misguided because it overemphasizes the negatives. “Imagine if I’m teaching an English literacy course, and my entire focus remains on why this English language is colonial, is racist, why it’s being used to oppress minorities,” he says. “That is exactly how the other courses that I’ve found have been approaching AI literacy.”
Zaghi wants students to understand the real risks that come with AI — notably, how using an LLM as an intimate emotional companion can be unhealthy, especially for children, and how chatbots can form echo chambers that reinforce one’s beliefs due to their inclination to please the user. But overall, the class will stress the ways AI can “enhance our learning, boost our creativity, improve productivity, and ultimately, prepare [us] to make meaningful contributions to society,” as the course’s first video puts it.
Above all, Zaghi is trying to be pragmatic. “This is where we are. There’s no way we can go back,” he says. “We can either embrace it and stay ahead of it,” or we can put our heads in the sand and become irrelevant.
AI: The Teammate
During the last presentation session at mAI dAI, in the same cavernous room where the keynote speech was disrupted by an old-timey car horn, Qazi Arka Rahman is commiserating with a dozen educators attending his talk. Rahman is a visiting assistant professor in the Department of Social and Critical Inquiry at UConn Hartford, where many students speak English as a second language and are the first in their families to attend college. He explains that his students routinely complete short assignments by hopping from chatbot to chatbot, browsing for an answer they like best. “They read a few lines, then move on to the next” before landing on a response they will ultimately submit, he says.

Rahman is sympathetic. “Most of them have, like, three different jobs,” he tells me later. “In the reflection, they would say to me outright that [they used AI] because ‘I had three night shifts at Dunkin’.’”
Frustrated by how students were outsourcing their work to AI — simultaneously leaning on questionable sources and learning little — Rahman decided to officially incorporate the technology into an assignment toward the end of his fall 2024 Intro to Critical Refugee Studies class. Originally, he designed a group project in which students researched different refugee communities with AI and analyzed the cited links for trustworthiness and bias. Students asked to integrate AI further, and eventually the groups welcomed a chatbot as an official “team member.” The bot organized a rotation of roles for students to adopt (prompter, critic, synthesizer, note-keeper), and the AI itself jumped from role to role as well.
Students learned that AI never gets bored or tired like human teammates can, and that chatbots are much better at some jobs than others. For example, they felt AI was very effective at critiquing their work, perhaps because, unlike many students, it was confident in its judgment and not worried about hurting anyone’s feelings. A common issue with group projects is that the work often falls heaviest on one or two students who overcompensate for the disengaged slackers. Because AI handled each group’s delegation of tasks in a clear, fair, automated way, however, this was less of a problem. Or at least it became glaringly obvious who was dropping the ball.
Major news publications and thousands of artists have accused generative-AI companies — including Google, Meta, and OpenAI (ChatGPT) — of copyright infringement, arguing that these firms engage in “mass theft” by using copyrighted material to train their models without getting consent or providing compensation. At press time, at least a dozen major copyright-related lawsuits filed against AI companies were underway.
Rahman likens AI to a sparring partner that’s great for back-and-forth idea exchanges and exercising one’s brain. The problem was that students liked using AI a little too much and happily let it do the heavy cognitive punching. They relied on it as a crutch that would speedily suggest case study angles or present reams of research on different refugee groups. Sometimes, the research cited did not actually exist.
Rahman discovered that when he allowed students to use AI for two class sessions and then had them go without it in the next, they groaned in displeasure and struggled to complete the work. “They wish they could use AI to do everything,” he says. Therein lies a key concern: As AI becomes more capable and humans become more comfortable deferring to it, its role may evolve from mere teammate or collaborative partner to the leader in charge of the game — with a lot of us left on the sidelines with no clear way to contribute.
Before closing this conversation, let me raise one more thing I’m fairly certain of: Some people will think something I wrote in this article is short-sighted, ill-informed, overly simplistic, or just plain stupid. By all means, let’s talk. Colleges are supposed to be places where we discuss stuff like this, after all. I promise to not crib ideas from AI in our exchanges. Will you do the same?
