By now, most North Carolinians are at least somewhat familiar with Generative AI (GenAI). As tech journalist George Lawton explains, GenAI “uses sophisticated algorithms to organize large, complex data sets into meaningful clusters of information in order to create new content, including text, images and audio, in response to a query or prompt.” It is the foundation of numerous platforms, including OpenAI’s ChatGPT and DALL-E, as well as Google’s Gemini. And it is either a bane or a boon, depending on one’s perspective—especially, perhaps, in the field of education.
Since OpenAI publicly released ChatGPT, then powered by GPT-3.5, in November 2022, students have increasingly relied on GenAI to complete assignments. According to a recent survey, 88 percent of full-time undergraduates admitted to using GenAI for assessments. Administrators and instructors are still struggling to meet the challenges that GenAI presents. (When) is it acceptable for students to use GenAI? (How) should students be permitted to use it? (How) should we address GenAI in our classes? (What) should we teach our students about it?
Responses at the institutional level in North Carolina seem to have been, generally speaking, prudently cautious: providing overviews of the technology, recognizing its shortcomings, situating it within the context of academic integrity, and ultimately deferring to individual instructors to make their own specific policies. See, for example, Duke University’s statement on Artificial Intelligence Policies, UNC-Chapel Hill’s Research Generative AI Guidance, Wake Forest University’s Academic Integrity FAQ, and Wake Tech’s Generative Artificial Intelligence policy.
Instructors’ attitudes toward student use of GenAI run the gamut, but they tend to fall into one of two broad categories.
- The Alarmist Attitude: This response is grounded in the view that GenAI is not merely disruptive of current practices but potentially apocalyptic in its consequences for education as a field. Policies and approaches revolve around preventative and punitive measures, the underlying goal being, essentially, to criminalize GenAI.
- The Accommodationist Attitude: This response is grounded in the view that GenAI is a revolutionary breakthrough that not only can and will but even should be used by students for greater efficiency and increased productivity. Policies and approaches revolve around incorporating GenAI training into the curriculum, the underlying goal being, essentially, to embrace GenAI.
As things currently stand, a strictly alarmist approach is untenable, a never-ending game of whack-a-mole that the instructor is destined to lose. On the other hand, rushing to embrace GenAI through an overly optimistic, accommodationist approach risks unintended consequences that we can neither predict nor even imagine. At this time, the most prudent approach to take with students would seem to be this: Recognize that GenAI is a potentially beneficial tool but actively discourage students from using it by focusing on the very real costs of such reliance.
Educating students on the limitations and liabilities of GenAI is a good way to start. So-called AI hallucinations are the most glaring example, but they are far from the most significant. Even more consequential are the various biases baked into many GenAI platforms. A 2024 UNESCO analysis suggests that “AI-based systems often perpetuate (and even scale and amplify) human, structural and social biases,” particularly with respect to gender. A 2024 paper published in the journal Nature similarly argues that certain platforms “are known to perpetuate systematic racial prejudices, making their judgements biased in problematic ways about groups such as African Americans.”
On the other hand, as the authors of a widely reported U.K.-based study observe, “OpenAI’s wildly popular ChatGPT artificial-intelligence service has showed a clear bias toward the Democratic Party” in the U.S. and leftist political parties in other countries. And a 2023 study conducted by German researchers similarly found that ChatGPT has a “pro-environmental, left-libertarian orientation.”
There are other, less easily measurable shortcomings as well. The agonizingly direct approach that GenAI takes in responding to prompts often results in painfully formulaic presentations that follow predictable patterns and routinely employ the same rote, and often slightly unusual, terms and phrases. These presentations also tend to be essentially expository, even when the platform is prompted to “analyze” or “evaluate,” and are frustratingly superficial. ChatGPT’s “arguments,” for example, often comprise broad, vague generalizations with little to no context, support, or insight. In other words, the biggest problem with GenAI for students isn’t that it’s a serial fabulist, a sexist, a racist, or a leftist partisan: It’s that it’s a predictable writer and a shallow “thinker.” And it’s training our students to become the same—or worse.
The recent past offers an analogous situation—and a cautionary tale—that we might learn from. When Google really took off circa 2000, we were told with breathless excitement that it was “democratizing access to knowledge” and “putting information at our fingertips,” so that answers were never more than “a click away.” Students, it was said, had been liberated from the drudgery of rote memorization and arduous expeditions through library stacks. Now, the cheerleaders trumpeted, students and their teachers could focus on the real point of education: critical thinking.
The problem is that this promise has simply not been borne out. As tutor and author Erica Meltzer noted in a 2013 blog post, “factual knowledge is actually the basis for higher level thinking. […] Critical thinking emerges from the scaffolding provided by rote knowledge; it can’t be divorced from it.” Accordingly, students haven’t become stronger, deeper, more insightful critical thinkers simply because they have immediate, virtually unfettered access to information.
But it’s worse than that. In 2011, Betsy Sparrow et al. published a seminal study in which they found that one of the “cognitive consequences of having information at our fingertips” through Google is that we don’t remember the information that we use Google to access. A 2024 meta-analysis of such studies provides further evidence that this so-called “Google effect” (or “digital amnesia”) “may lead to changes in cognitive and memory mechanisms.”
In other words, Google disincentivizes and even impedes actual learning.
There’s already mounting evidence that something similar is at work with GenAI. Students and “knowledge workers” are increasingly relying on GenAI for “cognitive offloading.” That is, they are “outsourcing” to ChatGPT and similar platforms the tasks of acquiring and applying knowledge: the tasks, in other words, of critical thinking. GenAI cheerleaders, like the Google prophets of yesteryear, tell us that this is a good thing, because now students will be able to engage in even higher-order critical thinking. History doesn’t repeat itself, but it rhymes.
Once again, the opposite seems to be the case. A recent study by Dr. Michael Gerlich “revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading.” Furthermore, “younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants.”
Nor, according to Gerlich, are young people using “the cognitive resources freed up by AI for innovative tasks.” They’re passively consuming other content, mostly for entertainment. These findings are in keeping with those of a recent German study, which concluded that undergraduates who used GenAI “large language models” (e.g., ChatGPT) for information-gathering “demonstrated lower-quality reasoning and argumentation in their final recommendations compared to those who used traditional search engines.”
In other words, GenAI disincentivizes and even impedes critical thinking.
This is what students need to be taught about GenAI before they’re taught anything else—if, for the time being, they are taught anything else. But for this to mean anything to them, they must also be taught the value of critical thinking. They must be taught, for example (to take a 10,000-foot view), that
- Strong critical thinkers are better able to evaluate the credibility and reliability of information sources and to distinguish between accurate and inaccurate information;
- They are better able to recognize logical fallacies and cognitive biases—including their own—and are less susceptible to manipulation;
- They are more insightful, more self-aware, and more creative;
- They are more effective communicators and are better able to make persuasive, convincing arguments;
- They are better problem-solvers and decision-makers;
- They are more capable, more confident, and more independent.
Nor are these benefits confined to the classroom or the workplace. They are far more important in our daily lives than students realize. Critical thinking enables us to do more than earn higher grades and find more lucrative employment; it empowers us to live easier, freer, more autonomous, more productive, more satisfying, more fulfilling, more fully human lives.
If there is a solution to the problem of GenAI, it doesn’t seem to lie in finding ever more creative ways of short-circuiting students’ use of it or exacting ever more severe penalties. It also doesn’t seem to lie in teaching students how to use it for “cognitive offloading” so that they engage in the kind of “better” critical thinking that they’ve never been equipped or trained to do. The solution is persuading students that the costs of relying on GenAI to minimize their cognitive load are more far-reaching than they know—and that they far outweigh the benefits.
David C. Phillips is an English teacher who lives in Greensboro, North Carolina.
This article was originally published at jamesgmartin.center