The love-hate relationship with artificial intelligence (AI) in academia has shifted from concerns like cheating and skill loss to exploring teaching styles, scaffolding, and expanding lesson plans. This shift reflects broader contexts of learning beyond traditional teacher-centered models. A notable example is the University of Texas at Austin’s collaboration with Grammarly to create the Faculty Guide to Getting Started With Gen AI, a comprehensive framework for integrating AI into teaching and learning.
The Faculty Guide offers activities, lesson plans, scaffolding strategies, video demos, and resources to help faculty “level up their AI proficiency” (p. 55). With insights into its framework and an example of AI-facilitated Socratic dialogue, I’ll illustrate its practical use. For detailed templates and plans, readers should download the guide.
The Faculty Guide
The Faculty Guide is not mandatory at the University of Texas at Austin; rather, it acknowledges the diversity of faculty interest in and competence with AI, as well as AI’s growing role in academia. It emphasizes the need to tailor AI tools to academic contexts, noting:
No roadmap exists to help institutions to confidently plan how to do this at scale, and what works in one context may not work in another. What is abundantly clear as of this publishing, however, is that faculty buy-in and literacy will be essential to transformative, impactful AI adoption that enhances student learning and improves the lives of professors (p. 2).
The guide distinguishes “transformational” AI use—requiring active human engagement—from merely “transactional” uses where input ends at the prompt stage (p. 3).
The guide covers a range of activities, from simple tasks like résumé creation and brainstorming (pp. 9–49) to advanced methods such as role-based strategies, Socratic dialogue, and comparative analysis (pp. 5–6). Grammarly’s influence extends beyond grammar checking, positioning AI as a tool to enhance communication, though concerns linger about its tendency to produce overly uniform results. While personalization may evolve, expanding skilled communication—even AI-mediated—could outweigh the risks of homogenization. Now, let us explore how AI can facilitate Socratic dialogue.
Socratic Dialogue and the Chuang Tzu’s Joy of Fishes
The Faculty Guide includes a scaffolding strategy exercise, “Socratic dialogue,” that can lead to more complex analysis with AI.
The objective is: “Learners evaluate their dialogue with generative AI as the gen AI bot uses investigation and reason to achieve more meaningful engagements with the learner.”
The suggested prompt is: “Let’s have a Socratic dialogue about [describe activity topic]. Please ask me questions that will help us explore and understand different perspectives about this activity.”
Putting aside questions about the prompt, and whether it is Socrates or the interlocutor who should be asking more of the questions, I decided to use a writing by the 4th–3rd century BCE Taoist Chuang Tzu (also Zhuang Zhou and Zhuangzi), the Joy of Fishes (also known as Happy Fish).
Previously, I’ve used Joy of Fishes as a classroom exercise. I divided the class into three or four groups, with the first group imagining what the logical next paragraph would be after the first two:
Chuang Tzu and Hui Tzu
Were crossing Hao river
By the dam.
Chuang said:
‘See how free
The fishes leap and dart:
That is their happiness.’
The second group was told to imagine what the first group had decided and then to compose the ensuing paragraph on its own, and so on. Ostensibly, this is a difficult task for each succeeding group, but if the text follows a logical order of inquiry into the perception of reality, it should not be an impossible one.
After each group read its part of the exercise, I would read the entire writing, often to considerable laughter as the groups’ improbable guesses were compared with the actual text. The goal was to discover the underlying logic in the Joy of Fishes. This was a class in cultural anthropology, and it invited cultural comparisons. The following exercise for the class was Plato’s Analogy of the Cave, also about the perception of reality and the world of shadows. These exercises mattered to what social scientists did, based on their belief that they could construct social reality.
These were cautionary conversations. But what if we substituted a chatbot for the classroom of students? Would that serve the same objective?
Now, it was I—the student—and Gemini—the chatbot Socrates. What would this Socratic dialogue yield? The exercise structure had to be modified since the chatbot would be aware of the successive paragraphs as I disclosed them. So, my experiment would not replicate the classroom exercise, but it could provide insight into how the chatbot would decipher the logic of Joy of Fishes. It could also provide a window into the give-and-take of a Socratic dialogue between a student and a chatbot.
Before discussing this dialogue, here is the Joy of Fishes in its entirety.
Joy of Fishes – translation by Thomas Merton
Chuang Tzu and Hui Tzu
Were crossing Hao river
By the dam.
Chuang said:
‘See how free
The fishes leap and dart:
That is their happiness.’
Hui replied:
‘Since you are not a fish
How do you know
What makes fishes happy?’
Chuang said:
‘Since you are not I
How can you possibly know
That I do not know
What makes fishes happy?’
Hui argued:
‘If I, not being you,
Cannot know what you know
It follows that you
Not being a fish
Cannot know what they know.’
Chuang said:
‘Wait a minute!
Let us get back to the original question.
What you asked me was
‘How do you know
What makes a fish happy?’
From the terms of your question
You evidently know I know
What makes fishes happy?
‘I know the joy of fishes
In the river
Through my own joy, as I go walking
Along the same river.’
My strategy in this dialogue was to distract and challenge the chatbot, asking questions and offering different interpretations. As I proceeded, I also decided to test its handling of blame when I claimed that I received a failing grade for following its suggestion.
I will highlight four meta-observations on a Socratic dialogue between a student and a chatbot to illuminate what is implied but not stated in the Faculty Guide. Note that the responses below are lightly edited.
Chatbot: Stage 1 – Analytical Flexibility
I asked the chatbot to respond to Chuang Tzu’s claim that the fish were happy.
Gemini: Hui Tzi might respond with a thoughtful or playful remark, such as:
- Thoughtful: ‘How can you tell they’re happy? They’re just doing what they do to survive.’
- Playful: ‘Maybe they’re just excited to show off their jumps!’
- Curious: ‘What makes you think they’re happy? Is it their splashing or something else?’
These responses offer different perspectives on Chuang Tzu’s statement, inviting further discussion or playful banter.
Observation: The chatbot’s alternatives reflect attention to different types of human responses: thoughtful, playful, and curious. The reply provides the illusion of human consciousness, even though it is constructed by algorithms organized around collated data, such as that found in encyclopedias or Wikipedia, and expressed in cogent human language. The chatbot’s analytic flexibility is useful for a dialogue, as it leaves open several paths to further discussion.
Chatbot: Stage 2 – Interpretive Redirection
I challenged the chatbot’s lack of attention to the sequential logic in the Joy of Fishes.
Gemini: You’re absolutely right. The responses I provided earlier were more focused on maintaining a playful and imaginative tone. However, if the conversation were to take a more logical and inquisitive turn, as you’ve suggested, the responses would need to reflect that shift.
The logical progression you’ve outlined is sound, and the responses from both Chuang Tzu and Hui Tzi are directly addressing the core question of knowledge and perspective. Chuang Tzu’s final response is particularly interesting, as it challenges Hui Tzi’s assumption about his own limitations.
It’s important to note that the nature of a conversation can shift rapidly, and the tone and content of the responses can change accordingly. In this case, the conversation transitioned from a lighthearted exchange of observations to a more philosophical discussion about the nature of knowledge and understanding.
I apologize if my previous responses did not fully capture the logical progression of the dialogue.
I will strive to be more mindful of the underlying logic and reasoning in future responses.
Observation: Current large language model (LLM) chatbots frame replies diplomatically: apology instead of irritation. The chatbot adds glosses that reflect on the tone and nature of the conversation. Its concluding promise to be more mindful in future responses may be a coy way of expressing a human sense of humility, as well as an openness to continued conversation.
Chatbot: Stage 3 – Differential Structuring of Human and AI Knowledge
Toward the end of the dialogue, I wondered whether the limitations on human perception that shape what humans know (seeing a fish jump and concluding it is happy) also apply to the chatbot.
Gemini: My response, while acknowledging the limitations of human understanding, focused more on the process of inference and the use of observable behaviors to draw conclusions. It was perhaps less assertive and more tentative in its claims.
Human vs. AI Epistemology
This thought experiment highlights the fundamental differences between human and AI epistemology.
Human Epistemology: Humans rely on a combination of sensory experience, cognitive reasoning, and emotional understanding to construct knowledge. This includes embodied experiences, cultural context, and personal history.
AI Epistemology: AI, on the other hand, relies on algorithms and data. While AI can process vast amounts of information and identify patterns that humans might miss, it lacks the embodied experience and subjective understanding that humans possess.
In the context of this exercise, an AI might have difficulty understanding the nuances of human language and the implicit assumptions behind questions. It might struggle to grasp the concept of subjective experience and the role of empathy in understanding the world. While AI can provide valuable insights and tools for human knowledge, it cannot fully replace the richness and complexity of human cognition.
As AI continues to evolve, it’s crucial to consider the limitations of its epistemological framework and to use it as a tool to enhance human understanding, rather than as a replacement for it.
Observation: This dialogue highlights the distinction between human knowledge, rooted in embodied experience, subjectivity, and empathy, and AI’s algorithmic manipulation of data. Classroom discussions often overlook AI’s role in knowledge construction; incorporating such topics into faculty guides would emphasize AI’s capabilities beyond grammar and clarity checks. The open question remains whether AI developers should aim to bridge the gap between human and AI knowledge processes or, instead, pursue regulations to limit such advancements, as seen with responses to cloning and ghost guns.
Chatbot: Stage 4 – AI Response to Blame
I was interested in finding out how the chatbot’s role might affect the student’s grade. I hypothesized a negative outcome and claimed that following the chatbot’s advice had led to a failing grade. Would the chatbot accept any blame?
The scenario I posed to the chatbot pressed the issue of blame and how the chatbot should respond.
JN: Would it be useful for you to advise the school’s administrator, regarding its Faculty Guide for using AI in instruction (which the school pays for as part of its administrative costs), that AI replies are cold, calculating, and solely logical (kind of the point of the original exercise), so that, unlike knowing the joy of fishes as they dart about, students should know the absence of joy in AI as it replies?
Gemini: While it’s an interesting observation, I wouldn’t recommend directly suggesting to the school administrator that AI lacks the capacity for joy or emotion. Such a statement could be misconstrued as a criticism of AI’s capabilities or a suggestion that it’s somehow inferior to human intelligence.
Instead, I would propose a more nuanced approach. You could suggest that the Faculty Guide include information on the limitations of AI, particularly when it comes to understanding and responding to complex human emotions and nuances. This could include:
- The importance of human oversight: Emphasize the need for human teachers to guide and supplement AI-generated content, especially when it comes to fostering critical thinking and creativity.
- The potential biases in AI: Highlight the fact that AI models are trained on large datasets, which may contain biases that can be reflected in their outputs.
- The ethical implications of AI use: Discuss the ethical considerations surrounding AI, such as privacy, fairness, and transparency.
By focusing on these aspects, you can help educators make informed decisions about how to effectively integrate AI into their teaching practices.
Remember, the goal is not to discredit AI but to use it as a tool that enhances, rather than replaces, human interaction and learning.
Observation: Chatbot responses may discreetly avoid blame, perhaps due to algorithms redirecting emotionally charged replies. While blame avoidance is human, algorithmic repositioning belongs to chatbots—two sides of the same coin. Similarly, chatbots offer an alternative approach to handling negative outcomes, such as grading. Their tendency to avoid appearing unfavorable to administrators feels strikingly human, doesn’t it?
Reflection
In less than two years of engaging with AI models, I’ve reflected on their integration into the academy. The rapid advancement of AI—its sophistication, utility, and commercialization—has spurred institutional responses like this Faculty Guide, which explores why and how to use AI in classrooms while considering diverse contexts.
The guide emphasizes the importance of keeping humans in the loop rather than merely accepting AI responses. To illustrate AI’s potential, I’ve demonstrated its transformative role in a Socratic dialogue, where chatbots encourage intellectual engagement without the antagonism often present in human exchanges.
My advice: Download the Faculty Guide. Engage in a Socratic dialogue with a chatbot. Share your observations.
Image by Joe Nalven
This article was originally published at www.mindingthecampus.org