The AI-lephant in the Room: Discussing AI as a way of knowing with UVU Students


Late last year, OpenAI publicly released ChatGPT, a chatbot built on its GPT-3.5 large language model. The release was met with an explosion of articles (see pieces in Vox, the New York Post, and Inside Higher Ed), social media conversations, and even passionate phone discussions between family members (or maybe that was just me). Among academics, especially those focused on teaching, discussions continue about how our pedagogy and teaching strategies need to shift given the more powerful and accessible AI tools now available to students. Ideas range from calls to return to pen-and-paper in-class exams and oral examinations, to assessment types that are harder for AI to complete (multimodal assessments, highly contextualized assignments, assignments that require real peer-reviewed sources – for now), to embracing AI tools and directly instructing students in how to use them to improve their work and learning. I’ve been part of conversations questioning the entire purpose of what we do in our professions (or what we even do as humans) – definitely restful thoughts for winter break!

With the spring semester fast approaching, I decided I wanted to bring my students into the conversation about ChatGPT. Each spring, I teach our psychology research methods course and I thought this topic would be a fascinating addition to our usual discussion of “ways of knowing.” Before the class dives into the depths of methods for empirical psychological research, I begin the class by introducing multiple ways that we understand the world and lead us in discussing the advantages and limitations of each. So, in our first class session after going through intuition, personal experiences, authority, reason, and research I paused and dramatically added “AI” to the list of ways of knowing.

[Image: a smiling robot holding a tablet screen]
After acknowledging that AI isn’t typically considered a way of knowing and might better fit into another category, I defined AI. A student almost immediately raised her hand and said, “Have you seen ChatGPT? I asked it to create an annotated bibliography and IT DID!” I then switched gears and demoed ChatGPT for the students who were unfamiliar with it. We had it write a song, add chords, and answer a question I had asked in our lecture. A student asked whether plagiarism detectors catch it; I said not yet, but that there are other tools designed to detect AI writing (and later showed an example, https://writer.com/ai-content-detector/). I then had the class discuss the advantages and disadvantages of relying on tools like this for knowledge. They brought up several advantages:

  • Using AI can help you if you get stuck; it can work as a jumping-off point when you have a creative block.

  • It’s entertaining! The uses outside of schoolwork are interesting and fun to play with – like creating meal plans, songs, etc.

  • It has access to many different kinds of information, all in one place.

  • It offers free access to lots of information, so if you do not know an expert or have the resources to get training in a certain area, it can help and reach more people.

I then asked the class whether they saw the speed at which it responds as an advantage (something I’d thought of myself and had heard in their small-group discussions). Instead of treating speed as an unqualified advantage, the class moved into a discussion of weaknesses:

  • It is fast to respond, but since we cannot fully “trust” the response, we then have to spend more time looking things up and fact-checking it. If the AI isn’t completely accurate, what is the point of using it when we still need to do the research ourselves? Reading the AI’s responses and making sense of them can also be difficult.
  • AI can replicate human biases since it is trained on human-made materials. Meta’s chatbot and antisemitism were brought up as an example.
  • AI may not be able to converse or share information using the appropriate social cues for specific cultures.
  • Culture itself contains many “unspoken” rules and ideas that cannot necessarily be used to train a large language model, since it may not pick them up from text alone.
  • AI can be trained to do bad things or to spread misinformation.
  • Humans can “think outside the box” and come up with new ideas, whereas AI only draws from what humans give it.
  • Since AI is created from humans and human patterns, it should be treated like any other person you talk to: you should fact-check it.

I was surprised to see students share so many disadvantages and express skepticism about relying on AI. Perhaps it was the presence of a professor in the room, or an already healthy critical mindset among research methods students. As the semester continues, we will see how much ChatGPT impacts our own classroom activities. After class, multiple students wanted to talk with me privately about the tool, how they had used it, and their ethical concerns. I hope that addressing the “AI-lephant in the room” helped build trust with students so that they can come to me with questions about using ChatGPT and continue exploring how AI impacts teaching and learning together.
