During the COVID-19 pandemic, school buildings across the country closed, and students had to learn through virtual instruction. One fear teachers had was not knowing whether students were doing their own work at home, or whether someone else was doing it for them or helping them too much.
Although students have been back in school in person for a while, fear of who is actually doing the work remains. This time, the helper isn’t even human.
Enter ChatGPT.
OpenAI, the organization behind ChatGPT, stated, “We’ve trained a model called ChatGPT, which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.”
According to Business Insider, people have used ChatGPT to write real estate listings, code, and even lesson plans. Who doesn’t want a shortcut?
The real question we should ask is: What are the implications of ChatGPT for academia?
Recently, I read “Feed” by M.T. Anderson, a science fiction novel published in 2002 and set in a dystopian future. Humans have figured out how to travel to the moon and other planets. When children are born, a feed is implanted in their heads to give them instant access to information; the feed also makes suggestions based on each person’s interests. However, all is not well in this society.
On page 47 of the novel, Titus, the main character, says, “That’s one of the great things about the feed – that you can be super smart now. You can look things up automatically.”
Is ChatGPT our version of the feed?
The questions this novel asks readers to consider are the same questions we must ask ourselves in academia about ChatGPT. Although ChatGPT has garnered a lot of conversation among adults, Pew Research notes that only 14% of adults have even tried it.
For a while, I was not in that 14%. What moved me to action was thinking about the conversation around Wikipedia. Wikipedia came into existence in 2001, the same year I graduated from high school and entered college to become an English teacher. Professors told us we couldn’t use Wikipedia as a source because it wasn’t reputable, since anyone could change the entries.
My first job after college was as an English teacher, and I wondered how I should communicate expectations about Wikipedia. During my first few years, I repeated what my professors at Purdue University had said and discouraged students from using it. Then, I thought about how I was actually using Wikipedia.
If I needed to know more about a topic, I would Google it. The first entry, many times, was Wikipedia. Not only did I click the link, I read the entry. I didn’t stop there: I clicked the sources hyperlinked within the entry, and many of them were reputable. Then, I changed my stance. Instead of telling students not to use Wikipedia, I taught them the best way to use it. We should do the same for ChatGPT.
My first venture into using ChatGPT was asking it to compile a list of places in my city that could clean one of my rugs in my home. It sent me a decent list, and I hired a company from it. Then I wondered how the AI would tighten up a paragraph. After writing a paragraph for the syllabus of a course I’m teaching, I asked the AI to condense it and make it more direct. I couldn’t use the changes verbatim; however, they gave me ideas for different words I could use. Last, I asked ChatGPT to write a summary telling me about the Black Americans of Achievement series. I had acquired a set from my friend’s grandmother, and it was clearly an older series, so I wanted to know who made it and what led to its creation. The AI compiled information I could take back to Google to learn more.
I could use ChatGPT effectively because I already had some knowledge. Regarding the rug list, I knew I couldn’t just hire a random company from it; I had to look up the company, read reviews, and check the prices. Regarding the syllabus, ChatGPT’s writing is a bit robotic, so I knew I couldn’t use it verbatim. This is where I worry about students. Many of them don’t yet have the writing skills to recognize that an essay written by AI won’t be a good one. Additionally, AI can get facts wrong. Using what AI wrote instead of doing the cognitive lift and the work could make it more of a barrier than a tool.
ChatGPT is a tool, and banning it isn’t the answer. Like any tool, it’s our responsibility to teach students how to use it effectively to improve their work and expand their intellectual growth.