Artificial Intelligence or just text prediction? How ChatGPT and other OpenAI programs function
An AI-generated New College student. Photo courtesy of Basil Pursley and DALL-E


OpenAI, the renowned artificial intelligence research laboratory, has made groundbreaking progress in the field of natural language processing with its state-of-the-art language model, ChatGPT. As one of the largest language models in the world, ChatGPT has been able to generate coherent and realistic text in response to a wide range of prompts, from casual conversation to technical queries. The potential of this technology has already been demonstrated in various applications, and experts predict that it could revolutionize the way we interact with machines and each other.

That first paragraph wasn’t written by a Catalyst reporter. Rather, it was generated by ChatGPT through the prompt: “Write a lede for a news story about OpenAI and ChatGPT.”

Artificial intelligence (AI) has gone from science fiction to reality…or has it? Social media is flooded with discussions of how ChatGPT, a natural language text-generation tool, has produced outlandish scenarios, and with users showing off AI art made using the image generation program DALL-E. But how do these programs actually function?

OpenAI is a company that seeks to “ensure that artificial general intelligence (AGI)[…] benefits all of humanity.” The company has risen in popularity as its image and text generation tools, DALL-E 2 and ChatGPT respectively, have become popular resources for online content creation.

While the term “artificial general intelligence” may evoke imagery of I, Robot or the singularity, OpenAI’s tools are much closer to sophisticated versions of your phone’s text prediction program. The GPT-3 model, on which ChatGPT is based, predicts what text should follow a prompt and can respond to inquiries with impressive accuracy. While the technology is certainly in its infancy, companies like Microsoft have shown immense interest in these programs through investments. So as we venture into the unknown of AI, how can we learn to work alongside these tools? Or are they coming for our jobs altogether?

GPT-3 stands for “Generative Pre-trained Transformer,” third generation. TechTarget describes GPT-3 as “a neural network machine learning model trained using internet data to generate any type of text. Developed by OpenAI, it requires a small amount of input text to generate large volumes of relevant and sophisticated machine-generated text.”

While GPT-3 does not “think” the way a human might, it pulls from enormous datasets to produce coherent writing that feels human and follows standard language conventions. GPT-3 is also one of the largest neural networks ever built, with 175 billion parameters influencing its output.
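To make the “sophisticated text prediction” idea concrete, here is a deliberately tiny sketch, not OpenAI’s actual method: a bigram model that counts which word follows which in a sample text, then predicts the most frequent successor. GPT-3 applies the same underlying principle, predicting the most likely next piece of text, but with a 175-billion-parameter neural network instead of simple counts. The corpus and function names here are illustrative inventions.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count word pairs in a tiny sample corpus.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

# "cat" follows "the" twice, "mat" only once, so "cat" is predicted.
print(predict_next("the"))  # prints "cat"
```

A real language model replaces these raw counts with learned probabilities over long stretches of context, which is what lets it produce whole paragraphs rather than one plausible next word.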

This tool has incredible potential. One project, AI Dungeon, uses the GPT-3 model to provide a unique role-playing experience and can be used to aid games like Dungeons & Dragons and Pathfinder.

While using it for a fantasy role-playing game is innocent enough, GPT has an alarming tendency to confidently present false information, and at its current stage it is not reliable for journalism or academic writing.

Another AI-generated student posing next to the phrase “FOIANO.” Photo courtesy of Basil Pursley and DALL-E

Business Insider attempted to have the program write a news article that one of its reporters had already written about a Jeep factory in Illinois. While the program produced a predictable and readable story, it also fabricated direct quotes from a source in the story.

“Aside from some predictable writing (which I can be guilty of, anyway), the story was nearly pitch-perfect, except for one glaring issue: It contained fake quotes from Jeep-maker Stellantis’ CEO Carlos Tavares,” senior business news reporter Samantha Delouya wrote. “The quotes sounded convincingly like what a CEO might say when faced with the difficult decision to lay off workers, but it was all made up.”

Given the program’s ability to confidently write false articles and essays, educators are already concerned that students may begin turning to ChatGPT to write their essays. Despite this concern, however, some teachers are welcoming the new technology into their classrooms to prepare their students for the future.

For Donnie Piercey’s fifth grade class in Kentucky, learning to distinguish AI-generated writing from human-written pieces is an essential part of the curriculum.

“As educators, we haven’t figured out the best way to use artificial intelligence yet,” Piercey told ABC News. “But it’s coming, whether we want it to or not.”

At New College, Professor of Physical Chemistry Steven Shipman is offering a tutorial class on ChatGPT. 

“We’re hoping to put together a class-created ‘User’s Guide’ to ChatGPT,” Shipman said of the course’s objectives over email. “We’d also like to host a panel discussion at the end of the semester (open to the whole campus community) to share our experiences.” 

Shipman explained their excitement over GPT’s potential for education. 

“I really like the idea of something like an individualized tutor that could provide explanations in different words or at different levels of sophistication to someone who’s having trouble grasping a concept,” they said.

An AI-generated New College student in front of an AI-generated New College building. Photo courtesy of Basil Pursley and DALL-E

One student in the class, third-year Scruffs O’Neil, mirrored Shipman’s response and explained that they hope ChatGPT could be used for lesson plans and teaching students the English language in a personalized way.

“What excites me is being able to use [ChatGPT] as an accessibility tool,” O’Neil explained. “Allowing people with language difficulties, whether that’s an expressive language disorder, or learning English as a second language or dyslexia. To be able to use it as a tool to guide them through the process of being able to acquire and master the English language on their own.” 

O’Neil explained that ChatGPT could be used to empower students and learners uniquely, but also acknowledged that there are some concerns—especially that of plagiarism. 

O’Neil is concerned that “teachers are going to fear the program too much,” and even have ChatGPT “outright banned from classrooms.” 

They also expressed concern about students using the program to cheat, and would like to see the program actively deter plagiarism. O’Neil wants ChatGPT used more as a writing tutor than as an explicit tool for cheating.

“If someone were to put in, for example, ‘write a five-paragraph essay explaining the presence of racism within To Kill a Mockingbird,’ I would want it to say something along the lines of, ‘While I can write a five-paragraph essay, it is more important for you to understand the principles behind creating an argument, and so I will guide you through the process of writing your own essay rather than write it for you,’” O’Neil said.

Additionally, Associate Professor of Computer Science Matt Lepinski said that he’s not worried about NCF students violating academic integrity through ChatGPT. 

“My experience has been that New College students adhere to policies for academic integrity, not only because students typically possess a strong intrinsic set of ethics, but also because they want to maximize their learning so that they can become well-prepared to achieve their career goals,” Lepinski said. “I don’t think ChatGPT changes any of this.”
