Feature | Summer 2023

ChatGPT: Revolutionary Tech or Pandora’s Box?

WPI experts weigh in on what should excite and alarm us about large-language models.


When you hear both exciting promises and dire warnings about ChatGPT and other large-language models, the dichotomy can be confusing.

Is ChatGPT groundbreaking technology as revolutionary as the steam engine or electricity that allows machines to efficiently handle tedious, mindless work, opening new horizons for untethered human creativity? Or does it create a sly trap, tricking and manipulating the uninformed, marching humanity toward self-aware technology that will inevitably take over the world, à la HAL 9000 from the 1968 film 2001: A Space Odyssey?

In short, is it a tool to be embraced—disruptive in a good way—or something to be feared as a Pandora’s Box that, once fully opened, poses “a profound risk to society and humanity,” as an open letter signed by more than 1,000 tech industry experts and academics warned in early 2023?

Five WPI experts weigh in on the current and future impact of ChatGPT—in society in general and in the classroom in particular—as the world adjusts to this new landscape.

First, recognize ChatGPT for what it is—and isn’t.

Experts agree that taking the mystery out of this emerging technology is the first step to understanding it. While the most talked-about large-language model might be ChatGPT, which made a splash when OpenAI launched it in November 2022, other options such as Microsoft’s Bing, Google’s Bard, and Meta’s LLaMA are gaining traction, with more sprouting up seemingly overnight.

“There’s no magic to ChatGPT,” says Xiaozhong Liu, associate professor of computer science, whose research focuses on natural language processing and content generation. “It’s an AI model that predicts the next word, given the context and known interactions, using a super large database—in the case of GPT-3—of 200 billion parameters. ChatGPT can’t understand semantics within words. However, it can generate words in a very beautiful way.”
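
To make the “predict the next word” idea concrete, here is a minimal Python sketch that predicts continuations from simple word-pair counts. It is purely illustrative: models like GPT-3 learn billions of parameters from enormous text corpora rather than tallying word frequencies, but the prediction loop is conceptually similar.

from collections import Counter, defaultdict

# Toy corpus; a real model trains on hundreds of billions of words.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word tends to follow each word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation observed after `word`.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat"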

Unmistakable benefits include the ability to summarize information for efficient decision making or to order a series of facts into a reasonable-looking narrative. “For instance, if you go to Amazon and read reviews, there might be 1,000 of them and you won’t have time to read them all,” says Liu. “We can use an algorithm to summarize those reviews to help you make a decision. Commercially, ChatGPT is really promising.”
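
As a hedged sketch of how Liu’s review-summarization example might be wired up, the snippet below uses the OpenAI Python client (v1.x interface). The model name, prompt wording, and sample reviews are illustrative assumptions, not details from the article.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

reviews = [
    "Battery lasts two full days. Very impressed.",      # sample reviews are
    "Screen scratched within a week of normal use.",     # invented for this
    "Fast shipping, sturdy packaging.",                  # illustration
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model choice
    messages=[{
        "role": "user",
        "content": "Summarize these product reviews in two sentences:\n"
                   + "\n".join(reviews),
    }],
)
print(response.choices[0].message.content)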

The dark side involves the limitations of the training databases ChatGPT uses, with potential bias purposefully or mistakenly swept in, and the ethical issues that arise when the potentially flawed narrative is presented as fact, sometimes with results so far off the rails they’re called hallucinations.

ChatGPT Explained by ChatGPT

ChatGPT is a large-language model that was trained by OpenAI. It is based on the GPT (Generative Pre-trained Transformer) architecture, specifically the GPT-3.5 model, which is an improved version of the GPT-3 model.

The purpose of ChatGPT is to generate human-like text responses to user input. It does this by using a process called unsupervised learning, where it analyzes vast amounts of text data to learn patterns and relationships between words and phrases. This allows it to generate coherent and contextually appropriate responses to a wide range of queries.

ChatGPT can be used for a variety of purposes, including chatbots, question-answering systems, language translation, and more. It has a vast knowledge base and can understand and respond to queries on a wide range of topics.

Overall, ChatGPT represents a significant advancement in natural language processing and has the potential to revolutionize the way we interact with computers and artificial intelligence.

Text generated by ChatGPT in response to the prompt: “Explain ChatGPT.”

“If you ask ChatGPT to solve an algorithmic problem similar to what it was trained on, it can often give you a correct answer or something that’s convincing, even if it’s wrong. In contrast, if you ask it something completely outside of what it’s been trained on, it will give you convincing-sounding, utter nonsense,” says Jacob Whitehill, associate professor of computer science. “It’s dangerous because if you don’t have the metacognitive awareness about how to be skeptical and how to drill down to make sure you’re not believing the wrong parts, it can really lead you astray.”

Jacob Whitehill

Gillian Smith, associate professor of computer science and director of the Interactive Media and Game Development program, says human nature’s tendency to read intelligence into non-intelligent technology only compounds the problem.

“AI is not sentient. It’s software created by humans, embedded with human biases, designed in an intentional way,” says Smith. “When we start to think of software as having agency, we give it a dangerous level of power. Some people interact with ChatGPT in the same way they talk with a person, and that imagined person feels like an authority. But it’s not a person, it’s not an authority, and it’s not intelligent.”

And left to their own devices, large-language models may get worse before they get better, says Kenny Ching, an economist and assistant professor in The Business School.

“So much goes on in a black box. I consider myself fairly learned, but I don’t understand these models and what data they are pulling from,” Ching says. “ChatGPT is the one we’re talking about now. But within a year, we’ll have thousands available—all using their own databases and biases. We don’t know where the errors can be.”

Ethical issues abound, and guardrails are slow to emerge.

A mayor in Australia threatens to sue OpenAI for defamation unless it corrects a false claim by ChatGPT that he served prison time for bribery. A George Washington University professor objects because he is wrongly included on a ChatGPT-generated list of legal scholars who have been accused of sexual harassment. Artists who post their portfolios online have their work unknowingly swept up into AI text-to-image generators without consent or compensation. Private health information or other personal data may be collected—through legal or illegal means—and used nefariously. Ethical issues are breaking new ground in a legal system not prepared for this new technology.

Gillian Smith

This past D-Term, Smith introduced a special topics seminar called “Ethics of Creative AI” that filled up so fast she had to expand its capacity—from 12 to 42—within a week. Students chose individual topics to research and discuss, such as how ChatGPT easily generates propaganda or the consequences of questionable data that introduces gender bias, racial stereotypes, or hateful speech into the results.

The impact is great in the creative world, where AI can take the talents of artists, songwriters, even choreographers, to manufacture similar works that easily circumvent copyright protection. 

“It’s one thing to be creatively inspired by another artist, or even to sample—music especially has a really strong tradition of sampling. But it’s always an intentional, attributed nod to the artist. Now there’s no affirmative consent process,” says Smith.

At a U.S. Senate hearing in May 2023, OpenAI CEO Sam Altman acknowledged the need for some kind of government regulation, suggesting a federal licensing agency to oversee new liability rules or safety requirements. But the wheels of government turn slowly, and no such oversight plan is on the horizon.

In the meantime, other countries such as China are creating strong regulations on their training models that reflect certain political beliefs, says Liu.

“Right now, ChatGPT provides a centralized service from Microsoft, which is good,” he says. “In the very near future, ChatGPT will be decentralized and there will be lots of other choices. In the U.S., I wouldn’t be surprised if we soon see a liberal ChatGPT or a conservative ChatGPT. You will find the bubble that supports your viewpoint. People will have fewer and fewer chances to learn something comprehensively.”

Xiaozhong Liu

Despite the dangers, ChatGPT is a game changer for developing countries.

While the developed world wrestles with issues of privacy and accuracy of large-language models, the benefits outweigh the risks in countries struggling with access to higher-level education and healthcare, according to Xiaozhong Liu.

“Under-resourced countries need three things: food, education, and medical care. ChatGPT can help with the last two. It opens the door to a personalized delivery of knowledge that they really need,” he says. Liu can record a lecture on a topic, but if viewers don’t speak the same language or share the same educational foundation, its effectiveness suffers. ChatGPT can break down concepts based on the background and knowledge of the audience.

In terms of healthcare, in countries where there might be one doctor per 20,000 people, “privacy and ethics of AI are luxury words,” he says. “If my kid is dying, and someone told me that they won’t provide me help ‘because it will violate your privacy,’ that’s a misleading concept. ChatGPT can provide basic knowledge and critical resources to save lives.”

Educators encounter challenges—and opportunities.

ChatGPT’s ability to generate intelligent-sounding prose has caused waves of concern across all levels of education. Faculty are reevaluating assessment techniques and establishing ground rules for disclosing AI’s use, but also embracing opportunities to help students use the tool critically, especially since they will probably be using large-language models after they graduate.

Kenny Ching

“One of the benefits of ChatGPT is that it has exposed a shaky foundation of what a lot of modern education has been built upon, which is regurgitation rather than real inquiry,” says Ching. “This wasn’t always the case. Not too long ago, research was an essential part of the educational system, even at the undergraduate level. Since then, teaching and research have become so delineated that they are now two altogether separate activities. Our graduates need to be knowledge producers—and WPI’s educational philosophy of theory and practice fits this well.”

“It’s my job as a professor to design take-home assignments to help students practice skills above and beyond what they can get from ChatGPT for free,” says Whitehill. “Children still have to know how to add and subtract numbers despite having calculators. It’s still important for students to learn the basic skills so they can achieve higher-level knowledge building.”

Some believe AI itself can uncover ChatGPT-generated text—becoming the solution to the problem it creates—but more sophisticated iterations may make that detection harder. Instead, our faculty experts agree, educators should teach students how to coexist with the tool and learn from it.
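
One common detection heuristic, sketched below under stated assumptions, scores text by how “unsurprising” an open language model finds it (its perplexity). This is an illustrative approach in the spirit of public detection tools, not a method endorsed by the faculty quoted here, and their caution applies: it is a weak signal that fades as generators improve.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Small open model used purely for illustration.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    # Mean cross-entropy per token, exponentiated: lower means the
    # model finds the text more predictable.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Very low perplexity hints at machine-generated prose, but light human
# editing or a more capable generator defeats the signal easily.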

“Our students actually do struggle with how to use AI,” says Yunus Telliel, assistant professor of anthropology and rhetoric. In his Introduction to Rhetoric class, he asks students to create assignments with ChatGPT and then critically analyze the output, comparing it to text humans have generated. He thinks cultivating such metacognitive skills as part of a community of learners is what makes universities special and exciting places in the age of AI.

And he is convinced that ChatGPT shows obvious limitations, especially with writing and thinking.

Yunus Telliel

“If we really want to differentiate human writing from AI text, we need to think about the process—writing is about making mistakes and learning from those mistakes. This is certainly a manifestation of our limitations: our short memories, our attention spans, our desires, our obsessions,” says Telliel. “That’s not necessarily a bad thing. We connect to each other through these limitations and vulnerabilities. If human writing is special, it’s because it is not perfect. This ‘imperfection’ allows new horizons to emerge in our thinking. To me, that’s the past of humanity—and will be its future.”

Rather than police the use of the technology, Ching says he assumes AI will be used by students, which allows him to dive into higher-level concepts. For instance, in his sports analytics class, he now skips over foundational coding prerequisites. “I’m making the assumption that students will level up quicker in terms of the coding knowledge needed to do this analysis. You can get AI to do that.”

Long before ChatGPT became popular, progressive faculty were embracing “ungrading,” a concept that emphasizes feedback rather than scores or letter grades for better learning outcomes, says Smith, an approach that should make AI-generated output irrelevant. “Homework shouldn’t be about getting the right answer. Professors should be giving students room to make mistakes on assignments in a way they are not implicitly punished for, and also for students to see it as a place to make mistakes instead of completing assignments based on a rubric,” she says.

“The bigger question is, ‘What is the meaning of higher education?’” says Telliel, who with Professor Robert Krueger represents WPI in the nationwide Public Interest Technology University Network. “What we should be teaching is how to work with a community, learn for the good of society, and create technologies that help us and others flourish. Like many colleagues at WPI, I work to be a better educator and scholar by building on conversations and the understanding that we exist in a bigger place,” he says, adding that none of these essential concepts can be taught or assessed by ChatGPT.

Ignore or discount it at your own peril.

Although some have called for a ban, or at least a pause, on further development of large-language AI models until the implications are better understood, the reality is that students will be using the technology in many professions after they graduate.

“I do know that if we ban the use of ChatGPT from a college campus, students will feel the experience they get here will be more displaced when they go out in the world of business,” says Ching. “It’s on the educator to find a way of integrating the tools. If we say you can’t use it now, and you’re going to use it in the future, are we shooting ourselves in the foot?”

He says that with any general-purpose technology, such as the steam engine or the computer, jobs are inevitably affected—some more than others. “If we look at the pushback, it’s really been loud from people who think their jobs are directly being threatened,” he says, adding that rather than the blue-collar jobs that were replaced by earlier technologies, the jobs AI might do more efficiently are white-collar ones such as data analysts. On the other hand, “coders recognize it may take away the tedium of their job, but not the creativity.

“Let’s think of AI as just a decision tool, and specifically a decision tool to lower the cost of making decisions,” he says. “The corollary is that the value that you as a human add to the decision just got a lot higher.”

The speed of recent developments seems alarming—but is it?

“The development seems to be happening fast in the public sphere, but these tools have been a long time coming,” says Smith, whose dissertation a decade ago focused on the implications of generative AI. “It’s gotten more attention now because anyone can use it—you don’t have to install a software package, it’s being integrated into Microsoft Office products, and the output is getting better and more believable than prior iterations. But the fundamental underlying tech has been around for a long time.”

Smith is optimistic that, as with all emerging technologies, boundaries will eventually be discovered that make ChatGPT’s limitations obvious.

“My worry is that the boundaries are so far away that it might take us too long to find them,” she says. “In the meantime, it’s believable enough that we’ll let ourselves keep getting tricked.”
