What Are You Talking About with ChatGPT?

How you talk to ChatGPT has an enormous impact on what it spits out. This presents many opportunities, but also some dangers, on the path to widespread and successful use of this AI.

These days, everyone is talking about (or to) ChatGPT. OpenAI’s GPT-3.5 represents the latest advancement in natural language processing (NLP), and people can access it through various applications, including OpenAI’s chatbot ChatGPT. The chatbot lets you engage with the AI in a chat conversation: It not only seems to understand what you’re saying but delivers precise answers to your questions. That’s possible because the model was trained on a wealth of data from the internet. The AI “sounds” human because it learned from content created by humans. So, engaging with ChatGPT can feel like talking to a human encyclopedia.
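For readers who want to experiment beyond the chat window, the same kind of exchange is available programmatically. What follows is a minimal sketch, not a definitive recipe: it assumes the openai Python package (version 1 or later) and an API key in the OPENAI_API_KEY environment variable, and the exact client interface has changed between library versions.

```python
# Minimal sketch of a single-turn chat with GPT-3.5 via OpenAI's Python
# client. Assumes `pip install openai` (v1+) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model family behind ChatGPT at the time of writing
    messages=[{"role": "user", "content": "What are learning styles?"}],
)
print(response.choices[0].message.content)
```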

It’s fascinating to get answers to all sorts of questions. But can you trust those answers? Will people take every response at face value? 

I set out to test how ChatGPT would answer questions about scientific topics that I know to be somewhat controversial. For example, when I asked about the concept of learning styles, ChatGPT told me that some people might prefer to learn by seeing, others by hearing, and so on. If I had taken this as the answer and stopped inquiring, I would have missed important context.

When I asked whether the idea of learning styles was backed by recent research, ChatGPT’s answer became more critical. It told me that the idea of learning styles had been popular for decades but has recently become the subject of debate: Evidence from newer research suggests learning styles are not as influential on learning as previously thought. This was more in line with the answer I expected. But who would pose that critical follow-up question, giving ChatGPT a chance to elaborate?
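Mechanically, that follow-up works because the chat model has no memory between calls: the application resends the whole conversation, follow-up included. Here is a sketch of the two-turn experiment described above, under the same assumptions as the earlier example:

```python
# Sketch of the follow-up experiment: the full conversation history,
# including the model's first answer and the critical follow-up question,
# is sent back to the model on the second call.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"

# First turn: the naive question.
messages = [{"role": "user", "content": "What are learning styles?"}]
first = client.chat.completions.create(model=MODEL, messages=messages)

# The model's answer becomes part of the conversation history.
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Second turn: the critical follow-up that elicited the more nuanced answer.
messages.append({
    "role": "user",
    "content": "Is the idea of learning styles backed by recent research?",
})
second = client.chat.completions.create(model=MODEL, messages=messages)
print(second.choices[0].message.content)
```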

Related summary: “A.I. Is Mastering Language. Should We Trust What It Says?” by Steven Johnson, The New York Times Magazine. Machines are getting smarter, but is that a good thing?

This experience made me wonder what kind of answers we can expect from this AI. Could it be biased toward popular beliefs and misconceptions? Who will fact-check the output to see whether it’s truthful? As the old saying goes, “garbage in, garbage out”: an AI model can only produce excellent results if it is trained on correct and unbiased content. But inevitably, its training data will also include popular misconceptions and false information. It’s already hard enough for people to know which sources they can trust. If more content creators come to rely on this technology without involving human experts for verification, some of that misinformation will spread even more effectively. And what if someone uses this technology for deliberate manipulation? This is reminiscent of what we have seen happen on social media platforms.

With GPT, anyone can easily access a machine that produces professional text on virtually any topic. Only experts can tell whether its output is truthful.

A layperson often cannot identify what is false or incomplete. It’s important to understand that these AI models have been trained on vast amounts of information, some of it true, some of it false. So, among the wealth of useful and correct information, there will also be content that is outdated or simply wrong. People have always repeated false information, but now it has become easier to reproduce and harder to identify.

Related summary: “The Supply of Disinformation Soon Will Be Infinite” by Renee DiResta, The Atlantic. In the old days, humans created propaganda, but now robots can do it just as well.

Using a web search or even an encyclopedia to find answers to your questions suddenly seems old-fashioned. Engaging with ChatGPT to get your questions answered feels so much more natural. But something important is missing: references to the original sources of information. An encyclopedia article, for example, may provide links to the websites and scientific research papers it drew from. A scientific paper lists the scientists and institutions involved in the research and describes the methods in enough detail that others can replicate the work. So, you know where the information is coming from. To ensure quality, a paper undergoes critical peer review by subject matter experts before publication, and its authors present their findings and discuss how they relate to the work of others.

Such sources of knowledge present a body of work on a given topic with many viewpoints. 

Wouldn’t it be great if, instead of going through all these sources of information, we could get a single, easy-to-understand, all-encompassing answer? And isn’t this what we want as humans? Don’t we seek easy explanations for a complex world? GPT offers exactly that. And I’m convinced it can be a force for good. Think of all the novel ways education could spread if this tool were accessible to all!

Related summary: “OpenAI CEO Sam Altman | AI for the Next Era” with Sam Altman, Greylock. Humanity balances on the precipice of a global AI revolution, but some fear rather than embrace it.

The challenge, however, will be to ensure that people understand the context and potential limitations of this technology. And perhaps the AI will have to do a better job explaining where it got its insights from. More than ever, people need trusted sources. 
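Until models cite reliably, one imperfect workaround is simply to ask for sources explicitly and then verify them by hand. The sketch below (same assumptions as the earlier examples, and purely illustrative) does just that; keep in mind that language models are known to fabricate plausible-looking citations.

```python
# Sketch: ask the model to show its work, then treat the named sources as
# leads to verify, never as trusted references. Every citation the model
# returns must still be checked by a human.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": (
            "Summarize the research consensus on learning styles and "
            "list the studies or reviews you are drawing on."
        ),
    }],
)
print(response.choices[0].message.content)  # verify each cited source manually
```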
