What is the “hallucination” of artificial intelligence and why is it one of the potentially most dangerous failures of this technology?

Last month, days before the coronation of King Charles III on May 6, a profile request made to ChatGPT yielded a striking result.

The artificial intelligence (AI) chatbot from the firm OpenAI stated in one paragraph:

“The coronation ceremony took place at Westminster Abbey in London on May 19, 2023. The abbey has been the site of coronations of British monarchs since the 11th century and is considered one of the most sacred and emblematic places in the country.”

This paragraph contained what is known as a “hallucination”.

In the AI field, this is the name given to information produced by the system that, although written coherently, contains incorrect, biased or outright false data.

The coronation of Charles III was scheduled for May 6, but for some reason ChatGPT concluded that it would take place on May 19.

The system warns that it can only generate answers based on information available on the internet up to September 2021, so it can run into this kind of problem when answering a query.

“GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts,” OpenAI explained when it released the GPT-4 version of the chatbot in March.

But the phenomenon is not exclusive to OpenAI’s system. It also appears in Google’s chatbot, Bard, and in other similar AI systems that have recently been released to the public.

A few days ago, journalists from The New York Times put ChatGPT to the test by asking about the first time the newspaper published an article on AI. The chatbot offered several answers, some of them built on wrong data, or “hallucinations”.

“Chatbots are powered by a technology called a large language model, or LLM, which learns its skills by analyzing massive amounts of digital text pulled from the internet,” the authors explained in the Times article.

“By identifying patterns in that data, an LLM learns to do one thing in particular: guess the next word in a sequence of words. It acts as a powerful version of an autocomplete tool,” they continued.

But because the web is “full of false information, the technology learns to repeat the same falsehoods,” they warned. “And sometimes chatbots make things up.”
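To make the “autocomplete” analogy concrete, here is a minimal sketch in Python of next-word prediction. It counts, in a tiny toy corpus, which word tends to follow which (a so-called bigram model) and then “autocompletes” a prompt by repeatedly appending the most likely next word. Real LLMs use neural networks trained on vastly more text, but the core task of guessing the next word is the same; the corpus, the `autocomplete` function and all details below are illustrative inventions, not part of any real chatbot.

```python
from collections import Counter, defaultdict

# Toy training corpus. Note that if the data contains a falsehood
# (here, a wrong date), the model will reproduce it just as fluently
# as the truth -- it has no notion of what is actually correct.
corpus = [
    "the coronation took place at westminster abbey",
    "the coronation took place on may 19",  # wrong date in the data
    "the abbey is in london",
]

# For every word, count which words follow it (a bigram model).
next_words = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_words[current][following] += 1

def autocomplete(prompt: str, max_words: int = 8) -> str:
    """Repeatedly append the most likely next word, like autocomplete."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = next_words.get(words[-1])
        if not candidates:  # no known continuation: stop
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the coronation"))
# Prints: "the coronation took place at westminster abbey is in london"
# -- fragments of real sentences stitched into fluent-looking nonsense.
```

Even this toy model shows the failure mode the Times journalists describe: the output reads smoothly because every word pair was seen in training, yet the sentence as a whole asserts something nobody ever wrote.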

OpenAI says it has worked on “a variety of methods” to prevent “hallucinations” from appearing in responses to users. (Photo: GETTY IMAGES)

Take it with a grain of salt

Generative AI and reinforcement learning algorithms can process a huge amount of information from the internet in seconds and generate new text that is almost always very coherent and impeccably written, but which should be treated with caution, experts warn.

Both Google and OpenAI have asked users to keep this consideration in mind.

In the case of OpenAI, which has a partnership with Microsoft and its Bing search engine, the company points out that “GPT-4 has a tendency to ‘hallucinate’, that is, to ‘produce nonsensical or untruthful content in relation to certain sources’”.

“This tendency can be particularly harmful as models become increasingly convincing and believable, leading users to over-trust them,” the firm noted in a document accompanying the launch of the new version of the chatbot.

Users, therefore, should not blindly trust the answers it offers, especially in areas that involve important aspects of their lives, such as medical or legal advice.

OpenAI notes that it has worked on “a variety of methods” to prevent “hallucinations” from slipping into responses to users, including evaluations by real people to guard against false data, racial or gender bias, and the spread of fake news.



