ChatGPT: Creators expose dark secret

You probably use – or at least have heard of – ChatGPT. The artificial intelligence tool has boomed in recent months and has become a handy helper for answering questions or even handling more complex jobs.

However, the system hides a dark secret – and its creators know it. Not only do they know it, they are taking responsibility for it.

According to OpenAI, the company that created the tool, ChatGPT suffers from “hallucinations” – that is, it sometimes gives false information.

The company disclosed the problem last Wednesday, May 31, and at the same time announced a newer, more powerful method for training the AI.

Why is it important to fix this flaw in ChatGPT?

The basic explanation is that people use the tool constantly in their daily lives. Therefore, it is imperative that ChatGPT provide reliable information to those who use it.

For example, if you use the AI to improve the text of a school assignment, or to look up specific recipes, it is essential that the information it gives you is correct.

More than that, with the growing number of chatbot users, false information can spread easily as if it were the truth.

And this could cause a huge headache next year, when the United States elections get underway – and the effects could spill over here in Brazil, too!

“Even state-of-the-art models have a tendency to produce falsehoods — they exhibit a tendency to make up facts in moments of uncertainty. These hallucinations are particularly problematic in domains that require multi-step reasoning, since a single logic error is enough to derail a much larger solution,” OpenAI said in a statement.
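
To see why, consider a toy example: in a multi-step solution, each step builds on the result of the one before it, so a single wrong step corrupts everything that follows. The sketch below is purely illustrative and is not taken from OpenAI.

```python
# A toy illustration (not from OpenAI) of how one bad step derails a
# multi-step solution: each step consumes the previous result, so an
# error in step 2 propagates to every step after it.

def solve(x):
    step1 = x * 2        # correct: compute 2x
    step2 = step1 + 3    # hallucinated step: should have been "- 3"
    step3 = step2 ** 2   # correct operation, but applied to a wrong input
    return step3

# Intended computation: (2 * 5 - 3) ** 2 = 49
# Derailed computation: (2 * 5 + 3) ** 2 = 169
print(solve(5))  # -> 169, far from the intended 49
```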

What can be done to fix it?

OpenAI reported that it will use a new strategy to combat ChatGPT’s “hallucinations”: training AI models to be rewarded for every correct reasoning step, instead of only being rewarded at the end for the complete answer.
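
OpenAI’s research refers to this step-by-step approach as “process supervision,” in contrast to the older “outcome supervision.” Here is a minimal sketch of how the two reward signals differ – my own illustration with invented function names, not OpenAI’s code:

```python
# A minimal sketch of the two reward schemes. The function names are
# invented for illustration; this is not OpenAI's implementation.

def outcome_reward(final_answer_correct: bool) -> float:
    """Outcome supervision: a single reward for the whole solution."""
    return 1.0 if final_answer_correct else 0.0

def process_reward(step_labels: list[bool]) -> list[float]:
    """Process supervision: one reward per reasoning step.

    step_labels marks each individual step as correct or not, so the
    model gets feedback pinpointing exactly where it went wrong.
    """
    return [1.0 if ok else 0.0 for ok in step_labels]

# Example: a four-step solution whose third step contains a logic error.
labels = [True, True, False, True]

print(outcome_reward(final_answer_correct=False))  # 0.0 (no hint of where it failed)
print(process_reward(labels))                      # [1.0, 1.0, 0.0, 1.0] (flags step 3)
```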

“Detecting and mitigating a model’s logical errors or hallucinations is a critical step towards building an overall aligned AI. The motivation behind this research is to address hallucinations in order to make models more capable of solving complex reasoning problems,” explained Karl Cobbe, a mathematics researcher at the company.
