OpenAI says it has identified a key cause of hallucinations in artificial intelligence (AI) chatbots and is proposing a fix to make the technology more trustworthy.
Despite their widespread use across many fields, AI chatbots still have a serious weakness: a tendency to produce false answers that sound convincing, a phenomenon known as hallucination.
As reported by Gizmochina on Friday, in a 36-page paper co-authored with Georgia Tech researcher Santosh Vempala, OpenAI argues that hallucinations stem not only from flaws in the models themselves but also from the way AI systems are tested and ranked.
Current benchmarks, they argue, effectively encourage chatbots to answer every question, even when they are wrong, and penalize models that hold back when unsure. It is like a multiple-choice exam that rewards guessing over leaving answers blank.
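The arithmetic behind that incentive is simple. Here is a minimal sketch, assuming a benchmark that awards 1 point for a correct answer and 0 for anything else, with abstaining ("I don't know") also scoring 0; the function name and numbers are illustrative, not from the paper:

```python
# Sketch of the incentive described above, under an assumed 0/1 grading
# scheme: correct answers score 1; wrong answers and abstentions score 0.

def expected_score(p_correct: float, answers: bool) -> float:
    """Expected benchmark score for one question.

    p_correct: the model's chance of being right if it answers.
    answers: True if the model guesses, False if it abstains.
    """
    if answers:
        # 1 * p_correct + 0 * (1 - p_correct)
        return p_correct
    # Abstaining earns nothing under 0/1 grading.
    return 0.0

# Even a long-shot guess beats abstaining on expected score,
# so a model tuned to maximize the benchmark learns to always answer.
print(expected_score(0.1, answers=True))   # 0.1
print(expected_score(0.1, answers=False))  # 0.0
```

Under this kind of scoring, a guess with any nonzero chance of being right always outscores an honest "I don't know" on average, which is the dynamic the researchers say pushes models toward confident fabrication.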