OpenAI explains the causes of AI chatbot hallucinations and solutions to overcome them

OpenAI is said to have discovered the cause of hallucinations in artificial intelligence (AI) chatbots and is offering a solution to make the technology more trustworthy.

Despite their widespread use in various fields, AI chatbots still have a serious weakness: their tendency to provide false answers that appear convincing, often referred to as hallucinations.

As reported by Gizmochina on Friday, in a 36-page paper co-authored with Georgia Tech researcher Santosh Vempala, OpenAI concluded that hallucinations are not solely the result of poor model design, but also of the way AI systems are tested and ranked.

Current benchmarks, they argue, effectively encourage chatbots to answer every question, even when they are wrong, and penalize models that hold back when unsure. It is like a multiple-choice exam that rewards guessing over leaving an answer blank.
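The exam analogy can be made concrete with a little expected-value arithmetic. The sketch below is a hypothetical illustration (the numbers and scoring rules are assumptions for exposition, not taken from the paper): under accuracy-only grading, a model that always guesses outscores one that abstains when unsure, while a scheme that penalizes wrong answers flips that incentive.

```python
def expected_score(p_correct: float, wrong_penalty: float,
                   abstain_score: float, guess: bool) -> float:
    """Expected score on one question the model is unsure about.

    A correct answer scores 1.0; a wrong answer scores wrong_penalty;
    declining to answer scores abstain_score.
    """
    if guess:
        return p_correct * 1.0 + (1 - p_correct) * wrong_penalty
    return abstain_score

# Assume a blind guess on a 4-option question is right 25% of the time.
p = 0.25

# Accuracy-only benchmark: wrong answers and abstentions both score 0,
# so guessing (0.25 expected) always beats abstaining (0.0).
print(expected_score(p, wrong_penalty=0.0, abstain_score=0.0, guess=True))   # 0.25
print(expected_score(p, wrong_penalty=0.0, abstain_score=0.0, guess=False))  # 0.0

# Grading that penalizes confident errors (wrong answer costs -1):
# now guessing has negative expected value, so abstaining wins.
print(expected_score(p, wrong_penalty=-1.0, abstain_score=0.0, guess=True))  # -0.5
print(expected_score(p, wrong_penalty=-1.0, abstain_score=0.0, guess=False)) # 0.0
```

The second scheme mirrors the paper's point: once confident errors cost more than an honest "I don't know," a model is no longer rewarded for bluffing.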
