OpenAI begins routinely disclosing results of AI model safety evaluations

OpenAI has begun routinely publishing the results of its internal AI model safety evaluations, in an effort to increase transparency in the development of artificial intelligence (AI) technology.

As reported by TechCrunch on Thursday, the company launched the Safety Evaluations Hub, a dedicated page that displays how OpenAI's models score on various tests, including their tendency to generate harmful content, their vulnerability to jailbreaks, and their rate of hallucination.

OpenAI says it will continue to update the page regularly, particularly when it releases major updates to its models.
