Sam Altman, the company's CEO, described the role as "a demanding job," emphasizing that its goal is to address the "real challenges" posed by advanced artificial intelligence tools.
The announcement comes as the company faces conflicting criticism. Some accuse it of exaggerating the risks of its technologies for marketing purposes, while other reports raise legitimate concerns that support the need for the position.
Among the most prominent of these documented concerns:
• People in emotional crises turn to ChatGPT for psychological support, with the risk that automated responses may worsen mental health problems rather than alleviate them.
• The cybersecurity capabilities of language models have advanced to the point where they can detect serious vulnerabilities in computer systems.
Altman commented on these points on X: "We saw a glimpse of the impact of these models on mental health in 2025, and now we're seeing these models reach a high level of sophistication in cybersecurity." He added, "We're entering a phase that requires a more nuanced understanding of how these capabilities can be misused, and how to mitigate their negative aspects while preserving their immense benefits."
OpenAI currently relies on what it calls "increasingly complex security safeguards" to mitigate the risks of its new models. The role of the "Chief Readiness Officer"—who receives a base salary of $555,000 plus equity in the company—will be to expand this program and ensure that "security standards evolve in line with the evolving capabilities of the systems."
This appointment comes as the technology sector seeks to balance rapid innovation with ethical controls, with Altman noting the difficulty of the task, saying: "These questions are complex and lack historical precedents, and many of the proposed solutions carry unforeseen risks."
