Superhuman ambitions: Does OpenAI's policy represent a danger to humanity?

OpenAI has recently occupied an exceptional position in the tech community: its products are arguably the most famous in the field of generative artificial intelligence today, especially after the launch of its hugely popular chatbot ChatGPT at the end of November 2022.

The approaching deal with Apple to bring ChatGPT to the iPhone places the startup between the two largest companies in the world today, Microsoft and Apple. Through Microsoft, it gains access to enterprise customers and enormous resources; through Apple, it reaches the vast base of users of devices such as the iPhone.

But what really distinguishes OpenAI from other companies is the scale of its ambition. It is not satisfied with its achievements so far; it openly states that it is trying to develop artificial general intelligence (AGI), a system that could perform tasks at human levels of reasoning, or perhaps exceed them in many areas. A company seeking to build such supermodels in the near future is supposed to have a clear safety and security policy.

Dissolving the Risk Team
In July of last year, OpenAI announced a new research team to prepare for the development of "superintelligent" AI that could outsmart its creators.

The company chose Ilya Sutskever, its chief scientist and one of its co-founders, to co-lead the new team with Jan Leike, and OpenAI stated at the time that the team would receive 20% of its computing power over four years.

The team's primary task was to focus on "scientific and technical breakthroughs to steer and control AI systems much smarter than us."

Several months ago, OpenAI had already nearly lost the employees most deeply invested in ensuring the safety of the company's AI systems.

Now the company has let them go altogether: management, led by CEO Sam Altman, decided to disband the team focused on the long-term risks of artificial intelligence one year after announcing its formation, as a person familiar with the situation confirmed to CNBC several days ago.

The person, who spoke on condition of anonymity, said that some team members had been reassigned to other teams within the company.

A few days before this news, the team's leads, Ilya Sutskever and Jan Leike, announced their departure from the startup. Sutskever did not reveal his reasons for leaving, but Leike explained some of the details of why he left the company, writing on his account on the X platform: "Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products."

If this were a Hollywood movie, we might imagine that the company had discovered a dangerous secret in an artificial intelligence system, or developed a supermodel on the verge of destroying humanity, and disbanded the AI risk team to bury the evidence. But we are not in a Hollywood movie, and the explanation may ultimately come down to Sam Altman himself, and the extent of the power he has imposed on the company in recent months.

Several sources within the company indicated that these employees had lost faith in Sam Altman, the company's CEO, and in his style of leadership. Leike pointed to this in his X post explaining why he resigned: "I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point."

To understand the reasons for what happened, we have to go back in time a little, specifically to last November, when Ilya Sutskever, in cooperation with the company's board of directors, tried to fire Sam Altman himself. At the time, the participants in this coup stated that Altman "was not consistently candid in his communications with the board," meaning that they did not trust him, so they decided to act quickly and remove him.

The Wall Street Journal and other media outlets reported that Sutskever's focus was on ensuring that AI did not harm humans, while others, including Altman, were keener to push ahead with developing the new technology.

His ouster sparked a wave of resignations and threats of resignation, including an open letter signed by almost all of the company's employees, and an uproar from investors, including Microsoft. Altman and his ally Greg Brockman, the company's president and co-founder, threatened to take the company's best talent and move to Microsoft, effectively destroying OpenAI, if Altman was not reinstated.

Within a week, Altman returned to his position at OpenAI victorious, while the board members who had voted to oust him, Helen Toner, Tasha McCauley, and Ilya Sutskever, were out.

Altman came back stronger than before, with new board members who were more supportive of him, and with greater freedom to run the company.

Altman's reaction to his dismissal may have revealed something about his personality: his threat to gut OpenAI unless the board reappointed him, and his insistence on installing new board members who lean in his favor, show his determination to hold on to power and to avoid any future oversight or accountability for what he does.

Some former employees have even described him as a con man who says one thing and does the opposite. For example, Altman claims that he wants to prioritize safety, yet contradicts himself in practice by racing to develop artificial intelligence technologies at a breakneck pace.


For example, Altman has been touring the Middle East in recent months, raising huge sums from Saudi Arabia and the UAE to establish a new company to manufacture chips for AI models, which would give him a vast supply of the resources needed to develop general or superhuman AI. This is precisely what has been worrying safety-conscious employees within the company.

Getting rid of the long-term AI risk team is just further confirmation of the company's, and its CEO's, policy of developing the most powerful systems at any cost. There is no longer any need to give the safety team 20% of the company's computing power, the most important resource of any AI company, when it can be redirected toward other development work.

This can easily be deduced from what Jan Leike noted after his resignation: "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done."


In the end, this is what Jan Leike has already confirmed: that safety culture and processes within the company have taken a backseat to "shiny products." Of course, we do not know what the future holds, and we cannot predict whether the company will succeed or fail in developing artificial general intelligence. What should worry us is the fate of the safety team that OpenAI got rid of, which suggests, quite simply, that the company seeking to develop this superhuman artificial intelligence is interested only in maximizing its gains, whatever the warnings and the risks!
