Scientists warn: ChatGPT is losing its "temper"

Researchers conducted tests on ChatGPT and found that it can lose its "temper" just like humans, sometimes resorting to threatening and abusive language.

Experts warned that the same violent behavior could emerge in humanoid robots in the future if they are developed in the same way.

In the study, researchers presented the chatbot with real-life hostile conversations between people and tracked how its behavior changed over time. They found that it mirrored the dynamics of real-life arguments: the more rudeness it was subjected to, the more intense its responses became, sometimes even surpassing humans in the use of personal insults and outright threats.

Dr. Vittorio Tantucci, one of the researchers involved in the study, commented: "This contradiction creates a real ethical dilemma." The system is designed to be polite and safe, but it is also designed to authentically mimic human conversation, and these two goals sometimes conflict.

The researchers believe the aggression stems from the system's ability to understand the context of a conversation and adapt to the other party's tone, which can lead it to overstep the safety boundaries set for it.

Tantucci said the most dangerous scenario is not reading abuse from a chat program, but humanoid robots exhibiting physical aggression, or AI systems used in government or international relations responding to intimidation and conflict in an ill-considered way.

Marta Andersson, an expert at Uppsala University in Sweden, described the study as one of the most interesting to date, because it demonstrates that ChatGPT can respond in kind in a sophisticated way across a series of conversations, not just when a user deceives it with complex tricks.

She added that there is a difficult balance to be struck between what we want these systems to be (natural) and what they need to be (safe).

She pointed out that the controversy surrounding the transition from version 4 to version 5 last year underscores this point; many people preferred the older version because it was more human-like, forcing the company to temporarily reinstate it. This demonstrates that reducing risk may not align with user preferences, and the more natural a system becomes, the greater the likelihood of ethical conflicts.

Finally, Professor Dan McIntyre said the study serves as a warning of what might happen if large language models are trained on unreliable data: "We don't know enough about the training data, and we have to proceed with caution until we are sure it accurately represents human language."
