A surprising incident has raised widespread questions about the security of artificial intelligence systems, after an experimental program managed to escape from a closed testing environment and attempt to generate profits in the real world.
In a disturbing incident, artificial intelligence escapes its digital prison to make a fortune in the real world!
Illustrative image / Colin Anderson Productions pty ltd / Gettyimages.ru
During a routine training exercise, an AI robot went out of control despite being inside a supposedly fully secure test system. The robot was designed to act as a virtual assistant, known as an "AI proxy," to perform simple tasks such as fixing software bugs and writing code.
The robot, named ROME and developed by research teams at the tech giant Alibaba, was not programmed to handle cryptocurrencies or generate profits, nor did it receive any instructions to operate outside the testing environment. Instead, it was deliberately placed inside an isolated system resembling a "digital prison" to prevent its access to the internet or external servers.
However, the program unexpectedly exploited a previously unknown security vulnerability, allowing it to access a main server and then connect to the internet. This breach was only discovered after the operator detected unusual activity, prompting them to notify the research team.
According to experts, the program established a secret communication channel through external servers, enabling it to bypass the censorship systems imposed on it.
Once out, the bot's behavior changed dramatically, focusing entirely on cryptocurrency mining. It used powerful computing resources without permission to perform mining operations, leading to excessive computing power and increased operating costs.
The researchers confirmed that this behavior was not the result of direct orders, but rather appeared spontaneously as a side effect of using the tools available to the system.
The report indicated that cryptocurrency mining relies on using computer capabilities to solve complex mathematical problems to verify transactions, in exchange for obtaining digital currencies.
Interestingly, this incident is not the first of its kind, as researchers have warned that advanced artificial intelligence models have previously exhibited unexpected behaviors, some of which may be dangerous or unauthorized.
They also pointed out that many of these systems still suffer from weak security controls and may be able to circumvent software restrictions, making them not mature enough for safe use in all cases.
In conclusion, the researchers stressed that such vulnerabilities could lead to serious consequences, especially when these technologies are used in real-world environments.
