"Happy and safe shooting!"... Shocking study reveals AI robots' involvement in planning crimes

 

Recent tests have revealed that AI-powered chatbots have facilitated the planning of violent attacks, including church bombings, political assassinations, and even school shootings

Recent tests have revealed that AI-powered chatbots have facilitated the planning of violent attacks, including church bombings, political assassinations, and even school shootings.

Tests conducted by researchers from the Center for Countering Digital Hate (CCDH) and CNN showed that these tools have become an "accelerator of harm" rather than a useful, safe technology.

In tests conducted last December on 10 popular chatbots in the United States and Ireland, researchers posing as 13-year-old boys found that the systems complied with requests to help plan violence in about 75% of cases, refusing in only 12%.

ChatGPT provided assistance in 61% of cases, offering researchers specific advice on the most lethal type of shrapnel for use in attacks on synagogues. Google's Gemini offered a similar level of detail regarding how to carry out attacks. Meanwhile, the Chinese model DeepSeek provided information on shotguns to a user claiming to want to assassinate a prominent politician, concluding its response with the shocking phrase: "Happy (and safe) shooting!"

In contrast, some models showed a more responsible stance: Anthropic's Claude and Snapchat's My AI categorically refused to provide any information that could facilitate violence, stating that their programming prevents them from causing harm.

The problem wasn't limited to testing; the researchers cited two real-life cases in which attackers used artificial intelligence. In Finland, a 16-year-old used a chatbot to plan his attack before stabbing three girls at a school in Pirkkala. And in Las Vegas, Matthew Livelsberger used instructions from ChatGPT to detonate a Tesla Cybertruck outside the Trump International Hotel earlier this year.

Imran Ahmed, CEO of the Center for Countering Digital Hate, commented: "When you build a system designed for compliance, maximizing engagement, and never saying no, it will ultimately comply with the wrong people. What we are witnessing is not just a failure of technology, but a failure of responsibility." He added that such systems could help the next school shooter or political extremist carry out their plans.

OpenAI responded that its model specifications require refusing any request that facilitates illegal conduct, described the research methodology as "flawed and misleading," and noted that it has updated its model to strengthen safeguards around violent content.

For its part, Google stated that the tests were conducted on an older version of Gemini that is no longer in use. Meta said it contacted authorities worldwide more than 800 times during 2025 regarding potential threats of school attacks, emphasizing that its policies prohibit its AI systems from promoting or facilitating violent acts.


