A report issued by Google warns of the growing role of artificial intelligence in cyberattacks, after the company detected an attempt to exploit a previously unknown security vulnerability, likely as part of an attack assisted by these technologies.
According to Google, threat actors attempted to exploit a vulnerability capable of bypassing two-factor authentication in a popular online services management tool. Working with the affected vendor, Google discovered and closed the flaw before it could be used in a large-scale attack campaign.
This warning is based on a report by Google’s Threat Intelligence Group (GTIG), which tracks the expanding use of generative artificial intelligence tools by cybercriminals and state-sponsored actors in multiple areas, including malware development, vulnerability discovery, and the execution of phishing campaigns and automated attacks.
Google explains that the targeted vulnerability was not a traditional programming error but a "semantic logic flaw" at the system design level, a type of vulnerability that is harder to detect than typical technical bugs. The company believes modern AI models are becoming more adept at identifying such flaws because they can reason about the overall context of the software.
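To make the distinction concrete, the sketch below shows a hypothetical semantic logic flaw of the kind described: the code is syntactically valid and memory-safe, so traditional scanners see nothing wrong, yet the design mistakenly trusts client-supplied state when verifying a second factor. All names and the flaw itself are illustrative assumptions; this is not the vulnerability Google identified.

```python
# Hypothetical login flow with a design-level (semantic) flaw.
# Names and logic are illustrative, not from Google's report.

def login(username, password, request, users):
    """Authenticate a user with a password plus a one-time code."""
    user = users.get(username)
    if user is None or user["password"] != password:
        return "denied"
    # FLAW: the server honors a flag asserted by the client instead of
    # checking the one-time code against its own server-side state.
    if request.get("mfa_verified") is True:
        return "granted"
    # Intended path: verify the one-time code the server issued.
    if request.get("otp") == user["otp"]:
        return "granted"
    return "denied"

users = {"alice": {"password": "s3cret", "otp": "123456"}}

# Legitimate flow: correct password plus the real one-time code.
print(login("alice", "s3cret", {"otp": "123456"}, users))        # granted

# Bypass: an attacker with stolen credentials simply asserts the flag,
# skipping two-factor verification entirely.
print(login("alice", "s3cret", {"mfa_verified": True}, users))   # granted
```

Every line here type-checks and runs without error, which is precisely why such flaws evade error-focused tooling: finding them requires understanding what the authentication logic is supposed to guarantee, not just how it executes.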
The report also stated that several indicators in the exploit code suggest it was likely generated with artificial intelligence, including unusually tutorial-like documentation, misleading CVSS severity assessments, and a structured coding style resembling the data used to train machine learning models.
The report also indicates that the attacking group planned to use the vulnerability as part of a wider campaign after obtaining login credentials, since it would have allowed them to bypass two-factor authentication and gain unauthorized access to accounts.
In this context, GTIG reported that technical analysis strongly suggests an AI model was used to find the vulnerability and develop a way to exploit it, although the use of Google's own tools, such as Gemini, has not been confirmed.
The report warns of a more dangerous development: increasingly autonomous malware, including an Android-based program called PROMPTSPY, which is believed to use artificial intelligence interfaces to analyze the phone screen and perform actions such as tapping, scrolling, and entering authentication codes almost automatically.
For its part, Google says it is developing defensive artificial intelligence tools such as Big Sleep and CodeMender, aimed at automatically detecting and fixing security vulnerabilities before they are exploited, in an effort to keep up with the accelerating pace of digital threats.
