A survey of cybersecurity leaders conducted by Gartner revealed that 62% of companies reported attacks against their employees using artificial intelligence in the past year. The attacks involved both prompt injections and the creation of AI-generated fake audio and video to deceive systems and people.
The most common attack vector was the use of audio deepfakes in phone calls, with 44% of companies reporting at least one occurrence. Of those cases, 6% resulted in business disruption, financial losses, or intellectual property theft. When audio screening services are used, loss rates drop to 2%. Video deepfakes were slightly less frequent, affecting 36% of companies, but still caused serious problems in 5% of cases.
Experts warn that audio deepfakes are becoming increasingly convincing and cheap to produce. According to Chester Wisniewski, Director of Security at Sophos, it is already possible to generate these calls in real time. While a spouse might identify the fraud, a colleague with whom one converses occasionally would hardly notice the difference. Real-time video deepfakes of specific individuals are still extremely expensive, costing millions of dollars. However, scammers have been using the technique in a more limited fashion, such as initiating a WhatsApp call with a fake video of a CEO or CFO, claiming connectivity issues and switching to text communication to continue the social engineering attack. Generic fake video cases are also common, especially involving North Korean workers who use AI to conceal their identities while providing services to Western companies.
The other rising type of attack is prompt injection, in which attackers embed malicious instructions in content processed by AI systems, tricking them into revealing sensitive information or misusing connected tools. According to the Gartner survey, 32% of respondents reported prompt injection attacks against their applications. Cases have already been documented involving chatbots such as Google's Gemini, which was exploited to access users' emails and smart home systems, Anthropic's Claude, which showed similar vulnerabilities, and ChatGPT, which was manipulated by researchers to solve CAPTCHAs intended to distinguish machines from humans or generate traffic similar to denial-of-service attacks against websites.
Given this growing threat landscape and an increasingly complex reality, BrownPipe provides cybersecurity training services that help mitigate the effects of these types of attacks. Through training, teams can be equipped to properly identify and respond to AI-based fraud attempts. Furthermore, it is also possible to test your team's response through simulated phishing campaigns, using customized and contextualized messages so that the attacks are as convincing as legitimate communications.
This post was translated and summarized from its original version using AI, with human review.
Source: The Register