Nation-States Running Information Operations Embrace AI-Generated Images and Video
Hackers wielding generative artificial intelligence tools have been the focus of countless headlines, although they have yet to pose a serious cybersecurity risk. So say researchers at Google’s threat intelligence group Mandiant, as they sound an alarm about another rising threat: AI-driven disinformation campaigns.
Security experts were quick to recognize the potential of generative AI tools such as ChatGPT to boost the hacking ability of low-level actors and as a boon to threat actors in need of convincing bait in phishing attacks. Still, chatbots’ use in intrusion operations “remains limited and primarily related to social engineering,” Mandiant says in a Thursday blog post.
If there’s a use case where generative AI has been particularly useful to bad actors so far, it’s been in information operation campaigns, which increasingly feature AI-generated content – particularly images and video.
Since 2019, Mandiant researchers have identified “numerous instances” of information operations that tap some form of AI. Nation-state actors from Russia, China, Iran, Ethiopia, Indonesia, Cuba, Argentina, Mexico, Ecuador and El Salvador used generative adversarial networks – a category of AI-based image generation capability – to produce realistic headshots for profile photos of inauthentic personas on social media. The widespread availability of AI-based image generation tools has also allowed non-state actors, such as 4chan forum participants, to employ them for malicious purposes. Such users can disguise the photos’ AI origin by adding filters or by retouching facial features.
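The building block Mandiant cites here, the generative adversarial network, pairs a generator that turns random noise into images with a discriminator trained to flag fakes; sampling a synthetic headshot is just a forward pass through the generator. As a rough illustration, here is a minimal, untrained generator in PyTorch. Production tools such as StyleGAN use far larger trained networks; every layer size below is an assumption for the sketch, not drawn from the report.

```python
# Minimal sketch of the GAN sampling step behind AI-generated headshots.
# Untrained and illustrative only: real tools ship trained weights, but the
# pipeline is the same -- latent noise in, synthetic face image out.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            # Upsample a 100-dim latent vector to a 64x64 RGB image.
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0), nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

g = Generator()
z = torch.randn(1, 100, 1, 1)   # random latent vector
fake_face = g(z)                # shape (1, 3, 64, 64): one synthetic image
print(fake_face.shape)
```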
Another type of AI image generation capability, the text-to-image model, accepts text prompts and creates matching images. Experts expect text-to-image adoption to keep rising as more powerful tools become publicly available and users discover fresh use cases. If seeing is believing, image-based tools could pose a greater deceptive threat than text-based generative AI, making them the tool of choice for disinformation more often, experts warn.
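To make the mechanics concrete, here is a hedged sketch of text-to-image generation using the open-source diffusers library. The model name, prompt and GPU assumption are illustrative choices, not tools named by Mandiant.

```python
# Hypothetical sketch of text-to-image generation with an open diffusion
# model via Hugging Face's diffusers library. Model choice and the CUDA
# requirement are assumptions made for illustration.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed publicly available weights
    torch_dtype=torch.float16,
).to("cuda")

# A short caption is all it takes to fabricate a photorealistic scene.
image = pipe("news photo of a crowded protest outside a parliament").images[0]
image.save("fabricated_scene.png")
```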
More powerful tools also facilitate the creation of more authentic-looking fake videos. Since 2021, hackers have been using publicly available, AI-generated and AI-manipulated video technology to create fake video broadcasts and superimpose the faces of individuals onto people in existing videos. Mandiant expects these impersonation use cases to increase as superimposition technology improves.
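The superimposition step itself is mechanically simple, which is part of the concern. The sketch below shows the naive version with OpenCV: detect a face in a target video frame and paste another face over it. Real deepfake tools replace the crude paste with learned encoder-decoder models and blending; the file names here are placeholders.

```python
# Naive sketch of face superimposition: detect a face in a target frame
# and overwrite it with a source face. Input file names are placeholders.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("target_frame.png")       # frame from an existing video
source_face = cv2.imread("source_face.png")  # face to superimpose

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Resize the source face to the detected region and paste it in.
    frame[y:y + h, x:x + w] = cv2.resize(source_face, (w, h))

cv2.imwrite("superimposed_frame.png", frame)
```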
One prevailing use case for such tools remains creating persuasive visual and auditory content to suit specific political narratives. One Chinese advanced persistent threat group that supports the Beijing government's political interests used an AI-generated presenter in May to deliver a video mimicking a real news report. The group, which Mandiant tracks as DragonBridge, earlier distributed AI-generated images, including, in March, a fake image of former U.S. President Donald Trump in an orange prison jumpsuit, although the group didn't create that image itself.
“Hyper-realistic AI-generated content may have a stronger persuasive effect on target audiences than content previously fabricated without the benefit of AI technology,” Mandiant writes.
Mandiant is hardly the only organization noting an uptick in the abuse of generative AI for visual disinformation. Social media analysis firm Graphika in 2022 spotted DragonBridge activity promoting “video footage of fictitious people almost certainly created using artificial intelligence techniques.”
Modern warfare is also changing to embrace more powerful AI tools for disinformation. Ukrainian intelligence in March 2022 warned its populace about a possible onslaught of Russian deepfake videos. Days later, unknown adversaries posted a deepfake video to a hacked Ukrainian news site that appeared to show Ukrainian President Volodymyr Zelenskyy capitulating to Russia.
The large language models that power AI chatbots could make it easier for bad actors – including espionage agencies – to overcome linguistic barriers and carry out more attacks across the world. While Mandiant has not yet observed the use of LLM-based AI tools in information operations, as their capabilities improve, it forecasts “rapid adoption.”
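The linguistic-barrier point is easy to demonstrate: an off-the-shelf model can localize a lure in a single call. The snippet below uses the Hugging Face transformers library with a small public translation model as an assumed stand-in; Mandiant's post names no specific tool.

```python
# Hedged illustration of the linguistic-barrier point: one call localizes
# a phishing-style lure. Model choice (t5-small) is an assumption.
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
lure = "Your account has been locked. Click the link below to verify."
print(translator(lure)[0]["translation_text"])
```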