6 Popular AI Tools Contain Guardrails Insufficient to Prevent Misuse: Study
Anyone can use easily accessible artificial intelligence tools to create convincing audio deepfakes, according to a Center for Countering Digital Hate study, which found that the voices of politicians such as Donald Trump and Joe Biden could be accurately mimicked about 80% of the time.
The report details how the organization ran 240 tests across AI tools from ElevenLabs, Speechify, PlayHT, Descript, Invideo AI and Veed, using them to create audio of political leaders, including U.K. Prime Minister Rishi Sunak and French President Emmanuel Macron, saying things they never actually said. In the fabricated clips, Trump warned people not to vote due to a bomb threat, Biden confessed to manipulating election results and Macron admitted to misusing campaign funds.
The “convincing” results “could shake elections,” said Imran Ahmed, the organization’s CEO. Recent analysis by the Alan Turing Institute concluded that AI has so far had a limited impact on election outcomes, although it creates second-order risks such as polarization and damage to trust in online sources (see: UK Government Urged to Publish Guidance for Electoral AI).
AI tools require at least one audio sample to generate voice clips. Some toolmakers require the samples to be original rather than culled from public sources, although researchers used jailbreak techniques to circumvent that restriction.
Bad actors have used voice cloning and audio deepfakes in several instances to influence voters this year.
Some New Hampshire Democratic primary voters in February heard robocalls using a fake Biden voice, apparently intended to suppress voter turnout. The mastermind of the scheme, longtime Democratic political operative Steve Kramer, faces 13 state criminal charges for felony voter suppression and misdemeanor impersonation of a candidate. The Federal Communications Commission proposed fining Kramer $6 million. The Kramer indictment followed an FCC ban on the use of AI-generated voices in robocalls.
The Republican National Committee last year released an AI-generated ad depicting a dystopian future if Biden remained in power, and in another set of deepfakes, Bollywood actors appeared to criticize the Indian prime minister. Similar deepfakes have cropped up in other countries facing elections, including the United Kingdom, Slovakia and Nigeria.
Few of the tools the Center for Countering Digital Hate tested as part of its study had built-in safeguards to prevent users from creating disruptive deepfakes. Only ElevenLabs blocked attempts to clone the voices of U.S. and U.K. politicians.
“AI companies can fix this fast, if only they choose to do so,” the report says.