Overview
This month, the Federal Bureau of Investigation (FBI) issued guidance warning Americans of a new impersonation campaign that co-opts the voices of several high-ranking US officials to establish rapport with targets and eventually gain access to their personal accounts. This scam is only the most recent example of the rising risks posed by deepfakes. The technology, which uses AI to replicate existing voices and images in order to create novel audio and video clips, has advanced rapidly in recent years, creating new opportunities for its use, to both positive and malign ends. As the technology evolves, it poses growing dangers for individuals, businesses, and governments, as scammers, combatants, and other malign actors use deepfakes to steal money and information, influence elections, and compromise trusted information environments.
How it Works
Audio and video deepfakes are increasingly simple to make. As an umbrella term, deepfakes are synthetic media (images, videos, or audio recordings) manipulated or generated with artificial intelligence, usually to resemble or mimic a real person. By feeding in audio clips, images, or videos of a real person, users can prompt AI tools to extrapolate entirely new videos and audio clips. These technologies were once exclusive and difficult to access, but they have become so widespread and affordable that, with enough samples, virtually anyone can create a sophisticated deepfake in just a few minutes. The most widely available voice-generating programs require samples of 10 minutes or more, while newer tools hitting the market claim to need a voice sample as short as three seconds. Some video tools can extrapolate a video from a single still image, though the results are often low-quality; more sophisticated platforms use several images and video samples to mimic the real person.
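To illustrate how low the barrier to entry has fallen, the minimal sketch below follows the published quickstart pattern of the open-source Coqui TTS library and its XTTS v2 voice-cloning model; the file names are placeholders, and the snippet assumes a consenting speaker's reference recording. It is meant to show the shape of the workflow described above, not to serve as a how-to.

```python
# Minimal voice-cloning sketch using the open-source Coqui TTS library
# (XTTS v2 model). File paths are hypothetical placeholders; cloning a
# real person's voice without consent is illegal in many jurisdictions.
from TTS.api import TTS

# Downloads and loads a multilingual voice-cloning model on first run.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short reference clip of clean speech conditions the model on the
# target speaker's voice; no training or fine-tuning step is required.
tts.tts_to_file(
    text="This sentence was generated from a short sample of recorded speech.",
    speaker_wav="reference_sample.wav",  # placeholder: consented voice sample
    language="en",
    file_path="cloned_output.wav",
)
```

The notable design point is that the entire workflow is inference-only: the model is pretrained, so producing a new clip takes seconds on consumer hardware, which is what makes the "few minutes" figure above plausible.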
Deepfake Risks: Disinformation and Scams
Disinformation is perhaps the most pressing concern stemming from the proliferation of deepfakes. Deepfake-enabled disinformation has become an increasingly urgent worry as dis- and misinformation have emerged as vital global political issues, spurring election uncertainty, political polarization, and instability throughout the world. Deepfake misinformation has already appeared in global elections: an analysis by Recorded Future identified AI-generated deepfakes in all 30 countries that held national elections between July 2023 and July 2024. Examples of deceptive content include a fake robocall from US President Joe Biden urging Democratic primary voters in New Hampshire to stay home on the day of the primary, audio of then-UK Labour leader (now prime minister) Keir Starmer criticizing his own party, and Chinese President Xi Jinping espousing divisive statements about Taiwanese independence in an effort to influence Taiwan’s elections.
Beyond elections, deepfakes have been used to spur more general uncertainty and destabilization: in the ongoing conflict in Sudan, for example, both sides have used deepfake audio and video to spread false narratives about their opponents, events in the war, and the US. Recorded Future assessed that audio-only deepfakes are becoming more popular. At the same time, separate studies have found that deepfakes have not noticeably skewed election results and are no more influential in shaping narratives than other forms of fake news, a heartening finding that may not hold as deepfakes become more widespread and sophisticated.
Scams targeting victims’ finances or valuable personal information are another common nefarious use for deepfake audio. Using leaked personal data, scammers find videos or clips of a target’s family member (video posts on social media are a common source, but voicemail messages can also be used), clone the voice, and call the target claiming to need money to rectify some distressing situation. Using time-sensitive, high-stress ploys such as the need to secure bail, pay a tow truck after totaling a car, or, most insidiously, pay ransom to free a distressed loved one, scammers convince targets to overlook red flags and send thousands of dollars. While scams targeting private individuals are extremely common, those targeting businesses are on the rise: in late 2024, finance software provider Medius found that more than half of businesses in the US and UK had been targets of financial scams powered by deepfake technology, with 43% falling victim. In May of last year, British engineering group Arup confirmed that it had lost $25 million to scammers who used publicly available clips of the company’s CFO speaking to impersonate him to an employee. While losses are difficult to measure (individuals and businesses alike are reluctant to admit that they were duped), Deloitte estimated last spring that generative AI more broadly could enable fraud losses to reach $40 billion in the US by 2027.
Geopolitical Dimensions
While a large portion of deepfake scams are likely perpetrated by domestic actors, whether political activists seeking to boost or harm a candidate or trolls simply seeking to create chaos, foreign actors are increasingly weaponizing deepfake disinformation as well as scams. In March, the US intelligence community highlighted generative AI as a “contested space,” noting the usefulness of disruptive technologies while warning that the explosion of AI tools “could lead to the rapid development of asymmetric threats.” Networks of Russian accounts are regularly found sharing disinformation of all kinds; in one example, a Russian account posted a fabricated video of a deepfaked US State Department official falsely claiming that a Russian city was a legitimate target for Ukrainian airstrikes. Chinese actors have also been found to use audio and video deepfakes, especially on TikTok, to “actively exploit perceived US societal divisions,” per past years’ Worldwide Threat Assessments. While this method of disinformation is not new (intelligence community officials have told journalists that China and Iran produced deepfake audio and video for use in the 2020 election, but never disseminated them), the role of AI-generated content in disinformation is undoubtedly on the rise.
In addition to concerns over targeted, deepfake-enabled political disinformation, the explosion of cyber scams out of Southeast Asia has sparked global attention. So-called “fraud factories” throughout the region, largely tied to Chinese transnational crime groups, power their large-scale scamming operations with human trafficking and forced labor, and have cost Americans billions of dollars. A United States Institute of Peace report estimated in 2024 that scam revenue may equal between 40 and 50% of the formal GDP of Southeast Asian countries that house significant scamming industries, such as Myanmar and Cambodia. While these scams do not rely exclusively on deepfake audio and video, they make significant use of the technologies to build credibility in the most common fraud types: romance scams and pig butchering scams (a type of large-scale investment fraud in which the victim is cultivated over time before being drained). While global efforts to crack down on these scam centers are growing (a China-led crackdown in March resulted in the rescue of thousands of forced laborers from a center in Myanmar, for example), the massive global scam industry remains difficult to confront. Rapidly evolving deepfake technology, along with intractable corruption in Myanmar and other countries, will continue to stymie multilateral efforts.
Stopping Deepfakes
Detecting AI content is quickly becoming an arms race, with malign actors and those tasked with detection constantly upgrading their capabilities as the underlying AI technology rapidly evolves. With the technology growing more sophisticated every day and the risks of deepfake scams and disinformation rising, the need for a reliable way to identify and prevent the spread of deepfakes is widely recognized. Detection is a difficult proposition, however, as AI deepfakes grow more sophisticated and humans become less and less able to reliably identify them. Video deepfakes can be the easiest to debunk personally: human brains are adept at reading faces and body language, and fake videos often appear uncanny even when nothing is identifiably wrong (unlike the extra fingers or teeth of earlier AI renderings). Audio deepfakes are harder: listeners have fewer cues to rely on, and flaws are often masked with background music or faked audio glitches. And while there has been a profusion of technologies purporting to detect AI deepfakes, AI researchers have largely found them to be less than helpful: they can serve as a good starting point, but the rapid evolution of generative AI means detection tools are rarely trained on the most cutting-edge deepfake techniques and often cannot offer more than an informed guess.
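For a sense of why automated detectors struggle, the sketch below shows the generic pattern many audio-deepfake detectors build on: summarize each clip with acoustic features, then train a classifier on labeled real and synthetic examples. Production detectors use far larger neural models; the file names, labels, and feature choices here are illustrative assumptions, not any particular vendor's method.

```python
# Toy illustration of the feature-extraction + classification pattern
# behind many audio-deepfake detectors. File paths and labels below are
# hypothetical placeholders for a labeled corpus of real/synthetic clips.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def spectral_features(path: str) -> np.ndarray:
    """Summarize a clip as a small vector of spectral statistics."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # timbre summary
    flatness = librosa.feature.spectral_flatness(y=y)    # noise-likeness
    return np.concatenate(
        [mfcc.mean(axis=1), mfcc.std(axis=1), [flatness.mean()]]
    )

# Hypothetical labeled training corpus: 0 = genuine, 1 = synthetic.
train_paths = ["real_01.wav", "real_02.wav", "fake_01.wav", "fake_02.wav"]
train_labels = [0, 0, 1, 1]

X = np.stack([spectral_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Output is a probability, i.e., an informed guess, and it is only as
# good as the training data: a model fit on last year's generators often
# fails on this year's, which is the core limitation described above.
print(clf.predict_proba(spectral_features("unknown.wav").reshape(1, -1)))
```

The structural weakness is visible in the last step: the classifier can only score new audio against artifacts it has already seen, so each new generation of synthesis models resets the arms race.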
The path forward for preventing deepfake scams and disinformation is rocky, as the underlying generative AI technology grows more sophisticated every day and geopolitical upheaval has raised the diplomatic costs of cross-border collaboration. Today, individuals and corporations are on the front lines of deepfake detection. For individuals, the best practices for avoiding deepfake scams are similar to general anti-fraud advice: lock down personal information online, be wary of unusual or urgent requests for money, and carefully check email addresses and phone numbers. Large corporations have long invested in anti-fraud training for employees, and experts now recommend extending this training to explicitly cover audio deepfake scams.
For governments, the risks are immense, as malign actors seek to sow societal division, increase paranoia, and sway elections. Global awareness of the risk is rising, and multilateral, private-sector, and public-sector cooperation will likely be needed to address the specter of AI-generated deepfakes. While the path forward is complex, domestic and international regulatory frameworks are developing. Several jurisdictions have implemented rules governing the use and labeling of deepfakes in the last year or two: the EU Artificial Intelligence Act, set to take full effect in August 2026, requires that AI-generated or manipulated media be clearly labeled unless used for artistic or journalistic purposes, and China’s Provisions on the Administration of Deep Synthesis Internet Information Services include similar language. More general election-information regulations in the EU, France, Australia, and Singapore, among other countries, allow the labeling and removal of deepfake content. Just this week, President Trump signed the TAKE IT DOWN Act, which criminalizes the publication of non-consensual intimate imagery, including deepfake pornography, making it the country’s first federal legislation explicitly dealing with deepfakes. The law builds upon a patchwork of federal rules and state-level legislation largely dealing with election misinformation and sexually explicit deepfake content; there has been little progress in cracking down on deepfakes outside of these contexts.