In the ever-evolving landscape of technological advancement, deepfakes have emerged as a powerful tool with the potential to disrupt the very foundations of democracy. These manipulated videos, images, and audio files, created using artificial intelligence (AI), stitch together real and fabricated material into content that can be incredibly convincing. While deepfakes initially gained traction for their entertainment value, their misuse in politics and social discourse poses a grave threat to the integrity of democratic processes.
The Basics: A Weapon for Deception and Manipulation
- Deepfakes can be employed to spread misinformation, discredit individuals, and sway public opinion. Politicians can be portrayed saying or doing things they never did, creating false narratives that can damage their credibility and influence electoral outcomes.
- Political campaigns can be derailed by doctored videos that tarnish the reputation of candidates, while fabricated footage of public figures can incite violence and undermine social harmony. We are seeing this already in the creation of fake content targeting politicians.
- False narratives can not only derail elections but also influence conflicts, by manipulating media coverage to inflate or suppress public support.
In October and November 2023, two British politicians allegedly became the targets of persons or groups intent on discrediting them. First to be targeted was Labour leader Sir Keir Starmer, when an audio clip purporting to be a recording of him swearing at a colleague was released on X (formerly Twitter). At the time of writing, the clip had been viewed 1.6 million times.
Since then, Sir Keir, his team, and several Tory politicians, including Security Minister Tom Tugendhat and MP Simon Clarke, have publicly stated that the clip is fake.
Next, London Mayor Sadiq Khan came under fire when an audio clip emerged purporting to capture him suggesting that “Remembrance weekend” could be postponed in favour of a pro-Palestinian march.
As with Sir Keir, a spokesperson for the Mayor of London has said that the video is fake: “The Met and their counter terror experts are aware of this fake video that is being circulated and amplified on social media by far-right groups, and are actively investigating.”
A spokesperson for the Metropolitan Police Service said: “We can confirm that we have been made aware of a video featuring artificial audio of the Mayor, and that this is with specialist officers for assessment.”
However, no organisation, whether a law enforcement agency or a specialist private business, has been able to prove that either clip is genuine or fake, because the technology to reliably identify the telltale mistakes of deepfakes in audio recordings does not yet exist.
Fact-checking website Full Fact said of its attempts to confirm the clips’ authenticity: “We’ve not been able to determine whether the clip was generated with artificial intelligence, edited in some other way or is of an impersonator, but we’ve not seen any evidence to suggest it is real. There are no specific clues in the clip itself, such as identifiable background noise or names used, which would enable it to be verified, and we’ve found no credible reports verifying the clip’s authenticity.”
They conclude that “There is no evidence that [either clip] is genuine.”
Why is this important?
- Concerningly, audio deepfakes are the only type of deepfake that experts cannot confidently prove to be true or false. Other types of deepfakery are far easier to debunk – slowing down footage will often reveal small mistakes, such as lips not moving in time with the audio. The AI technology used to create deepfakes is improving every day, however, and as it develops, our ability to identify fake content decreases.
- No expert has been able to verify the Sadiq Khan or Sir Keir Starmer audio clips, exposing both politicians to reputational damage. This comes as Sir Keir prepares to lead Labour into the UK’s 2024 general election.
- The proliferation of deepfakes can erode public trust in institutions, including the media, the judiciary, and law enforcement. When citizens can no longer discern fact from fiction, it becomes challenging to hold authorities accountable and maintain a functional democracy.
- Deepfakes can also be used to undermine national sovereignty by spreading disinformation that sows discord and destabilises governments.
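To give a flavour of why audio analysis is difficult, the sketch below computes one low-level feature an analyst might look at: spectral flatness, which distinguishes tonal, speech-like audio from noise-like audio. This is purely illustrative, not a real detector – production tools rely on models trained on large datasets, and the signals and function names here are hypothetical stand-ins.

```python
# Illustrative only: real audio-deepfake detectors use trained models.
# Spectral flatness is one simple spectral feature; all audio here is synthetic.
import numpy as np

def spectral_flatness(signal: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the power spectrum.
    Close to 1.0 for noise-like audio, close to 0.0 for tonal audio."""
    power = np.abs(np.fft.rfft(signal)) ** 2 + 1e-12  # epsilon avoids log(0)
    geometric_mean = np.exp(np.mean(np.log(power)))
    return float(geometric_mean / np.mean(power))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)       # strongly tonal signal
noise = rng.standard_normal(16000)       # noise-like signal

print(f"tone flatness:  {spectral_flatness(tone):.4f}")
print(f"noise flatness: {spectral_flatness(noise):.4f}")
```

In practice no single feature like this separates genuine from synthetic speech, which is precisely why the clips above remain unverified.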
What Can We Do? Addressing the Deepfake Challenge
- Tech: Technological solutions, such as deepfake detection tools, can help identify and flag manipulated content. These are in their infancy, however.
- Vigilance: As deepfakes become increasingly sophisticated, it is imperative for individuals to approach online content with caution. Verifying sources, considering the context of the video, and consulting multiple sources can help discern truth from fabrication. Critical thinking skills are essential to navigate the digital landscape with discernment and protect oneself from the manipulation of deepfakes.
- Media Literacy: Media literacy education can equip individuals with critical thinking skills to assess the authenticity of videos they encounter. Additionally, fostering responsible online behaviour and promoting ethical AI practices can help mitigate the harmful use of deepfake technology.
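Verifying sources can sometimes be made concrete. When a publisher releases a cryptographic hash of an original file – the idea behind provenance standards such as C2PA – anyone can check whether a circulating copy has been altered. The sketch below shows only that hash-comparison step, under the assumption that a trusted hash is available; the file name and contents are stand-ins.

```python
# Minimal sketch of one building block of content verification: comparing a
# file's SHA-256 hash against a hash published by the original source.
# Real provenance systems (e.g. C2PA) embed signed metadata; this shows only
# the comparison step. File name and contents are hypothetical.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large media files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_published_hash(path: Path, published: str) -> bool:
    return sha256_of(path) == published.lower()

# Demo with a stand-in file; any edit, however small, changes the hash.
clip = Path("clip.bin")
clip.write_bytes(b"original audio bytes")
original_hash = sha256_of(clip)

clip.write_bytes(b"original audio bytes!")  # simulate a tampered copy
print(matches_published_hash(clip, original_hash))  # prints False
```

A matching hash proves only that the bytes are unchanged, not that the original was authentic – which is why provenance checks complement, rather than replace, media literacy.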
With two significant national elections coming up in 2024 in the UK and USA, the number of sophisticated deepfakes created by hostile foreign actors intent on manipulating election outcomes through social engineering is likely to increase. Businesses and individuals must remain vigilant to the ability of deepfake content to manipulate public discourse, undermine trust, and distort reality. By embracing technological solutions, promoting media literacy, and fostering responsible online behaviour, we can safeguard the integrity of democratic processes and preserve the foundations of a just and equitable society.