Seb Butcher is a Senior Associate at the international law firm Bird & Bird.
2024 isn’t any old year for politics. It may well be the year for politics.
And when the dust eventually settles after what will no doubt be a tumultuous year, we will look back and ask the question: How on earth did we get here? Yes, 2024 may be the year for politics, but in this newborn age of generative AI, it may also be the year online disinformation finally gets the better of us.
Key elections are scheduled to take place in the U.K., U.S., EU, India, South Africa, Mexico and a bevy of other countries, with nearly half of the world’s adult population heading to the polls this year. But even beyond the sheer number of elections taking place, there’s a feeling in the air that the stakes have never been higher. We’re seeing increasing political division and factional in-fighting within many nations, and there’s a firm sense — no thanks to the online world in which we all now live — that we’re becoming more polarized, more tribal and more intolerant of “the other side.”
Whichever way the political winds blow this year, make no mistake — the outcomes of these elections will have a deep impact on the trajectory of not just each nation’s future, but on the world’s.
And at a time when politics has never felt more precarious, the same can be said of technology. Never before have AI-powered tools been more sophisticated, widespread and accessible to the public.
Generative AI, in its broadest sense, refers to deep learning models that can generate sophisticated text, video, audio, images and other content based on the data they were trained on. And the recent introduction of these tools into the mainstream — including large language models and image generators — has made the creation of fake or misleading content incredibly easy, even for those with the most basic tech skills.
We have now entered a new technological era that will change our lives forever — hopefully for the better. But despite the widespread public awe of its capabilities, we must also be aware that this powerful technology has the potential to do incredible damage if mismanaged and abused.
For bad actors, generative AI has supercharged the disinformation and propaganda playbook. False and deceptive content can now be effortlessly produced by these tools, either for free or at low cost, and deployed on a mass scale online. Increasingly, the online ecosystem, which is the source of most of our news and information, is being flooded with fabricated content that’s becoming difficult to distinguish from reality. Deepfakes — which use deep learning AI to replace, alter or mimic a person in video or audio — are of particular concern due to their increasingly realistic nature.
The upcoming elections around the world will undoubtedly be at the top of the 2024 disinformation hit list.
Social media is the primary vector of disinformation online, and with nearly 5 billion of us now using these platforms, public opinion is vulnerable to manipulation on a massive scale. As for the politicians and political parties? It’s open season on them. Is it an image of former U.S. President Donald Trump that you’re after, perhaps of him fighting off arrest? No problem. Or maybe a voice recording of U.K. Labour leader Keir Starmer verbally abusing his staff? Easy.
These examples aren’t just possible; they’ve already happened, racking up millions of views online in 2023. And can you imagine a scenario where a false and damaging video or audio recording lands right before election day, without any opportunity to debunk it? This has also happened. Just two days before Slovakia’s last general election, a deepfake audio recording purporting to capture Progressive Slovakia leader Michal Šimečka discussing how to rig the vote circulated online.
With the bar for spreading convincing disinformation now significantly lower, all of this may well be a preview of what’s to come.
Of course, there’s a strong argument that disinformation — and misinformation more generally — has already had widespread influence on previous elections. Perhaps some of the closer outcomes won by only a few percentage points come to mind. But while AI-powered disinformation isn’t new, synthetic media such as deepfakes were far less democratized and much easier to detect in previous years. This time, there’s a new political weapon in town, and 2024 will be the first time some of the world’s biggest democracies hold a national vote with voters in its crosshairs.
At this point, it’s a foregone conclusion that AI-powered disinformation will continue to ramp up as the elections edge closer, and yet, we’re largely unprepared for it. While we bear witness to Big Tech’s unrelenting arms race to develop the smartest AI of the digital age, our laws and regulations remain firmly stuck in the analogue age.
Some progress is being made — but it’s not enough: Google and Meta have announced policies requiring campaigns to disclose political adverts that have been digitally altered. TikTok requires AI-generated content to be labelled as such. In the U.S., President Joe Biden unveiled proposals for managing the risks of AI — including a mandate for authenticating and watermarking AI-generated content. And the EU’s Digital Services Act requires online platforms to mitigate the risk of disinformation and remove hate speech.
Meanwhile, in the U.K., the newly rolled out Online Safety Act 2023, enforced by communications regulator Ofcom, places new obligations on online platforms to remove illegal content once they become aware of it. But whilst the Act includes priority offences — such as the Foreign Interference Offence, which addresses malicious online activity conducted by foreign powers (for example, state-sponsored disinformation campaigns) — it largely fails to tackle the spread of harmful misinformation and disinformation more broadly.
The bottom line is this: Self-regulation of social media platforms will always have its challenges. Legislation — however comprehensive it may eventually be domestically — will require global coordination and effective enforcement, which will in turn necessitate sufficient funding and political will. And currently available fact-checking and AI detection tools aren’t advanced enough. In reality, current regimes are barely scratching the surface of the problems we face, as safety measures struggle to keep up with emerging technologies.
There’s good reason for concern, but we don’t need to panic — we just need to be prepared.
Raising public awareness and prioritizing media literacy are imperative. We need to understand just how powerful AI is, and how convincing it can be. We need to recognize that this content can be produced with ease and speed, at little to no cost. We need to keep in mind that this content is already in our information ecosystem, and that it will likely become more prevalent the closer we get to each election day. And we need to accept that no one is immune from being duped.
If we’re aware of the threat and what it’s capable of, we can better equip ourselves to manage it, or at least soften its impact. Check for errors or inconsistencies, see whether there are any unusual language or formatting patterns, consider the emotional tone and, as always, fact-check. Verify the legitimacy of the source, seek corroboration and never forget your own existing biases.
The generative-AI genie is well and truly out of the bottle. This new form of disinformation has the potential to change the results of the 2024 elections and undermine democracy as we know it. So, if we are to stand a chance, we need to avoid sleepwalking to the polls and be vigilant of what’s to come.