This article is part of a series, Bots and Ballots: How artificial intelligence is reshaping elections worldwide.
Since OpenAI’s ChatGPT went public in late 2022, artificial intelligence — everything from chatbots and deepfake videos to perceived apocalyptic threats associated with the technology — has taken the world by storm.
But the public’s awareness of the emerging technology doesn’t necessarily mean people understand what it is and what it can do. That’s an important distinction in a year when more than 50 countries will go to the polls and the threat of AI-generated disinformation has become a political minefield.
Whether you’re in Jakarta, Madrid or Chicago, there’s a divide between individuals’ perceived understanding of artificial intelligence and their ability to name specific AI-powered products and services.
Interestingly, those in developing economies — already accustomed to adapting quickly to newer technology like smartphones — report they better understand the emerging technology compared with their counterparts in more developed countries. They’re also more likely to see the upsides of AI than its downsides, Ipsos polling data suggests.
When it comes to disinformation — and not just falsehoods powered by AI — the entire world views it as a threat.
Yet in countries that rank at the middle or bottom of the United Nations’ Human Development Index (HDI), which measures, among other things, a nation’s ability to provide a decent standard of living, people are more worried about disinformation’s future effect on their elections than people in countries with higher HDI scores, such as the United States and parts of the European Union, according to a joint Ipsos and UNESCO survey on the impact of online disinformation and hate speech.
People in these so-called emerging economies also have more faith in their own ability to distinguish real news from fake than in the average person in their country to do the same.
In a year of elections across the world, people have already made the connection between disinformation and AI. More than 60 percent of people — across all countries surveyed by Ipsos in spring 2023 — said artificial intelligence could make it easier to create realistic fake news articles and images.
Given the role of the media and political parties in influencing how people vote this year, individuals were also suspicious of how news organizations use the technology, as well as of how AI could be used to generate political ads that target voters online.
Pessimism about the future is also widespread, according to data from Ipsos’ report on global views of AI and disinformation.
In many countries, a majority said they thought AI would only lead to greater levels of online falsehoods, with so-called deepfakes — manipulated images, videos and audio clips, often of politicians — viewed as a particular concern when shaping public opinion.
Given the level of partisan division and technical expertise in the U.S., the country’s November presidential election appears to be ground zero for how AI may be used to curry favor with voters. What happens in the U.S. will likely be copied elsewhere.
Despite entrenched political divisions, Americans show a collective lack of trust in what they read online and believe misinformation will only spread further ahead of this fall’s vote, according to U.S. polls conducted by KFF and by the Associated Press-NORC Center for Public Affairs Research.
Skepticism about AI-powered chatbots — automated tools that serve up information — is high among Americans, who have little interest in using such technology to inform themselves about politics. People surveyed also place the onus on tech companies — especially those behind popular AI tools — to prevent the spread of election-related, AI-generated false information.
What’s clear from the reams of polling data that POLITICO reviewed is that people are both wary of artificial intelligence tools like ChatGPT and its rivals and concerned that disinformation is rife around elections.
While such anxiety is palpable in POLITICO’s analysis of global polling data, and while interest in AI has only grown, it’s still unclear how much of a material impact AI-generated disinformation will have on 2024’s litany of elections.