This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate.
CHIȘINĂU, Moldova — In the closing days of 2023, Moldova’s pro-Western President Maia Sandu did something bizarre: She banned people from drinking a popular berry-infused tea.
The grainy video first appeared on Telegram, then quickly spread across Facebook, just before New Year’s celebrations. It purportedly showed Sandu mocking the country’s poor by outlawing the picking of rose hips, wild berries used in a traditional Yuletide drink. Sandu’s apparent reason for the ban: protecting the environment.
The mundane announcement caught the public’s attention in a country where the average monthly salary is less than $1,000. Many here still remember family members foraging for food in the forest during the impoverished Soviet era. So banning rose hips hit close to home.
For those who have accused Sandu of being a Western puppet, the video was proof, finally, that she was willing to abandon a cherished national tradition, all in the name of her pro-European Union agenda.
There was just one problem. The video was a forgery.
It was created, like many others that have targeted the 51-year-old politician, by artificial intelligence tools controlled by her political adversaries — almost all of whom have close ties to Russia. It was a deepfake.
“These deepfakes are part of [a] new stage of Russia’s hybrid warfare against Moldova,” Stanislav Secrieru, Sandu’s national security adviser, told POLITICO in his spartan office, decorated with only a few mementos, in the presidential palace in central Chișinău, across the street from the country’s imposing Soviet-era parliament.
After the deepfake video went viral, Sandu was forced to debunk the falsehood in her New Year’s address on Facebook.
“What we expect is more of this in the months to come,” added Secrieru.
This small Eastern European country, whose population is roughly equivalent to that of Paris or Chicago, has become ground zero in the fight to protect elections from disinformation, much of it fueled by AI.
While other democracies fear foreign governments’ meddling in their affairs, for Moldova such interference has become an everyday reality. Some, like Secrieru, worry Moscow’s ongoing interference efforts may be laying the groundwork among locals for an eventual Russian invasion.
Located on the border of Ukraine, with a sizable Russian-speaking minority and a recent influx of Ukrainian refugees, Moldova holds a critical, double-barreled vote in October: a presidential election and a referendum on joining the EU. Sandu, who’s seeking another four-year term, has urged voters to support the referendum. Parliamentary elections will also take place by July 2025 at the latest.
Around 60 percent of Moldovans now support closer ties with the West. The 27-country EU is already the country’s largest economic partner, while hundreds of thousands of Moldovans hold EU citizenship via close family ties to neighboring Romania. European Commission President Ursula von der Leyen, who’s seeking another five-year term in Brussels, has made Moldova and the Harvard-educated Sandu the poster children for the bloc’s march East.
It hasn’t gone unnoticed by the Kremlin and its local allies.
Russian soldiers have a visible presence in Transnistria, an enclave of Russian speakers that unofficially broke away from Moldova at the end of the Cold War. In Gagauzia, an autonomous region in the country’s south, pro-Moscow politicians have courted President Vladimir Putin, who vowed to protect Moldova’s rebellious province.
Pro-Kremlin political parties — especially those tied to Israeli-born billionaire Ilan Shor, who has been sanctioned by the United States and EU for his efforts to undermine Moldovan democracy — will field candidates in October’s presidential election. In the run-up to that vote, Russian-linked hackers routinely target the country’s critical infrastructure, including a 36-hour cyberattack against government websites while POLITICO was in Moldova to report for this article.
“I slept four hours yesterday because we were in the meeting almost all night,” said Alexandru Coreţchi, director of the country’s Information Technology and Cyber Security Service, in his agency’s headquarters, rubbing his eyes while fielding repeated phone calls as his team mopped up the latest attack.
In this instance, Moldova was targeted, he added, because a combined €1 million had been confiscated from pro-Russian politicians as they flew back to Chișinău from a gathering in Moscow earlier in April. While in Russia, they had been game-planning for October’s presidential vote and EU referendum.
“When we started to declare our intention to be an EU member state, the attacks became more violent, more sophisticated,” said Coreţchi.
New tactics, same old strategy
Sitting in a Turkish restaurant in the Moldovan capital’s government quarter, Valeriu Pasha is almost blasé about the threat facing his small country.
In between bites of lamb stew and sips of Turkish tea, with loud club music thumping through the restaurant’s speakers, the thirty-something program manager for WatchDog.md, a nonprofit tracking local disinformation, rattled off a list of those trying to sabotage the country’s upcoming elections.
Pro-Russian politicians peddling lies directly to unsuspecting voters. The Kremlin and the estimated $50 million it spent last year on undermining Moldova’s democracy. Almost no help from social media giants like TikTok and Telegram, whose platforms have become the main vehicles for foreign interference.
On top of these existing threats, Pasha added a fourth: AI-powered bots.
In April, Pasha and his colleagues recorded, for the first time, hundreds of fake comments on Facebook pages, including those of the president, Sandu, and even that of their own nonprofit, WatchDog.md.
The pro-Russian comments, almost exclusively in favor of Shor, the sanctioned politician, came from accounts posing as average citizens. The profiles included lifelike images, some of which were generated through AI tools, and the posts contained grammatical mistakes to elude detection.
The bots’ messages echoed those promoted by real pro-Russian social media users: President Sandu was corrupt. Her pro-Western government was driving Moldova toward war. Only closer ties to Moscow, via electing politicians with links to the Kremlin, could guarantee the country’s future.
AI-powered bots are a mainstay on platforms like X (formerly Twitter). But in Moldova, automated political attacks — via AI-generated comments on Facebook pages — are a new threat.
“It’s a simple tactic, but it’s impactful,” said Pasha, adding that most social networks rarely debunked falsehoods shared via comments. Of the almost 20 AI-powered Facebook profiles WatchDog.md supplied to POLITICO, only two were still active as of April 30. “Bots are not new in other countries,” he added. “But, in Moldova, it’s been a week.”
For those fighting disinformation in Moldova — especially falsehoods paid for or promoted directly by the Kremlin — AI doesn’t represent a fundamental step change in a years-long fight that began long before ChatGPT and deepfakes became fashionable.
Instead, the technology builds on other tactics, including local pro-Russian television stations, traditional media outlets also favoring Putin, and a stable of Kremlin-supporting politicians who, according to Moldova’s national security agencies, are on Moscow’s payroll.
Several of these stations and news websites have been shut down amid accusations they were part of Russian influence operations. Sandu’s opponents claim such sanctions unfairly hamper their legitimate right to free speech.
Into that crucible of disinformation, AI has sped up how lies are made and shared. It hasn’t completely rewritten the rulebook.
Early on, for example, deepfake videos of Sandu — some of which showed her urging Moldovans to vote for Shor, her pro-Russian rival — were uncanny, even initially fooling the president and her aides. But as the population grew accustomed to such attacks, many of which started on Russian-friendly Telegram channels like those associated with December’s ‘rose hip’ AI forgery, disinformation merchants switched to so-called ‘cheapfakes’ as a lower-cost alternative.
Those now shared widely in Moldova, based on POLITICO’s review of such content, include videos of an actor with a passing resemblance to Sandu portraying the country’s president. In these videos, AI is used to approximate the politician’s voice and appearance rather than recreate a cloned Sandu from scratch.
“Artificial intelligence is used heavily to discredit the authority of our leaders,” said Ana Revenco, a former Moldovan interior minister who, as of October, heads the country’s newly formed Center for Strategic Communication and Combating Disinformation.
The agency, located in a nondescript two-story townhouse minutes from the president’s office and next to Moldova’s security services, coordinates how the government, civil society groups and Western allies respond to falsehoods aimed at undermining the country’s national interest.
The endgame: exhausting resources
Revenco finds herself at the sharp end of this so-called hybrid war. That’s a mixture of disinformation, cyberattacks, political corruption and — if things go badly for the pro-EU government — Russian troops heading toward Moldova.
She also acknowledges the political difficulties for her fledgling agency.
Her team must flag harmful falsehoods that undermine the country’s democratic institutions. But Revenco can’t be seen as taking sides in domestic politics. She also can’t be perceived as throttling people’s right to free speech, whether on- or offline, even when those speaking are promoting Moscow’s interests. During her hourlong conversation with POLITICO, Revenco repeatedly steered clear of her personal political views.
“Russia, unfortunately, was here in Chișinău for years,” she said matter-of-factly as she sat in her agency’s boardroom on a hot spring morning in April. “The war of aggression that they started two years ago increased, exponentially, the complexity and aggressiveness [of the Kremlin’s disinformation attacks].”
For Secrieru, Moldova’s national security adviser, Moscow’s ever-evolving tactics, including the growing use of AI tools to sow disinformation, have one clear goal: exhausting his country’s limited resources to push back.
Ahead of October’s vote, local analysts and Western security experts expect the Kremlin to double its yearly influence spending — to roughly $100 million — in Moldova and bombard the Eastern European country with a cacophony of social media lies, paid-for politicians and old-school falsehoods via traditional Russian-speaking outlets.
A recent spate of cyberattacks, including those targeting Moldova’s health service and government payroll systems, was aimed at weakening people’s trust in much-needed state services. Despite public commitments to protect the 2024 global election cycle, the biggest platforms have also had little interaction with the country’s officials or civil society groups, according to Secrieru and others interviewed for this article.
Meta said it would respond to Moldovan officials’ reports of disinformation “as quickly as possible.” TikTok said it would keep vigilant about how bad actors used its platform around elections. A representative for Telegram did not respond to requests for comment.
“In December, there was a deepfake of the president shared widely across platforms. It was reported. NGOs reported it. Eventually, it was taken down,” said Secrieru.
“But the damage was done.”
This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate. The article is produced with full editorial independence by POLITICO reporters and editors. Learn more about editorial content presented by outside advertisers.