Microsoft goes from bad boy to top cop in the age of AI

This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate.

REDMOND, Wash. — In a shabby corner of Microsoft’s sprawling campus in this suburb of Seattle, Juan Lavista Ferres spun around in his chair and, with a mischievous grin, asked a simple question: “Do you want to play a game?”

Microsoft’s chief data scientist — speaking at a frenetic pace, seemingly powered by unlimited free soft drinks and espressos from the building’s unkempt kitchen — pushed himself across his office and typed something into his computer. Within seconds, an image of former U.S. President Donald Trump popped up on the Uruguayan’s massive flatscreen monitor.

“What do you think?” he asked, laughing. “Is this real or fake?”

This is not just a game. (The Trump photo is an AI-generated forgery.) Lavista Ferres also runs the company’s AI For Good Lab in a converted warehouse here that still has the loading docks left over from when Microsoft used to ship floppy disks to customers worldwide.

Alongside efforts to use artificial intelligence to find new cures for cancer and combat climate change, his small engineering team has another job: figuring out how to detect the AI-powered deepfake videos, audio clips and images bombarding elections worldwide. So far, Lavista Ferres’ scientists have created tools that can pick out almost all such falsehoods made with existing, widely available technology.

“For the majority of use cases that we have seen that go viral,” said the Uruguayan, “I can tell you, right now, for all of those use cases, it has worked really well.” (When POLITICO took a shortened version of his quiz, this reporter didn’t correctly guess any of the photos’ provenance.)
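Microsoft has not published the internals of its lab’s detection tools, but the general recipe behind many deepfake detectors is no secret: train an image classifier to tell authentic photos from synthetic ones. The sketch below is only an illustration of that idea, not the AI For Good Lab’s system; the folder layout, ResNet-18 backbone and training settings are all assumptions made for the example.

```python
# Illustrative sketch only: a generic real-vs-fake image classifier.
# Not Microsoft's detector; dataset paths and model choice are assumed.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: data/train/real/*.jpg and data/train/fake/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained backbone and swap in a two-class head:
# "real" vs. "AI-generated".
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In practice, a detector like this has to be retrained constantly as generation models improve — part of why Lavista Ferres stresses how quickly the technology is evolving.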

That work has become more important this year, as billions worldwide are already heading to the polls from New Delhi to Naples to Nashville. Politicians fret about an avalanche of AI-powered disinformation — from adversaries, both foreign and domestic — though such AI falsehoods’ material impact on elections has not yet been proved.

Juan Lavista Ferres, Microsoft’s chief data scientist, explains how well he did on the deepfake detection quiz that his team designed, and why people, and especially computer scientists, should be aware of how quickly the technology is evolving.

Stepping into this quagmire is Microsoft — once known as the boring old guard of Big Tech compared with newer kids on the block like Google, Facebook and, most recently, TikTok. In the era of AI, it now finds itself resurgent.  

That ascendancy is evident in the company’s Redmond campus. When POLITICO visited in April, rows of high-end electric vehicles filled parking lots, while a fleet of logo-emblazoned buses shuttled well-paid engineers from one air-conditioned building to another. Also on tap: baseball fields, in-house baristas and even a cricket pitch.

It’s also visible in the partnership deals Microsoft has signed with firms like OpenAI, the maker of ChatGPT. It has scooped up talent like Mustafa Suleyman, a former top AI executive at Google. Its cloud-computing business — a sea of global data servers — powers scores of AI startups and services.

It has billions of dollars to invest in new advances, like tracking generative AI trends and emerging technologies that some fear will be used to influence elections.

In February, Microsoft, along with more than 20 other well-known names like Google parent company Alphabet, OpenAI, and Facebook and Instagram owner Meta, signed a voluntary pledge known as the AI Election Accords. It included commitments to stop bad actors from using these firms’ technology to harm elections worldwide, to detect AI-powered disinformation where possible, and to raise awareness among voters about how such harmful material can spread.

Many of those obligations typically fall to countries’ officials and politicians, not to private companies — especially not those behind much of the technology now used by those seeking to undermine democratic institutions globally. But, given these tech giants’ services are often the means by which disinformation reaches voters, their cooperation is needed.

Three Western policymakers, who were granted anonymity to speak candidly about their preparation for upcoming votes, acknowledged to POLITICO that these firms were better placed to counter the AI threat, for now, either because of their in-house technical skills or hefty financial resources. 

Still, others question how effectively such nonbinding pledges, however sincere, can push companies toward good behavior. After decades of giving companies self-regulating leeway on everything from social media to how firms access and use people’s data for profit, governments are now turning to hard rules to force tech giants to clean up their act.

“We need to look at voluntary commitments by tech companies as part of a deliberative political strategy to write the rules of the road,” said Brian Chen, policy director at Data & Society, a nonprofit focused on how technology affects the wider world. “These actions build political capital for tech companies that can later be deployed when it comes time to making policy.”

The OG of tech lobbying evolves

Ginny Badanes stood on the sidelines of a Munich conference room in mid-February and breathed a sigh of relief.

As head of Microsoft’s Democracy Forward team, an in-house unit working on election integrity and countering foreign threats, the long-serving tech executive had just spent six weeks pulling together the voluntary AI election commitments signed by companies including TikTok, IBM and Adobe. 

That work began in early January, with informal conversations between Brad Smith, Microsoft’s president, and his counterparts at Google and Meta, Kent Walker and Nick Clegg, respectively. By Feb. 16, when the pledge was made public at the Munich Security Conference, that had evolved into eight voluntary commitments.

“Most people were back in the States, and so it was fun to be chatting with them and everyone who was watching online and feeling we’ve done something,” Badanes said later, in a soulless conference room buried deep in Microsoft’s Redmond campus. “But unlike other projects that I’ve done, it didn’t feel done.” 

Ginny Badanes, who runs the company’s Democracy Forward unit, believes the AI Election Accords are a stopgap before governments can get their heads around how to regulate AI linked to democracy and elections.

Her team started to spot early attempts at using artificial intelligence tools like deepfakes to undermine elections back in 2018. But at that time, the technology wasn’t yet advanced enough to fool unsuspecting voters. Fast forward to late 2022, when OpenAI released ChatGPT, and everything changed.

“We had not been getting regular questions about challenges of deepfakes, the impact of AI on elections, with the regularity that we started to get after ChatGPT came out,” she acknowledged.

Part of Microsoft’s work to comply with its recent election commitments involves making sure its own products — many of which now include AI tools — don’t cause harm. 

In 2021, the company teamed up with the likes of Intel and Truepic, a content-verification firm, to detect when online material had been doctored. Now, the resulting technology — which embeds traceable data about each piece of content into its core structure to guarantee authenticity — is available to anyone who wants to use it. Such verification is also baked into images created via Microsoft’s AI-powered products, Badanes added.
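The idea of provenance data embedded into the content itself can be illustrated with a much simpler stand-in than the production standard Microsoft and its partners use. The sketch below, a simplified example rather than the real specification, writes a signed manifest (creator, tool, hash of the pixels) into a PNG’s metadata and verifies it on read-back; the key handling and field names are assumptions made for illustration.

```python
# Simplified illustration of content provenance, not the production format:
# embed a signed manifest in a PNG's metadata, then verify it later.
import hashlib, hmac, json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

SECRET_KEY = b"demo-signing-key"  # assumption: real systems use certificate-based signing

def embed_manifest(src_path: str, dst_path: str, creator: str, tool: str) -> None:
    """Write a signed provenance manifest into dst_path (must be a .png)."""
    image = Image.open(src_path)
    pixel_hash = hashlib.sha256(image.tobytes()).hexdigest()
    manifest = json.dumps({"creator": creator, "tool": tool, "pixel_hash": pixel_hash})
    signature = hmac.new(SECRET_KEY, manifest.encode(), hashlib.sha256).hexdigest()

    metadata = PngInfo()
    metadata.add_text("provenance", manifest)
    metadata.add_text("provenance_sig", signature)
    image.save(dst_path, pnginfo=metadata)

def verify_manifest(path: str) -> bool:
    """Return True only if the manifest is present, untampered and matches the pixels."""
    image = Image.open(path)
    manifest = image.text.get("provenance")
    signature = image.text.get("provenance_sig", "")
    if manifest is None:
        return False  # no provenance data at all
    expected = hmac.new(SECRET_KEY, manifest.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # manifest was altered after signing
    claimed = json.loads(manifest)["pixel_hash"]
    return claimed == hashlib.sha256(image.tobytes()).hexdigest()
```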

The company’s rivals already offer alternatives, including so-called watermarking, which imprints a symbol on all AI-generated content to tip people off when confronted with such material. Critics, however, claim those efforts, as well as Microsoft’s verification tools, rely too much on people’s willingness to question the authenticity of an image, video or audio clip.
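Visible watermarking of the kind described here is conceptually straightforward: stamp a label onto every AI-generated image so viewers are tipped off at a glance. The snippet below is a generic illustration, not any particular vendor’s implementation; the label text and placement are arbitrary choices.

```python
# Generic visible-watermark example, not any specific vendor's tool:
# stamp a translucent "AI-generated" label onto an image's corner.
from PIL import Image, ImageDraw

def add_visible_watermark(src_path: str, dst_path: str, label: str = "AI-generated") -> None:
    image = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Place the label in the bottom-right corner over a semi-transparent backdrop.
    x, y = image.width - 160, image.height - 30
    draw.rectangle([x - 6, y - 4, image.width, image.height], fill=(0, 0, 0, 140))
    draw.text((x, y), label, fill=(255, 255, 255, 220))

    Image.alpha_composite(image, overlay).convert("RGB").save(dst_path)
```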

For a company still heavily associated with staid products like Word and Outlook, Microsoft’s strategy has not always gone to plan. 

Courtney Gregoire, its chief safety officer, told POLITICO that after the AI Election Accords were signed, the company’s engineers started scouring the firm’s own products, including LinkedIn, Bing, and Xbox, for signs of AI-generated falsehoods, particularly aimed at elections.

Yet, so far, she said, these teams had found little evidence of such activity — either because Microsoft’s search engine remained a minnow compared with Google’s or because gamers were more interested in the latest releases than in creating deepfakes.

“We are not seeing much volume coming through right now,” said Gregoire. “We see very little of this content harm in our gaming environment.”

Courtney Gregoire, Microsoft’s chief safety officer, details the type of harmful content the company combats in its gaming products. So far, they have found little, if any, material linked to elections.

Disinformation hunters

A high-rise office block in New York City — across the street from the Port Authority Bus Terminal in Midtown Manhattan — serves as Microsoft’s early warning siren against foreign interference.

In an open-plan office as quiet as a cathedral, twenty-something analysts track online operations sponsored by the United States’ foreign adversaries like Russia, China and Iran, all while sipping expensive lattes. There, Clint Watts oversees the firm’s so-called threat analysis center.

Like many of his counterparts at other tech firms who track online disinformation, the former U.S. Army officer remains skeptical about how these foreign governments may weaponize AI to undermine global democracy.

The threat from deepfakes, he admits, is real. But just because people in Moscow or Beijing can use the technology to create lifelike AI images, videos or audio clips doesn’t mean savvy Western audiences will believe them when they come across such doctored content.

“You can make more messages, for sure,” he told POLITICO on a rainy April day in Manhattan. “But you can’t make people read more messages.”

Clint Watts, who leads Microsoft’s Threat Analysis Center, explains why it’s going to take a lot of money and computing power to pull off lifelike AI deepfakes — resources that few actually have.

Still, his team is planning for the worst.

Last summer, for instance, Watts’ analysts started cataloging every time a foreign government used AI to create disinformation. Microsoft’s own engineering teams had begun to experiment internally with potentially harmful uses, so it made sense to track how bad actors were testing the waters in the same way.

By the time companies signed the voluntary AI commitments in Munich, Microsoft had a repository to show other firms and politicians how AI was being used against democracies worldwide. Its goal: to quantify the problem amid mass deepfake hysteria.

Within months of starting their work last year, Watts’ team discovered a Chinese-backed actor generating AI images of wildfires in Hawaii that falsely accused the U.S. government of creating a so-called weather weapon. In another instance, Kremlin-linked hoaxers made an AI-spoofed Netflix documentary — allegedly narrated by a crude deepfake of Tom Cruise — attacking the International Olympic Committee’s decision to ban Russian competitors. Those athletes were later granted permission to participate under a neutral flag.

Such outliers will inevitably capture the public’s imagination. But in a sea of social media content — where existing posts can already be taken out of context or, as in the aftermath of Hamas’ attack on Israel on Oct. 7, video game content can be repurposed to look like first-person footage — foreign governments don’t need AI to dupe social media users.

The costly computing power and technical expertise required to generate the most sophisticated AI forgeries, Watts added, made it difficult for anyone outside the most hardened bad actors to create such material.

“Right now, there’s enough available content, on hand, that you can slightly modify as a digital forgery to get an influence actor to their objective, easier and faster than going out and finding some sort of generative AI tool,” said the Microsoft executive. 

And yet, the question remains: What if the worst-case scenario actually happens?

Sitting in a boardroom at Microsoft’s Cybercrime Center — a glitzy showcase of the company’s cybersecurity offerings on the leafy Seattle-area campus — Amy Hogan-Burney couldn’t help but wonder.

She runs the tech giant’s cybersecurity policy and protection team, mostly focused on combating so-called ransomware and phishing attacks, or efforts by bad actors to break into people’s devices or services. Her clients include governments worldwide, as well as blue-chip corporations.

Yet in a year of major elections worldwide and amid people’s growing distrust of democratic institutions, isn’t it better, Hogan-Burney asked aloud, to prepare for the threat of AI — even if the likelihood of the technology truly affecting voter behavior remained relatively small?

“I feel like democracy is in a more fragile place than it has been before,” she said, candidly. “I don’t think we can do nothing. We need to be proactive.”

This article is part of a series, Bots and ballots: How artificial intelligence is reshaping elections worldwide, presented by Luminate. The article is produced with full editorial independence by POLITICO reporters and editors. Learn more about editorial content presented by outside advertisers.
