Your social media diet is becoming easier to exploit

It’s weird and often a bit scary to work in journalism right now. Misinformation and disinformation can be indistinguishable from reality online, as the growing tissue of networked nonsense has ossified into “bespoke realities” that compete with factual information for your attention and trust. AI-generated content mills are successfully masquerading as real news sites. And at some of those real news organizations (for instance, my former employer), there has been an exhausting trend of internal unrest, loss of confidence in leadership, and waves of layoffs.

The effects of these changes are now coming into focus. The Pew-Knight research initiative on Wednesday released a new report on how Americans get their news online. It’s an interesting snapshot, not just of where people are seeing news — on TikTok, Instagram, Facebook, or X — but also of who they’re trusting to deliver it to them.

TikTok users who say they regularly consume news on the platform are just as likely to get news there from influencers as they are from media outlets or individual journalists. But they’re even more likely to get news on TikTok from “other people they don’t know personally.” 

And while most users across all four platforms say they see some form of news-related content regularly, only a tiny portion of them actually log on to social media in order to consume it. X, formerly Twitter, is now the only platform where a majority of users (65 percent in all) say they check their feeds for news, either as a major (25 percent) or minor (40 percent) reason for using it. By contrast, just 15 percent of TikTok users say that news is a major reason they’ll scroll through their For You page.

The Pew research dropped while I was puzzling over how to answer a larger question: How is generative AI going to change media? And I think the new data highlights how complicated the answer is.

There are plenty of ways that generative AI is already changing journalism and the larger information ecosystem. But AI is just one part of an interconnected series of incentives and forces that are reshaping how people get information and what they do with it. Some of the issues with journalism as an industry right now are more or less own goals that no amount of worrying about AI or fretting about subscription numbers will fix.

Still, here are some of the things to look out for:

AI can make bad information sound more legit 

It’s hard to fact-check an endless river of information and commentary, and rumors tend to spread much faster than verification, especially during a rapidly developing crisis. People turn to the internet in those moments for information, for understanding, and for cues on how to help. That frantic, charged search for the latest updates has long been easy for bad actors to manipulate. Generative AI can make it even easier.

Tools like ChatGPT can mimic the voice of a news article, and the technology has a history of “hallucinating” citations to articles and reference material that don’t exist. Now, people can use an AI-powered chatbot to cloak bad information in all the trappings of the verified kind.

“What we’re not ready for is the fact that there are basically these machines out there that can create plausible sounding text that has no relationship to the truth,” Julia Angwin, the founder of Proof News and a longtime data and technology journalist, recently told The Journalist’s Resource.

“For a profession that writes words that are meant to be factual, all of a sudden you’re competing in the marketplace — essentially, the marketplace of information — with all these words that sound plausible, look plausible and have no relationship to accuracy,” she noted. 

A flood of plausible-sounding text has implications beyond journalism, too. Even for people who are pretty good at judging whether an email or an article is trustworthy, AI-generated text might mess with their nonsense radars. Phishing emails and reference books written with AI are already fooling people, and AI-generated photography and video are doing the same.

AI doesn’t understand jokes

It didn’t take very long for Google’s AI Overview tool, which generates automated responses to search queries right on the results page, to start creating some pretty questionable results. 

Famously, Google’s AI Overview told searchers to put a little glue on pizza to make the cheese stick better, drawing from a joke answer on Reddit. Others found Overview answers instructing searchers to change their blinker fluid, referencing a joke that’s popular on car maintenance forums (blinker fluid does not exist). Another Overview answer encouraged eating rocks, apparently because of an Onion article. These errors are funny, but AI Overview isn’t just falling for joking Reddit posts. 

Google’s response to the Overview issues said that the tool’s inability to separate satire from serious answers is partially due to “data voids.” That’s when a specific search term or question doesn’t have much serious or informed content written about it online, meaning that the top results for a related query will probably be less reliable. (I’m familiar with data voids from writing about health misinformation, where bad results are a real problem.)

One solution to data voids is for there to be more reliable content about the topic at hand, created and verified by experts, reporters, and other people and organizations who can provide informed and factual information. But as Google steers more and more eyeballs toward its own results rather than toward external sources, the company is also removing some of the incentive to create that content in the first place.

Why should a non-journalist care?

I worry about this stuff because I am a reporter who has covered information weaponization online for years. This means two things: I know a lot about the spread and consequences of misinformation and rumor, and I make a living by doing journalism and would very much like to continue to do that. So of course, you might say, I care. AI might be coming for my job! 

I’m a little skeptical that generative AI, a tool that does no original research and has no reliable way of verifying the information it surfaces, will be able to replace a practice that is, at its best, an information-gathering method built on doing original work and verifying the results. When these tools are used properly and that use is disclosed to readers, I don’t think they’re useless for researchers and reporters. In the right hands, generative AI is just a tool.

What generative AI can do, in the hands of bad actors and a phalanx of grifters, or when it’s deployed to maximize profit without regard for the informational pollution it creates, is fill your feed with junky, inaccurate content that sounds like news but isn’t. AI-generated nonsense might pose a threat to the media industry, but journalists like me aren’t the target. It’s you.