In the unlikely event you haven’t heard, President Biden had a pretty rough time in the first presidential debate on June 27. Is there a way to measure how rough of a time?
Kind of: betting markets on who will win the election. On June 27 at 8:40 pm (shortly before the debate started) one aggregator of the markets gave Biden a 36 percent chance. By 11 pm? Down to 24 percent.
That was approximately the same reaction I encountered on social media (“wow, Biden looks bad and can’t win”), but presented with the concreteness and addictiveness of the swingy New York Times election-day needle. You could even see Biden’s numbers recover — slightly — during his 45-minute press conference yesterday evening, rising from around 10 percent to 13 percent. The markets were vibes translated into concrete numbers.
But are the concrete numbers correct? And is there a way to relate to them helpfully?
Markets are perhaps the best way humans have ever devised to integrate new information and quickly arrive at new pictures of important questions. When a company releases a bad earnings report, for example, the value of its stock tumbles almost immediately, as experts on how much that stock will be worth in the future adjust their bets in light of the report. People (and, increasingly, AI algorithms) who are good at their jobs make money, while the ones who are bad at their jobs lose money. In the process, the entire world benefits from fast, accurate pricing.
The dream of election prediction markets is that we could build something similar for political questions. Over the last two weeks, as the election betting markets have swung wildly in response to every change in the news around Biden, the promise of that dream — and its acute current limitations — have been on full display.
The promise of prediction markets
An open secret about all election models, from Nate Silver’s to that of the New York Times, is that they involve a hefty sprinkling of expert judgment. Yes, they make a lot of use of polling, but how to weight each poll, how much movement to build in, and how to weight the polls against economic fundamentals are all judgment calls. I disagree with people who call election prediction more of an art than a science, but only because I think they’re mistaken about how many judgment calls go into science.
Most of these election models work by running countless computer simulations under various assumptions and publishing the results. A prediction market, by contrast, has a much simpler setup.
You can buy “Biden,” which pays you $1 if Biden wins the election, or “Trump,” which pays you $1 if Trump wins the election, or some other name, which pays out if that person wins the election. How much are people willing to pay for “You get $1 if Biden wins the election”? That’s how the market judges Biden’s odds of winning.
Right now, people are only willing to pay about 12 cents for the right to collect $1 if Biden wins the election; if you think they’re underrating him, you can buy all these contracts from them and get very rich if Biden wins.
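To make the arithmetic concrete, here’s a minimal sketch in Python (not tied to any real market’s interface) of how a contract price translates into an implied probability and an expected payoff. The 12-cent price comes from the paragraph above; the 25 percent “personal” estimate is purely hypothetical.

```python
# Minimal sketch: how a $1-payout contract price maps to an implied probability,
# and what buying it is worth if you believe a different probability.
# The 25 percent personal estimate below is hypothetical, for illustration only.

def implied_probability(price_cents: float) -> float:
    """A contract that pays $1 on 'yes' and trades at p cents implies roughly a p percent chance."""
    return price_cents / 100.0

def expected_profit(price_cents: float, your_probability: float) -> float:
    """Expected profit, in dollars, from buying one contract at price_cents
    if your_probability is the true chance of the event."""
    cost = price_cents / 100.0
    return your_probability * 1.0 - cost  # collect $1 with probability p, pay the cost up front

price = 12  # cents: roughly what "Biden wins" was trading at
print(f"Market-implied probability: {implied_probability(price):.0%}")                # -> 12%
print(f"Expected profit if you think it's 25%: ${expected_profit(price, 0.25):.2f}")  # -> $0.13
```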
(In compliance with Vox’s ethics policies, I do not bet money on areas I cover, so I don’t participate in election betting markets. Still, I try to publicly register what I would buy when I disagree with them; for example, at the start of 2020, I said I thought there was a 60 percent chance Biden would be the Democratic nominee when prediction markets had him at 33 percent.)
There are many reasons to believe that markets could be very good at predicting elections — in principle, at least. Published research has found that prediction-market-like projects have worked in many past contexts, with aggregators like SciCast and now Metaculus showing startlingly good track records on policy questions.
And the track record of betting markets on the morning of election day is very good: If the market says someone has a 20 percent chance of winning, they generally win about 20 percent of the time.
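“Well calibrated” in this sense has a checkable meaning, and the sketch below shows roughly what that check looks like; the prices and outcomes here are made up for illustration, not real market history.

```python
# Rough sketch of a calibration check with made-up data: bucket election-morning
# prices and compare each bucket's average price to how often those candidates won.

from collections import defaultdict

def calibration_table(prices, outcomes, buckets=10):
    """prices: market-implied win probabilities in [0, 1]; outcomes: 1 if the candidate won, else 0."""
    grouped = defaultdict(list)
    for p, won in zip(prices, outcomes):
        grouped[min(int(p * buckets), buckets - 1)].append((p, won))
    for b in sorted(grouped):
        entries = grouped[b]
        avg_price = sum(p for p, _ in entries) / len(entries)
        win_rate = sum(w for _, w in entries) / len(entries)
        print(f"{b / buckets:.1f}-{(b + 1) / buckets:.1f}: "
              f"avg price {avg_price:.2f}, win rate {win_rate:.2f}, n={len(entries)}")

# Hypothetical inputs; in a well-calibrated market, the ~20 percent candidates
# would win about 20 percent of the time over a large enough sample.
calibration_table([0.18, 0.22, 0.21, 0.19, 0.80, 0.85], [0, 0, 1, 0, 1, 1])
```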
Another argument is that markets are strong for precisely this sort of problem — one where a lot of smart people are thinking about it and there is a lot of available data, but many challenging judgment calls go into integrating that information and acting on it, and where we know there will ultimately be one correct answer.
People are better at making predictions if they have to put some money where their mouth is, and the wisdom of crowds is a real thing. So is the tendency in markets for people who are bad at predictions to lose their money to people who are good at it.
But the simplest argument is that if Nate Silver is consistently better than the markets, people will trade on his predictions until the markets simply reflect what Nate Silver thinks. A high-volume, highly liquid market at the very least shouldn’t predictably underperform any other source, since any underperformance is an opportunity to make money.
But while prediction markets could be a highly effective and reliable way of predicting elections, they also have some serious flaws — flaws which have very much been on display over the last two weeks.
What goes wrong with betting on elections?
This whole newsletter has been ignoring one fairly important point: Prediction markets for elections are, for the most part, not legal in the United States. PredictIt has an exemption for research reasons that the Commodity Futures Trading Commission (CFTC) is constantly threatening to shut down. The other major markets I’ve been referencing throughout this piece, Betfair and Polymarket, generally don’t allow Americans to participate. (Betfair makes exceptions for Americans in a few states.)
These restrictions have two major downsides. One is that the people who know the most about elections — like journalists or political staffers — are generally restricted from betting on them, which means that the markets are much less useful as information aggregators. (A futures market around, say, soybeans would be much less accurate if companies that know soybeans couldn’t participate in it.) Without the savviest people allowed to take part, the crowd is inherently less wise.
Possibly more importantly, these restrictions mean that the markets have limited liquidity in general. (Liquidity refers to how much money is changing hands on the market.) For big questions, like “Biden vs. Trump,” the liquidity is okay — hundreds of millions of dollars changing hands.
But for smaller markets, it’s a huge problem. One manifestation: The markets have a persistent tendency to overrate hugely long-shot candidates like Michelle Obama, and in the immediate aftermath of the debate they rated California Gov. Gavin Newsom, rather than Vice President Kamala Harris, as by far the likeliest replacement for Biden, which seems highly unlikely.
These conditions all mean there aren’t enough trades happening to correct such errors. The low liquidity also means that markets are relatively easy to manipulate, which has happened at least once.
Ironically, the possibility of election manipulation is the main reason why the CFTC wants to ban prediction markets, which some elected officials have decried as “a clear threat to our democracy.” But the fact that they’re effectively banned makes them much easier to manipulate: It would take hundreds or thousands of times more money to manipulate commodity prices than to manipulate election betting odds, precisely because trading commodities is legal.
These disadvantages make prediction markets a double-edged sword. On the one hand, they’re powerful aggregators of public opinion, with a built-in accountability mechanism and a good track record for those questions where there’s a lot of trading. On the other hand, they’re too easy to manipulate, especially for smaller markets, handle unlikely candidates badly, don’t outperform professional forecasters, and as a result, can sometimes feel more like a distraction than a source of truth.
Toward a better election market
Over the last two weeks I’ve been observing another dynamic, and while I’m not sure whether to call it an upside or a downside, it does feel to me like a way that prediction markets are falling short of their potential.
The markets were quick to say that Biden had a bad debate, and their odds that the Democrats would pick another nominee briefly spiked. The odds that Biden would drop out of the race peaked at a 35 percent chance on July 4. But after Biden doubled down, the markets reverted to saying he’d likely be the nominee — jumping back up to an 83 percent chance he’d stay as of July 9.
Then George Clooney and some elected Democrats criticized him, and his odds of staying in the race slid precipitously again. Then came Thursday evening’s press conference, where Biden did well enough to bring his odds back up.
Is this a rational response to new information? Is it a wild, vibes-driven pendulum? Are the markets seeking truth or are they, as Andrew Gelman once articulated the worry, “just a kind of noisy news aggregator”? For an unprecedented question like whether there will be sufficient public pressure to get Biden to drop out, is there an important difference between a rational aggregation and a vibes-driven stampede?
Where prediction markets seem like they could be most valuable is in the conditional markets: “If this person is the Democratic nominee, would they defeat Trump?” That, after all, is the question most Democrats care about the most. These markets do suggest Biden should be replaced. But such markets are much smaller and much less liquid, and as a result, it’s not clear to me they’re adding much clarity to our public discourse.
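As a back-of-the-envelope illustration (with made-up prices, not real market data): a conditional contract’s price approximates a candidate’s chance of winning given that they are the nominee, and you can in principle back the same number out of two ordinary markets.

```python
# Hypothetical sketch, not real market data: a conditional market prices
# P(wins in November | is the nominee), which can also be backed out from
# a "nominee" market and a "nominee AND wins" market.

def conditional_win_probability(p_nominee_and_wins: float, p_nominee: float) -> float:
    """P(wins | nominee) = P(nominee and wins) / P(nominee)."""
    return p_nominee_and_wins / p_nominee

# Illustrative prices: 30 cents to be the nominee, 15 cents to be the nominee AND win,
# implying a 50 percent chance of winning conditional on being the nominee.
print(f"{conditional_win_probability(0.15, 0.30):.0%}")  # -> 50%
```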
I don’t think prediction markets are a bad thing if they’re just an expression of general sentiment about whether Biden will drop out. But the thing that I dream of is a world where markets offer reliable and accurate answers to the question, “Which Democrat is the most electable this November?” This could usefully direct the public conversation about Biden stepping down. Even better would be a world in which the markets identified Biden’s cognitive decline and related electoral weakness early, instead of at the exact same moment as everyone else.
That’s what it would look like for markets to be a major strength for policy decision-making. We’re not there yet.
A version of this story originally appeared in the Future Perfect newsletter.