The Turing Trap: Why Pursuing Human-Like AI is Misguided



 

SwissCognitive Guest Blogger: HennyGe Wichers, PhD – “The Turing Trap: Why Pursuing Human-Like AI is Misguided”


 


We’re obsessed with creating human-like AI. Technologists, entrepreneurs, policymakers, and business managers all focus on using the latest innovation to automate and imitate humans. Erik Brynjolfsson, professor at the Stanford Institute for Human-Centered AI and director of the Digital Economy Lab, calls that the Turing Trap. A trap, because it’s a bad idea.

In his talk at the New Horizons in Generative AI conference at Stanford last month, Brynjolfsson explained how the Turing Trap works – and, importantly, how we can escape it.

The story begins with Alan Turing. You’ve probably heard of him, perhaps at school when you learned about British codebreakers cracking the German Enigma code during WWII, or while watching coverage of the AI Safety Summit at Bletchley Park, where Turing worked, a few weeks ago. Or maybe you remember him from the 2014 film The Imitation Game.

The idea of the imitation game is to build a machine that interacts with a human judge using written messages. The judge has to decide if they’re talking to another human or a machine. If they can’t tell the difference, the machine wins.

Alan Turing suggested the game in his 1950 paper Computing Machinery and Intelligence as a way to measure machine intelligence. It’s also known as the Turing test, and it’s inspired us for decades to build AI that mimics humans.

It’s an interesting technological challenge, but it’s the wrong goal if we want AI to improve people’s lives. Brynjolfsson outlines why that is the case in his Ten Theses of the Turing Trap.

TEN THESES

The Ten Theses of the Turing Trap capture how human-like AI causes problems for people and the economy and suggest a better path forward.

Let’s take a look at all 10 before adding context.

  1. The benefits of human-like AI are enormous
  2. But not all types of AI are human-like
  3. The more human-like a machine is, the more likely it’s a substitute for human labour
  4. Labour substitutes (automation) tend to drive down wages
  5. Substitution can reduce the economic and political power of those replaced
  6. Taking away power and agency creates a trap
  7. Alternatively, AI can complement (augment) labour and spur creativity
  8. Augmentation and creativity tend to increase wages
  9. Augmentation and creativity spawn not just new capabilities, but also new goods and services
  10. Today, there are excess incentives for substitutes vs complements

We would certainly have much more free time if we had human-like AI doing our jobs for us (1). But not all AI is human-like. Calculators, for example, are much better at arithmetic than humans.

And human intelligence isn’t the only useful kind of intelligence. Chimpanzees have far better short-term memories than us and use them to do things we can’t (2). We could deploy AI to add this kind of intelligence to our capabilities instead of emulating ourselves.

That said, human-like AI will be brilliant at one thing in particular: replacing people in the workplace (3). Businesses like machines. Machines are efficient, don’t get tired, and never disagree. Hiring a human to do the same job only makes sense if the human is cheaper, so automation drives down wages (4).

When people lose some or all of their wages, they can also lose economic and political power. Without money, their autonomy and ability to change their circumstances diminish (5 and 6). That’s the Turing Trap of human-like AI – and it’s a self-reinforcing loop.


Fig 1: The self-reinforcing loop of the Turing Trap

What’s more, while we’re busy falling into the Turing Trap, we miss the positive effects AI can have. Technology can complement human abilities and spur creativity. Even calculators can.

Most of us have typed 5318008 into a calculator at some point and turned it upside down to entertain a friend during a boring maths class (7). That’s not the most sophisticated example, but the idea is important: economic progress is a story of technology boosting workers and creating space for new ideas.

We automate activities once they become mundane and then use the gain in time and capabilities for creative thinking, problem-solving, and innovation. That process is called augmentation, and it’s what propels us to new heights and better lives (8 and 9).

So why don’t we build machines to augment what we do? The answer is short and simple: because it’s hard. It’s really hard. We don’t know what to build with AI because we don’t yet know how we’ll use it. Fig 2 illustrates the problem.


Fig 2: Humans and machines (image re-created from the talk)

The white oval represents the tasks we know about today and that humans can do. The black shape inside it represents the tasks machines can automate. But there is a much larger space outside both, with new jobs that humans can do with the help of new technology.

Right now, we don’t know what those jobs are when it comes to AI. We need visionaries who can imagine and inspire new possibilities — similar to how the iPhone sparked mobile services and the app industry.

That’s a challenge for entrepreneurs, business managers, and technologists. But policymakers play a role, too. They are the ones guiding our choices – through taxes.

TAXING CREATIVITY

Policymakers design tax systems that incentivise businesses to make one decision over another. It’s how they shepherd the economy. Today, in much of the Western world, taxes encourage automation rather than augmentation and creativity (10). Here’s how that works.

Say you have an idea that will generate $1 billion of value using either 1,000 human workers or 1,000 machines. You work through the details of each option and realise something: capital tax rates are only about half of labour tax rates.

So, in the long run, your idea is more profitable if you use machines instead of people. Great! Machines it is – that was easy.
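To make that comparison concrete, here is a minimal sketch in Python. Every cost and tax rate below is an illustrative assumption, not a real figure; the only point is that taxing capital at roughly half the rate of labour makes the machine option more profitable even when the pre-tax costs are identical.

    # Illustrative only: all costs and tax rates are made-up assumptions.
    REVENUE = 1_000_000_000      # $1 billion of value, generated either way
    UNITS = 1_000                # 1,000 workers or 1,000 machines
    COST_PER_UNIT = 100_000      # assumed identical pre-tax cost per worker or machine

    LABOUR_TAX = 0.30            # assumed effective tax rate on labour
    CAPITAL_TAX = 0.15           # assumed effective tax rate on capital (about half)

    def after_tax_profit(tax_rate: float) -> float:
        """Revenue minus input costs, where the tax is levied on the input spend."""
        total_cost = UNITS * COST_PER_UNIT * (1 + tax_rate)
        return REVENUE - total_cost

    print(f"With workers:  ${after_tax_profit(LABOUR_TAX):,.0f}")   # $870,000,000
    print(f"With machines: ${after_tax_profit(CAPITAL_TAX):,.0f}")  # $885,000,000
    # Same idea, same output, same pre-tax cost -- the lower capital tax wins.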

The system works like that because lower capital tax rates encourage investment and result in higher productivity. Productivity is a key measure for governments, and, generally speaking, they do a good job if the number goes up.

Productivity is output divided by input. Making 100 whatsits in 10 hours of human labour scores 100 / 10 = 10. Achieving the same output in 2 hours with automation scores 100 / 2 = 50. Again, machines are the better option.
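In code, that back-of-the-envelope calculation looks like this – a minimal sketch of the output-over-input formula, using the whatsit numbers above:

    def productivity(output_units: float, input_hours: float) -> float:
        # Productivity is simply output divided by input.
        return output_units / input_hours

    print(productivity(100, 10))  # 100 whatsits in 10 hours of human labour -> 10.0
    print(productivity(100, 2))   # the same 100 whatsits in 2 hours of automation -> 50.0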

We’ve used the same incentives for some time now. The system has generated economic growth and improved living standards for many over the past decades. But sticking with it as we enter the era of AI could backfire because the incentives lure us into the Turing Trap.

So why don’t we change incentives to stimulate augmentation and creativity instead of automation? The answer is the same as before: because it’s hard. It’s hard because of the complex interplay of economic, political, and social factors.

Say we increase capital tax rates to match labour tax rates, so that tax no longer tips the choice between automation and augmentation. It seems like a good idea but will have some unpopular side effects. Any conservative government making that change will surely lose voters, and economic growth could slow while businesses delay investments.

How about we lower labour tax rates to match capital tax rates, then? If we do that, government income will drop, leaving less money for essential services such as social security and for infrastructure such as roads.

Whatever change we make will have knock-on effects. So how do we escape the Turing Trap?

THE CHALLENGE AHEAD

The first step to solving any complex problem is understanding the separate issues and how they interact, and Brynjolfsson makes a great contribution with his ten theses. It’s clear we need two things.

  1. Visionaries who can imagine and inspire new possibilities that create whole new industries and jobs
  2. Policymakers who can figure out how to incentivise augmentation and creativity to help those jobs become real sooner rather than later

It’s no small feat. But what an exciting time to grab a piece of the puzzle and solve it. Your impact would be real, tangible, and visible.


About the Author:

HennyGe Wichers is a technology science writer and reporter. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.
