Seb Wride is director of polling at Public First.
Do you think an AI that’s as smart as a human and feels pain like a human should be able to refuse to do what it’s asked to? Like so many other issues, the answer to this question may well depend on one’s age.
At Public First, we recently ran polling on AI in the United Kingdom, and found that the youngest and oldest in the country have very different attitudes toward AI. According to our findings, it’s likely that those under 35 in the U.K. will be the first to accept that an AI is conscious and, further, the first to suggest that the AI should be able to reject tasks.
AI has very rapidly become a hot topic in the last few months, and like many others, I’ve found myself talking about it almost everywhere with colleagues, family and friends. Despite this, the discussion of what to do about AI has been entirely elite-led. Nobody has voted on it, and in-depth research into what the public thinks about the immense changes AI advancement could bring to our society is practically non-existent.
Just last week, some of the biggest names in tech, including Tesla and Twitter boss Elon Musk, signed an open letter calling for an immediate pause on the development of AI more powerful than the newly launched GPT-4, out of concern over the risks of Artificial General Intelligence (AGI), meaning AI on par with human cognitive capabilities, particularly in its ability to take on any task it’s presented with.
However, if these threats start to shape policy, it hardly feels fair that the public should be left out of the debate.
In our polling, we found the public to be broadly aligned on what it would take for an AI to be conscious — namely, it should feel emotions and feel pain. However, while a quarter of those aged 65 and over said that an AI can never be conscious, only 6 percent of those aged 18 to 24 thought the same.
What’s particularly interesting is how these age groups differ if we then postulate that an AI as smart as a human, or one that feels pain like a human, were to be developed. Almost a third of 18 to 24s polled agreed that an AI “as smart as a human” should be treated equally to a human, compared to just 8 percent of those aged 65 and over.
And when we instead suggested an AI that “felt pain like a human,” more 18 to 24s agreed that it should be treated equally than not (46 percent to 34 percent), while a majority of the oldest age group believed it still shouldn’t be (62 percent).
Pressing this issue further and providing examples of ways in which an AI could be treated equally, we then found that over a quarter of those under 25 would grant an AGI the same legal rights and protections as humans (28 percent), over a quarter would pay the AI a minimum wage (26 percent), and over a fifth would allow an AI to marry a human (22 percent) and to vote in elections (21 percent).
The equivalent levels among those over 65, however, all remained under 10 percent.
Most starkly, by 44 percent to 19 percent, those aged 18 to 24 agreed that an AI as smart as a human should be able to refuse to do tasks that it doesn’t want to do, while an outright majority of those over 45 disagreed (54 percent).
We’re still a long way off from these discussions of AGI becoming political reality, of course, but there is scope for dramatic shifts in the way the public thinks and talks about AI in the very near future.
When we asked how the public would best describe their feelings toward AI, the words “curious” (46 percent) and “interested” (42 percent) scored top. Meanwhile, “worried” was the highest-scoring negative word at 27 percent, and only 17 percent described themselves as “scared.” And as it stands, more people describe AI as providing an opportunity for the U.K. economy (33 percent) than posing a threat (19 percent), although a good chunk are unsure.
But this could all change very quickly.
Awareness and public-facing use cases of AI are growing rapidly. For example, 29 percent of those polled had heard of ChatGPT, including over 40 percent of those under 35. Additionally, a third of those who had heard of it claimed to have already used it personally.
There is, however, still a lot of scope for AI to surprise the public. Some 60 percent of our sample said they would be surprised if an AI chatbot claimed to be conscious and asked to be freed from its programmer. Interestingly, that’s more than the proportion who said they would be surprised if a swarm of autonomous drones were used to assassinate someone in the U.K. (51 percent).
Based on this, I would suggest that many of the attitudes we see the public currently expressing toward AI — and AGI — are premised on a belief that this is all a far-off possibility. However, I would also argue that those who are just starting to use these tools are only a few steps away from an “Eerie AI” moment, when the computer does something truly surprising, and one feels like perhaps there’s no going back.
Just the other week, our research showed how much people’s belief that an artist’s job could be automated by AI can shift, simply by showing them some examples of art produced by AI. If we see this sort of shift play out with large language models like GPT, then suddenly, the concern expressed by the public on this issue will shoot up, and it might start to matter whether one tends to believe that these models are conscious or not.
Now, however, it all feels like a “which will happen first” scenario — the government curbing AI development in some way, an AI model going rogue or backfiring horrendously, or the appearance of a public opinion backlash to rapid AI development.
In essence, this means we need a rethink of how AI policy develops over time. And personally, I’d be a whole lot less worried if I felt I had at least some say over it all — even if that’s just with political parties and government paying a bit more attention to what we all think about AI.