With the recent news that the ChatGPT AI can pass a theory of mind test, how far away are we from an artificial intelligence that fully understands the goals and beliefs of others?
SUPERHUMAN artificial intelligence is already among us. Well, sort of. When it comes to playing games like chess and Go, or solving difficult scientific challenges like predicting protein structures, computers are well ahead of us. But we have one superpower they aren’t even close to mastering: mind reading.
Humans have an uncanny ability to deduce the goals, desires and beliefs of others, a crucial skill that means we can anticipate other people’s actions and the consequences of our own. Reading minds comes so easily to us, though, that we often don’t think to spell out what we want. If AIs are to become truly useful in everyday life – to collaborate effectively with us or, in the case of self-driving cars, to understand that a child might run into the road after a bouncing ball – they need to develop similarly intuitive abilities.
The trouble is that doing so is far harder than training an AI to play grandmaster-level chess. It involves dealing with the uncertainties of human behaviour and requires flexible thinking, which AIs have typically struggled with. But recent developments, including evidence that the AI behind ChatGPT understands the perspectives of others, show that socially savvy machines aren’t a pipe dream. What’s more, thinking about others could be a step towards a grander goal – AI with self-awareness.
“If we want robots, or AI in general, to integrate into our lives in a seamless way, then we have to figure this out,” says Hod Lipson at Columbia University in New York. “We have to give them this gift that evolution has given us to read other people’s minds.”
Psychologists refer to the ability to infer …