Elon Musk has asked a court to settle the question of whether GPT-4 is an artificial general intelligence (AGI), as part of a lawsuit against OpenAI. The development of AGI, capable of performing a range of tasks just like a human, is one of the leading goals of the field, but experts say the idea of a judge deciding whether GPT-4 qualifies is “impractical”.
Musk was one of the founders of OpenAI in 2015, but he left it in February 2018, reportedly over a dispute about the firm changing from a non-profit to a capped-profit model. Despite this, he continued to support OpenAI financially, with his legal complaint claiming he donated more than $44 million to it between 2016 and 2020.
Since the arrival of ChatGPT, OpenAI’s flagship chatbot product, in November 2022, and the firm’s partnership with Microsoft, Musk has warned AI development is moving too quickly – a view only exacerbated by the release of GPT-4, the latest AI model to power ChatGPT. In July 2023, he set up xAI, a competitor to OpenAI.
Now, in a lawsuit filed in a California court, Musk, through his lawyer, has asked for “judicial determination that GPT-4 constitutes Artificial General Intelligence and is thereby outside the scope of OpenAI’s license to Microsoft”. This is because OpenAI has pledged to only license “pre-AGI” technology. Musk also has a number of other asks, including financial compensation for his role in helping set up OpenAI.
But can a judge decide when AGI has been achieved? “I think it’s impractical in the general sense, since AGI has no accepted definition and is something of a made-up term,” says Mike Cook at King’s College London.
“Whether OpenAI has achieved AGI is at its very best hotly debated among those who decide based on scientific facts,” says Eerke Boiten at De Montfort University in Leicester, UK. “It seems unusual to me for a court to be able to establish a scientific truth.”
Such a ruling wouldn’t be legally impossible, however. “We’ve seen all sorts of ridiculous definitions come out of court decisions in the US. Would it convince anyone apart from the most out-there AGI adherents? Not at all,” says Catherine Flick at Staffordshire University, UK.
What Musk hopes to achieve with the AGI lawsuit is unclear – New Scientist has contacted both him and OpenAI for comment, but is yet to receive a response from either.
Regardless of the rationale behind it, the lawsuit puts OpenAI in an unenviable position. CEO Sam Altman has made it clear that the firm intends to build an AGI and has issued dire warnings that its powerful technology needs to be regulated.
“It’s in OpenAI’s interests to constantly imply their tools are getting better and closer to doing this, because it keeps attention on them, headlines flowing and so on,” says Cook. But now it may need to argue the opposite.
Even if the court relied on expert views, any judge would struggle to unpick the differing viewpoints over the hotly disputed topic of when an AI constitutes an AGI, let alone to rule in Musk’s favour. “Most of the scientific community currently would say AGI has not been achieved,” says Boiten, that is “if the concept of AGI is even considered meaningful or precise enough”.