The race to roll out artificial intelligence is happening as quickly as the race to contain it – as two key moments this week demonstrate.
On 10 May, Google announced plans to deploy new large language models, which use machine learning techniques to generate text, across its existing products. “We are reimagining all of our core products, including search,” said Sundar Pichai, the CEO of Google’s parent company Alphabet, at a press conference. The move is widely seen as a response to Microsoft adding similar functionality to its search engine, Bing.
A day later, politicians in the European Union agreed on new rules dictating how and when AI can be used. The bloc’s AI Act has been years in the making, but has moved quickly to stay up to date: in the past month, legislators drafted and passed rules dictating the use of generative AIs, the popularity of which has exploded in the past six months. This includes a requirement to disclose the use of any copyrighted material in training such AIs. The draft text will move forwards to a vote in the European Parliament in June.
But Google, like Microsoft and other tech giants, appears to be paying little attention to what may soon become the world’s most dominant form of AI legislation. Although EU laws only apply in member countries, the size of the bloc means companies can end up complying with its rules globally, as has broadly happened with the EU’s General Data Protection Regulation (GDPR).
How do we square this contradiction? “I hope I’m wrong, but it seems to me that these companies ignoring copyright issues is a power move,” says Carissa Véliz at the University of Oxford. “They are betting that their products are so seductive that governments will have to adapt to them, as opposed to these companies adapting their products to the rule of law.”
While some AI companies have set up agreements to license copyrighted material, others appear to be taking the approach of begging forgiveness rather than asking permission. The EU’s AI Act may eventually force companies to formalise their use of copyrighted material, but exactly how that will play out is unclear.
Michael Veale at University College London thinks companies like Google will develop something similar to YouTube’s Content ID system, which allows rights-holders to claim content and choose either to remove it or to monetise it. “I suspect AI firms are looking at similar models today, which would allow them both to play a compliance game while minimising costs by staying the price-setter, not the price-taker,” he says. Google didn’t respond to a request for comment.
Whatever happens, it is clear that the roll-out of AI is unlikely to slow down. “The speed at which companies are moving shows the strategic edge that AI will give today,” says Benedict Macon-Cooney at the Tony Blair Institute for Global Change, UK. “This race could present profound opportunities, as a once-in-a-generation technology begins to be applied to accelerate science, health and industries old and new.”
But the divergent paths being trodden by the tech giants and the EU set up a “struggle between titans, a clash between cultures”, says Véliz. She believes that “humanity is at a crossroads” and the rules we establish now – or our failure to do so – will set the future direction of travel for years to come.