Halt development of new AI to protect humanity: Chilling call by Elon Musk and tech titans


Humanity is in danger from “AI experiments”, and they must be paused, according to more than 1,000 experts.

Researchers need to stop developing new artificial intelligence systems for the next six months – and if they do not, governments should step in, they warned.

That is the grave conclusion of a new open letter signed by experts including academics in the field and technology leaders including Elon Musk and Apple co-founder Steve Wozniak.

The letter notes that the positive possibilities of AI are significant. It says that humanity “can enjoy a flourishing future” with the technology, and that we could now have an “AI summer” in which society adapts to what has already been created.

But if scientists continue to train new models, then the world could be faced with a much more difficult situation. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the authors of the letter write.

The most advanced publicly available AI system at the moment is GPT-4, developed by OpenAI, which was released earlier this month. The letter says that AI labs should pause work on any system more powerful than that, for at least the next six months.

“This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the authors write.

During those six months, both AI labs and independent experts should work to create new shared principles for the design of AI systems, they say. Those principles should ensure that any system built under them is “safe beyond a reasonable doubt”.

It would not mean pausing AI work in general, but stopping the development of new models and capabilities. Instead, that research “should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal”.

The same pause could also allow time for policymakers to develop new governance systems for AI. That would involve creating authorities able to track the development of such systems and ensure they are not pursuing dangerous ends.

At the moment, the letter includes signatures from founders and chief executives of companies including Pinterest, Skype, Apple and Tesla. It also includes academic experts from universities including Berkeley and Princeton.

Some researchers at companies working on their own AI systems – such as DeepMind, the UK artificial intelligence company owned by Google parent Alphabet – have also signed the letter.

Elon Musk was one of the founders of OpenAI, contributing funding when it launched at the end of 2015. But in recent months he has become increasingly critical of its work, arguing that it has become fixated on new systems and is wrongly developing them for profit.