A leading expert in artificial intelligence safety has said that a letter calling for a six-month moratorium on the development of powerful AI systems does not go far enough.
In a recent opinion piece, Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, argued that the six-month "pause" on training "AI systems more powerful than GPT-4" called for by Tesla CEO Elon Musk and hundreds of other innovators and specialists understates the "seriousness" of the situation. He instead proposed an indefinite, worldwide moratorium on new large-scale AI training runs.
The letter, issued by the Future of Life Institute and signed by more than 1,000 people, argued that safety protocols should be developed and audited by independent outside experts to ensure that future AI systems are safe.
The letter stated that "powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable." Yudkowsky argues this is inadequate.
Yudkowsky wrote that the key issue is not "human-competitive intelligence," as the open letter puts it, but what happens once AI becomes smarter than humans.
He asserts that "many researchers who are deeply involved in these issues, including me, believe that the most likely outcome of building a superhumanly intelligent AI is that literally everybody on Earth will die. Not as in 'maybe some remote chance,' but as in 'that is the obvious thing that would happen.'"
Yudkowsky believes that an AI more intelligent than humans might disobey its creators and would not necessarily care about human life. He suggests not picturing "Terminator"-style robots. "Visualize an entire alien civilization, thinking at millions of times the speed of human thought, initially confined to computers — in a world of creatures that are, from its perspective, very stupid and very slow," he writes.
Yudkowsky warns that humanity has no workable plan for dealing with a superintelligence that can out-think it on every problem. He also raised concerns about whether AI researchers can even tell if their models have become "self-aware," and whether it is ethical to own them if they have.
He argues that six months is not enough time to come up with a plan. Solving the safety of superhuman intelligence, he writes, could reasonably take at least half as long as the more than 60 years it has taken the field to reach today's capabilities. Not perfect safety, but safety in the sense of "not killing literally everyone."
Yudkowsky instead calls for international cooperation, even between rivals such as the U.S. and China, to halt the development of powerful AI systems. He argues that this matters more than "preventing a full nuclear exchange" and that countries should even be willing to run "some risk of nuclear exchange" if that is what it takes "to reduce the risk of large AI training runs."
"Shut it all down," Yudkowsky wrote. Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are trained). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use to train an AI system, and lower it over time to compensate for more efficient training algorithms. No exceptions for governments and militaries.
Yudkowsky's stark warning comes as artificial intelligence software continues its rapid growth. OpenAI's ChatGPT is an artificial intelligence chatbot that can write content, compose songs and even generate code.
OpenAI CEO Sam Altman has himself acknowledged the risks, saying of the company's creation, "We've got to be careful here," and adding, "I think people should be happy that we are a little bit scared of this."