
AI Expert Warns Elon Musk-Signed Letter Doesn’t Go Far Enough

A leading expert in artificial intelligence safety has stated that a letter calling for a six-month moratorium on the development of powerful AI systems does not go far enough.

In a recent opinion piece, Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, argued that the six-month “pause” on “AI systems stronger than GPT-4” called for by Tesla CEO Elon Musk and hundreds of other innovators and specialists understates the “seriousness” of the situation. He proposed instead an indefinite worldwide moratorium on large-scale AI training runs.

The letter, issued by the Future of Life Institute and signed by more than 1,000 people, argued that safety protocols should be developed and audited by independent overseers before more powerful AI systems are built.

The letter stated that “powerful AI systems should only be developed once we are confident their effects will prove positive and their risks can be managed.” Yudkowsky thinks this is inadequate.

Yudkowsky wrote that the key issue is not “human-competitive intelligence,” as the open letter puts it, but what happens once AI becomes smarter than humans.

He asserts that “many researchers who are deeply involved in these issues, including me, believe that the most likely outcome of building a superhumanly intelligent AI is that literally everybody on Earth will die. Not as in ‘maybe some remote chance,’ but as in ‘that is the obvious thing that will happen.’”

Yudkowsky believes that an AI more intelligent than humans might not obey its creators and might disregard human life. He suggests not thinking “Terminator.” “Visualize an entire alien civilisation, thinking at millions of times the speed of human thought, initially confined to computers — in a world of creatures that are, from its perspective, very stupid and very slow,” he writes.

Yudkowsky warns that there is currently no workable plan for dealing with a superintelligence. He also raised concerns about whether AI researchers can even determine if learning models have become “self-aware,” and whether it is ethical to own them if they have.

He argues that six months is not enough time to come up with a plan, and that solving the safety of superhuman intelligence could take far longer. This does not mean perfect safety, but safety in the sense of “not killing literally everyone.”

Yudkowsky instead proposes international cooperation, even between rivals such as the U.S. and China, to halt the development of powerful AI systems. He says this matters more than “preventing a full nuclear exchange,” and that countries should even consider the use of nuclear weapons “if that’s what it takes to reduce the risk of large AI training runs.”

“Shut it all down,” Yudkowsky wrote. Shut down all large GPU clusters — the large computer farms where the most powerful AIs are refined. Halt all large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over time to compensate for more efficient training algorithms. No exceptions for governments and militaries.

Yudkowsky’s warning comes as artificial intelligence software continues its rapid growth. ChatGPT, OpenAI’s artificial intelligence chatbot, can create content, compose songs, and even write code.

Even OpenAI CEO Sam Altman has acknowledged the risks of his company’s creation, stating, “We’ve got to be careful here.” “I think people should be happy that we are a little bit scared of this,” he said.
