AI Expert Warns Elon Musk-Signed Letter Doesn’t Go Far Enough

A leading expert in artificial intelligence safety has said that a letter calling for a six-month moratorium on the development of powerful AI systems does not go far enough.

In a recent opinion piece, Eliezer Yudkowsky, a decision theorist at the Machine Intelligence Research Institute, argued that the six-month “pause” on training “AI systems more powerful than GPT-4” called for by Tesla CEO Elon Musk and hundreds of other innovators and specialists understates the “seriousness” of the situation. He instead proposed an indefinite worldwide moratorium on large AI training runs.

The letter was issued by the Future of Life Institute and signed by more than 1,000 people. It argued that independent outside experts must develop and oversee safety protocols to ensure that future AI systems are safe.

The letter stated that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.” Yudkowsky considers this inadequate.

Yudkowsky wrote that the key issue is not “human-competitive” intelligence, as the open letter puts it, but what happens once AI becomes smarter than humans.

He asserts that “many researchers who are deeply involved in these issues, including me, believe that the most likely outcome of building a superhumanly intelligent AI is that literally everybody on Earth will die. Not as in ‘maybe some remote chance,’ but as in ‘that is the obvious thing that will happen.’”

Yudkowsky believes that an AI more intelligent than humans might not obey its creators and may disregard human life. He says not to think “Terminator.” “Visualize an entire alien civilisation, thinking at millions of times the speed of human thought, initially confined only to computers — in a world of creatures that are very stupid and slow,” he writes.

Yudkowsky warns that there is no viable plan for dealing with superintelligences that can find the best solution to any problem. He also raised concerns about whether AI researchers can even tell when learning models have become “self-aware,” and, if so, whether it is ethical to own them.

He argues that six months is not enough time to come up with a plan. The field took decades to reach today’s capabilities, and solving the safety of superhuman intelligence could reasonably take at least half that long. This is not perfect safety, but safety in the sense of “not killing literally everyone.”

Yudkowsky instead proposes international cooperation, even between rivals such as the U.S. and China, to halt the development of powerful AI systems. He says this is more important than “preventing a full nuclear exchange” and that countries should even consider using nuclear weapons “if that’s what it takes to reduce the risk of large AI training runs.”

“Shut it all down,” Yudkowsky wrote. Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are trained). Stop all large training runs. Put a cap on how much computing power anyone is allowed to use to train an AI system, and lower it over time to compensate for more efficient training algorithms. No exceptions for governments and militaries.

Yudkowsky’s stark warning comes as artificial intelligence software continues its rapid growth. OpenAI’s ChatGPT is an AI chatbot that can create content, compose songs and even write code.

Speaking about the company’s creation, OpenAI CEO Sam Altman said, “We’ve got to be careful here.” “I think people should be happy that we are a little bit scared of this,” he added.

Nate Kennedy
