Top AI Researcher Warns: Humanity Will Go Extinct Unless AI Development Is Halted

The only way to prevent the extinction of humanity is to halt the global development of powerful artificial intelligence systems and severely punish anyone who breaks the moratorium, a prominent AI expert has warned.

In an opinion piece published in TIME magazine on Wednesday, Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute (MIRI), explained why he chose not to sign a petition urging "all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4," the multimodal large language model released by OpenAI earlier this month.

Elon Musk and Steve Wozniak, among others, signed the letter, which Yudkowsky claimed "asked for too little to remedy" the issue of AI's unchecked and rapid development. In his essay, Yudkowsky predicted that "virtually everyone on Earth will perish" if superhumanly intelligent AI were created in the existing environment.

He stated that for humanity to survive a collision with a computer system that "does not care for us nor for sentient life in general," it would take "precision and preparedness and fresh scientific insights" that it now lacks and is unlikely to acquire in the near future.

A sufficiently clever AI won't be restricted to computers for very long, according to Yudkowsky. The AI would probably be able to "create artificial life forms or bootstrap straight to postbiological molecular manufacturing," he said, adding that it's already possible to email DNA strings to labs to produce proteins.

The researcher contends that there must be a permanent, global ban on all new significant AI training runs. He emphasized that "there can be no exceptions," including for governments or militaries.

International agreements should be made to set a cap on the amount of processing power that may be used to train these systems, according to Yudkowsky.

"Be less afraid of a shooting conflict between nations than of the moratorium being breached; be willing to demolish a rogue datacenter by airstrike if intelligence says that a country outside the accord is developing a GPU cluster," he said.

It should be made "explicit in international diplomacy that preventing AI extinction scenarios is regarded as a priority above preventing a full nuclear exchange," he continued, because the threat posed by artificial intelligence is so great.
