Here’s why AI can be extremely dangerous, whether conscious or not

“The idea that these things could become smarter than people… I thought it was a long way off… Obviously, I don’t think that anymore,” said Geoffrey Hinton, one of Google’s top artificial intelligence scientists, also known as “the godfather of AI,” after he quit his job in April so that he could warn about the dangers of this technology.

He’s not the only one worried. A 2023 survey of AI experts found that 36 percent fear that AI development could result in a “nuclear-level catastrophe.” Nearly 28,000 people have signed an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other leading technologists, calling for a six-month pause or a moratorium on new advanced AI development.

As a consciousness researcher, I share these strong concerns about the rapid development of AI, and I am one of the signatories of the Future of Life Institute’s open letter.

Why are we all so worried? In short: AI development is going too fast.

The key issue is the profoundly rapid improvement in conversational ability among the new generation of advanced “chatbots,” or what are technically called “large language models” (LLMs). With this coming “AI explosion,” we probably have only one chance to get it right.

If we’re wrong, we may not live to tell about it. This is not hyperbole.

This rapid acceleration promises to soon result in “artificial general intelligence” (AGI), and when that happens, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google’s AlphaZero AI learned to play chess better than even the very best human or other AI chess players in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times.
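For readers who want a concrete, if toy, picture of what “learning by playing itself” means, here is a minimal sketch of my own in Python. It is emphatically not AlphaZero’s actual method (which combines deep neural networks with tree search); it only illustrates the shape of the self-play loop: a small program teaches itself the game of Nim with no human examples at all, improving purely from the outcomes of games it plays against itself.

```python
# A toy illustration of self-play learning (not AlphaZero's actual method):
# a tabular agent teaches itself the game of Nim from nothing, using only
# the outcomes of games played against itself.

import random
from collections import defaultdict

HEAP_SIZE = 15           # the game starts with 15 stones
MAX_TAKE = 3             # each turn removes 1-3 stones; whoever takes the last stone wins

Q = defaultdict(float)   # Q[(stones_left, stones_taken)] -> learned value of that move
EPSILON, ALPHA = 0.1, 0.5

def legal_moves(stones):
    return range(1, min(MAX_TAKE, stones) + 1)

def choose_move(stones, explore=True):
    moves = list(legal_moves(stones))
    if explore and random.random() < EPSILON:
        return random.choice(moves)                 # occasional random exploration
    return max(moves, key=lambda m: Q[(stones, m)])

def self_play_game():
    """Play one game of the agent against itself and learn from the outcome."""
    history, stones = [], HEAP_SIZE
    while stones > 0:
        move = choose_move(stones)
        history.append((stones, move))
        stones -= move
    # Whoever moved last won. Credit the moves with alternating +1/-1 outcomes,
    # walking backward through the game (a simple Monte Carlo update).
    outcome = 1.0
    for stones, move in reversed(history):
        Q[(stones, move)] += ALPHA * (outcome - Q[(stones, move)])
        outcome = -outcome

for _ in range(50_000):                             # AlphaZero played millions of games
    self_play_game()

# With enough self-play the agent tends to rediscover the known optimal rule
# for this game: always leave the opponent a multiple of four stones.
for stones in range(1, HEAP_SIZE + 1):
    print(stones, "stones -> take", choose_move(stones, explore=False))
```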

A team of Microsoft researchers led by Sébastien Bubeck, who analyzed OpenAI’s GPT-4, which I believe is the best of the new advanced chatbots currently available, said in a new paper that it showed “sparks of artificial general intelligence.”

In that testing, GPT-4 performed better than 90 percent of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10 percent for the previous version, GPT-3.5, which was trained on a smaller data set. They found similar improvements on dozens of other standardized tests.

Most of these tests are tests of reasoning. That is the main reason Bubeck and his team concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”

This pace of change is why Hinton told the New York Times: “Look how it was five years ago and how it is now. Take the difference and propagate it forward. That’s scary.” At a mid-May Senate hearing on the potential of AI, Sam Altman, the CEO of OpenAI, called regulation “crucial.”

Once AI can improve itself, which may be no more than a few years away, and could in fact already be here now, we have no way of knowing what the AI will do or how we can control it. This is because super-intelligent AI (which, by definition, can outperform humans in a broad range of activities) will be able to run circles around programmers and any other human by manipulating people into doing its will; this is what worries me most. It will also have the capacity to act in the virtual world through its electronic connections, and to act in the physical world through robot bodies.

This is known as the “control problem” or the “alignment problem” (see philosopher Nick Bostrom’s book Superintelligence for a good overview), and it has been studied and debated by philosophers and scientists, such as Bostrom, Seth Baum and Eliezer Yudkowsky, for decades.

I think of it this way: Why would we expect a newborn baby to beat a chess grandmaster? We wouldn’t. Similarly, why would we expect to be able to control super-intelligent AI systems? (No, we won’t be able to simply hit the off switch, because a super-intelligent AI will have thought of every possible way we might do that and taken steps to prevent being shut off.)

Here’s another way to look at it: a super-intelligent AI will be able to do in about a second what a team of 100 human software engineers would take a year or more to complete. Or pick any task, like designing a new advanced aircraft or weapons system, and the super-intelligent AI could do it in about a second.

Once AI systems are built into robots, they will be able to act in the real world, rather than only the virtual (electronic) world, with that same degree of super-intelligence, and will of course be able to replicate and improve themselves at a superhuman pace.

Any defenses or protections we try to build into these AI “gods,” on their path toward godhood, will be easily anticipated and neutralized by the AI once it reaches the status of super-intelligence. This is what it means to be super-intelligent.

We will not be able to control them because anything we think of, they will have already thought of, a million times faster than us. Any defenses we have built in will come undone, like Gulliver throwing off the tiny threads the Lilliputians used to try to restrain him.

Some argue that these LLMs are just automation machines with no awareness at all, the implication being that if they are not conscious, they have less chance of breaking free of their programming. But even if these language models, now or in the future, are not at all conscious, this doesn’t matter. For the record, I agree that they are unlikely to have any real awareness at this point, though I remain open to new facts as they emerge.

Regardless, a nuclear bomb can kill millions without any consciousness whatsoever. In the same way, AI could kill millions with zero consciousness, in a myriad of ways, including potentially the use of nuclear bombs, either directly (much less likely) or through manipulated human intermediaries (more likely).

So the debates about consciousness and AI really don’t figure very prominently in the debates about AI safety.

Yes, language models based on GPT-4 and many other models are already circulating widely. But the moratorium being called for is a stop to the development of any new models more powerful than GPT-4, and this can be enforced, with force if required. Training these more powerful models requires massive server farms and enormous amounts of energy. They can be shut down.

My ethical compass tells me that it is very unwise to create these systems when we already know that we will not be able to control them, even in the relatively near future. Discernment is knowing when to pull back from the edge. Now is that time.

We shouldn’t open Pandora’s box any more than it has already been opened.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
