Be very afraid—maybe

Technological and scientific innovations have brought enormous benefits to the world, as discussed many times in this space.

We’ve also touched on cases where technology and science have been deployed for questionable, even malevolent, purposes.

Underlying all technical and scientific achievements is human ingenuity. Our seemingly limitless imagination, intuition, and creativity have led to the rapidly evolving technological landscape.

We will surely continue to see impressive, sometimes miraculous, advancements in every field—computers, medicine, space exploration, biogenetics, and so on.

It follows, therefore, that human ingenuity will continue to be responsible for those advancements. Today, surely, but what about the future?

According to a growing chorus of prominent scientists and technologists, artificial intelligence (AI) might someday be responsible for exponential leaps forward in scientific and technological development.

AI might also bring about the end of the human race.

That would be bad, right?

The year is 2029. Smoke fills the air. The landscape has been reduced to rubble, including the twisted metal frame of a playground and, in the distance, what were once skyscrapers. Human remains are scattered about.

Suddenly a metallic foot steps on a skull, and a ghastly-looking android carrying an other-worldly weapon appears. Airships with spotlights patrol the area, firing lasers. Desperate humans, completely outgunned, scatter like cockroaches.

Our dystopian future comes into focus, and it is chilling. But these images, from the opening scene of Terminator 2: Judgment Day, are science fiction.

We have nothing to fear, do we?

Yes, according to some very prominent scientists and technologists. If we continue on the path toward AI, they say, it could mean the eventual annihilation of the human race.

To paraphrase Bill Murray in Ghostbusters, that would be bad, right?

Who are these naysayers sounding such dire warnings? Let’s explore.

Summoning the demon

Back in August, Elon Musk, he of Tesla and SpaceX fame, tweeted that artificial intelligence could end up being more dangerous than nuclear weapons.

Musk was recommending the book Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, which explores the possibility that machine intelligence could surpass human intelligence.

On October 24, Musk spoke at the MIT Aeronautics and Astronautics Centennial Symposium and stated:

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.

Musk is highly quotable: an entrepreneur, a visionary, an inventor, and a colorful personality. He is known for making thought-provoking, even controversial, statements. One might take some of his statements with a grain of salt, but on the whole, it’s hard not to take him seriously.

Humans to be superseded

Speaking of taking someone seriously, when Stephen Hawking, the Dennis Stanton Avery and Sally Tsui Wong-Avery Director of Research at the Department of Applied Mathematics and Theoretical Physics and Founder of the Centre for Theoretical Cosmology at Cambridge, speaks, people listen.

In an expansive interview with the BBC, Hawking spoke on a number of subjects, but perhaps his most surprising comments were in reference to the dangers of AI. Hawking told the BBC:

The development of full artificial intelligence could spell the end of the human race…[AI] would take off on its own, and re-design itself at an ever increasing rate…Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.

Hawking’s point about the accelerated evolution of AI, the idea that machines could iteratively improve themselves far faster than humans can evolve, is chilling.

Malevolent machines

Not all scientists and technologists believe that the scenarios mapped out by Musk and Hawking will come to pass.

But if they do, what would be the catalyst that would make highly evolved thinking machines malevolent to humans?

How about scientific evidence that we are slowly but surely destroying life on our planet? Overpopulating. Polluting our atmosphere. Killing our oceans. Changing our climate.

Would intelligent, powerful machines deduce that the human race must be eradicated to save Mother Earth?

Here’s an idea: let’s fix these problems and make the possibility a moot point.
