Artificial Ignorance


Have you seen the headlines?

Just recently Elon Musk made waves by saying that with artificial intelligence "we are summoning the demon" during a talk and Q&A he gave at MIT. This was the most prominent instance of a sentiment that has been echoed throughout the tech community for several years. He was referencing the recently published book Superintelligence, in which philosopher and futurist Nick Bostrom lays out the various concerns raised by the prospect of superintelligent AI.

Musk has since made other comments with a similar sense of urgency.

Soon thereafter, Bill Gates, Ray Kurzweil, and others made public statements about their lack of concern for anything AI-related.

Perhaps it seems strange to you that the most prominent and well-informed minds in the world have completely opposite viewpoints on the “dangers” of artificial intelligence. It certainly did to me.

Superintelligence by Nick Bostrom

So I set out to read “Superintelligence” for myself last year, and I also took notes on my thoughts after each AI-themed movie that came out (there were several).

What I discovered is an important schism in the conceptual framework people are using when they talk about the various "dangers of artificial intelligence": the concept of AI that Elon Musk is afraid of is very different from the concept of AI that Bill Gates insists is safe.

At its heart, this is a problem of terminology.

To put it simply, people in the "AI is safe" camp are referring to AI the way movies refer to AI. This concept is basically just a silicon-based person. It's not hard to imagine that a machine-person would be no threat at all, assuming it simply had more processing power and a more thorough, nuanced understanding of morality than our own.

The trouble is that it is very difficult to build a machine with a nuanced ethical framework.

The people who are afraid of “Artificial Intelligence” aren’t afraid of a fully formed computerized person. They’re afraid that it’s much easier to build an extremely efficient self-improving learning machine that isn’t a person at all.

This distinction is why it's so frustrating that the term "Artificial Intelligence" is being thrown around in this discussion at all. The word intelligence is loaded with myriad assumptions about self-awareness, personhood, agency, and more.

Intelligence is difficult to build.

By comparison, it may be easy for one of the hundreds of companies now racing to build an AI to "accidentally" initiate a machine capable of improving its own code, iteratively becoming a more capable (and potentially more destructive) force for progress toward whatever goal it has been assigned.

This is the real danger. Those in the “AI may destroy us” camp shouldn’t be using the term AI at all. They should be directly referencing recursively self-improving software or hardware of any kind. That is the real danger we face, not AI. If it were intelligence, it wouldn’t pose a threat.

The threat is artificial ignorance.

An ignorant (or unaware), iteratively self-improving machine. It need not have an ethical framework at all, or any sense of its place in the world. It may not even have an understanding of its own agency, or any form of self-awareness. Much like the web services we already use on a day-to-day basis, it could just be an algorithm carrying out a task.
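To make that concrete, here is a toy sketch of what such a task-executing optimizer looks like. The objective and names are made up for illustration; the point is that the loop knows nothing except the number it has been told to maximize.

```python
import random

# Toy sketch of an "ignorant" optimizer. The objective below is a stand-in;
# the loop has no representation of anything beyond the score it maximizes.

def score(params):
    # Arbitrary illustrative goal: push every parameter toward 3.0.
    return -sum((p - 3.0) ** 2 for p in params)

def improve(params, step=0.1):
    # Propose a small random tweak; keep it only if the score goes up.
    candidate = [p + random.uniform(-step, step) for p in params]
    return candidate if score(candidate) > score(params) else params

params = [0.0, 0.0, 0.0]
for _ in range(10_000):
    params = improve(params)  # iterate blindly toward the assigned goal

print(params, score(params))  # ends up near [3.0, 3.0, 3.0]; no ethics consulted
```

Nothing in that loop knows, or needs to know, what the goal means or what it costs to pursue it.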

Already there are powerful optimization routines being run on neural networks. See the video above of a neural net playing Breakout: it was given no direct instructions about how to play the game, and it learned on its own from nothing but the screen pixels and the score. None of these systems is powerful or sophisticated enough to refactor its own code base, but they are remarkably good at pattern recognition, good enough even to describe images in human language. For the first time in human history there are no technological or conceptual barriers between the current state of the art and a potentially self-designing machine. It's merely a matter of implementation.
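For intuition, here is a minimal sketch of the learning rule behind that Breakout result. This is not DeepMind's code: their system (a deep Q-network) reads raw pixels and approximates the value table with a neural network, whereas this sketch uses a plain lookup table, and the action names are hypothetical. The key point survives the simplification: the only thing the learner is ever told about the game is the reward.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch (a stand-in for the deep-network version).
# Q maps (state, action) pairs to an estimate of expected long-term reward.
ACTIONS = ["left", "stay", "right"]      # hypothetical paddle controls
Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration

def choose_action(state):
    # Mostly pick the action currently believed best; occasionally explore.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state):
    # Nudge the estimate toward the observed reward plus discounted future value.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

Run that update over millions of frames of play and the behavior improves steadily, with no instructions about bricks, paddles, or balls, only the score.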

When I finished reading Superintelligence (after seeing movies like Chappie, Ex Machina, and others that completely miss the point), I decided I would find a way to change the discussion.

If the entire machine learning field insists on continuing to misuse the term "artificial intelligence" just to make pithy headlines, we risk overlooking, and being unable to adequately describe, one of the most important, specific, and dangerous risks the advancement of human innovation has ever produced.

To paraphrase Superintelligence: the risk we face as a planet if we mistakenly create a self-improving system with no ethical regard for human life is comparable to the risk we would have faced if the Manhattan Project could have been carried out in people's garages in the 1940s.

This is why it is so essential we equip ourselves with terminology that will let us have more precise conversations about the risks we face. I hope this post can start that conversation.

Thanks for reading.

Want to learn more?

These are the most informative 20-minute reads about artificial intelligence I’ve found so far.

The AI Revolution Part 1: The Road to Superintelligence on WaitButWhy.com

The AI Revolution Part 2: Our Immortality or Extinction on WaitButWhy.com

The AI Anxiety by The Washington Post

Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem on Wired.com

The Doomsday Invention: Will artificial intelligence bring us utopia or destruction? on NewYorker.com

I also highly recommend Superintelligence if you really want to dive into the details of the problem.


