Sam Harris Explains the AI Myth

TED Talk: Can we build AI without losing control over it?

This talk packs most AI mythology into a single 15-minute presentation. Harris joins a long list of "talking heads" warning us that AI is a clear and present danger, that we are just around the corner from an AI armageddon.

His bottom line is that this disaster is real and unavoidable. His complaint is that we are not sufficiently scared about it.

AI, like the concept of God, becomes more and more real in the public mind just because everyone is talking about it.

His assumptions:
  1. Intelligence is a matter of information processing. He employs the fallacy that because our brains are "mere matter", we will eventually build something like them.
  2. We will continue to improve our machines.
  3. We don't stand on a peak of intelligence. He assumes that intelligence is defined as in #1 and puts everyone on a curve between a chicken and John von Neumann. It's "overwhelmingly likely" that machines will "explore this spectrum" to the point where you will "obviously" build something millions of times faster than a team of Stanford researchers.
Past about the nine-minute mark, Harris abandons any connection to machine intelligence and slips into a polemic that assumes the catastrophe he thinks he's established as inevitable. AI is assumed to be possible, assumed to zoom past human "intelligence", and assumed to become God-like.

Let's unpack his assumptions.

At the most basic level, Harris is presenting himself as uniquely capable of seeing the future. Marvin Minsky is pessimistic about making predictions in the field he founded. But Harris, who has no claim to expertise in the field, knows better. Much has been written about the laughable track record of "futurists" in general.

The "elephant in the room" is the term AI itself. Those who warn about the dangers of "Artificial Intelligence" assume that industry is hard at work creating machines that are more and more "intelligent". To the average person, "intelligence" is a human quality and therefore, "Science" is working on building very smart humans. A flood of movies featuring metallic people creates the impression that "AI" is just around the corner. This seems to be the modern equivalent of the vision of interstellar travel we saw in Star Trek. The level of plausibility is similar.

What we are really working on is "Machine Intelligence", which Harris correctly defines as "Information Processing". But the "Information" a machine "processes" is not the "information" that the human mind processes. It is more precisely referred to as "data". It becomes "information" only in the eye of a human being, who is uniquely capable of detecting the "meaning" of the data. "Meaning" is a very slippery concept. We are still struggling to understand what it would be for a machine to "understand" what it's doing. The pioneers of AI (such as Marvin Minsky) are still with us and they admit that they still have no convincing definition of what it is to "reason". Half a century of AI research is showing a divergence, not a convergence between the concept of machine intelligence and the human mind.

Is it true that intelligence is "nothing more" than information processing - "mere computation"? This is the "brain as computer" analogy. It's important to realize that this is an analogy, not a fact. Harris uses the most naive possible reductionist assumption: because our brains are "just" atoms whizzing around in our skulls, there is nothing more to them. "In principle", not only is intelligence just a matter of physics, it's a matter that we can completely understand and duplicate with equivalent machines. In fact, the more we learn about brains and minds, the more we discover layers of complexity that don't seem to be fruitfully explained or described by the physics of "atoms whizzing around", by which we must assume Harris means "quantum mechanics".

There are radically different ways of describing human intelligence. For example, "Surfaces and Essences" portrays the human brain as outstripping computers in the ability to form and use analogies and to form new categories based on experience. At bottom, computers are terrible at detecting meaning. One of the most compelling observations in "Surfaces" is that meaning has little to do with "logic" - the basis of all computer software.

Can we assume that "thinking machines" will continue to improve endlessly? It's fair to put this assumption up against similar predictions about other technologies. Are we improving our ability to explore space "in person" endlessly? At what point will we discover a way to travel faster than light and explore the galaxy? Even here on Earth, are we improving the speed of aircraft "endlessly"? Are our transport jets getting faster and faster? Will we continuously improve the efficiency of chemical engines, or does thermodynamics pose a limit? In the case of computers, can we assume that they will get faster and faster "without limit"? Can we assume that the "intelligence" of machines is solely a function of speed and sophisticated design, or are there things that are intrinsically impossible for a machine to do?

Do we have any sign at all that machines are on the verge of designing better versions of themselves?

What we do see is humans getting smarter and smarter as they build and use more and more powerful machines of all sorts. It is the human mind, not the machine mind that seems to hold promise of unlimited intelligence. The danger is posed by humans who control other humans by means of machines. The problem seems to be that we create human organizations that reduce humans to "cogs in the machine" and these organizations don't seem to be getting "smarter". In fact, we see dumber and dumber human organizations employing more and more sophisticated technology. 

The trend for humans to use technology to enslave other humans is not new. Genghis Khan used the technology of the horse to enslave vast swaths of Eurasia. America knocked Japan out of World War II with an atom bomb that didn't design itself, select its target, or deliver itself.

We are justifiably worried about the power of information technology to monitor our every movement. In the hands of a totalitarian government, such technology would seem to lock in the power of those who control it, spelling the end of our dreams that society can be organized to benefit all its members. In fact, we see this actually happening all over the world. While the power to monitor and control is aided and abetted by sophisticated technology, it's also true that we are seeing hints that there are limits to the power technology bestows on its owners. These limits are precisely due to the inability of machines to discover meaning in all the vast data they process so quickly.

The real danger to humanity is that technology itself, in any form, forces humans into more and more narrow "slots" in big machines of human construction. We have "jobs". Our role in society is reduced to the mathematics of economics. We are producers, consumers, soldiers or superfluous. We have seen the enemy and he is us.
