Garry Kasparov and Stuart Russell on Machine Intelligence
This is a great talk by Kasparov, the chess master who "lost" to the Deep Blue computer.
Stuart Russell has more to say on the subject.
Stuart Russell is a professor (and former chair) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. His research covers a wide range of topics in artificial intelligence, including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations.
He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty.
Intelligent machines (or even dumb ones) are a model of the human mind, or, more exactly, a model of certain capabilities of the human mind. Russell's talk begins with the notion that "soon" intelligent machines will have read everything humans have ever written and understood it all. It would be hard to find a clearer statement of what computers are and what their capabilities might ultimately be. I'd say that the computer models human language - ultimately all human language, along with the complex structure of metaphor that underlies language. The question is, could a machine understand language? Or, to put it another way, what is it to understand?
In "Surfaces and Essences", Hofstader points out that our linguistic categories that underly language are ultimately rather arbitrary and based on situations encountered in human life. This would seem to be a problem for a machine that can obviously never encounter anything like human experience. If this problem is to be dealt with, perhaps Jaynes provides a clue as to the kind of "experience" that the computer must use as the basis for metaphorical "boot strapping". This would include things like the human perspective of being in a specific place at a specific time, along with a "narrative" about what is going on there and why.
The biggest challenge for a computer will be to model the non-linguistic part of the human mind, including recognition of objects and general "situations" -- in other words, the context of the computer's actions. It seems like a stretch for us to model this in a computer, which is more and more "cloud based" (no location) and (with Russell's assumption) "aware" of all human history. How is this moment different? What is the "situation"?
Perhaps the trickiest thing to "model" (hinted at by Russell) is that human experience always involves lack of knowledge and massive uncertainty. What would the computer need to "forget" in order to "understand" a human "situation"?