The Language Model for Dialogue Applications (LaMDA) as a Communication Medium


In this post, I summarize some of the conclusions that come out of the LaMDA controversy. To find out what LaMDA actually is, check here. This post deals with the current controversy: whether LaMDA or any similar creation can be "sentient." Spoiler alert: nothing we are doing now places us on the road to creating a sentient machine.

A sentient computer would be like a sentient book.

Computers talk to us like books, TV, and phones talk to us. 

Talking computers descend from centuries of effort that have given us talking parrots.

When we see intelligent words coming out of a computer, our natural instinct is to attribute sentience to the computer. We will get over it, just as we got over our amazement at words coming from the first telephone.  Or a parrot.

Under normal circumstances, I use words to cause mental events in your mind. This depends on both understanding the language and having a reasonably similar experience of life. In this way, language is a means of communication between two sentient beings.

If I use a megaphone or a telephone to communicate with you, this has not changed. Here, I use an incredibly sophisticated suite of technologies to communicate with you, yet it will not occur to you to think that these words are coming from some element of that technology, such as a word-processor AI. Instead, you will assume there is some kind of conscious intent behind all this, and likewise, I assume that you are a sentient being.

Google developed LaMDA as an automated way to summarize what you get when you submit a Google query. That would be part of "many-to-many" communication that originates with human beings who have the intention of communicating something, in perhaps many ways, to many other human beings. Yet, the technical "bucket brigade" between them and you is still no more "sentient" than your TV set. And, again, the "information" underlying the system matters more than the presentation. One might argue that wrapping it in a "conversation" makes the whole thing more credible than it should be.

LaMDA is not only an "intelligent" speaker but also an "intelligent" "listener" and "learner." The ability of LaMDA to "listen" and "learn" represents the current state of the art in natural language processing, a field that is half a century old. "Training" consists of tweaking a vast array of parameters until the machine starts getting the right answers (or talking) in the desired way. Apple's Siri and Amazon's Alexa are products of this engineering effort. This would be fairy magic in 1970. It is now routine.
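To make "tweaking a vast array of parameters" concrete, here is a minimal sketch in Python. It is a toy, not LaMDA's actual training procedure (which adjusts billions of parameters, not one): a one-parameter "model" is nudged, over and over, until its answers match the desired ones. The names and numbers are mine, chosen for illustration.

```python
import random

# Toy illustration of "training": nudge a parameter until outputs improve.
# A "model" with one adjustable parameter: predict y = w * x.
w = random.uniform(-1.0, 1.0)

# The "right answers" we want the machine to start producing: y = 2 * x.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

learning_rate = 0.05
for step in range(200):
    for x, target in examples:
        prediction = w * x
        error = prediction - target
        # Tweak the parameter slightly in the direction that reduces error.
        w -= learning_rate * error * x

print(f"learned w = {w:.3f}")  # ends up near 2.0
```

Nothing in the loop "understands" anything; it is arithmetic repeated until the error shrinks. Scale that up by billions and you have the modern engineering effort.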

Our family tree includes leaves. The computer's family tree includes doorknobs*.

The building blocks of sentience are a special case of what happens everywhere in our bodies--electro-chemical reactions and large molecules like proteins. This is the same "tool kit" we see in photosynthesis.

What happens in a brain does not, as commonly imagined, resemble what happens in a computer. 

Our brain has more in common with a tree than with a computer. To put it yet another way, the simulation of a single protein reaction, let alone a cell or a tree, is far beyond the current state of the art. The problem is that "calculation" and "life" are fundamentally different things.

A computer belongs to a family of objects we regard as "tools" - useful for some human purpose. A simple example of an ancient "computer" is a doorknob. It is locked or not--a switch. Transistors implement a "gate" in the same way. This is all a computer "means" by "yes or no." The Internet is a vast collection of electronic doorknobs.
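To see how little is going on inside a "gate," here is a sketch in Python of a NAND gate treated as nothing more than a doorknob-style switch. The function names are mine, for illustration; the point is that every other gate, and ultimately the whole machine, can be stacked up from this one yes-or-no decision.

```python
# The "electronic doorknob": every digital component reduces to
# switches that are either on (1) or off (0).
# NAND is a classic universal gate; all other gates can be built from it.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

# Building other "gates" out of nothing but NAND:
def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> and:", and_(a, b), "or:", or_(a, b))
```

A doorknob does not know it is part of a lock, and a gate does not know it is part of a sentence.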

It is only a human's understanding of a doorknob that makes it a doorknob. A doorknob doesn't "know" it's a doorknob.

LaMDA - A Stochastic Parrot

The more you know about LaMDA, the less impressive it becomes.

LaMDA is the "front end" of a system cobbled together by Lemoine. Blake Lemoine, one of Google's most skilled engineers, used years of effort and hundreds of interactions to create what should really be called something like a "Lemoine Machine." It is wrong to think of Blake's LaMDA as a single "thing" or entity. It is a setup that includes a LaMDA instance (version). Let me call it LM - Lemoine's Machine. LaMDA itself is an experimental technology that implements a natural-language interaction with a body of knowledge. The "body of knowledge" is what we are "talking to." It is equally important to know how Lemoine trained LM to talk the way we hear in the only public conversation available. He has "cherry-picked" just one of many hundreds of "chats." Nobody will ever see more samples of these chats, nor will anyone ever see a new one. LM, like Blake, no longer works for Google.

LM is an instance of a long series of LMs, each tweaked by Lemoine to talk about an unspecified body of knowledge that is interesting to Lemoine, who is a bit of a mystic. As an engineer, Lemoine tweaks not only what his creation says but how it says it. Lemoine deeply resents the fact that Google, as an organization, is hostile to his "religious" convictions.

Blake made himself famous by claiming to have created a "sentient" machine. What Lemoine says about "LaMDA" comes from Blake, wearing his "preacher hat." We must remember that Blake is a professional mystic, trained in what we might call "leaps of faith."

What LaMDA itself "says" is what other people have said that Blake finds "interesting," spoken in a way that satisfies Blake's engineering goals. The result is impressive but, to say the least, open to interpretation.

"Training" of an AI is a sophisticated process, but we all know how we can use patient training to produce interesting results. Parrot training is the best-known example.

------

*AI. The term "Artificial Intelligence" is unfortunate. Originally, it meant a computer program that did something that would, in a human, require "intelligence." The computer program acts as if it had intelligence (but it really doesn't). "An AI" has come to mean a computer application that is specifically NOT "intelligent" in the way a human is intelligent. In fact, "intelligent" computers arrive at "intelligent" behavior in ways notably NOT analogous to the way humans "think." This has been the core insight of the effort from the start.

"Intelligence" itself is a slippery idea, but in this context, many people take it to mean "conscious" or "sentient," terms that are confidently applied only to a living creature. When we speak of "intelligent" machines, we refer to an engineering concept that never pretends to be anything else. In that sense, the "artificial" in AI can always be omitted. An "AI" is just a product of clever engineerings like a camera or a drone. Perhaps we should use "Intelligent Door Knob" - IDK. We should at least stop using "artificial" to mean its opposite.
