Confusing the Robot

I couldn't resist a further attempt to get to the bottom of the mistakes Perplexity was making with my hard-won list of movies I had seen. When caught, it simply adds the correction to its context without correcting its own logic - producing a "fixed" list with the same kind of error.

This illustrates a problem of profound philosophical interest. The robot is not "self-aware." It produces amazing answers based on an impressive search of the databases it "knows," but it has little or no insight into its own logic.

In this particular case, there is no reason in principle why the AI could not decipher its own procedure. It's great at explaining mine. But we see another troubling aspect of the robot mind: it doesn't do logic. It admits the "mistake" but seems unable to detect that the new "information" creates an inconsistency in the context it is working with. It just plows ahead.

This reminds me of the "logic" in Christian theology, which took more than a century to "explain" why God got himself nailed to a cross. Are we simply simulating the human mind with all its flaws?

The robot's confusion points to a broader issue. As we attempt to model the human mind in silicon, we keep encountering problems with our mental architecture. In fact, reality itself is illogical in some ways, and we tend to paper over our inability to explain things with plausible-sounding bullshit.

The ability to create plausible-sounding noise is a feature, not a bug, of the "Large Language Model," the technology that sits at the heart of the AI boom.
