The Computer as a Model of Mind

I am parking this note in an obscure blog. I think better when I write things down.

The idea is to go back to school at age 70 to get a Ph.D., probably in AI or the overlap between psychology, philosophy, and computer science. It would have to be done by remote study or under local supervision, because of the risk of being away from my medical team.

The core idea is to take a good look at computers as a model for the mind.

From one angle, computer languages are a powerful analogy for human language. The similarities and the differences are both worth a long, hard look.

  • Computer "classes" and human "categories" share a lot of features but differ in the key aspect that human categories are based arbitrarily on human situations (Hofstadter)
  • Computer languages face some very sharp limitations on what can and cannot be "said". Similar restrictions seem to apply to human languages, but only when we look at human language "pared down" to what amounts to a computer language. Can we say anything about what a human can "know" that a computer cannot?
  • Language is only part of human cognition. It is interesting to see what happens when a computer needs a capability that is not intrinsically "linguistic" -- one that can't be expressed in a computer program, no matter how long or sophisticated -- such as the ability to "see" objects and detect "situations" in the real world. It would be useful to describe and explore this boundary in detail, and to explore alternate "object and situation" detectors such as analog devices that perform instant inverse Fourier transforms or quantum computers that "solve" equations beyond the reach of digital methods.
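
To make the first bullet concrete, here is a minimal sketch in Java. The Chair class and the tree-stump example are my own invention, not anything from Hofstadter; the point is only that a class fixes its category boundary in advance, while a human category can absorb a tree stump the moment someone sits on it.

    // Illustrative only: a class draws a hard category boundary at compile time.
    public class Chair {
        private final int legs;

        public Chair(int legs) {
            this.legs = legs;
        }

        public static void main(String[] args) {
            Object stump = new Object(); // a tree stump someone sits on
            // The type system answers "is this a Chair?" once and for all,
            // with no reference to the situation the object is used in.
            System.out.println(stump instanceof Chair); // always false
        }
    }

No amount of situational context will ever make the stump a Chair; the human category "chair" has no such difficulty.
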
Another important aspect of the subject is how much we can say about the way human language creates a common "gestalt", using computer language (which is, after all, a subset of human language) as the example. For instance, the "wisdom" of Java is in the language itself rather than in any particular installation on any particular platform. Issues such as portability and virtualization have counterparts in human language. The rich structure of Object Oriented Analysis can be illustrated in the way human language is structured and used, as in the sketch below.
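
A small, hypothetical example (the Dog and Cat classes are invented here, not drawn from any real codebase): an object-oriented statement reads like a subject-verb-object clause, which is one way the structure of a programming language mirrors the structure of a sentence.

    public class SentenceSketch {
        static class Cat { }

        static class Dog {
            void chase(Cat cat) { // "chase" plays the role of the verb
                System.out.println("The dog chases the cat.");
            }
        }

        public static void main(String[] args) {
            Dog dog = new Dog();
            Cat cat = new Cat();
            dog.chase(cat); // subject.verb(object)
        }
    }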

A traditional treatment of computer "intelligence" would say that it all rests ultimately on bits and bytes: what a CPU can really do, which boils down to what a Turing Machine can do (Dennett). Perhaps another way of looking at it is to show how the language of human categories is the "machine language" in question. We are ultimately striving to model the power and expressiveness of human language with whatever technology we can lay our hands on, and it is a mistake to put too much emphasis on the particular technology we happen to use at any one time.
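
As a reminder of just how little sits at the bottom of that reduction, here is a toy machine in Java (the transition rules and the tape contents are made up for this sketch). One state, two rules, and a halt condition suffice to invert a tape of bits; everything a CPU "really does" reduces, in principle, to steps of this kind.

    // A toy Turing machine: state SCAN inverts each bit and moves right,
    // halting when the head runs off the end of the tape.
    public class ToyTuringMachine {
        public static void main(String[] args) {
            char[] tape = "10110".toCharArray();
            int head = 0;
            String state = "SCAN";

            while (!state.equals("HALT")) {
                if (head >= tape.length) {
                    state = "HALT";   // past the last cell: stop
                } else if (tape[head] == '0') {
                    tape[head] = '1'; // rule: (SCAN, 0) -> write 1, move right
                    head++;
                } else {
                    tape[head] = '0'; // rule: (SCAN, 1) -> write 0, move right
                    head++;
                }
            }
            System.out.println(new String(tape)); // prints 01001
        }
    }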

This program would require a working knowledge of brain architecture, at least as we currently understand it. The "mind" is, after all, "running" on this platform. How well can we simulate this platform in silicon? What are the limitations? Many authors assume that such a simulation is not only possible but just around the corner. Are they right?

My initial impression is that such a proposal should be very well thought out and researched before being presented to any possible supervising body. The "moving parts" of the idea should also be worked out: could it be conducted mostly in Calgary, with remote supervision by a genuine expert in the field and occasional on-campus face-to-face visits, or would it need to be sponsored entirely in Calgary? The latter option may suffer from a lack of local expertise, which is something that should be researched on its own.

--- Later, to JH ---

Thank you for the encouraging words, and good luck on July 7. Clair and I will be out of town for July and the first weeks of August to look after Clair's mom and other family-related issues.

The Ph.D. idea has many practical problems, but the degree itself is not an essential part of the project. I'm looking for focus and interaction. Perhaps I could do something similar under the heading of "Independent Study". I spent some time looking into what a dissertation proposal would look like, and this shed some light on the situation. What would count as "original research"? The answer is quite different depending on the faculty. For example, what I have in mind would not count as "research" in Computer Science or Psychology but might fit the criteria in Philosophy or even History. On the other hand, I can't see myself working usefully with professional philosophers or historians.

The online world has many unexplored opportunities for connection and discussion. It would be useful to make more of an effort to attract readers, and especially commentators, to my blog(s), along with a bit more polish and focus in the blogs themselves. I can also spend more time commenting on other blogs and find active online discussion groups. I could write a book and self-publish it -- I'd be happy if 100 people read it. It's also quite easy to get into e-mail discussions with almost anyone by commenting on their ideas. After all, there are many points of contact between my own "research" and that of others. The "way in" is to find those points of contact and establish a dialogue based on what already interests other people. In the end, that's what you want, isn't it? To come up with useful ideas, not to discover "truth" while meditating in a cave.

I should mention that my Mensa talk was cancelled due to low attendance at the event itself (Regional Gathering), but preparing that talk reminded me of the importance of focus. Mensa may still turn out to be an avenue for connection.

The urge to "sit around the campfire", share ideas and tell our stories is an important part of what it is to be human. 

Which brings us back to pizza ...
