Computers in Historical Context
Computer architecture is just the latest technology supporting trends that go back a long, long way.
Whatever one might think of Jaynes' version of human history, he does get us thinking about the emergence of language. Obviously, there was a time when language was simpler and, if we don't blow ourselves up, it will grow richer and more complex as we go along. I have a whole blog about "programming" the mind, and language plays a key part there. Imagine language in software terms.
There are about 180,000 words in the English language. On the "back of a napkin", allowing 10 bytes (characters) per word, that's 1.8 megabytes. If we imagine the Oxford English Dictionary as the equivalent of the "database software", it runs to 22,000 pages. Allowing 1,000 characters per page, that's 22 megabytes. Very roughly, then, the "software" of the English language is implemented in less than 50 megabytes - far less in any particular brain (though any particular brain can "install" a word from the base of 180,000 if that word is needed). Some claim that the human brain has a capacity in the petabyte (a billion megabytes) range, so 50 megabytes is negligible. Language represents a stunning amount of information compression. It can also be seen as an impressive feat of information storage, even in the old form where "storage" was spread over hundreds or thousands of individual brains. The more brains store the language "software" and its database, the more reliably (redundantly) the information is kept, which further increases the power of language to store and process information.
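For anyone who wants to check the napkin, here is the same arithmetic as a few lines of Python. Every figure is a rough assumption carried over from the paragraph above, not a measured value:

    # Back-of-the-napkin arithmetic for the "size" of English.
    # All figures are rough assumptions, not measurements.
    words_in_english = 180_000    # approximate vocabulary
    bytes_per_word = 10           # ~10 characters per word
    oed_pages = 22_000            # pages in the OED
    chars_per_page = 1_000        # ~1,000 characters per page
    brain_capacity_mb = 1e9       # 1 petabyte = a billion megabytes

    vocabulary_mb = words_in_english * bytes_per_word / 1e6   # -> 1.8
    dictionary_mb = oed_pages * chars_per_page / 1e6          # -> 22.0
    total_mb = vocabulary_mb + dictionary_mb                  # well under 50

    print(f"vocabulary: {vocabulary_mb} MB, dictionary: {dictionary_mb} MB")
    print(f"fraction of brain capacity: {total_mb / brain_capacity_mb:.0e}")

Run it and the punchline drops out: all of English fits in a few hundred-millionths of the claimed capacity of a single brain.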
Writing things down improved our ability to standardize and stabilize language as a data storage mechanism.
We are close to the point where everything humans have ever written will be stored "online" in computers. The ability to process and "understand" all this information is steadily growing. It is fashionable to worry that computers are getting better at this "understanding", but humans are getting better at it too, with the help of computers.
The computer excels in some areas and falls flat in others. Whatever the "state of the art", it's obvious that computers have added another order of magnitude or three to our ability to store and process information. The importance of this leap is comparable to the invention of language or writing, both of which produced massive changes in the way humans lived their lives. Written language, for example, more or less coincides with what we call "civilization" - the most recent 10,000 years of our 200,000-year history as Homo sapiens.
Our ability to process information runs in parallel with two other trends: the invention of money and the emergence of "technology" - specifically, our ability to build increasingly sophisticated real things. In both cases, the underlying trend is to standardize things and the work that creates them. This can be seen as another example of information compression. Computers love standardization, but so does the human mind. This is why we seem to live in a world of "objects" governed by laws that strive to regard everything as a special case of a set of "templates" - as few templates as possible.
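To make the "templates" idea concrete, here is a toy Python sketch (the class and its fields are invented purely for illustration). The template records the shared structure once; each object records only how it deviates from the default:

    from dataclasses import dataclass

    @dataclass
    class Chair:                  # the "template": shared structure, stored once
        legs: int = 4
        material: str = "wood"

    # Each real chair is a special case of the template; only its
    # deviations from the defaults need to be described.
    kitchen_chair = Chair()                        # nothing extra to record
    bar_stool = Chair(legs=3, material="steel")    # two deviations

Standardization is compression: the more objects share a template, the less each one costs to describe.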