The New Sapience Thesis

Knowledge and Intelligence

Artificial Intelligence has been considered the “holy grail” of computer science since the dawn of computing, though these days, when all kinds of programs are grouped loosely together under the term “AI,” it is necessary to say “real AI” or “Artificial General Intelligence” to indicate intelligence in the same sense as human intelligence. Humans are intelligent animals. It is that one attribute, which humans possess in so much greater degree than any other known animal, that defines us.

We define ourselves by our intelligence and the experience of being thinking entities. But who knows what is going on in the minds of other creatures? Pilot whales not only have larger brains than humans; their neocortex, thought to be the seat of intelligence in humans, is larger as well. What is truly unique about humans is the end product of our cognitive processes: knowledge. It is knowledge of the world, which allows us to evaluate how different courses of action lead to different results, that has made our species masters of our world.

It takes but a moment of reflection to realize that, since the reason we build machines is to amplify our power in the world, the real goal of intelligent machines is not “thinking” in the information-processing sense. Computers can already reason, remember, and analyze patterns superbly; in that sense they are already intelligent, but they are ignorant. Imagine if Einstein had lived in Cro-Magnon times. What intellectual achievements could he have made with so little knowledge of the world to build on? It is the acquisition and comprehension of knowledge, specifically knowledge of the world that extends and amplifies human capabilities, that is the true holy grail of computing.

Knowledge and Language

When human children reach a certain point in their development, the point when they have “learned to talk,” they are ready for the next developmental step in acquiring knowledge of the world. Knowledge is built upon knowledge, and when children have acquired the critical mass of knowledge sufficient to serve as a foundation for all that comes after, we teach them to read and send them off to school. It is estimated that first graders have a vocabulary of about 2,500 words.

That vocabulary, or rather the mental cognitions the words relate to, represents a “knowledge bootstrap program” sufficient to permit acquiring new knowledge (assuming it is presented layer by foundational layer) of arbitrary quantity and complexity through natural language. But this bootstrap capability is far more than a vocabulary sufficient for “looking up” or being told the meaning of additional words.

The vocabulary of the average college graduate is estimated to be around 30,000 words. Only a tiny fraction of the ideas these words relate to were acquired by looking them up in dictionaries or through direct pedagogic instruction. Most are unconsciously “picked up” in the context of reading and conversing.
The human brain is a vast network of interconnected neurons; so too are the information-processing organs of vast numbers of other animals. Today, artificial neural networks are demonstrating some of the low-level capabilities of animal brains, such as auditory discrimination and image recognition, that are ubiquitous throughout the animal kingdom.

These programs, with a kind of heroic optimism, are collectively termed “Cognitive Computing” on the basis of nothing more than the fact that their processing elements are fashioned in imitation of biological neurons. The programs certainly have nothing resembling actual cognition or knowledge. In any case, it is a long, long way from low-level training of an artificial neural network to the cognitive power a human first grader possesses: the power to create predictive internal models of the external world.

This may not be self-evident, especially in light of how egregiously these programs can be hyped by the media and their creators, who often talk very loosely about what is going on inside. The fact that a program can respond to a range of inputs with outputs a human would recognize as correct answers in no way justifies asserting that the program “comprehended” the question or “knew” the answer.

The confusion arises from the very understandable misconception that language contains knowledge. It does not. Language is information, not knowledge. It is a specification for recreating, in the mind of the receiver (human or machine), an idea held by the sender, assembled from component ideas that already exist in the mind of the receiver.
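The point can be sketched in a few lines of Python. This is purely an illustration, not New Sapience code, and every name in it is hypothetical: a message “decodes” into an idea only against concepts the receiver already holds; a word with no corresponding concept contributes no meaning, however well-formed the message.

```python
# Hypothetical sketch: a message is a specification, not knowledge.
# Meaning arises only from component concepts the receiver already has.

receiver_concepts = {
    "dog": {"is_a": "animal", "can": ["bark", "run"]},
    "run": {"is_a": "action"},
}

def decode(message_words, concepts):
    """Recreate an idea by assembling pre-existing component concepts.

    A word that matches no known concept maps to None: it arrived as
    information, but it cannot become knowledge in this receiver.
    """
    return [concepts.get(word) for word in message_words]

print(decode(["dog", "run"], receiver_concepts))    # both words grounded
print(decode(["quark", "run"], receiver_concepts))  # "quark" is ungrounded
```

The same string of symbols is “understood” or not depending entirely on the receiver’s pre-existing concept store, which is the article’s point: the knowledge is never in the message itself.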

This is the great fallacy of using stochastic programs like neural networks to “mine” text databases. They will never understand what is in the records because they are not reading the text. They cannot, because they have no pre-existing internal knowledge against which to interpret the words and decode the grammar.

We understand that the human brain becomes furnished with a critical mass of building-block concepts during childhood. The internal biological processes responsible for this build-out remain a mystery. The brain is the product of an incredibly complex and unique evolutionary process. Understanding how neurons work at a base level doesn’t tell us what is going on thousands of processing layers above, any more than understanding why a light bulb illuminates when connected to an electric circuit sheds much light on what goes on inside a microprocessor.

We understand what goes on inside a microprocessor because microprocessors are products of our own knowledge. Modern object-oriented software enables us to create data structures in computer memory that correspond to concepts in the human mind.
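As a minimal sketch of what such a correspondence might look like (the class and attribute names below are my own illustration, not a description of any New Sapience data structure), an object can carry a concept’s name, its properties, and an “is-a” link to a broader concept, so that relations between ideas become traversable structures in memory:

```python
# Hypothetical sketch: object-oriented structures mirroring concepts.

class Concept:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # "is-a" link to a broader concept
        self.properties = {}      # facts attached to this concept

    def is_a(self, other):
        """Walk the is-a chain: does this concept specialize `other`?"""
        node = self
        while node is not None:
            if node is other:
                return True
            node = node.parent
        return False

animal = Concept("animal")
animal.properties["alive"] = True
dog = Concept("dog", parent=animal)

print(dog.is_a(animal))   # True: "dog" specializes "animal"
print(animal.is_a(dog))   # False: the relation is one-way
```

Nothing here is intelligent by itself; the point is only that concepts and their relations can be represented explicitly and inspected, rather than left implicit in trained network weights.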

It is far easier to endow computers with a “knowledge bootstrap” program commensurate with a human first grader’s than to build an artificial human brain that creates knowledge by means of neural processing.

July 7th, 2017 | AGI, AI, CKM
