In a recent TED talk, AI researcher Janelle Shane shared the weird, sometimes alarming antics of artificial neural network (ANN) algorithms as they try to solve human problems. [i]
She points out that the best ANNs we have today are maybe on par with worm brains. So how is it that ANNs were ever termed AI in the first place? Worms aren’t intelligent.
Calling ANNs AI is like being invited into a hangar to look at a new aircraft design but finding nothing but landing gear. You ask: “I thought you said there was an airplane,” and are told: “Yes, there it is – it is just not a very good airplane yet.”
We saw another example of this “Aspirational AI” in a recent article in Analytics India Magazine [ii] that listed New Sapience among nine companies in the Artificial General Intelligence space. All of them say they are working on the AGI problem, but we are the only one with something to show for our efforts: a working prototype that comprehends language in the same sense as humans do. The others aspire to the same goal but lack a cogent theory of how to reach it; like the medieval alchemists, they mix this and that together to see what might result.
It is also evident that these other “AGI” companies continue to focus on ANNs and to look to the human brain as their inspiration. This general fixation was mentioned in a recent Wall Street Journal article titled “The Ultimate Learning Machines,” which describes DARPA’s latest big AI project: Machine Common Sense. [iii]
The ultimate learning machines, we are told in the WSJ article, are human babies: they are far superior at pulling patterns out of vast amounts of data (in this case, the data that comes into the brain through the senses) compared with what “AI” researchers can achieve with artificial neural networks. A human brain compared to a worm brain? It is not surprising that babies are better.
But infants are totally incapable of learning that “George Washington was the first president of the United States.” However, a five-year-old can learn that easily. Assuming infants to be the best learners presupposes a single path to common sense knowledge that must be based on running algorithms in neural networks because the human brain is a neural network. But somewhere between infancy and early childhood, the human brain acquires an ability to learn in a way that is vastly different from the kind of neural learning, like recognizing faces, that infants do.
AI today (exclusive of what we are doing at New Sapience) has been called a one-trick pony because of its fixation with neural networks and the brain. We stand by our earlier comparison that this approach is similar to the people (prior to the Wright Brothers) who tried to build aircraft that flapped their wings like birds because, after all, birds were the best flyers in the universe, hence this was the only way to accomplish the goal. History proved that was not true.
The process of transformation that an infant goes through to become a 5-year-old with the capacity to learn abstract ideas through language comprehension is quite amazing. The idea that you could start with an artificial neural network of the complexity of a worm brain and somehow program it to recapitulate the functionality that millions of years of natural evolution have endowed a human infant’s brain with seems – well, ambitious.
We have found a better way. From the WSJ article:
“In the past, scientists unsuccessfully tried to create artificial intelligence by programming knowledge directly into a computer.”
We have succeeded where others have failed by understanding that functional knowledge is an integrated model with a unique hidden structure, not just an endless collection of facts and assertions. At New Sapience we are giving computers the commonsense world model and language comprehension of a five-year-old. We don’t need to know how the brain works to create the end product, because we know how computers work.
Today, if you tell a “sapiens” created by New Sapience, “My horse read a book,” it will reply with something like: “I have a problem with what you said, horses can’t read.” If you ask why, it will tell you: “Only people can read.” This is machine common sense, and we are already there.
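To make the difference concrete, here is a minimal sketch in Python. It is purely illustrative, not New Sapience’s implementation; the class, the toy taxonomy, and the canned replies are our own stand-ins. What it demonstrates is the point above: in an integrated model, a single statement made once at the right level constrains every concept beneath it, and the same structure that flags the bad sentence also answers the follow-up “why?”

```python
# Illustrative toy only: a tiny integrated world model, where knowledge
# about a concept follows from its place in a taxonomy rather than from
# an endless flat list of explicitly stated facts.

class Concept:
    """A node in a toy taxonomy; properties are inherited from ancestors."""
    def __init__(self, name, plural, parent=None, **properties):
        self.name, self.plural = name, plural
        self.parent = parent
        self.properties = properties

    def lookup(self, prop):
        """Walk up the taxonomy to the nearest concept that defines prop."""
        node = self
        while node is not None:
            if prop in node.properties:
                return node.properties[prop], node
            node = node.parent
        return None, None

# One statement at the "animal" level, one override at "person".
# Nothing is ever stated about horses and reading directly.
animal = Concept("animal", "animals", can_read=False)
person = Concept("person", "people", parent=animal, can_read=True)
horse = Concept("horse", "horses", parent=animal)
CONCEPTS = [animal, person, horse]

def object_to(subject, verb="read", capability="can_read"):
    """Check a (crudely pre-parsed) assertion such as 'my horse read a book'."""
    allowed, _ = subject.lookup(capability)
    if allowed:
        return None
    return f"I have a problem with what you said, {subject.plural} can't {verb}."

def explain_why(verb="read", capability="can_read"):
    """The same structure answers the follow-up question 'why?'."""
    able = [c.plural for c in CONCEPTS if c.lookup(capability)[0]]
    return f"Only {' and '.join(able)} can {verb}."

print(object_to(horse))  # I have a problem with what you said, horses can't read.
print(explain_why())     # Only people can read.
```

Notice that the objection falls out of where “horse” sits in the structure, not from any stored fact about horses and books; that inheritance of constraints through the model is exactly what a flat collection of assertions cannot provide.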
[i] Janelle Shane, “The danger of AI is weirder than you think,” TED Talk.
[ii] “9 Companies Doing Exceptional Work in AGI,” Analytics India Magazine.
[iii] “The Ultimate Learning Machines,” The Wall Street Journal.