AI

Whatever Happened to AI?

By |2018-07-14T14:19:10+00:00June 23rd, 2018|AGI, AI, CKM, Competition|

What We Have Been Waiting For

Altaira: "Oh Robbie, make me a new dress!" Robbie: "Yes ma'am, with diamonds and rubies this time?" For decades we have envisioned a day when we would have wonderful machines that would understand what we ask of them and have the power to do it. We build machines to extend our own power. The first machines amplified the power of our muscles. Later, with devices like the telescope, we learned to amplify our senses. More recently, we have found ways to amplify some of our brain's basic cognitive functions with computers. Computers have always exceeded humans at arithmetic and formal logic, and more recently, by imitating some of the brain's cellular architecture with artificial neural networks, substantial progress has been made in areas like auditory discrimination, image recognition and extracting useful information from large databases. Programs based on these techniques have beaten the best human players in certain games, such as Go and even Jeopardy. But these achievements leave us wanting even more. They are narrow, not general, in their capabilities. A program "trained" to play one game cannot play another.

What We Have Been Settling For

Anyone who has tried to have a real conversation with Siri, Alexa or any other of today's "digital personal assistants" soon realizes that they don't actually comprehend a single word you say to them. They are simply matching input data patterns [...]

The Hidden Structure

By |2018-07-14T12:30:15+00:00June 2nd, 2018|AGI, CKM, Foundations|

Discovering the Hidden Structure of Knowledge

Though we don't know how the human brain is able to transcend the data processing layer (where the brain too is just processing low-level data coming in from the senses) to reach the realm of knowledge, we can, through introspection, examine the structure of the end product of our thought processes, that is, knowledge itself. What we find is a collection of ideas that are connected through various relationships that are themselves ideas. While many of these ideas represent specific objects in the real world, that tree, this car and so forth, many are abstractions: trees, cars. Each idea is connected to many others, some of which define its properties and some its relationships to other ideas. The power of abstract ideas, as opposed to ideas representing particular things, is that they are reusable. They can become components of new ideas. Complex concepts are built out of fundamentals. As the material world is composed of atoms, our knowledge of the world is composed of ideas. The English language has over a million words, each referring to an idea. Without some notion that only a small portion of these ideas are fundamental (atoms) and can only be combined in certain ways, the task of putting knowledge in machines is overwhelming. Democritus is known as the "laughing philosopher." There is speculation that he was laughing at his critics who clearly had not thought things out as well as [...]

"AI" Today: Reality Check

By |2018-07-14T12:35:11+00:00November 6th, 2017|AGI, CKM, Competition, Narrow AI|

Narrow AI's Dark Secrets

Articles about AI are published every day. The term "AI" is used in a very narrow sense in the majority of these articles: it means applications based on training artificial neural networks under the control of sophisticated algorithms to solve very particular problems. Here is the first dark secret: this kind of AI isn't even AI. Whatever this software has, the one thing it lacks is anything that resembles intelligence. Intelligence is what distinguishes us from the other animals, as demonstrated by its product: knowledge about the world. It is our knowledge and nothing else that has made us masters of the world around us. Not our clear vision, our acute hearing or our subtle motor control; other animals do all that every bit as well or better. The developers of this technology understand that, and so a term was invented some years ago to separate these kinds of programs from real AI: Narrow AI, used in contrast to Artificial General Intelligence (AGI), which is the kind that processes and creates world knowledge. Here's the second dark secret: the machine learning we have been hearing about isn't learning at all in the usual sense. When a human "learns" how to ride a bicycle, they do so by practicing until the neural pathways that coordinate the interaction of the senses and muscles have been sufficiently established to allow them to stay balanced. This "neural learning" is clearly very different from the kind of "cognitive [...]

How we got here.

By |2018-07-14T16:33:50+00:00July 28th, 2017|AGI, AI, CKM, Foundations|

The Road To Compact Knowledge Models

The body of epistemological theory and insights that have found practical application in Compact Knowledge Models is the result of over forty years of focused interest and study by New Sapience founder, Bryant Cruse. He first formulated his epistemological theories as an undergraduate at St. John's College in Annapolis in 1972, inspired by the works of Plato, Aristotle, Locke, Hobbes, Descartes, and Kant, as well as the existentialists of the 19th century. Epistemology is generally considered an obscure and esoteric branch of philosophy of interest only to academics who, traditionally, have focused on debate about the truth, belief, and justification of individual assertions. Cruse's theories, which approach knowledge as an integrated model designed from the standpoint of utility, represent a clear departure from classic epistemological traditions, and his career focus has been oriented towards practical applications rather than academic publications. As a space systems engineer on the operations team for the Hubble Space Telescope in the mid-1980s, Cruse became interested in finding a way to automate the analysis of massive amounts of telemetry data in the ground system computers. This began a path that led him to a residency in AI at the Lockheed Palo Alto research labs, where he became the driving force behind development of the first real-time expert system shell. Rule-based systems proved not to be a practical solution for representing human knowledge, and in 1991 he led a team which succeeded in developing a much more [...]


The Turing Test

By |2018-07-14T12:38:23+00:00July 27th, 2017|AI, CKM, Foundations|

An Imitation Game

The great mathematician and computer scientist Alan Turing proposed his now famous test for Artificial Intelligence in 1950. The test was simple: in a text conversation (then via teletype; today we would say texting) with a person and a machine, if the judge could not reliably tell which was which, then, as Turing put it (hedging even here), it would be unreasonable to say the machine was not intelligent. The Turing Test bounds the domain of intelligence without defining what it is. We can only recognize intelligence by its results. However, in the decades since Turing's formulation, the term has been loosely applied and is now often used to refer to software that does not, by anyone's definition, enable machines to "do what we (as thinking entities) can do," but rather merely emulates some perceived component of intelligence, such as inference, or some structure of the brain, such as a neural network. Recently the term "Artificial General Intelligence" (AGI) has come into use to refer precisely to the domain as Turing defined it. There are several issues with such a test: the machine would have to be taught how to lie, or the judge would have to be very restricted in what could be talked about; the judgment could be shaded by the judge's expectations with respect to the current state of the art in AI; and finally, do we really want to build artificial humans [...]

AGI

By |2018-07-14T12:40:30+00:00July 20th, 2017|AGI, CKM, Foundations|

Artificial General Intelligence (AGI)

AGI, or Artificial General Intelligence, is the quest for software that has genuine comprehension, recognizable as such by anyone because (in the spirit of the Turing Test) you can hold a general unscripted conversation with it. Today, outside of our work, AGI research efforts fall into two categories.

Whole brain emulation

The approach is that you first create a neural network with the size and complexity of the human brain and then program it to recapitulate, in some form, human cognitive processes that will eventually result in the production of world knowledge. The assumption here is that intelligence is a kind of emergent property of a vast neural network. We find this assumption extremely doubtful, and there are numerous other problems associated with this approach even should it produce something. Ray Kurzweil, who popularized the idea of an AI "singularity" and is currently VP of Engineering at Google, is pursuing this approach (no doubt with lots of money; he will need it). The project at Google and numerous other whole brain research projects at DARPA, IBM and other places are described at artificialbrains.com

Cognitive algorithms

This approach seeks to discover one or a small number of immensely powerful algorithms that endow the human brain with intelligence and then reverse engineer them such that the program will be able to process raw inputs and turn them into real knowledge as humans can do. We call this the magic algorithm approach. Significant [...]

Knowledge and Language

By |2018-07-14T12:47:05+00:00July 10th, 2017|AGI, AI, CKM, Foundations|

The Library of Congress, a great repository... of something.

In Humans

The question of the relationship between language and knowledge in the human mind has fascinated philosophers and other deep thinkers since ancient times. One theory is that language is a prerequisite for knowledge and that knowledge cannot exist without it. The common-sense notion is that language contains or records knowledge. True or False: "The Library of Congress is a great repository of knowledge." Who would not answer "true" without hesitation? But consider the following thought demonstration: Suppose Socrates[i] told you he saw a cisticola while on a trip to Africa and you asked what that might be. He answered: "A cisticola is a very small bird that eats insects." In an instant you know that cisticolas have beaks, wings, and feathers, almost certainly can fly, that they have internal organs, that they have mass, and hundreds of other properties that were not contained in the sentence. Let us step through the articulation process that Socrates went through to create the specification for the creation of this new knowledge. First, he decomposed the concept denoted by the word "cisticola" in his mind into component concepts and selected certain ones that he guessed already exist in your mind. The key one is "bird" because if you classify cisticolas as birds you will assign them all the properties common to all birds as well as all the essential properties and attributes of animals, organisms and physical objects; a [...]
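The classification step described in this excerpt, assigning "cisticola" under "bird" so it inherits properties all the way up through animal, organism and physical object, can be sketched in a few lines. The concept names and properties below are illustrative assumptions for the sketch, not New Sapience's actual model.

```python
# A toy concept hierarchy: each concept names its parent abstraction and
# its own properties. Classifying a new concept under "bird" lets it
# inherit every property along the chain. All names are illustrative.
hierarchy = {
    "physical-object": (None, ["has mass"]),
    "organism": ("physical-object", ["has internal organs"]),
    "animal": ("organism", ["can move"]),
    "bird": ("animal", ["has beak", "has wings", "has feathers"]),
}

def learn(name, parent, own_properties):
    """Add a new concept from a one-sentence definition."""
    hierarchy[name] = (parent, own_properties)

def properties(name):
    """Walk up the chain of abstractions, collecting inherited properties."""
    props = []
    while name is not None:
        parent, own = hierarchy[name]
        props += own
        name = parent
    return props

# "A cisticola is a very small bird that eats insects."
learn("cisticola", "bird", ["is very small", "eats insects"])
print(properties("cisticola"))
```

The sentence supplied only two properties, yet the walk up the hierarchy recovers many more, which is the point of the thought demonstration: the knowledge was constructed in the hearer's mind, not contained in the words.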

The New Sapience Thesis

By |2018-07-14T16:46:03+00:00July 7th, 2017|AGI, AI, CKM|

Knowledge and Intelligence

Artificial Intelligence has been considered the "holy grail" of computer science since the dawn of computing, though these days, when all kinds of programs are grouped loosely together under the term "AI," it is necessary to say "real AI" or "Artificial General Intelligence" to indicate we are talking about intelligence in the same sense as human intelligence. Humans are intelligent animals. It is the one attribute that humans possess in so much greater degree than any other known animal, and it defines us. We define ourselves by our intelligence and the experience of being thinking entities. But who knows what is going on in the minds of other creatures? Pilot whales not only have larger brains than humans; their neocortex, thought to be the seat of intelligence in humans, is also larger. What is truly unique about humans is the end product of our cognitive processes: knowledge. It is knowledge of the world, which allows us to evaluate how different courses of action lead to different results, that has made our species masters of our world. It takes but a moment of reflection to realize that, since we build machines to amplify our power in the world, the real goal of intelligent machines is not "thinking" in the information processing sense. Computers can already reason, remember and analyze patterns superbly; in that sense they are already intelligent, but they are ignorant. Imagine if Einstein lived in Cro-Magnon times. What intellectual [...]

A New Epistemology

By |2018-07-14T17:12:32+00:00July 5th, 2017|AGI, CKM, Foundations|

How do we know what we know? If we want to endow machines with knowledge, we had better understand what it is. Epistemology, a term first used in 1854, is the branch of philosophy concerned with the theory of knowledge. It is not much studied in the schools these days, and certainly not in computer science curricula. Traditionally, epistemologists have focused on such concepts as truth, belief and justification as applied to individual assertions. From that perspective it is not much help, since previous attempts to put knowledge into machines failed because they treated knowledge as just that: a vast collection of assertions (facts or opinions). That is not knowledge; that is data. We need to find an organizing structure for all these facts that will transform them into a road map of the world. Since the dawn of civilization there have been successive descriptions of our world or reality. The ancients created, as beautifully articulated by the theorems of the Alexandrian mathematician Claudius Ptolemy (AD 100-170), an elegant geometric model of the universe with the earth at the center and everything else travelling around it on perfect circles, at a constant velocity. They had to put circles traveling on other circles to make the model match the actual celestial observations, but it worked![1] Later this model was (what should one say: refuted, replaced, superseded?) by [...]
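The device of circles traveling on other circles is easy to express mathematically. A minimal sketch, with illustrative rather than historical radii and angular speeds:

```python
import math

def ptolemaic_position(t, R=1.0, r=0.3, w_deferent=1.0, w_epicycle=8.0):
    """Position of a planet carried by an epicycle (small circle) whose
    center rides on a deferent (large circle centered on the earth).
    Radii and angular speeds here are illustrative, not historical values."""
    x = R * math.cos(w_deferent * t) + r * math.cos(w_epicycle * t)
    y = R * math.sin(w_deferent * t) + r * math.sin(w_epicycle * t)
    return x, y

# At t = 0 both circles start aligned along the x-axis:
print(ptolemaic_position(0.0))  # (1.3, 0.0)
```

Because the small circle spins faster than the large one, the combined path periodically doubles back against the main motion, which is how the model reproduced the observed retrograde loops of the planets.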

AI at Google

By |2018-07-14T13:23:18+00:00September 20th, 2016|AGI, Competition|

Representation of a neural network

Artificial Neural Networks & Natural Language

When we explain our Compact Knowledge Model technology and describe its far-reaching implications for Artificial General Intelligence, a common reaction is "but surely Google and the other big tech companies are doing something similar." As we know, Google (and all of the big tech companies) have been making massive investments in the (we think misnamed) "cognitive computing" technology that is now considered almost synonymous with AI by common usage. "Cognitive computing" is jargon for artificial neural networks (ANNs). Neural networks are "trained" over vast numbers of iterations on supercomputers to recognize patterns in equally vast databases. A very expensive process, but one that works reasonably well for things like pattern recognition in photographs, though even here there are limitations, because ANNs lack any knowledge of the real-world objects they are being trained to recognize. Applications of neural networks to natural language processing proceed in the same way as with images. The networks are trained under the control of algorithms designed to find certain patterns in huge databases, in this case of documents, which, from the standpoint of the program, are just arrays of numbers (exactly as a photograph is nothing but an array of numbers to such programs). The applications process these text databases but they have no reading comprehension as humans recognize it: no notion whatsoever about the content or meaning of the text. Humans curate the databases to limit the [...]
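The point that documents are just arrays of numbers to such programs can be made concrete with a toy encoder. This is an illustrative sketch, not any particular system's pipeline:

```python
# To a neural network, text is only numbers: each word becomes an integer
# id, and the model sees the id array with no notion of what words mean.
def encode(text, vocab):
    """Map each word to an integer id, growing the vocabulary as needed."""
    return [vocab.setdefault(word, len(vocab)) for word in text.lower().split()]

vocab = {}
ids = encode("The cat sat on the mat", vocab)
print(ids)  # [0, 1, 2, 3, 0, 4] -- "the" gets the same id both times
```

Everything the network subsequently learns is a statistical pattern over arrays like `ids`; nothing in the representation connects the number 1 to an actual cat.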
