New Sapience has achieved a breakthrough in computing: Compact Knowledge Modelling, a software technology that endows machines with human knowledge that has been transformed into a computable form. It is the result of a multi-disciplinary approach combining state-of-the-art computing techniques, practical experience in modeling complex real-world systems, and an original theory about the underlying structure of human knowledge.
Knowledge is a mental model of reality that allows us to envision a world that may or may not come to pass, depending on our actions. The ability to “predict the phenomena” is the cornerstone of the scientific method, one that was already in play when the first human progenitor made the first tool and used it to change its world.
Until now, our machines could not do this, not even a little. They process data and information, but processing knowledge has been beyond their capability. Although knowledge representation has been considered a branch of Artificial Intelligence for more than 40 years, previous approaches, such as rule-based expert systems and semantic networks, could never be made to scale beyond narrow applications. Disappointment with these technologies eventually led AI researchers to largely abandon attempts to model knowledge altogether.
But achieving functional knowledge models has turned out to be possible after all. What was missing was insight into knowledge itself: that it has an underlying structure independent of logic or semantics. Our Compact Knowledge Model technology exploits this structure, enabling, for the first time, the representation within a computer of abstract knowledge as it is introspected in the human mind. Read more about the difference between data, information and knowledge.
Previous attempts to create world models in software failed because they assumed knowledge was just a collection of facts. These were expressed as rules or in semantic structures. Each fact had to be explicitly represented and debugged, just like lines of code. As a result, the applications could not be made to scale.
The idea of an “expert system,” a computer program that embodies some domain of human expertise to operate complex systems or to analyze and diagnose complex problems, has been a dream of computer science for over 50 years. Despite early optimism about technologies such as rule-based inference engines going back to the 1980s, software that emulates human logic and decision-making remains highly code-intensive today. Whether the domain is industrial data monitoring or business rules, such applications are very expensive to design, develop and maintain.
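To see why such systems are code-intensive, consider a minimal sketch of a 1980s-style forward-chaining rule engine (the rules and fact names here are invented for illustration, not taken from any real product). Every fact and every inference must be hand-written as an explicit rule, so covering a realistic domain means writing, and debugging, thousands of them:

```python
# Toy forward-chaining inference engine in the classic expert-system style.
# Each rule is a hand-coded (premises, conclusion) pair; broadening the
# system's scope requires adding and debugging more rules, one by one.

RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Anything outside the hand-written rules is simply invisible to the system.
conclusions = infer({"has_fever", "has_cough", "short_of_breath"})
```

The brittleness the article describes is visible here: an input the rule author did not anticipate produces no inference at all, and the only remedy is more code.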
It is said we live in the Information Age or Computer Age, the two being synonymous because, until now, computers had no capacity to rise above the level of data and information. As the Information Age has progressed, computers produce ever more data and information that can be converted into useful knowledge only in human minds. The result is a society of knowledge workers.
Automating even a little of what these knowledge workers do, using computers that process information one fact at a time, means coding essentially one line at a time. Developing, debugging and maintaining computer code is expensive. Even when the investment is made, today’s automation software is brittle, subject to total failure when confronted with a problem outside the anticipated scope of the program. Where the cost of failure is high, as with transportation systems or network management, the solution is to keep humans in the loop, but this is less than optimal. Human knowledge confers flexibility, but people have a hard time keeping up with large amounts of data in real time; besides, humans are also very expensive to train and employ.
Mission critical applications require that expert knowledge be applied in real time. The capacity to accurately and efficiently model knowledge in software is a game changer in this domain.
New Sapience is ready to work with our customers to achieve unprecedented gains in automated decision making while simultaneously reducing system development, maintenance and operating costs. New Sapience has the operational platform and know-how today to build a new generation of automated applications: ones that combine the flexibility that comes from deep world or domain knowledge with computers’ capability to process vast quantities of data.
It takes a human to read and comprehend the language and to judge whether an answer is correct, that is, whether it matches reality. Despite the level of investment, today’s proliferating crop of “digital personal assistants” are just as likely to be made fun of as made use of.
Collectively, these conversational interfaces are known as chatbots. They are built on software that treats electronically stored human language as data. The simplest ones consist of programs that respond to specified language inputs from a person with specified outputs. More sophisticated ones utilize so-called “machine learning” software built using artificial neural networks or other natural language processing techniques designed to analyze large amounts of text.
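The simplest kind of chatbot, specified inputs mapped to specified outputs, can be sketched in a few lines (a purely illustrative toy, not any vendor’s implementation; the keywords and replies are invented):

```python
# Toy keyword-matching chatbot: specified inputs map to specified outputs.
# There is no comprehension anywhere; the program matches strings.

RESPONSES = {
    "hello": "Hi there! How can I help you?",
    "hours": "We are open 9am-5pm, Monday through Friday.",
    "bye": "Goodbye!",
}

def respond(message: str) -> str:
    """Return a canned reply for a recognized keyword, else a fallback."""
    for keyword, reply in RESPONSES.items():
        if keyword in message.lower():
            return reply
    # Any unanticipated input falls through to a shrug.
    return "Sorry, I don't understand."
```

However elaborate the matching becomes, the design stays the same: text in, text out, with no model of what the words refer to.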
Chatbots vary in sophistication. The relatively simple ones popping up on commercial websites with increasing frequency are developed using various platform toolkits that streamline the process, making it somewhat comparable to creating a cell phone app. Sophisticated ones, such as Amazon’s Alexa and Apple’s Siri, cost hundreds of millions of dollars to develop, and their owners continue to pour vast resources into them as they seek competitive advantage over each other.
None of the “conversational interfaces” such as Apple’s Siri, Google Assistant, Amazon Alexa, Microsoft Cortana, Facebook Messenger, etc. can actually converse. In fact, they don’t understand a single word you say to them. They have no understanding of any kind.
Chatbots do not, however, converse with people in the same sense that people converse with one another. This is indisputably true even of the very sophisticated programs we now call AI, in spite of the hype this technology receives. IBM’s Watson returns answers to questions in the form of natural language as a result of statistical pattern matching, but the program has no knowledge of what the words mean.
CKM enables a vast number of potentially transformative applications and capabilities. Perhaps most exciting is the world’s first software that does comprehend natural human language in the same sense that humans do. CKM provides an internal model of the world or domain that embodies the same concepts and relationships that people recognize as objects of their own thoughts. This model is, by design, completely independent of natural language vocabulary and grammar. The system processes language by interpreting it as instructions to create new knowledge models, connecting existing model elements into new configurations. This process is cognitive learning, or comprehension. Read more about natural language comprehension.
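The idea of treating language as instructions to connect existing model elements can be illustrated with a toy concept graph. This is a hypothetical sketch of the general principle, not the actual CKM implementation; every class, concept and relation name here is invented:

```python
# Toy concept model: "comprehending" a sentence means connecting
# concepts the model already holds into a new configuration.
# Illustrative only; not New Sapience's actual technology.

class ConceptModel:
    def __init__(self, concepts):
        # Concepts the model already "knows", prior to any language input.
        self.concepts = set(concepts)
        self.relations = set()  # (subject, relation, object) triples

    def comprehend(self, sentence: str):
        """Interpret a 'subject relation object' sentence as an
        instruction to link two known concepts."""
        subject, relation, obj = sentence.lower().rstrip(".").split()
        if subject in self.concepts and obj in self.concepts:
            self.relations.add((subject, relation, obj))

model = ConceptModel(["dog", "cat", "animal"])
model.comprehend("dog is-a animal")  # connects two existing concepts
model.comprehend("cat is-a animal")
```

The point of the sketch is that the sentence itself is never stored: it is consumed as an instruction, and what persists is the new configuration of the model.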
The result is something the world has never seen before: a digital entity with the core knowledge needed to understand the meaning of natural language words and grammar. In short, our technology can do what no other software can do because it is the only software that understands what the words mean. Each instance of our software becomes a unique individual as it learns, like a person, through reading and conversation. We call these entities sapiens.