Evaluating New Sapience

Truly world-changing technologies are seldom recognized for what they are at the outset. In 1943, Thomas Watson, president of IBM, famously said, “I think there is a world market for maybe five computers.” Hindsight makes it easy to mock a prediction that turned out so wrong, but it was not wrong from the standpoint of the huge vacuum-tube-powered computers of his day. No one could foresee the game-changing inventions of the transistor, followed by the microprocessor, that lay just a few years in the future.

Hindsight also makes it easy to wistfully examine missed investment opportunities in tech: “If only I had invested in Microsoft in ’82 or Google in ’98.” Based on hindsight, investors and venture capitalists have developed formulas of assessment and due diligence to help them recognize “the next big thing.” Such principles do help them identify the wave, once the wave is a wave, but they are largely useless for identifying fundamentally new innovations. These are the ones that “come out of left field”: the result of a combination of unforeseeable events, based on insights that challenge accepted conventions and assumptions about what is possible.

In his book “Zero to One,” entrepreneur and investor Peter Thiel describes these uniquely innovative technologies as “0 to 1,” in contrast to all others, which are simply variations on established themes: “1 to 2,” “2 to 3,” and so forth. These later-stage technologies are the safer, easier bet for investors, and that is where the vast majority of institutional money now flows.

Established due-diligence principles require that a new technology be assessed by independent experts, but the only experts in truly innovative tech are its inventors. Thus, potential investors face a choice: wait until the new technology has proven itself in the market, when returns on investment will be orders of magnitude lower since most of the risk has been removed, or dig in, discern how the technology works, and make their own assessment of the risk that it will make it to market.

Today, vast resources are being invested in a number of established technologies collectively described as Artificial Intelligence. While substantial progress has been made in training artificial neural networks to recognize the kinds of patterns in large datasets needed for applications like voice recognition and self-driving cars, Artificial General Intelligence, the kind that actually comprehends human language and can acquire knowledge, is believed to be decades away.
We believe our Machine Knowledge technology, based on a unique set of innovations that run contrary to the mainstream, has the potential to achieve that long-sought dream in just a few years. Even sooner, our current technology is poised to deliver a step-function improvement in language comprehension, one highly disruptive to every current application with a language interface.

Given the challenges of assessing something truly new, how should a potential investor proceed? Here are some questions to ask:

Q: How is it that a small group of individuals could succeed in solving such a difficult problem when the vast resources of the big tech companies, government labs, and universities have not?

A: The short answer is that they found answers in places where no one else was looking. Some of those discoveries were serendipitous; others were deliberately sought out. That odyssey of innovation and discovery is described in “Machine Knowledge – How we got here.”

Q: Is the approach New Sapience has taken to Artificial General Intelligence plausible in a way that can be understood by someone without highly specialized training?

A: Yes, fortunately. Although it requires looking at the problem from a fresh perspective (more difficult for those steeped in the current stochastic approaches to AI), the basic logic of our approach is straightforward and easily grasped. A guide to that perspective is provided in “The New Sapience Thesis.”

Q: Granted, the approach is plausible, but there have been previous attempts to represent knowledge in computers, and they failed to scale. What is the fundamental difference between MK and semantic networks or rule-based expert systems?

A: The fundamental difference is that MK makes no attempt to represent knowledge directly as a mass of individual facts or assertions. Instead, we have taken a manageable number of core (building-block) concepts and arranged them in a model, a computable data structure, in accordance with a unique and original theory about the underlying structure of human knowledge. An external software process decodes natural-language inputs (comprehension) against the knowledge contained in the model and, in the process, adds to the model’s structure (learning). Thus, it acquires new knowledge precisely as humans do, through reading and conversation, even though the underlying processors and processing (biological versus electronic) are vastly different.
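To make the distinction concrete, here is a deliberately toy sketch of the general idea: a model of core concepts that only accepts new concepts when they attach to structure it already holds, rather than accumulating free-standing facts. This is purely illustrative; every name and mechanism below is invented for the sketch and is not New Sapience’s actual implementation.

```python
# Toy illustration only: a hypothetical "concept model" that grows by
# interpreting simple statements against concepts it already contains.
# None of these names come from New Sapience; they are invented here.

class ConceptModel:
    def __init__(self, core_concepts):
        # Each concept maps to its parent (generalization) in the model;
        # core concepts have no parent.
        self.parents = {c: None for c in core_concepts}

    def knows(self, concept):
        return concept in self.parents

    def comprehend(self, sentence):
        """Decode a trivial 'X is a Y' sentence. The new concept X is
        added only if Y is already in the model: new knowledge must
        attach to existing structure, not pile up as isolated facts."""
        words = sentence.rstrip(".").lower().split()
        if len(words) == 4 and words[1:3] == ["is", "a"]:
            new, parent = words[0], words[3]
            if self.knows(parent):
                self.parents[new] = parent
                return True
        return False  # sentence not understood by this toy model

model = ConceptModel(["animal", "object"])
model.comprehend("dog is a animal")   # learned: attaches to a known concept
model.comprehend("blorf is a gizmo")  # rejected: 'gizmo' is not in the model
print(model.knows("dog"), model.knows("blorf"))  # True False
```

The point of the caricature is the contrast with a fact database: comprehension here is gated by the model’s existing structure, so each accepted input extends the model rather than merely appending an assertion.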