William Bandy


Professional Background

Dr. Bandy joined the National Security Agency in 1967 and spent his career in all aspects of microelectronics, including integrated circuit design, fabrication, and testing. He started and ran the first in-house fabrication facility in the mid-1970s. In 1985 he was detailed to DARPA, where he managed a $30M/year microelectronics research program that encompassed projects at over 20 research institutions in the areas of computer-aided design of electrical and mechanical systems, computer-aided manufacturing, and technology infrastructure.
His tour was extended in 1988 so that he could serve as the first program manager of the newly formed $200M/year SEMATECH industry and government consortium. In 1989 he was designated the executive director of the National Advisory Committee on Semiconductors, a presidential committee mandated by Congress to study the state of the U.S. semiconductor industry and to make recommendations. In 1990 he returned to the NSA to become the director of the Microelectronics Research Laboratory.
In 1999 he retired from NSA and co-founded Matrics Technology Systems, Inc., to develop and produce low-cost, high-performance Radio Frequency Identification (RFID) tag technology and products. Matrics was bought by Symbol Technologies, Inc. in 2004 for $230M.
He left Symbol in May 2006 to co-found Innurvation, which is developing innovative technologies for swallowable diagnostic capsules. In March 2014 he co-founded Matrics2 to develop a parallel integrated-circuit assembly system for the production of very low-cost tags for the Near Field Communication (NFC) market.

Educational Background

Ph.D., Physics, University of Maryland, College Park, Maryland
M.S., Electrical Engineering, University of Oklahoma, Norman, Oklahoma
B.S., With Distinction (third in class), Electrical Engineering, University of Oklahoma, Norman, Oklahoma

Why I invested in New Sapience

I worked at DARPA in the mid-1980s, a time when advances in computing gave impetus to a new branch of computer science: Artificial Intelligence. I became hooked on the promise of AI, and have waited for those promises to be fulfilled ever since. In my view, the greatest failure of the AI community, in spite of the millions of dollars invested by DARPA and others, is that today, 30 years later, we still can’t talk to the many computational machines that serve us and have them understand what we want of them.
Why has this goal proven so difficult? The problem, as I see it, is this: we don’t know how human intelligence works; we only recognize it by its results, namely the practical understanding and comprehension of things that has given our species unequalled mastery of the world around us. We can create a program based on some theory of how human intelligence works, but until that program produces useful results based on comprehension of the world, there is no way to be sure we are on the right track.
Thus researchers face a dilemma: they can focus on putting knowledge of the world into the machine and then create the algorithms to process it, or they can try to sidestep the issue of knowledge and create algorithms sophisticated and powerful enough to produce knowledge as a by-product of processing raw input.
Researchers have largely taken the latter path, treating intelligence as an information-processing task and pursuing complex algorithms based mostly on statistics and probability. Today, computational-linguistics programs running on supercomputers can create structured databases that can beat a human at the game “Jeopardy” but clearly have no genuine comprehension of the words that go into and come out of the program. Though these kinds of systems are given considerable hype by their developers and the press, they are, from my standpoint, very disappointing results after three decades of effort.
Early on there was much interest in so-called “knowledge-based systems,” but they were largely abandoned by the mainstream AI community by the mid-90s. One reason may be that the programming languages of the time were not up to the task of modeling the rich abstract data structures that genuine knowledge of the world requires, and that computational resources were relatively limited.
The only means these systems had of representing knowledge was simple inference, an approach that assumes all knowledge can be reduced to simple assertions of facts. As it has turned out, this assumption is far too limited and has the further drawback of requiring large numbers of rules to solve any but the most elementary and well-bounded problems. These systems typically appear very promising early in their development, but additional capability levels off while the number of rules required grows exponentially.
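To make concrete what “simple inference over assertions of facts” means, here is a minimal sketch of a forward-chaining rule engine in Python. The facts, rules, and names are my own hypothetical illustrations, not drawn from any particular expert system; the point is only that handling each exception or nuance tends to demand yet more hand-written rules.

```python
# Minimal forward-chaining inference over simple assertions of facts.
# All facts, rules, and names here are hypothetical illustrations.

facts = {("bird", "tweety"), ("penguin", "opus")}

# Each rule maps one antecedent predicate to one conclusion predicate.
# Covering exceptions (penguins, ostriches, injured birds, ...) requires
# ever more rules, which is the scaling problem described above.
rules = [
    ("bird", "can_fly"),
    ("penguin", "bird"),
    ("penguin", "cannot_fly"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, conclusion in rules:
            for predicate, subject in list(derived):
                if predicate == antecedent and (conclusion, subject) not in derived:
                    derived.add((conclusion, subject))
                    changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# Note: this naive engine happily derives both can_fly and cannot_fly for
# opus; resolving such conflicts is exactly where rule counts balloon.
```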
A more fundamental reason why the AI community took the road it did, and why it persists on the same path long after the earlier limitations of programming languages and raw computing power were overcome, may be cultural rather than technical. The AI community is made up of computer scientists, and computer scientists are trained to write algorithms; to paraphrase the psychologist Abraham Maslow, “when your only tool is a hammer, you see every problem as a nail.” I believe that real progress in AI requires a sophisticated understanding of the classes and categories of human ideas and how each relates to real-world objects. The study of this is called epistemology, a relatively obscure branch of philosophy completely outside of what is defined as computer science in our colleges and universities.
So things remained throughout the 90s and into the first years of the new century: AI, it seemed, was still a far-off dream, more the stuff of science fiction than science. Then several years ago I met Bryant Cruse, the founder of New Sapience, and first learned of his “general world model” approach to human-machine understanding. Here was an individual, an entrepreneur with two successful advanced-computing start-ups behind him, with a classical education that included the study of all the significant works in epistemology from Aristotle forward. Understanding “how we know what we know” has been a lifetime passion for him, and he had previously applied his insights successfully to software solutions for complex real-world problems such as spacecraft monitoring and control. Bryant may well be the first “computational epistemologist.”
I began to think that the elusive goal of machines that could fully communicate with humans in our own language was achievable in the near term after all. Events in my personal life prevented my involvement for several years, but his approach had taken hold of a small corner of my brain and wouldn’t let go.
When we reconnected almost a year ago, I was thrilled to see that the company had made very substantial progress and could demonstrate a working prototype that learned the meaning of new nouns through English dialogs. I was captivated by the power of the approach and its potential to revolutionize how we interface with our machines. Having missed out on involvement with the Microsoft, Apple, Google, and other technology revolutions, I admit to wanting bragging rights on being involved with the next one. So I became his first outside investor.
In the several months since, the software has advanced rapidly: from just being able to learn nouns, it already exhibits some comprehension of noun, adjective, and adverb modifiers, as well as some action verbs. This is accomplished by relating the new words to its internal general world model, a rich and sophisticated data structure, implemented in state-of-the-art object-oriented programming languages, that specifies how existing knowledge, both knowledge about knowledge itself and knowledge of real-world objects, can be combined to learn new knowledge from the grammatical structure and vocabulary of human language input.
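As an outsider, I can only sketch in broad strokes what such a world model might look like in code. The toy Python below is my own hypothetical illustration of the general idea of composing building-block concepts; it is not New Sapience’s actual design or implementation, and every name in it is invented.

```python
# A hypothetical, drastically simplified illustration of composing concepts
# in a "world model"; this is not New Sapience's actual design.

from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    kind: str                      # e.g. "object", "attribute", "action"
    attributes: dict = field(default_factory=dict)

class WorldModel:
    def __init__(self):
        self.concepts = {}

    def add(self, concept):
        self.concepts[concept.name] = concept
        return concept

    def learn_noun(self, noun, parent=None):
        """Create a new object concept, optionally inheriting from a parent."""
        attrs = dict(self.concepts[parent].attributes) if parent else {}
        return self.add(Concept(noun, "object", attrs))

    def apply_adjective(self, noun, adjective, value=True):
        """Attach an attribute to an existing object concept."""
        self.concepts[noun].attributes[adjective] = value

model = WorldModel()
model.add(Concept("animal", "object", {"alive": True}))
model.learn_noun("dog", parent="animal")        # "A dog is an animal."
model.apply_adjective("dog", "friendly")        # "The dog is friendly."
print(model.concepts["dog"])
```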
This progress demonstrates that the approach has a characteristic I expected to see, one that has become a mantra for experienced investors in this field: exponential capability with bounded complexity. It is exactly this that all previous AI approaches have lacked, as many investors have discovered to their loss, watching their money disappear into the black hole of its opposite: exponential complexity with bounded capability.
The New Sapience system learns the way we humans learn the vast majority of what we know: through the comprehension of human language. It is not a system that has been populated with a database of “facts” or algorithmically generated patterns. The world model is in effect an “applied epistemology” that contains not only core building-block concepts about the real world but also methods to ensure that new concepts created by combining these building blocks will also reflect reality. Bryant has made the analogy that the world model is a compact specification for knowledge of the world the way DNA is a compact specification for a life form. I think it is a fair comparison.
I believe the New Sapience approach is the first real breakthrough in the long-sought goal of communicating with machines in our native tongues. It will change the world in ways that cannot even be imagined today. Ten years from now, I will be able to say that I was in on the ground floor. Bragging rights. The fact that I will also be unfathomably wealthy because of my involvement is only secondary.