AI

Models and Metaphors

September 5th, 2016 | Foundations

Personal reflections on neural networks, modeled Artificial Intelligence, and the experience of being human. I become more and more excited about the progress we are making here at New Sapience in solving the language problem: learning how to build knowledge structures that accurately model the world but are completely independent of language and linguistics. Our fundamental realization, that language is an encoded communications protocol between entities and does not in itself contain or record knowledge, is hugely helpful in keeping us on the right track. Our biggest challenge is that, as we use introspection to examine our own interior world model, we find ourselves “articulating” that model to ourselves, so language keeps creeping back in to cloud the issue. I find myself constantly admonishing our “epistemological engineers” to think in terms of nodes and connectors, not the meaning of words; words can only have meaning in relation to a model that exists independently of language. As the equivalent reading comprehension level of our sapiens climbs through the human grade levels, it is tempting to think that once it reaches, say, fourth grade, we can “send it to school”: let it read textbooks, and eventually the Internet, and it will accumulate arbitrarily large quantities of knowledge automatically. We will certainly be able to do this, and for a long time I believed we would; why not? Interestingly, the farther we go down this road, the [...]
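As a reader's aid, here is a minimal sketch of the "nodes and connectors" idea in Python. The classes and relation names are invented for illustration and are not New Sapience's actual structures; the point is only that the model itself carries no words, and language attaches afterward as an encoding layer.

```python
# Toy sketch of a language-independent knowledge model: concepts are nodes,
# relations are typed connectors, and words are labels attached afterward.
# Class names and relation names are invented for illustration only.

class Node:
    def __init__(self, concept_id):
        self.concept_id = concept_id   # internal identifier, not a word
        self.connectors = []           # outgoing typed relations

    def connect(self, relation, other):
        self.connectors.append((relation, other))

# Build a tiny fragment of a world model with no linguistic content at all.
dog = Node("C001")
animal = Node("C002")
dog.connect("is_a", animal)

# Language enters only as an encoding: per-language labels for concepts.
labels = {
    "en": {"C001": "dog", "C002": "animal"},
    "de": {"C001": "Hund", "C002": "Tier"},
}

def articulate(node, lang):
    """Encode one node's relations into a given language; the underlying
    model is unchanged whichever protocol (language) it is encoded into."""
    out = []
    for relation, other in node.connectors:
        out.append(f"{labels[lang][node.concept_id]} "
                   f"--{relation}--> {labels[lang][other.concept_id]}")
    return out

print(articulate(dog, "en"))  # ['dog --is_a--> animal']
print(articulate(dog, "de"))  # ['Hund --is_a--> Tier']
```

Swapping the encoding language changes the articulation, not the model, which is the sense in which such knowledge is independent of language.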

The Third Singularity

September 20th, 2015 | AGI, CKM, Foundations

Are super artificial intelligences going to make humanity obsolete? If you’re not worried about this, maybe you should be, since some of the leading technical minds of our time are clearly very concerned. Eminent theoretical physicist Stephen Hawking said about AI: “It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Visionary entrepreneur and technologist Elon Musk said: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.” No less than Bill Gates seconded his concern: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” The scenario Hawking refers to, of A.I.s redesigning themselves to become ever more intelligent, is called the Singularity. It goes like this: once humans create A.I.s as intelligent as they are, there is no reason to believe they could not create A.I.s that are even more intelligent; but then those super A.I.s could create A.I.s more intelligent than themselves, and so on ad infinitum, until in no time at all A.I.s would exist that are as superior to humans in intelligence as humans are to fruit flies. The term Singularity is taken from mathematics, where it refers to a function that becomes undefined at a certain point, beyond which its behavior becomes impossible [...]
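For readers unfamiliar with the mathematical usage, a standard textbook toy model (not from the post itself) shows how a self-amplifying quantity can reach a point where the function describing it becomes undefined:

```latex
% Illustrative toy model of self-amplifying growth (an assumption for
% exposition, not a claim about real AI): if "intelligence" I grows at a
% rate proportional to its own square, separation of variables gives
\[
  \frac{dI}{dt} = kI^{2}, \qquad I(0) = I_0
  \;\;\Longrightarrow\;\;
  I(t) = \frac{I_0}{1 - kI_0 t},
\]
% so I(t) diverges as t approaches 1/(k I_0): the solution is undefined at
% that point (the singularity), and the model says nothing beyond it.
\[
  I(t) \to \infty \quad \text{as} \quad t \to \frac{1}{kI_0}.
\]
```

The vertical asymptote at finite time is the feature the borrowed term points to: the model cannot describe what happens past that point.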

Knowledge and Intelligence

September 20th, 2015 | AGI, CKM

Understanding Intelligence

Alan Turing, in his 1950 paper “Computing Machinery and Intelligence,” proposed the following question: “Can machines do what we (as thinking entities) can do?” To answer it, he described his now-famous test, in which a human judge engages in a natural-language conversation via a text interface with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, the machine is said to pass the test. The Turing Test bounds the domain of intelligence without defining what it is: we recognize intelligence by its results. John McCarthy, who coined the term Artificial Intelligence in 1955, defined it as “the science and engineering of making intelligent machines.” A very straightforward definition, yet few terms have been more obfuscated by hype and extravagant claims, imbued with both hope and dread, or denounced as fantasy. Over the succeeding decades, the term has been applied so loosely that it is now often used for software that does not, by anyone’s definition, enable machines to “do what we (as thinking entities) can do.” The process by which this has come about is no mystery. A researcher formulates a theory about what intelligence, or one of its key components, is, and attempts to implement it in software. “Humans are intelligent because we can employ logic,” and so rule-based inference engines are developed. “We are intelligent because our brains are composed of neural networks,” and so software neural networks are [...]
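As a concrete illustration of the first research program mentioned ("humans are intelligent because we can employ logic"), here is a minimal forward-chaining rule engine in Python; the facts and rules are toy examples, not any specific historical system:

```python
# Minimal forward-chaining rule engine, the style of system built on the
# premise that intelligence is the employment of logic.
# Facts and rules below are illustrative toys.

facts = {"socrates is human"}
rules = [
    # (premises, conclusion): if all premises are known facts, add conclusion.
    ({"socrates is human"}, "socrates is mortal"),
    ({"socrates is mortal"}, "socrates will die"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'socrates is human', 'socrates is mortal', 'socrates will die'}
```

Such engines do exactly what their theory predicts, which is the post's point: each theory captures one component of intelligence and is then marketed as the whole.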

Assessing AI

September 16th, 2015 | CKM

Measuring Language Comprehension

How intelligent will our sapiens become? For the first time in the history of computing, the language comprehension of a software technology can be measured with tools designed for people. We expect human language-comprehension tools to be useful for assessing our technology’s increasing language comprehension at regular intervals. The performance level of a Modeled Intelligence is determined solely by the scope and fidelity of its world model. There is no limit to how well the world can be modeled, as the history of human knowledge attests. The computational bandwidth and memory capacity of an individual human brain, however, are forever bounded in ways computer technology is not. We expect the baseline language comprehension to climb quickly through the grade levels, continuing to college, graduate levels, and beyond. Such a notion has been inconceivable for any other approach because, without world models, those approaches have no language comprehension to measure and no thoughts to articulate. Since its beginnings in the 1980s, the AI community has been rife with hyperbole and vague claims of programs that “think like humans,” but always without measurable results. We believe that era is now in the past. With quantifiable comprehension, we foresee New Sapience’s CKM demonstrating breakthrough potential in a field of machine-human interface applications that is essentially unlimited compared to the technologies currently available.

Bloom’s Taxonomy of Learning

Bloom’s Taxonomy provides an important framework teachers use to focus on higher-order thinking. By providing [...]
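For reference, the six levels of the revised Bloom's Taxonomy run from lower- to higher-order thinking. The sketch below encodes them and shows one hypothetical way assessment items might be tagged and scored at regular intervals; the tagging scheme is an assumption for illustration, not New Sapience's actual tooling:

```python
# The six levels of the revised Bloom's Taxonomy, ordered from lower- to
# higher-order thinking. Tagging assessment items with levels, as done
# here, is an illustrative assumption, not a description of real tooling.
from enum import IntEnum

class Bloom(IntEnum):
    REMEMBER   = 1
    UNDERSTAND = 2
    APPLY      = 3
    ANALYZE    = 4
    EVALUATE   = 5
    CREATE     = 6

# Hypothetical assessment items tagged with the level they exercise.
items = [
    ("List the planets of the solar system.",      Bloom.REMEMBER),
    ("Explain why summer is warmer than winter.",  Bloom.UNDERSTAND),
    ("Design an experiment to test plant growth.", Bloom.CREATE),
]

def highest_level_passed(results):
    """results: list of (Bloom level, passed: bool); returns the highest
    taxonomy level the system answered correctly, or None."""
    passed = [level for level, ok in results if ok]
    return max(passed) if passed else None

best = highest_level_passed([(Bloom.REMEMBER, True),
                             (Bloom.APPLY, True),
                             (Bloom.EVALUATE, False)])
print(best.name)  # APPLY
```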

“Anticipatory Computing”

July 20th, 2015 | AI, Competition

"Anticipatory Computing" Recently many applications that self-indentify as AI have also been cited as examples of “anticipatory computing,” as in this National Public Radio article: “Computers That Know What You Need, Before You Ask” Here is the Wikipedia entry for “Anticipatory Computing:” In artificial intelligence (AI), anticipation is the concept of an agent making decisions based on predictions, expectations, or beliefs about the future. It is widely considered that anticipation is a vital component of complex natural cognitive systems. As a branch of AI, anticipatory systems is a specialization still echoing the debates from the 1980s about the necessity for AI for an internal model. When asked: “What do you anticipate would happen if someone jumped off the Empire State Building?” A human would employ their internal model of acceleration due to gravity, the relative frailty of the human body and the size of the building to predict: “They would impact the pavement at a high velocity and be killed.” So what for a human is simple common sense, in the context of computing is asserted to be a whole new branch of Artificial Intelligence, one that, according to the NPR article cited above, is being used to change the way we interact with our technology: “Google Now”, which is available on tablets and mobile devices, is an early form of this (anticipatory computing). You can ask it a question like, "Where is the White House?" and get a spoken-word answer. Then, Google Now recognizes any follow-up questions, [...]
