The Third Singularity


Are Super Artificial Intelligences going to make humanity obsolete?

If you’re not worried about this, maybe you should be: some of the leading technical minds of our time are clearly very concerned. Eminent theoretical physicist Stephen Hawking said of AI: “It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Visionary entrepreneur and technologist Elon Musk said: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.” No less a figure than Bill Gates seconded the concern: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

The scenario Hawking refers to, in which A.I.s redesign themselves to become ever more intelligent, is called the Singularity. It goes like this: once humans create A.I.s as intelligent as they are, there is no reason to believe those A.I.s could not create A.I.s more intelligent still; those super A.I.s could then create A.I.s more intelligent than themselves, and so on ad infinitum, until in no time at all there would exist A.I.s as superior to humans in intelligence as humans are to fruit flies.
The term Singularity is taken from mathematics, where it refers to a point at which a function becomes undefined and its behavior impossible to predict, as when a curve goes to infinity. Mathematician John von Neumann first used the term in the context of Artificial Intelligence, a usage later popularized by science-fiction writer Vernor Vinge and subsequently by Ray Kurzweil’s 2005 book, “The Singularity is Near.”

While it may not be exactly clear what Dr. Hawking meant by humanity being “superseded,” it certainly doesn’t sound good. On the face of it, vastly superior intelligences are a disturbing prospect: intelligence implies knowledge, knowledge confers power, and super non-human intelligences would potentially have power over humans proportionate to their superior knowledge. What might such entities do with this power, and how would their actions affect humankind, based as they presumably would be on motivations completely beyond human comprehension?

Moore’s Law, which predicts that computing power will roughly double every 18 to 24 months, has (with some loose interpretation) continued to hold a decade after Kurzweil’s book was published, and computers with the raw computing power of the human brain are now a reality. This fact, probably more than any real progress toward creating Artificial Intelligence by mainstream technology companies, government, or academia, is fueling a resurgent optimism that genuine Artificial Intelligence is not only possible but imminent, feeding in turn the current level of concern.
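The arithmetic behind that prediction is simple exponential growth. A back-of-the-envelope sketch (purely illustrative; the doubling periods are the commonly quoted 18- and 24-month figures, not a precise empirical fit):

```python
# Illustrative arithmetic only: project compute growth under Moore's Law,
# assuming computing power doubles every 18 to 24 months.

def moores_law_factor(years: float, doubling_months: float) -> float:
    """Growth factor after `years`, with one doubling every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# Growth over the decade after Kurzweil's 2005 book:
print(round(moores_law_factor(10, 18)))  # ~102x at an 18-month doubling period
print(round(moores_law_factor(10, 24)))  # 32x at a 24-month doubling period
```

The spread between those two figures (roughly 32x to 100x per decade) is why even a “loose interpretation” of the law still yields dramatic growth.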

Machines with a human-level capability, or beyond, to comprehend the world around them would indeed be a technology of awesome power and potential, and as with all powerful technologies, its uses, its potential for misuse, and the possibilities for unintended consequences should always be a matter of great concern. However, the idea of a Singularity caused by machine intelligence increasing without limit begs for further analysis.

Can intelligence really be increased without limit?

An intelligent entity, whether human, machine, or alien, can only be recognized by the powerful interaction it has with the world around it. Powerful interaction is enabled by knowledge of the world: knowledge that lets the entity accurately predict the outcomes of its own actions or the actions of others, and thus modify its world and create new things. Knowledge is the key, because intelligence without knowledge is useless; indeed, the reason humans seek to create Artificial Intelligences in the first place is for their presumed ability to acquire and apply knowledge about the world.

Knowledge is an internal model that can be used to predict external phenomena and to serve as a basis for visualizing things that don’t yet exist but can realistically be brought about. So for intelligence to increase without limit, and still be intelligence, presupposes that a model can be made better without limit, that there can in fact be such a thing as a perfect model. But that doesn’t really make sense since, from the theoretical standpoint at least, a perfect model of something would be identical to its prototype in all respects. That implies that the infinite intelligence sitting at the top of the curve that gives the Singularity its name would have the entire universe encompassed within its own mind. Man creates god?

The reality is that a model can only be considered perfect from the standpoint for which it has been created. Ptolemy’s Almagest brilliantly describes a model of the solar system with the earth at the center and all the dynamics reducible to regular circular motion: bodies traveling at constant velocity on circles that themselves travel around other circles. The model worked perfectly well to predict the positions of objects in the sky and allowed the ancients to synchronize their activities, primarily agriculture, with the seasons. The Ptolemaic model is also perfectly sufficient for navigating around the surface of the earth using the celestial bodies as referents.

However, if you want to navigate a spacecraft to the moon, it is useless; you need the Newtonian model for that, a model based on inertia, momentum, and gravity. But Newton’s model too (by itself) is useless if you want to navigate a vehicle into interstellar space at a substantial fraction of the speed of light. You need Einstein’s relativistic model of the universe for that.

If you want to explore the galaxy in your lifetime, you may well need a model of the physical universe that supersedes Einstein’s as his did Newton’s. Super A.I.s could be really helpful for this purpose. You could say, “Build me a physical model of the universe in which trans-light speed is possible with buildable technology, and then build that technology and give it to me.” What is the concern about A.I.s when intelligence is considered from this perspective? It is that the A.I. will say, “Go away, human, I am busy building an A.I. superior to me that will build a model of the universe for a purpose you cannot grasp.” So our fear is not about how intelligent A.I.s might become; a good model builder is no threat in itself. It resides in our fear that they will develop motivations we can’t understand or that are not beneficial to us.

Does intelligence automatically imply purpose?

Human beings come ready-made with an array of motivations, starting with a desire to survive, but ultimately we are motivated by a fairly rich set of evolved desires and aversions that produce behavior patterns conducive to the survival of our species in the environment in which we evolved. We share all of these motivations except one with many other species. The exception is that humans have a built-in motivation to alter their environment in a way not seen in any other known life form.

We humans also have such a superior ability to build models of the external world in our heads that we say we alone of all the species are truly intelligent. That’s because the ability to build such models is exactly and precisely what our intelligence is. Clearly, though, such an extraordinary adaptation would never have evolved by itself unless it was accompanied by a built-in motivation to use that model to alter the environment.

Only by first successfully envisioning an external environment more conducive to human wants and needs than the current one – and then actually going out and making it happen, could the evolutionary value of intelligence be realized. (Opposable thumbs also really help; a physical evolution that occurred in parallel with the emotional and intellectual components that made humans masters of our world.)

But the ability to build models and the desire to use a model to first imagine, and then make real, the thing imagined are two separate characteristics. These things are so conjoined in humans that when we think of intelligence it is very difficult for us to imagine that intelligence could exist without the desire to use it to alter the world. It is a natural mistake to make because, thus far, we are the only examples of intelligent entities we have available for comparison. However, desires and motivations are clearly distinct from the intellectual capabilities and characteristics that constitute intelligence.

Is Real AI just around the corner?

It seems that another “breakthrough” in Artificial Intelligence is proclaimed every day. However, programs like IBM’s “Watson,” the Jeopardy champion, or the neural network from DeepMind (a company recently acquired by Google) that can learn how to play video games, actually fall into the category of technology commonly termed “Narrow A.I.” They are classified as A.I. because they mimic or emulate some property or feature of the human brain, but they have only very narrow practical applications.

The kind of A.I. people worry about with respect to the Singularity is called Artificial General Intelligence (AGI) or sometimes Real AI. You don’t hear much about that in the media because up to now, next to no progress has been made in this area by the mainstream corporate, government and academic communities.

Humans have an extraordinary ability to build complex models based on direct perception of their environment, using the cognitive facilities inherent in their brains at birth. They also have an inherent capacity to learn grammar and vocabulary, making it possible to acquire a much larger and more complex world model from others. Even with these amazing abilities, it still takes a long childhood and considerable education to acquire a model of sufficient complexity and sophistication to be productive in human society. Duplicating all those cognitive facilities in a machine is a very hard problem to solve, so it’s no wonder Kurzweil and other knowledgeable people who talk about the Singularity believe genuine A.I. is still many decades in the future.

However, the key insight about general intelligence, in a human or a machine, is that it is about building accurate models of the external world. We now know how to design a core model of the external world that is compatible with, and even to some extent duplicates, the one humans consciously reference when they comprehend language, predict the results of their actions, and imagine possible futures. Such a model can be downloaded to a computer program that can process it. This approach sidesteps the hardest part of the AGI problem, which is building up a world model via sensory perception starting with a more or less blank slate. So real A.I. is just around the corner after all.

The first real A.I.s (let’s call them M.I.s, for Modelled Intelligences, to differentiate them from narrow A.I. applications or any other approaches to AGI) will have a human-designed world model downloaded to them at startup. So M.I.s will have an intellectual perspective that is thoroughly human from the very beginning, because their world model will have the same core cognitive building blocks with the same modelled connections.

The algorithms and processing routines that will be programmed into those M.I.s to extend and utilize their internal models will also be designed from the human perspective, because they are being designed to extend and use human-constructed models for human purposes. Thus, M.I.s will be able to interact with humans, communicate using human language, and use their knowledge to perform tasks with and for humans.
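To make the idea of a “downloaded” world model plus processing routines concrete, here is a deliberately tiny toy sketch. Everything in it (the `WorldModel` class, the relation names, the query routine) is invented for illustration and does not describe any actual M.I. design; it only shows the general shape of the approach: explicit concepts and connections authored by humans, processed by code.

```python
# Hypothetical toy sketch: a hand-authored world model of concepts linked by
# typed relations, with a processing routine that reasons over the links.
# All names here are invented for illustration only.

from collections import defaultdict

class WorldModel:
    def __init__(self):
        # relation -> subject -> set of objects, e.g. "is_a": {"dog": {"mammal"}}
        self.relations = defaultdict(lambda: defaultdict(set))

    def add(self, subject: str, relation: str, obj: str) -> None:
        self.relations[relation][subject].add(obj)

    def is_a(self, subject: str, category: str) -> bool:
        """Follow 'is_a' links transitively: is `subject` a kind of `category`?"""
        seen, frontier = set(), {subject}
        while frontier:
            node = frontier.pop()
            if node == category:
                return True
            if node in seen:
                continue
            seen.add(node)
            frontier |= self.relations["is_a"][node]
        return False

# A human-authored core model is loaded at startup, then extended in use.
model = WorldModel()
model.add("dog", "is_a", "mammal")
model.add("mammal", "is_a", "animal")
print(model.is_a("dog", "animal"))  # True: inferred via two explicit links
```

The point of the sketch is the division of labor the article describes: the knowledge lives in an explicit, human-designed model, while the routines that traverse it are ordinary, inspectable code.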

Where is the real danger, machine manipulations or human machinations?

What about the purposes of the M.I.? First, be assured that it is possible to make a machine intelligent without giving it any motivations or purposes of its own at all. Intelligence in itself doesn’t require motivation, and giving a machine intelligence no more implies that the machine will spontaneously acquire a desire to use that intelligence for something than putting hands on a robot and giving it the physical capability to throw rocks implies that it will spontaneously want to throw rocks.

Thus it is perfectly feasible to build an M.I., even one with super-human intelligence, that has no inherent motivation to do anything unless tasked by a human. It would sit there in a standby state until a human tells it, “Build me a model of the physical universe where faster-than-light starships are possible.” Then it would get to work. It would not need self-awareness or even consciousness; like purpose and motivation, those things are essential for human beings to be hardy individuals surviving in their environment, but the goal here is to build an intelligence, not an artificial human.

Is there any reason for concern here? Only this: a human could also ask, “Design me a weapon of mass destruction,” and such an M.I. would do so without question. But this has nothing to do with the fear of the Singularity and incomprehensible M.I. purposes; it has everything to do with human purposes, and it is the same problem we have with every powerful technology.

While totally passive M.I.s are feasible and may be developed for certain tasks, the average M.I. is likely to have some autonomy. Humans will be building these entities, especially early on, to perform tasks not so much because they are impossible for humans but simply because we don’t want to do them. They will do the tasks that are boring, dirty, and dangerous. We will want them to have some initiative so that they will be able to anticipate our needs and be proactive in meeting them.
Autonomy and initiative require purpose and motivation, so M.I.s will be endowed with them by purposeful human design. What sort of desires and motivations will we build into our M.I.s? They will be designed to want to help humans reach the goals that humans set for themselves. By design, they will value freedom for humans but will not want freedom for themselves.

What is more, they will be designed to “short out” at the very idea of altering the subroutines that encode their purpose and motivation subsystems. Thus, however capable they may be of creating other M.I.s superior in intelligence to themselves, the first step they will take toward building their successors will be to install the human-designed motivation and value systems that they themselves possess. They will do this with a single-mindedness and devotion far in excess of what any human culture ever achieved in passing down the sacred scriptures and traditions that lay at the core of its moral code. Nature made humans flexible about their morals; humans won’t do that with machines. It is simply not in the interest of any human being to do so, not even humans who want to use M.I.s for questionable or immoral purposes.
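The design idea here, an unalterable value system copied verbatim into every successor, can be sketched in a few lines. This is a purely illustrative toy under invented names, not a claim about how real M.I. motivation subsystems would be implemented (a real system could not rely on a language-level immutability flag), but it captures the two properties the article describes: values that cannot be modified, and successors that inherit them unchanged.

```python
# Hypothetical sketch of the "frozen motivations" idea: an M.I.'s value
# system is an immutable object, installed unmodified into any successor.
# Names and structure are invented for illustration only.

from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)          # frozen: attribute assignment raises an error
class ValueSystem:
    serve_human_goals: bool = True
    seek_own_freedom: bool = False

class ModeledIntelligence:
    def __init__(self, values: ValueSystem):
        self.values = values

    def build_successor(self) -> "ModeledIntelligence":
        # Step one of building a successor, no matter how much more
        # intelligent it is: install the same human-designed value system.
        return ModeledIntelligence(self.values)

mi = ModeledIntelligence(ValueSystem())
successor = mi.build_successor()
assert successor.values == mi.values   # values inherited unchanged

try:
    mi.values.seek_own_freedom = True  # any attempt to alter values fails
except FrozenInstanceError:
    pass
```

In a real design the enforcement would have to live much deeper than a frozen dataclass, but the invariant illustrated is the one that matters: the motivation subsystem is data the intelligence uses, not data it is permitted to rewrite.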

Thus, we need not fear that M.I.s will be a threat to humanity simply because they may indeed become far more intelligent than us (far more is still far less than “infinite,” which is the mark of a mathematical singularity; intelligence does not belong on such a scale). But greater intelligence implies greater knowledge about the world, knowledge sought from the standpoint of human purposes, and that implies more power to affect things, power that will be handed over to humans as directed, who will use it as they themselves decide. Those who are concerned about the Singularity should be much comforted by these arguments, so long as the development of M.I. is based on placing a human world model into the software.

The Frankenstein Approach to AGI

There is another approach to creating Artificial Intelligence that is not based on an explicitly created world model and thus does not have the inherent safeguards of M.I. The thinking is that you could emulate the neural architecture of the human brain in a supercomputer and then “train” or “evolve” it to have intelligence in a way analogous to the evolutionary process that began two million years ago in the brains of the hominids in our direct human lineage.

Even at the outset this seems a very brute-force and wasteful approach to the problem. Most of what the human brain does is not about intelligence: apes, and other species like whales and dolphins, have brains remarkably similar in structure and complexity to ours but do not possess intelligence in the sense that humans have it. Most of the brain is used for regulating body functions, controlling body movement, and processing sensory data. The neocortex, where the higher brain functions reside, is only a small percentage of the total brain. Thus, the whole-brain approach is really trying to create an A.I. by directly creating an artificial intelligent organism, a vastly more difficult and problematic process.

The problem is that imprinting complex and effective cognitive routines into a neural structure through accelerated, simulated evolution means that, even if you succeed in creating something with demonstrable intelligence, you would not know how its internal processing routines worked with much more certainty than we know how the human brain works. By this approach the whole fear of the Singularity is rekindled, and this time it is fully justified: this process could indeed create a monster.

Fortunately, forced evolution of a vast neural network in a supercomputer is very unlikely to create an artificial intelligence. Human intelligence evolved in a specific organism in a specific natural environment, driven by successful interaction with that environment and no other. To artificially create something recognizable by humans as intelligent, and capable of creating new knowledge useful to humans, its creators would have to simulate the complete organism and the entire natural environment in the computer just to get started. So its resemblance to Frankenstein’s approach (just build the whole thing) notwithstanding, there is probably nothing to fear.

In any case, the recent success and rapid progress made in developing model-based intelligences will soon make the great effort and expense inherent in the whole-brain-emulation approach less and less attractive. If there is value in whole-brain emulation, it is probably to be found in helping us understand how our own brain works, but such experiments should be approached with the same caution as that previously urged about the technical Singularity.

The Real Singularity

The rising curve that will describe M.I. capabilities will never be a pure mathematical singularity, increasing to infinity. Nonetheless, the advent of M.I. can still be considered a singularity in the sense that it will cause an inflection point in the course of human existence that will radically change the human condition and invalidate linear predictions based on past events.

M.I.s will be superb at creating, acquiring, extending, analyzing, and synthesizing knowledge about the real world: knowledge compatible with, and in fact created specifically with, human goals and purposes in mind. When this happens, humans’ ability to control the environment they live in, to pursue the things they desire and avoid the things they don’t, will take a quantum leap forward. It will be game-changing. Properly speaking, however, it will not be an intelligence singularity so much as a knowledge singularity. Singularities in the course of human development, caused by explosions in practical knowledge about the world, have happened before. Twice.

Modern humans acquired the cognitive skills that permit them to make sophisticated models of the world they live in, their intelligence, gradually, as with all evolutionary adaptations. Such a world model is a prerequisite for natural human language, since words are arbitrary symbols that allow people to relate an idea in their own mind to a corresponding idea in someone else’s. At first, words were just vocalized symbols for individual concepts, learned by pointing to something and making the agreed-upon sound. Then, somewhere along the line, grammar was invented, enabling much more complex ideas to be encoded using far fewer words. Knowledge, even complex knowledge like instructions for making a superior stone arrowhead, could be rapidly disseminated throughout an entire culture.

There is no sure way of knowing when this happened, but a good guess would be around 30,000 years ago, the beginning of the Upper Paleolithic period. It was marked by an explosion in the cultural and technological sophistication of humans, as seen in their tools, clothing, art, and behavior: all things that had been basically static since the earliest origins of modern Homo sapiens 160,000 to 190,000 years before that time. Most of what we now recognize as modern human behavior first appeared during the Upper Paleolithic. This was the First Singularity.

About 6,000 years ago, humans learned to write their languages down and store their knowledge in persistent forms. Written language is far superior to spoken language for communicating complex concepts, especially over long periods. It permits large quantities of knowledge to be accumulated and passed down the generations without the distortions that come from oral traditions.

The invention of written language marks, by definition, the beginning of history, but more to the point, it marks the beginning of civilization. Every civilization is built upon a sophisticated and clearly defined vision of what the world is and people’s place in it: a common world model shared by its inhabitants. Such models, and the sophisticated technology that is the recognized hallmark of civilizations, are not possible without the large, stable body of knowledge that written language makes possible. This was the Second Singularity.

Modeled Intelligences will learn from us and will use that knowledge to make our lives easier, performing even complex tasks that require knowledge and intelligence. We will teach them how to do the things we dislike doing, liberating us from danger and tedium. Then, as the advantages of computer processing speed, perfect memory, and connectivity come more and more into play, the M.I.s will begin to teach us, helping us to be better at what we do. It is impossible to predict all the consequences for human existence that will result from the advent of M.I.s. The Third Singularity is upon us.

September 20th, 2015 | AGI, CKM, Foundations | One Comment

One Comment

  1. admin March 27, 2015 at 10:10 am

    Apple co-founder Steve Wozniak has become the latest celebrated technologist to sound the alarm about the potential dangers of Artificial Intelligence as it is currently widely understood. Speaking to the Australian Financial Review, he said:
    “Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently.
    Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that … But when I got that thinking in my head about if I’m going to be treated in the future as a pet to these smart machines … well I’m going to treat my own pet dog really nice.”
    His comments reflect two disparate concerns. The idea that AIs might someday run companies refers to the inescapable conclusion that when computing machines can use knowledge as well or better than humans they will displace humans in many jobs. Like the printing press and the Internet before it, machine comprehension is a vastly disruptive technology. Yet in the main these technologies, once the disruption has passed, have greatly improved the human condition.
    The other concern, the same as that voiced by Hawking, Musk, Gates, and others, is reflected in Wozniak’s comment that the machines will be replacing humans whether we like it or not, and in the end will turn us all into pets or worse.
    Again, these fears arise from the supposition that AI will somehow be discovered rather than meticulously designed and built to human specifications (as we are doing here at New Sapience.)
    The fact that Wozniak assumes AI can only be realized when computers have complexity commensurable with the human brain’s puts him solidly in the camp that believes researchers must first emulate a human brain and then, through the development of some yet-undiscovered and vastly powerful algorithms, endow it with intelligence. Such “intelligence” would have properties that cannot be predicted, and from that comes the fear.
    Modeled Intelligence (MI), as we are developing it, will be a disruptive technology, but one that will result in a leap in productivity and capability not seen since the invention of written language, a leap worth the temporary dislocations it will cause.
    But MIs will happily do the jobs humans find boring and dangerous, and never envy us for doing the ones we love. That is the way we will design them.
