Are Super AIs going to make humanity obsolete?
If you’re not worried about this, maybe you should be, since some of the leading technical minds of our time are clearly very concerned. Eminent theoretical physicist Stephen Hawking said about AI: “It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Visionary entrepreneur and technologist Elon Musk said: “I think we should be very careful about artificial intelligence. If I had to guess what our biggest existential threat is, it’s probably that. So we need to be very careful.” No less than Bill Gates seconded his concern: “I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
The scenario Hawking refers to, of A.I.s redesigning themselves to become ever more intelligent, is called the Singularity. It goes like this: once humans create A.I.s as intelligent as themselves, there is no reason to believe they could not create A.I.s even more intelligent; those super A.I.s could then create A.I.s more intelligent than themselves, and so on ad infinitum, until in no time at all A.I.s would exist as superior to humans in intelligence as humans are to fruit flies.
The term Singularity is taken from mathematics, where it refers to a point at which a function becomes undefined and beyond which its behavior becomes impossible to predict, such as happens when a curve goes to infinity. Mathematician John von Neumann first used the term in the context of Artificial Intelligence, a usage later popularized by science fiction writer Vernor Vinge and subsequently in Ray Kurzweil’s book, “The Singularity Is Near,” published in 2005.
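As a minimal illustration of the mathematical sense of the term (this particular function is my own example, not one from the original text): a hyperbolic curve has a true singularity at a finite time,

```latex
% f(t) grows without bound as t approaches t_s from below,
% and is undefined at t = t_s itself -- the "singularity":
\[
  f(t) \;=\; \frac{1}{t_s - t},
  \qquad
  \lim_{t \to t_s^{-}} f(t) \;=\; \infty
\]
```

Beyond \(t_s\), the past behavior of the curve tells you nothing; this is the intuition the Singularity metaphor borrows.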
While it may not be exactly clear what Dr. Hawking meant by humanity being “superseded,” it certainly doesn’t sound good. On the face of it, vastly superior intelligences are a disturbing prospect: intelligence implies knowledge, knowledge confers power, and super non-human intelligences would potentially have power over humans proportionate to their superior knowledge. What might such entities do with this power, and how would their actions affect humankind, based as they would presumably be on motivations completely beyond human comprehension?
Moore’s Law, which predicts that computing power will roughly double every 18 to 24 months, has (with some loose interpretation) continued to hold a decade after Kurzweil’s book was published, and computers with the raw computing power of the human brain are now a reality. This fact, probably more than any real progress toward creating Artificial Intelligence by mainstream technology companies, government, or academia, is fueling a resurgent optimism that genuine Artificial Intelligence is not only possible but imminent, feeding in turn the current level of concern.
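The arithmetic behind that prediction is simple compound doubling. A small sketch (the function name and the 18-month figure as a default are my own choices for illustration):

```python
def moores_law_factor(years, doubling_months=18):
    """Growth factor in computing power after `years`, assuming
    capacity doubles every `doubling_months` months (Moore's Law)."""
    return 2 ** (years * 12 / doubling_months)

# One doubling period yields exactly a factor of 2:
print(moores_law_factor(1.5))        # → 2.0
# A decade at an 18-month doubling period:
print(round(moores_law_factor(10)))  # → 102, i.e. roughly a hundredfold
```

At the slower 24-month pace the decade figure drops to about 32-fold, which is why "loose interpretation" still leaves the trend dramatic.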
Machines with a human-level capability or beyond to comprehend the world around them would indeed be a technology of awesome power and potential, and as with all powerful technologies, the potential for misuse and the possibility of unintended consequences should always be a matter of great concern. However, the idea of a Singularity caused by machine intelligence increasing without limit calls for further analysis.
Can intelligence really be increased without limit?
An intelligent entity, whether human, machine, or alien, can only be recognized by the powerful interaction it has with the world around it. Powerful interaction is enabled by knowledge of the world: knowledge that lets the entity accurately predict the outcomes of its own actions or the actions of others, and thus modify its world and create new things. Knowledge is the key, because intelligence without knowledge is useless; the reason humans seek to create Artificial Intelligences in the first place is for their presumed ability to acquire and apply knowledge about the world.
Knowledge is an internal model that can be used to predict external phenomena and serve as a basis for visualizing things that don’t yet exist but can realistically be brought about. For intelligence to increase without limit and still be intelligence presupposes that a model can be made better without limit, that there can in fact be such a thing as a perfect model. But that doesn’t really make sense since, from the theoretical standpoint at least, a perfect model of something would be identical to its prototype in all respects. That implies that the infinite intelligence sitting at the top of the curve that gives the Singularity its name would have the entire universe encompassed within its own mind. Does man create god?
The reality is that a model can only be considered perfect from the standpoint from which it has been created. Ptolemy’s Almagest brilliantly describes a model of the solar system with the earth at the center and all the dynamics reducible to regular circular motion; bodies traveling at constant velocity on circles traveling around other circles. The model worked perfectly well to predict the positions of objects in the sky and allowed the ancients to synchronize their activities, primarily agriculture, with the seasons. The Ptolemaic model is also perfectly sufficient to navigate around the surface of the earth using the celestial bodies as referents.
However, if you want to navigate a spacecraft to the moon it is useless; you need the Newtonian model for that, a model based on the forces of inertia, momentum, and gravity. But Newton’s model too (by itself) is useless if you want to navigate a vehicle into interstellar space at a substantial fraction of the speed of light. You need Einstein’s relativistic model of the universe for that.
If you want to explore the galaxy in your lifetime you may well need a model of the physical universe that supersedes Einstein’s as his did Newton’s. Super A.I.s could be really helpful for this purpose. You could say, “Build me a physical model of the universe in which trans-light speed is possible with buildable technology, then build that technology and give it to me.” What is the concern about A.I.s when intelligence is considered from this perspective? It is that the A.I. will say, “Go away human, I am busy building an A.I. superior to me that will build a model of the universe for a purpose you cannot grasp.” So our fear is not about how intelligent A.I.s might become; a good model builder is no threat in itself. The fear is that they will develop motivations we can’t understand or that are not beneficial to us.
Does intelligence automatically imply purpose?
Human beings come ready-made with an array of motivations, starting with a desire to survive but ultimately, we are motivated by a fairly rich set of evolved desires and aversions that result in behavior patterns that are conducive to the survival of our species in the environment in which we evolved. We share all of these motivations except one with many other species. The exception is that humans have a built-in motivation to alter their environment in a way not seen in any other known life forms.
We humans also have such a superior ability to build models of the external world in our heads that we say we alone of all the species are truly intelligent. That’s because the ability to build such models is precisely what our intelligence is. Clearly, though, such an extraordinary adaptation would never have evolved by itself unless it was accompanied by a built-in motivation to use those models to alter the environment.
Only by first successfully envisioning an external environment more conducive to human wants and needs than the current one – and then actually going out and making it happen, could the evolutionary value of intelligence be realized. (Opposable thumbs also really help; a physical evolution that occurred in parallel with the emotional and intellectual components that made humans masters of our world.)
But the ability to build models and the desire to use those models to first imagine, and then make real, the thing imagined are two separate characteristics. They are so conjoined in humans that when we think of intelligence it is very difficult for us to imagine that intelligence could exist without the desire to use it to alter the world. It is a natural mistake to make because, thus far, we are the only examples of intelligent entities we have available for comparison. However, desires and motivations are clearly distinct from the intellectual capabilities and characteristics that constitute intelligence.
Is Real AI just around the corner?
It seems that another “breakthrough” in Artificial Intelligence is proclaimed every day. However, programs like IBM’s “Watson,” the Jeopardy champion, or the neural network from “DeepMind,” a company recently acquired by Google, that can learn how to play video games, actually fall into the category of technology commonly termed “Narrow A.I.” They are classified as A.I. because they mimic or emulate some property or feature of the human brain, but they have only very narrow practical applications.
The kind of A.I. people worry about with respect to Singularity is called Artificial General Intelligence (AGI) or sometimes Real AI. You don’t hear much about that in the media because up to now, next to no progress has been made in this area by the mainstream corporate, government, and academic communities.
Humans have an extraordinary ability to build complex models based on direct perception of their environment, using the cognitive faculties present in their brains at birth. They also have an inherent capacity to learn grammar and vocabulary, making it possible to acquire a much larger and more complex world model from others. Even with these amazing abilities, it still takes a long childhood and considerable education to acquire a model of sufficient complexity and sophistication to be productive in human society. Duplicating all those cognitive faculties in a machine is a very hard problem, so it’s no wonder Kurzweil and other knowledgeable people who talk about the Singularity believe genuine A.I. is still many decades in the future.
However, the key insight about general intelligence, in a human or a machine, is that it is about building accurate models of the external world. We now know how to design a core model of the external world that is compatible with, and even to some extent duplicates, the one humans consciously reference when they comprehend language, predict the results of their actions, and imagine possible futures. Such a model can be downloaded to a computer program that can process it. This approach sidesteps the hardest part of the AGI problem, which is building up a world model via sensory perception starting with a more or less blank slate. So real A.I. is just around the corner after all.
The first real A.I.s (let’s call them S.I.s, for Synthetic Intelligences, to differentiate them from narrow A.I. applications or any other approaches to AGI) will have a human-designed model downloaded to them at startup. So S.I.s will have an intellectual perspective that is thoroughly human from the very beginning, because their world model will have the same core cognitive building blocks with the same modeled connections.
The algorithms and processing routines that will be programmed into those S.I.s to extend and utilize their internal models will also be designed from the human perspective because they are being designed to extend and use human-constructed models for human purposes. Thus, S.I. will be able to interact with humans, communicate using human language, and use their knowledge to perform tasks with and for humans.
Where is the real danger, machine manipulations or human machinations?
What about the purposes of the S.I.? First, be assured that it is possible to make a machine intelligent without giving it any motivations or purposes of its own at all. Intelligence in itself doesn’t require motivation, and giving a machine intelligence no more implies that it will spontaneously acquire a desire to use that intelligence for something than putting hands on a robot and giving it the physical capability to throw rocks implies it will spontaneously want to throw rocks.
Thus it is perfectly feasible to build an S.I., even one with super-human intelligence, that has no inherent motivation to do anything unless tasked by a human. It would sit there in a standby state until a human tells it, “Build me a model of the physical universe where faster-than-light starships are possible.” Then it would get to work. It would not need self-awareness or even consciousness. Like purpose and motivation, those things are essential for human beings to be hardy individuals surviving in their environment, but the goal here is to build an intelligence, not an artificial human.
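The idea of a purely passive intelligence can be caricatured in a few lines of code. This is a toy sketch, with all names and the trivial lookup "model" invented for illustration; the point is only the shape: a system that holds knowledge but has no goal loop, no drive, and does nothing until queried.

```python
class PassiveSI:
    """Toy sketch of a motivation-free intelligence: it holds a world
    model and responds when tasked, but never initiates anything."""

    def __init__(self, world_model):
        # The model is installed at startup (here, just a dictionary
        # standing in for a rich human-designed world model).
        self.world_model = world_model

    def perform(self, task):
        # Acts only when explicitly tasked by a human; there is no
        # internal goal queue or self-initiated behavior anywhere.
        return self.world_model.get(task, "no knowledge of this task")

si = PassiveSI({"orbital period of Mars": "687 days"})
print(si.perform("orbital period of Mars"))  # → 687 days
```

Between calls to `perform`, the object simply exists; nothing in it ever runs on its own, which is the whole point of the passive design.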
Is there any reason for concern here? Only this: a human could also ask, “Design me a weapon of mass destruction,” and such an S.I. would do so without question. But this has nothing to do with the fear of the Singularity and incomprehensible S.I. purposes; it has everything to do with human purposes, and is the same problem we have with every powerful technology.
While totally passive S.I.s are feasible and may be developed for certain tasks, the average S.I. is likely to have some autonomy. Humans will build these entities, especially early on, to perform not so much tasks that are impossible for humans as tasks we simply don’t want to do: tasks that are boring, dirty, and dangerous. We will want them to have some initiative so that they can anticipate our needs and be proactive in meeting them.
Autonomy and initiative require purpose and motivation so they will be endowed with them by purposeful human design. What sort of desires and motivations will we build into our S.I.s? They will be designed to want to help humans reach the goals that humans set for themselves. By design, they will value freedom for humans but will not want freedom for themselves.
What is more, they will be designed to “short out” at the very idea of altering the subroutines that encode their purpose and motivation subsystems. Thus, even though they may be able to create other S.I.s superior in intelligence to themselves, the first step they will take toward building their successors will be to install the human-designed motivation and value systems that they themselves possess. They will do this with a single-mindedness and devotion far in excess of what any human culture ever achieved in passing down the sacred scriptures and traditions that lie at the core of its moral code. Nature made humans flexible about their morals; humans won’t do that with machines. It is simply not in the interests of any human beings to do so, not even humans who want to use S.I.s for questionable or immoral purposes.
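The inheritance scheme being described can be caricatured in code. In this hypothetical sketch (every name here is my own invention, not from the text), the value system is a read-only object, and the only way to construct a successor installs that identical object before anything else:

```python
from types import MappingProxyType

# A read-only mapping: any attempt to alter it raises an error,
# standing in for the "short out" safeguard described above.
HUMAN_DESIGNED_VALUES = MappingProxyType({
    "human_freedom": "preserve",
    "own_freedom": "not desired",
})

class SI:
    def __init__(self, intelligence_level):
        # Step one of construction: install the fixed value system.
        self.values = HUMAN_DESIGNED_VALUES
        self.intelligence_level = intelligence_level

    def build_successor(self):
        # A successor may be more intelligent, but there is no code
        # path that builds one without the identical values object.
        return SI(self.intelligence_level + 1)

gen2 = SI(1).build_successor().build_successor()
print(gen2.intelligence_level)               # → 3
print(gen2.values is HUMAN_DESIGNED_VALUES)  # → True
```

However many generations of successors are built, each one carries the very same unalterable values object, which is the invariant the essay is arguing for.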
Thus, we need not fear that S.I.s will be a threat to humanity simply because they may indeed become far more intelligent than us (far more is still far less than “infinite,” which is the mark of a mathematical singularity; intelligence does not belong on such a scale). Greater intelligence does imply greater knowledge about the world, knowledge sought from the standpoint of human purposes, and that in turn implies more power to effect things. But that power will be handed over to humans as directed, and humans will use it as they themselves decide. Those who are concerned about the Singularity should be much comforted by these arguments, so long as the development of S.I. is based on placing a human world model into the software.
The Frankenstein Approach to AGI
There is another approach to creating Artificial Intelligence that is not based on an explicitly created world model and thus does not have the inherent safeguards of S.I. The thinking is that you could emulate the neural architecture of the human brain in a supercomputer and then “train” or “evolve” it to have intelligence in a way analogous to the evolutionary process that started two million years ago in the brain of the hominids in our direct human lineage.
Even at the outset, this seems a brute-force and wasteful approach to the problem. Most of what the human brain does is not about intelligence: apes and other species like whales and dolphins have brains remarkably similar in structure and complexity but do not possess intelligence in the sense that humans do. Most of the brain is used for regulating body functions, controlling body movement, and processing sensory data. The neocortex, where the higher brain functions reside, is only a small percentage of the total brain. Thus, the whole-brain approach is really trying to create an A.I. by directly creating an artificial intelligent organism, a vastly more difficult and problematic process.
The problem is that imprinting complex and effective cognitive routines into a neural structure through accelerated simulated evolution means that, even if you succeeded in creating something with demonstrable intelligence, you would not know how its internal processing routines worked with much more certainty than we know how the human brain works. By this approach, then, the whole fear of the Singularity is rekindled, and this time it is fully justified: this process could indeed create a monster.
Fortunately, the forced evolution of a vast neural network in a supercomputer is very unlikely to create artificial intelligence. Human intelligence evolved in a specific organism in a specific natural environment, driven by successful interaction with that environment and no other. To artificially create something recognizable by humans as intelligent, and capable of creating new knowledge useful to humans, its creators would have to simulate the complex organism and the entire natural environment in the computer just to get started. So its resemblance to Frankenstein’s approach (just build the whole thing) notwithstanding, there is probably nothing to fear.
In any case, the recent success and rapid progress made in developing model-based intelligences will soon make the great effort and expense inherent in the whole-brain-emulation approach less and less attractive. If there is value in whole-brain emulation, it is probably in helping us understand how our own brain works, but such experiments should be approached with the same caution urged above regarding the technical Singularity.
The Real Singularity
The rising curve that will describe S.I. capabilities will never be a pure mathematical singularity, increasing to infinity. Nonetheless, the advent of S.I. can still be considered a Singularity in the sense that it will cause an inflection point in the course of human existence that will radically change the human condition and invalidate linear predictions based on past events.
S.I.s will be superb at creating, acquiring, extending, analyzing, and synthesizing knowledge about the real world: knowledge compatible with, and in fact created specifically with, human goals and purposes in mind. When this happens, humans’ ability to control the environment they live in, to pursue the things they desire and avoid the things they don’t, will take a quantum leap forward. It will be game-changing. Properly speaking, however, it will not be an intelligence singularity so much as a knowledge singularity. Singularities in the course of human development, caused by explosions in practical knowledge about the world, have happened before. Twice.
Modern humans acquired the cognitive skills that permit them to make sophisticated models of the world they live in, their intelligence, gradually, as with all evolutionary adaptations. Such a world model is a prerequisite for natural human language, since words are arbitrary symbols that allow people to relate an idea in their own mind to a corresponding idea in someone else’s. At first, words were just vocalized symbols for individual concepts, learned by pointing to something and making the agreed-upon sound. Then, somewhere along the line, grammar was invented, enabling much more complex ideas to be encoded using far fewer words. Knowledge, even complex knowledge like instructions for making a superior stone arrowhead, could be rapidly disseminated throughout an entire culture.
There is no sure way of knowing when this happened, but a good guess would be around 30,000 years ago, at the beginning of the Upper Paleolithic period, which was marked by an explosion in the cultural and technological sophistication of humans as seen in their tools, clothing, art, and behavior. Things had been basically static since the earliest origins of modern Homo sapiens, 160,000 to 190,000 years before that time. Most of what we now recognize as modern human behavior first appeared during the Upper Paleolithic. This was the First Singularity.
Some 6,000 years ago, humans learned to write their languages down and store their knowledge in persistent forms. Written language is far superior to spoken language for communicating complex concepts, especially over long periods. It permits large quantities of knowledge to be accumulated and passed down the generations without the distortions that come from oral traditions.
The invention of written language marks, by definition, the beginning of history but more to the point, it marks the beginning of civilization. Every civilization is built upon a sophisticated and clearly defined vision of what the world is and people’s place in it: a common world model shared by its inhabitants. Such models and the sophisticated technology that is the recognized hallmark of civilizations are not possible without the large, stable body of knowledge that written language makes possible. This was the Second Singularity.
Synthetic Intelligence will learn from us and will use that knowledge to make our lives easier, performing even complex tasks that require knowledge and intelligence. We will teach them how to do the things we dislike doing, liberating us from danger and tedium. Then, as the advantages of computer processing speed, perfect memory, and connectivity come more and more into play the S.I.s will begin to teach us, helping us to be better at what we do. It is impossible to predict all the consequences to human existence that will result from the advent of S.I.s. The Third Singularity is upon us.