The Brain in the Crystal
Another approach contemplates growing a computer as a crystal directly in three dimensions, with computing elements the size of large molecules embedded within the crystalline lattice, another way of harnessing the third dimension.
Stanford Professor Lambertus Hesselink has described a system in which data is stored in a crystal as a hologram—an optical interference pattern. This three‐dimensional storage method requires only a million atoms for
each bit and thus could achieve a trillion bits of storage for each cubic centimeter. Other projects hope to harness the regular molecular structure of crystals as actual computing elements.
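These figures are easy to sanity-check. The sketch below uses an assumed atom density for a generic solid crystal (roughly 5 x 10^22 atoms per cubic centimeter, a ballpark figure of my own, not one from Hesselink's work):

```python
# Back-of-envelope check of the holographic-storage figures quoted above.
ATOMS_PER_CM3 = 5e22   # assumed atom density of a solid crystal (illustrative)
ATOMS_PER_BIT = 1e6    # figure quoted in the text
BITS_PER_CM3 = 1e12    # one trillion bits per cubic centimeter, as quoted

atoms_used = ATOMS_PER_BIT * BITS_PER_CM3   # atoms devoted to storage: 1e18
fraction = atoms_used / ATOMS_PER_CM3       # share of the crystal's atoms

print(f"atoms devoted to storage: {atoms_used:.0e}")
print(f"fraction of the crystal's atoms: {fraction:.1e}")
```

At a million atoms per bit, a trillion bits ties up only about one atom in every fifty thousand, so the quoted density leaves enormous headroom.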
The Nanotube: A Variation of Buckyballs
Three professors—Richard Smalley and Robert Curl of Rice University, and Harold Kroto of the University of Sussex—shared the 1996 Nobel Prize in Chemistry for their 1985 discovery of soccer‐ball‐shaped molecules formed of
a large number of carbon atoms. Organized in hexagonal and pentagonal patterns like R. Buckminster Fullerʹs building designs, they were dubbed ʺbuckyballs.ʺ These unusual molecules, which form naturally in the hot fumes of
a furnace, are extremely strong—a hundred times stronger than steel—a property they share with Fullerʹs architectural innovations. [12]
More recently, Dr. Sumio Iijima of Nippon Electric Company showed that in addition to the spherical buckyballs,
the vapor from carbon arc lamps also contained elongated carbon molecules that looked like long tubes. [13] Called
nanotubes because of their extremely small size—fifty thousand of them side by side would equal the thickness of one human hair—they are formed of the same hexagonal patterns of carbon atoms as buckyballs and share the buckyballʹs unusual strength.
What is most remarkable about the nanotube is that it can perform the electronic functions of silicon‐based components. If a nanotube is straight, it conducts electricity as well as or better than a metal conductor. If a slight helical twist is introduced, the nanotube begins to act like a transistor. The full range of electronic devices can be built using nanotubes.
Since a nanotube is essentially a rolled‐up sheet of graphite only one atom thick, it is vastly smaller than the silicon transistors on an integrated chip. Although extremely small, nanotubes are far more durable than silicon devices. Moreover, they handle heat much better than silicon and thus can be assembled into three‐dimensional arrays more easily than
silicon transistors. Dr. Alex Zettl, a physics professor at the University of California at Berkeley, envisions three-dimensional arrays of nanotube‐based computing elements similar to—but far denser and faster than—the human brain.
QUANTUM COMPUTING: THE UNIVERSE IN A CUP
Quantum particles are the dreams that stuff is made of.
—David Moser
So far we have been talking about mere digital computing. There is actually a more powerful approach called quantum computing. It promises the ability to solve problems that even massively parallel digital computers cannot
solve. Quantum computers harness a paradoxical result of quantum mechanics. Actually, I am being redundant—all
results of quantum mechanics are paradoxical.
Note that the Law of Accelerating Returns and other projections in this book do not rely on quantum computing.
The projections in this book are based on readily measurable trends and do not rely on discontinuities in technological progress, of the kind that nonetheless occurred in the twentieth century. There will inevitably be technological discontinuities in the twenty‐first century, and quantum computing would certainly qualify.
What is quantum computing? Digital computing is based on ʺbitsʺ of information which are either off or on—zero
or one. Bits are organized into larger structures such as numbers, letters, and words, which in turn can represent virtually any form of information: text, sounds, pictures, moving images. Quantum computing, on the other hand, is
based on qu‐bits (pronounced cue‐bits), which essentially are zero and one at the same time. The qu‐bit is based on the fundamental ambiguity inherent in quantum mechanics. The position, momentum, or other state of a fundamental particle remains ʺambiguousʺ until a process of disambiguation causes that particle to ʺdecideʺ where it is, where it has been, and what properties it has. For example, consider a stream of photons that strike a sheet of glass at a 45-degree angle. As each photon strikes the glass, it has a choice of traveling either straight through the glass or reflecting off the glass. Each photon will actually take both paths (actually more than this, see below) until a process of conscious observation forces each particle to decide which path it took. This behavior has been extensively confirmed in numerous contemporary experiments.
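To make that ambiguity concrete, here is a toy sketch, entirely my own illustration rather than a real quantum simulator, of a single qu-bit that remains an equal mix of 0 and 1 until it is measured:

```python
import random

# Toy model of one qu-bit: a pair of amplitudes for the states 0 and 1.
class QuBit:
    def __init__(self):
        # Equal superposition: like the photon at the glass, the qu-bit
        # is "both" 0 and 1 until observed (probability 1/2 for each).
        self.amp0 = self.amp1 = 2 ** -0.5

    def measure(self):
        # Observation forces a decision; the superposition collapses.
        p0 = abs(self.amp0) ** 2
        outcome = 0 if random.random() < p0 else 1
        self.amp0, self.amp1 = (1.0, 0.0) if outcome == 0 else (0.0, 1.0)
        return outcome

q = QuBit()
first = q.measure()
# Once decohered, every later measurement repeats the same answer.
assert all(q.measure() == first for _ in range(10))
```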
In a quantum computer, the qu‐bits would be represented by a property—spin is a popular choice—of individual electrons. If set up in the proper way, the electrons will not have decided the direction of their spin (up or down) and thus will be in both states at the same time. The process of conscious observation of the electronsʹ spin states—or any subsequent phenomena dependent on a determination of these states—causes the ambiguity to be
resolved. This process of disambiguation is called quantum decoherence. If it werenʹt for quantum decoherence, the
world we live in would be a baffling place indeed.
The key to the quantum computer is that we would present it with a problem, along with a way to test the answer.
We would set up the quantum decoherence of the qu‐bits in such a way that only an answer that passes the test survives the decoherence. The failing answers essentially cancel each other out. As with a number of other approaches (for example, recursive and genetic algorithms), one of the keys to quantum computing is, therefore, a careful statement of the problem, including a precise way to test possible answers.
The series of qu‐bits represents simultaneously every possible solution to the problem. A single qu‐bit represents
two possible solutions. Two linked qu‐bits represent four possible answers. A quantum computer with 1,000 qu‐bits
represents 2^1,000 (approximately equal to a decimal number consisting of a 1 followed by 301 zeroes) possible solutions simultaneously. The statement of the problem—expressed as a test to be applied to potential answers—is presented to the string of qu‐bits so that the qu‐bits decohere (that is, each qu‐bit changes from its ambiguous 0‐1 state to an actual 0 or a 1), leaving a series of 0ʹs and 1ʹs that pass the test. Essentially all 2^1,000 possible solutions have been tried simultaneously, leaving only the correct solution.
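The arithmetic of linked qu-bits is easy to verify directly:

```python
# One qu-bit spans two candidate solutions, two linked qu-bits span four,
# and in general n linked qu-bits span 2**n candidates.
assert 2 ** 1 == 2
assert 2 ** 2 == 4

# 2 to the 1,000th power: a number with 302 decimal digits,
# i.e. on the order of a 1 followed by 301 zeroes.
candidates = 2 ** 1000
print(len(str(candidates)))   # 302
```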
This process of reading out the answer through quantum decoherence is obviously the key to quantum
computing. It is also the most difficult aspect to grasp. Consider the following analogy. Beginning physics students learn that if light strikes a mirror at an angle, it will bounce off the mirror in the opposite direction and at the same angle to the surface. But according to quantum theory, that is not what is happening. Each photon actually bounces
off every possible point on the mirror, essentially trying out every possible path. The vast majority of these paths cancel each other out, leaving only the path that classical physics predicts. Think of the mirror as representing a problem to be solved. Only the correct solution—light bounced off at an angle equal to the incoming angle—survives
all of the quantum cancellations. A quantum computer works the same way. The test of the correctness of the answer
to the problem is set up in such a way that the vast majority of the possible answers—those that do not pass the test—
cancel each other out, leaving only the sequence of bits that does pass the test. An ordinary mirror, therefore, can be thought of as a special example of a quantum computer, albeit one that solves a rather simple problem.
As a more useful example, encryption codes are based on the difficulty of factoring large numbers (factoring means determining
which smaller numbers, when multiplied together, result in the larger number). Factoring a number with several hundred bits is virtually impossible on any digital computer even if we had billions of years to wait for the answer. A quantum computer can try every possible combination of factors simultaneously and break the code in less than a billionth of a second (communicating the answer to human observers does take a bit longer). The test applied by the
quantum computer during its key disambiguation stage is very simple: just multiply one factor by the other and if the result equals the encryption code, then we have solved the problem.
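The verification step really is the easy half of the problem, and it can be sketched in a few lines. The classical brute-force search below (an illustrative stand-in with made-up function names) runs the same multiply-and-compare test one candidate at a time; the quantum computer described above would, in effect, apply that test to every candidate pair at once:

```python
def passes_test(p, q, n):
    """The cheap verification step: do the candidate factors multiply to n?"""
    return p * q == n

def factor_classically(n):
    # A digital computer must grind through candidates one at a time.
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0 and passes_test(p, n // p, n):
            return p, n // p
    return None   # n is prime (or 1)

print(factor_classically(15))     # (3, 5)
print(factor_classically(2021))   # (43, 47)
```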
It has been said that quantum computing is to digital computing as a hydrogen bomb is to a firecracker. This is a
remarkable statement when we consider that digital computing is quite revolutionary in its own right. The analogy is based on the following observation. Consider (at least in theory) a Universe‐sized (nonquantum) computer in which
every neutron, electron, and proton in the Universe is turned into a computer, and each one (that is, every particle in the Universe) is able to compute trillions of calculations per second. Now imagine certain problems that this Universe‐sized supercomputer would be unable to solve even if we ran that computer until either the next big bang
or until all the stars in the Universe died—about ten to thirty billion years. There are many examples of such massively intractable problems; for example, cracking encryption codes that use a thousand bits, or solving the traveling‐salesman problem with a thousand cities. While very massive digital computing (including our theoretical
Universe‐sized computer) is unable to solve this class of problems, a quantum computer of microscopic size could solve such problems in less than a billionth of a second.
Are quantum computers feasible? Recent advances, both theoretical and practical, suggest that the answer is yes.
Although a practical quantum computer has not been built, the means for harnessing the requisite decoherence has
been demonstrated. Isaac Chuang of Los Alamos National Laboratory and MITʹs Neil Gershenfeld have actually built
a quantum computer using the carbon atoms in the alanine molecule. Their quantum computer was only able to add
one and one, but thatʹs a start. We have, of course, been relying on practical applications of other quantum effects, such as the electron tunneling in transistors, for decades. [14]
A Quantum Computer in a Cup of Coffee
One of the difficulties in designing a practical quantum computer is that it needs to be extremely small, basically atom or molecule sized, to harness the delicate quantum effects. But it is very difficult to keep individual atoms and molecules from moving around due to thermal effects. Moreover, individual molecules are generally too unstable to
build a reliable machine. For these problems, Chuang and Gershenfeld have come up with a theoretical breakthrough.
Their solution is to take a cup of liquid and consider every molecule to be a quantum computer. Now instead of a single unstable molecule‐sized quantum computer, they have a cup with about a hundred billion trillion quantum computers. The point here is not more massive parallelism, but rather massive redundancy. In this way, the inevitably erratic behavior of some of the molecules has no effect on the statistical behavior of all the molecules in the liquid.
This approach of using the statistical behavior of trillions of molecules to overcome the lack of reliability of a single molecule is similar to Professor Adlemanʹs use of trillions of DNA strands to overcome the comparable issue in DNA
computing.
This approach to quantum computing also solves the problem of reading out the answer bit by bit without causing those qu‐bits that have not yet been read to decohere prematurely. Chuang and Gershenfeld subject their liquid computer to radio‐wave pulses, which cause the molecules to respond with signals indicating the spin state of each electron. Each pulse does cause some unwanted decoherence, but, again, this decoherence does not affect the statistical behavior of trillions of molecules. In this way, the quantum effects become stable and reliable.
Chuang and Gershenfeld are currently building a quantum computer that can factor small numbers. Although this early model will not compete with conventional digital computers, it will be an important demonstration of the
feasibility of quantum computing. Apparently high on their list for a suitable quantum liquid is freshly brewed Java coffee, which, Gershenfeld notes, has ʺunusually even heating characteristics.ʺ
Quantum Computing with the Code of Life
Quantum computing starts to overtake digital computing when we can link at least 40 qu‐bits. A 40‐qu‐bit quantum
computer would be evaluating a trillion possible solutions simultaneously, which would match the fastest supercomputers. At 60 qu‐bits, we would be doing a million trillion simultaneous trials. When we get to hundreds of qu-bits, the capabilities of a quantum computer would vastly overpower any conceivable digital computer.
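These crossover figures check out:

```python
# 40 linked qu-bits: about a trillion simultaneous trials.
print(f"{2 ** 40:.1e}")   # on the order of 1e12

# 60 linked qu-bits: about a million trillion simultaneous trials.
print(f"{2 ** 60:.1e}")   # on the order of 1e18

assert 10 ** 12 < 2 ** 40 < 2 * 10 ** 12   # "a trillion"
assert 10 ** 18 < 2 ** 60 < 2 * 10 ** 18   # "a million trillion"
```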
So hereʹs my idea. The power of a quantum computer depends on the number of qu‐bits that we can link together.
We need to find a large molecule that is specifically designed to hold large amounts of information. Evolution has designed just such a molecule: DNA. We can readily create any sized DNA molecule we wish from a few dozen nucleotide rungs to thousands. So once again we combine two elegant ideas—in this case the liquid‐DNA computer
and the liquid‐quantum computer—to come up with a solution greater than the sum of its parts. By putting trillions
of DNA molecules in a cup, there is the potential to build a highly redundant—and therefore reliable—quantum computer with as many qu‐bits as we care to harness. Remember you read it here first.
Suppose No One Ever Looks at the Answer
Consider that the quantum ambiguity a quantum computer relies on is decohered, that is, disambiguated, when a conscious entity observes the ambiguous phenomenon. The conscious entities in this case are us, the users of the quantum computer. But in using a quantum computer, we are not directly looking at the nuclear spin states of individual electrons. The spin states are measured by an apparatus that in turn answers some question that the quantum computer has been asked to solve. These measurements are then processed by other electronic gadgets, manipulated further by conventional computing equipment, and finally displayed or printed on a piece of paper.
Suppose no human or other conscious entity ever looks at the printout. In this situation, there has been no conscious observation, and therefore no decoherence. As I discussed earlier, the physical world only bothers to manifest itself in an unambiguous state when one of us conscious entities decides to interact with it. So the page with the answer is ambiguous, undetermined—until and unless a conscious entity looks at it. Then instantly all the ambiguity is retroactively resolved, and the answer is there on the page. The implication is that the answer is not there until we look at it. But donʹt try to sneak up on the page fast enough to see the answerless page; the quantum effects are instantaneous.
What Is It Good For?
A key requirement for quantum computing is a way to test the answer. Such a test does not always exist. However, a
quantum computer would be a great mathematician. It could simultaneously consider every possible combination of
axioms and previously solved theorems (within a quantum computerʹs qu‐bit capacity) to prove or disprove virtually
any provable or disprovable conjecture. Although a mathematical proof is often extremely difficult to come up with,
confirming its validity is usually straightforward, so the quantum approach is well suited.
Quantum computing is not directly applicable, however, to problems such as playing a board game. Whereas the
ʺperfectʺ chess move for a given board is a good example of a finite but intractable computing problem, there is no
easy way to test the answer. If a person or process were to present an answer, there is no way to test its validity other than to build the same move‐countermove tree that generated the answer in the first place. Even for mere ʺgoodʺ
moves, a quantum computer would have no obvious advantage over a digital computer.
How about creating art? Here a quantum computer would have considerable value. Creating a work of art involves solving a series, possibly an extensive series, of problems. A quantum computer could consider every possible combination of elements—words, notes, strokes—for each decision. We still need a way to test each answer
to the sequence of aesthetic problems, but the quantum computer would be ideal for instantly searching through a Universe of possibilities.
Encryption Destroyed and Resurrected
As mentioned above, the classic problem that a quantum computer is ideally suited for is cracking encryption codes,
which relies on factoring large numbers. The strength of an encryption code is measured by the number of bits that
needs to be factored. For example, it is illegal in the United States to export encryption technology using more than 40
bits (56 bits if you give a key to law‐enforcement authorities). A 40‐bit encryption method is not very secure. In September 1997, Ian Goldberg, a University of California at Berkeley graduate student, was able to crack a 40‐bit code in three and a half hours using a network of 250 small computers. [15] A 56‐bit code is a bit better (16 bits better, actually). Ten months later, John Gilmore, a computer privacy activist, and Paul Kocher, an encryption expert, were
able to break the 56‐bit code in 56 hours using a specially designed computer that cost them $250,000 to build. But a quantum computer can easily factor any sized number (within its capacity). Quantum computing technology would
essentially destroy digital encryption.
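The gap between the two episodes above is simple doubling arithmetic: every added key bit doubles the search space. The extrapolation below assumes, purely for illustration, the same search rate as the 1997 attack (the actual hardware in the two attacks differed):

```python
# A 56-bit key has 2**16 times as many candidates as a 40-bit key.
ratio = 2 ** 56 / 2 ** 40
print(ratio)   # 65536.0

# Goldberg's attack searched a 40-bit keyspace in 3.5 hours.
# At that same rate (an assumption), a 56-bit search would take:
hours_40 = 3.5
hours_56 = hours_40 * ratio
print(f"{hours_56 / 24 / 365:.0f} years")   # about 26 years
```

That the actual 56-bit break took only 56 hours shows how much faster purpose-built hardware was, yet even so, each further doubling quickly puts brute force out of reach for digital machines; only the quantum approach sidesteps the exponential.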
But as technology takes away, it also gives. A related quantum effect can provide a new method of encryption that
can never be broken. Again, keep in mind that, in view of the Law of Accelerating Returns, ʺneverʺ is not as long as it used to be.
This effect is called quantum entanglement. Einstein, who was not a fan of quantum mechanics, had a different name for it, calling it ʺspooky action at a distance.ʺ The phenomenon was demonstrated by Dr. Nicolas Gisin of the University of Geneva in an experiment across the city of Geneva. [16] Dr. Gisin sent twin photons in opposite directions through optical fibers. Once the photons were about seven miles apart, they each encountered a
glass plate from which they could either bounce off or pass through. Thus, each was forced to choose between two equally probable pathways. Since there was no possible communication link between the two
photons, classical physics would predict that their decisions would be independent. But they both made the same decision. And they did so at the same instant in time, so even if there were an unknown communication path between
them, there was not enough time for a message to travel from one photon to the other at the speed of light. The two
particles were quantum entangled and communicated instantly with each other regardless of their separation. The effect was reliably repeated over many such photon pairs.
The apparent communication between the two photons takes place at a speed far greater than the speed of light.
In theory, the speed is infinite in that the decoherence of the two photonsʹ travel decisions, according to quantum theory, takes place at exactly the same instant. Dr. Gisinʹs experiment was sufficiently sensitive to demonstrate that the communication was at least ten thousand times faster than the speed of light.
So, does this violate Einsteinʹs Special Theory of Relativity, which postulates the speed of light as the fastest speed at which we can transmit information? The answer is no—there is no information being communicated by the entangled photons. The decision of the photons is random—a profound quantum randomness—and randomness is precisely not information. Both the sender and the receiver of the message simultaneously access the identical random decisions of the entangled photons, which are used to encode and decode, respectively, the message. So we
are communicating randomness—not information—at speeds far greater than the speed of light. The only way we could convert the random decisions of the photons into information is if we edited the random sequence of photon
decisions. But editing this random sequence would require observing the photon decisions, which in turn would cause quantum decoherence, which would destroy the quantum entanglement. So Einsteinʹs theory is preserved.
Even though we cannot instantly transmit information using quantum entanglement, transmitting randomness is
still very useful. It allows us to resurrect the process of encryption that quantum computing would destroy. If the sender and receiver of a message are at the two ends of an optical fiber, they can use the precisely matched random
decisions of a stream of quantum entangled photons to respectively encode and decode a message. Since the encryption is fundamentally random and nonrepeating, it cannot be broken. Eavesdropping would also be
impossible, as this would cause quantum decoherence that could be detected at both ends. So privacy is preserved.
Note that in quantum encryption, we are transmitting the code instantly. The actual message will arrive much more slowly—at only the speed of light.
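In classical terms, the scheme just described is a one-time pad whose pad happens to be delivered by entangled photons. The sketch below shows the encode and decode steps with an ordinary locally generated random pad standing in for the photon stream (the quantum effect changes only how the pad is shared, not how it is used):

```python
import secrets

def xor(data: bytes, pad: bytes) -> bytes:
    # Encoding and decoding are the same operation: bitwise XOR with the pad.
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # stands in for the photon stream

ciphertext = xor(message, pad)   # sender encodes with the shared randomness
recovered = xor(ciphertext, pad)  # receiver decodes with the identical pad
assert recovered == message

# Without the pad, the ciphertext is pure noise: every plaintext of the
# same length is equally consistent with it, so there is nothing to break.
```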
Quantum Consciousness Revisited
The prospect of computers competing with the full range of human capabilities generates strong, often adverse feelings, as well as no shortage of arguments that such a specter is theoretically impossible. One of the more interesting such arguments comes from an Oxford mathematician and physicist, Roger Penrose.
In his 1989 best‐seller, The Emperorʹs New Mind, Penrose puts forth two conjectures. [17] The first has to do with an unsettling theorem proved by the Austrian‐born mathematician Kurt Gödel. Gödelʹs famous ʺincompleteness theorem,ʺ which
has been called the most important theorem in mathematics, states that in a mathematical system powerful enough to
generate the natural numbers, there inevitably exist propositions that can be neither proved nor disproved. This was another one of those twentieth‐century insights that upset the orderliness of nineteenth‐century thinking.
A corollary of Gödelʹs theorem is that there are mathematical propositions that cannot be decided by an algorithm.
In essence, these Gödelian impossible problems require an infinite number of steps to be solved. So Penroseʹs first conjecture is that machines cannot do what humans can do because machines can only follow an algorithm. An algorithm cannot solve a Gödelian unsolvable problem. But humans can. Therefore, humans are better.
Penrose goes on to state that humans can solve unsolvable problems because our brains do quantum computing.
Subsequently responding to criticism that neurons are too big to exhibit quantum effects, Penrose cited small structures in the neurons called microtubules that may be capable of quantum computation.
However, Penroseʹs first conjecture—that humans are inherently superior to machines—is unconvincing for at least three reasons:
1. It is true that machines canʹt solve Gödelian impossible problems. But humans canʹt solve them either. Humans
can only estimate them. Computers can make estimates as well, and in recent years are doing a better job of this
than humans.
2. In any event, quantum computing does not permit solving Gödelian impossible problems either. Solving a Gödelian impossible problem requires an algorithm with an infinite number of steps. Quantum computing can
turn an intractable problem that could not be solved on a conventional computer in trillions of years into an instantaneous computation. But it still falls short of infinite computing.
3. Even if (1) and (2) above were wrong, that is, if humans could solve Gödelian impossible problems and do so
because of their quantum‐computing ability, that still does not restrict quantum computing from machines. The
opposite is the case. If the human brain exhibits quantum computing, this would only confirm that quantum computing is possible, that matter following natural laws can perform quantum computing. Any mechanisms in
human neurons capable of quantum computing, such as the microtubules, would be replicable in a machine.
Machines use quantum effects—tunneling—in trillions of devices (that is, transistors) today. [18] There is nothing to suggest that the human brain has exclusive access to quantum computing.
Penroseʹs second conjecture is more difficult to resolve. It is that an entity exhibiting quantum computing is conscious. He is saying that it is a humanʹs quantum computing that accounts for her consciousness. Thus quantum
computing—quantum decoherence—yields consciousness.
Now we do know that there is a link between consciousness and quantum decoherence. That is, consciousness observing a quantum uncertainty causes quantum decoherence. Penrose, however, is asserting a link in the opposite
direction. This does not follow logically. Of course quantum mechanics is not logical in the usual sense—it follows quantum logic (some observers use the word ʺstrangeʺ to describe quantum logic). But even applying quantum logic,
Penroseʹs second conjecture does not appear to follow. On the other hand, I am unable to reject it out of hand because there is a strong nexus between consciousness and quantum decoherence in that the former causes the latter. I have
thought about this issue for three years, and have been unable to accept it or reject it. Perhaps before writing my next book I will have an opinion on Penroseʹs second conjecture.
IS THE BRAIN BIG ENOUGH?
Is our conception of human neuron functioning and our estimates of the number of neurons and connections
in the human brain consistent with what we know about the brain's capabilities? Perhaps human neurons are
far more capable than we think they are. If so, building a machine with human-level capabilities might take
longer than expected.
We find that estimates of the number of concepts—"chunks" of knowledge—that a human expert in a
particular field has mastered are remarkably consistent: about 50,000 to 100,000. This approximate range
appears to be valid over a wide range of human endeavors: the number of board positions mastered by a
chess grand master, the concepts mastered by an expert in a technical field, such as a physician, the
vocabulary of a writer (Shakespeare used 29,000 words;[19] this book uses a lot fewer).
This type of professional knowledge is, of course, only a small subset of the knowledge we need to function
as human beings. Basic knowledge of the world, including so-called common sense, is more extensive. We
also have an ability to recognize patterns: spoken language, written language, objects, faces. And we have
our skills: walking, talking, catching balls. I believe that a reasonably conservative estimate of the general
knowledge of a typical human is a thousand times greater than the knowledge of an expert in her professional
field. This provides us a rough estimate of 100 million chunks—bits of understanding, concepts, patterns,
specific skills—per human. As we will see below, even if this estimate is low (by a factor of up to a thousand),
the brain is still big enough.
The number of neurons in the human brain is estimated at approximately 100 billion, with an average of
1,000 connections per neuron, for a total of 100 trillion connections. With 100 trillion connections and 100
million chunks of knowledge (including patterns and skills), we get an estimate of about a million connections per chunk of knowledge.
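The sidebar's arithmetic, laid out step by step:

```python
# The brain-capacity estimate quoted above.
neurons = 100e9                # ~100 billion neurons
connections_per_neuron = 1000  # ~1,000 connections per neuron
chunks = 100e6                 # ~100 million chunks of knowledge

total_connections = neurons * connections_per_neuron   # 100 trillion
connections_per_chunk = total_connections / chunks     # ~1 million per chunk

print(f"{total_connections:.0e} connections, {connections_per_chunk:.0e} per chunk")
```

Even if the chunk count were a thousand times larger, this still leaves about a thousand connections per chunk, which is the point made below.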
Our computer simulations of neural nets use a variety of different types of neuron models, all of which are
relatively simple. Efforts to provide detailed electronic models of real mammalian neurons appear to show that
while animal neurons are more complicated than typical computer models, the difference in complexity is
modest. Even using our simpler computer versions of neurons, we find that we can model a chunk of
knowledge—a face, a character shape, a phoneme, a word sense—using as little as a thousand connections
per chunk. Thus our rough estimate of a million neural connections in the human brain per human knowledge
chunk appears reasonable.
Indeed it appears ample. Thus we could make my estimate (of the number of knowledge chunks) a
thousand times greater, and the calculation still works. It is likely, however, that the brain's encoding of
knowledge is less efficient than the methods we use in our machines. This apparent inefficiency is consistent
with our understanding that the human brain is conservatively designed. The brain relies on a large degree of
redundancy and a relatively low density of information storage to gain reliability and to continue to function
effectively despite a high rate of neuron loss as we age.
My conclusion is that it does not appear that we need to contemplate a model of information processing of
individual neurons that is significantly more complex than we currently understand in order to explain human
capability. The brain is big enough.
REVERSE ENGINEERING A PROVEN DESIGN: THE HUMAN BRAIN
For many people the mind is the last refuge of mystery against the encroaching spread of science, and they donʹt like the idea of science engulfing the last bit of terra incognita.
—Herb Simon as quoted by Daniel Dennett
Cannot we let people be themselves, and enjoy life in their own way? You are trying to make another you. Oneʹs enough.
—Ralph Waldo Emerson
For the wise men of old . . . the solution has been knowledge and self‐discipline, . . . and in the practice of this technique, are ready to do things hitherto regarded as disgusting and impious—such as digging up and mutilating the dead.
—C. S. Lewis
Intelligence is: (a) the most complex phenomenon in the Universe; or (b) a profoundly simple process.
The answer, of course, is (c) both of the above. Itʹs another one of those great dualities that make life interesting.
Weʹve already talked about the simplicity of intelligence: simple paradigms and the simple process of computation.
Letʹs talk about the complexity.
We come back to knowledge, which starts out with simple seeds but ultimately becomes elaborate as the knowledge‐gathering process interacts with the chaotic real world. Indeed, that is how intelligence originated. It was the result of the evolutionary process we call natural selection, itself a simple paradigm, that drew its complexity from the pandemonium of its environment. We see the same phenomenon when we harness evolution in the computer. We start with simple formulas, add the simple process of evolutionary iteration and combine this with the
simplicity of massive computation. The result is often complex, capable, and intelligent algorithms.
But we donʹt need to simulate the entire evolution of the human brain in order to tap the intricate secrets it contains. Just as a technology company will take apart and ʺreverse engineerʺ (analyze to understand the methods of) a rivalʹs products, we can do the same with the human brain. It is, after all, the best example we can get our hands on of an intelligent process. We can tap the architecture, organization, and innate knowledge of the human brain in order to greatly accelerate our understanding of how to design intelligence in a machine. By probing the brainʹs circuits, we can copy and imitate a proven design, one that took its original designer several billion years to develop. (And itʹs not even copyrighted.)
As we approach the computational ability to simulate the human brain—weʹre not there today but we will begin
to be in about a decadeʹs time—such an effort will be intensely pursued. Indeed, this endeavor has already begun.
For example, Synapticsʹ vision chip is fundamentally a copy of the neural organization (implemented in silicon, of course) of not only the human retina but also the early stages of mammalian visual processing. It captures the essence of the algorithm underlying early mammalian vision, called center‐surround filtering. It is not a particularly complicated chip, yet it realistically reproduces the initial stages of human vision.
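Center‐surround filtering is commonly approximated in software as a difference of Gaussians: each point in the image is compared with a blurred version of its neighborhood, so uniform regions produce no response while edges and sudden changes do. A toy sketch of the idea (the kernel widths here are illustrative, not the parameters of the Synaptics chip):

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel, normalized to sum to 1."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur(image, sigma):
    """Separable Gaussian blur with reflected borders."""
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    out = np.pad(image, pad, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, out)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)

def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
    """Excitatory center minus inhibitory surround (difference of Gaussians)."""
    return blur(image, sigma_center) - blur(image, sigma_surround)

flat = np.ones((16, 16))
print(np.abs(center_surround(flat)).max())   # ~0: no response to a uniform field
```

A uniform field produces essentially zero output, while a brightness edge produces a strong response along its border, which is exactly why this early stage of vision is so good at finding outlines.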
There is a popular conceit among observers, both informed and uninformed, that such a reverse engineering project is infeasible. Hofstadter worries that ʺour brains may be too weak to understand themselves.ʺ [20] But that is not what we are finding. As we probe the brainʹs circuits, we find that the massively parallel algorithms are far from incomprehensible. Nor is there anything like an infinite number of them. There are hundreds of specialized regions in the brain, and it does have a rather ornate architecture, the consequence of its long history. The entire puzzle is not beyond our comprehension. It will certainly not be beyond the comprehension of twenty‐first‐century machines.
The knowledge is right there in front of us, or rather inside of us. It is not impossible to get at. Letʹs start with the most straightforward scenario, one that is essentially feasible today (at least to initiate).
We start by freezing a recently deceased brain.
Now, before I get too many indignant reactions, let me wrap myself in Leonardo da Vinciʹs cloak. Leonardo also
received a disturbed reaction from his contemporaries. Here was a guy who stole dead bodies from the morgue, carted them back to his dwelling, and then took them apart. This was before dissecting dead bodies was in style. He
did this in the name of knowledge, not a highly valued pursuit at the time. He wanted to learn how the human body
works, but his contemporaries found his activities bizarre and disrespectful. Today we have a different view, that expanding our knowledge of this wondrous machine is the most respectful homage we can pay. We cut up dead bodies all the time to learn more about how living bodies work, and to teach others what we have already learned.
Thereʹs no difference here in what I am suggesting. Except for one thing: I am talking about the brain, not the body. This strikes closer to home. We identify more with our brains than our bodies. Brain surgery is regarded as more invasive than toe surgery. Yet the knowledge to be gained from probing the brain is too valuable to ignore. So weʹll get over whatever squeamishness remains.
As I was saying, we start by freezing a dead brain. This is not a new concept—Dr. E. Fuller Torrey, a former supervisor at the National Institute of Mental Health and now head of the mental health branch of a private research foundation, has 44 freezers filled with 226 frozen brains. [21] Torrey and his associates hope to gain insight into the causes of schizophrenia, so all of his brains are from deceased schizophrenic patients, which is probably not ideal for our purposes.
We examine one brain layer—one very thin slice—at a time. With suitably sensitive two‐dimensional scanning equipment we should be able to see every neuron and every connection represented in each synapse‐thin layer. When
a layer has been examined and the requisite data stored, it can be scraped away to reveal the next slice. This information can be stored and assembled into a giant three‐dimensional model of the brainʹs wiring and neural topology.
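The slice‐by‐slice procedure maps naturally onto a simple data structure: each scanned layer is a two‐dimensional array, and the stack of layers forms a three‐dimensional volume. A deliberately tiny sketch (the array sizes are illustrative; a real scan would involve vastly more data):

```python
import numpy as np

def assemble_volume(slices):
    """Stack successive 2-D layer scans into one 3-D volume.

    `slices` is a list of equal-sized 2-D arrays, one per scraped-away
    layer, in the order the layers were scanned.
    """
    return np.stack(slices, axis=0)   # shape: (layers, height, width)

# Three toy 4x4 "scans"; each value stands in for scanned neural tissue.
layers = [np.random.rand(4, 4) for _ in range(3)]
brain_volume = assemble_volume(layers)
print(brain_volume.shape)   # (3, 4, 4)
```

The payoff of the three‐dimensional representation is that connections running between layers, not just within them, become traceable.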
It would be better if the frozen brains were not already dead long before freezing. A dead brain will reveal a lot
about living brains, but it is clearly not the ideal laboratory. Some of that deadness is bound to be reflected in a deterioration of its neural structure. We probably donʹt want to base our designs for intelligent machines on dead brains. We are likely to be able to take advantage of people who, facing imminent death, will permit their brains to be destructively scanned just slightly before rather than slightly after their brains would have stopped functioning on their own. Recently, a condemned killer allowed his brain and body to be scanned, and you can access all 10 billion bytes of him on the Internet at the Center for Human Simulationʹs ʺVisible Human Projectʺ web site. [22] Thereʹs an even higher‐resolution, 25‐billion‐byte female companion on the site as well. Although the scan of this couple is not of high enough resolution for the scenario envisioned here, itʹs an example of donating oneʹs brain for reverse engineering. Of course, we may not want to base our templates of machine intelligence on the brain of a convicted killer, anyway.
Easier to talk about are the emerging noninvasive means of scanning our brains. I began with the more invasive
scenario above because it is technically much easier. We have in fact the means to conduct a destructive scan today
(although not yet the bandwidth to scan the entire brain in a reasonable amount of time). In terms of noninvasive scanning, high‐speed, high‐resolution magnetic resonance imaging (MRI) scanners are already able to view individual somas (neuron cell bodies) without disturbing the living tissue being scanned. More powerful MRIs are being developed that will be capable of scanning individual nerve fibers that are only ten microns (millionths of a meter) in diameter. These will be available during the first decade of the twenty‐first century. Eventually we will be able to scan the presynaptic vesicles that are the site of human learning.
We can peer inside someoneʹs brain today with MRI scanners, which are increasing their resolution with each new
generation of this technology. There are a number of technical challenges in accomplishing this, including achieving suitable resolution, bandwidth (that is, speed of transmission), lack of vibration, and safety. For a variety of reasons it is easier to scan the brain of someone recently deceased than of someone still living. (It is easier to get someone deceased to sit still, for one thing.) But noninvasively scanning a living brain will ultimately become feasible as MRI and other scanning technologies continue to improve in resolution and speed.
A new scanning technology called optical imaging, developed by Professor Amiram Grinvald at Israelʹs
Weizmann Institute, is capable of significantly higher resolution than MRI. Like MRI, it is based on the interaction between electrical activity in the neurons and blood circulation in the capillaries feeding the neurons. Grinvaldʹs device is capable of resolving features smaller than fifty microns and can operate in real time, thus enabling scientists to view the firing of individual neurons. Grinvald and researchers at Germanyʹs Max Planck Institute were struck by
the remarkable regularity of the patterns of neural firing when the brain was engaged in processing visual information. [23] One of the researchers, Dr. Mark Hubener, commented that ʺour maps of the working brain are so
orderly they resemble the street map of Manhattan rather than, say, of a medieval European town.ʺ Grinvald, Hubener, and their associates were able to use their brain scanner to distinguish between sets of neurons responsible for perception of depth, shape, and color. As these neurons interact with one another, the resulting pattern of neural firings resembles elaborately linked mosaics. From the scans, it was possible for the researchers to see how the neurons were feeding information to each other. For example, they noted that the depth perception neurons were arranged in parallel columns, providing information to the shape‐detecting neurons that formed more elaborate pinwheel‐like patterns. Currently, the Grinvald scanning technology is only able to image a thin slice of the brain near its surface, but the Weizmann Institute is working on refinements that will extend its three‐dimensional capability.
Grinvaldʹs scanning technology is also being used to boost the resolution of MRI scanning. A recent finding that near-infrared light can pass through the skull is also fueling excitement about the potential of optical imaging as a high-resolution method of brain scanning.
The driving force behind the rapidly improving capability of noninvasive scanning technologies such as MRI is again the Law of Accelerating Returns, because massive computational ability is required to build the high-resolution, three‐dimensional images from the raw magnetic resonance patterns that an MRI scanner produces. The
exponentially increasing computational ability provided by the Law of Accelerating Returns (and for another fifteen
to twenty years, Mooreʹs Law) will enable us to continue to rapidly improve the resolution and speed of these noninvasive scanning technologies.
Mapping the human brain synapse by synapse may seem like a daunting effort, but so did the Human Genome
Project, an effort to map all human genes, when it was launched in 1991. Although the bulk of the human genetic code has still not been decoded, there is confidence at the nine American Genome Sequencing Centers that the task
will be completed, if not by 2005, then at least within a few years of that target date. Recently, a new private venture with funding from Perkin‐Elmer has announced plans to sequence the entire human genome by the year 2001. As I
noted above, the pace of the human genome scan was extremely slow in its early years, and has picked up speed with
improved technology, particularly computer programs that identify the useful genetic information. The researchers are counting on further improvements in their gene‐hunting computer programs to meet their deadline. The same will be true of the human‐brain‐mapping project, as our methods of scanning and recording the 100 trillion neural connections pick up speed from the Law of Accelerating Returns.
What to Do with the Information
There are two scenarios for using the results of detailed brain scans. The most immediate—scanning the brain to understand it—is to scan portions of the brain to ascertain the architecture and implicit algorithms of interneuronal connections in different regions. The exact position of each and every nerve fiber is not as important as the overall pattern. With this information we can design simulated neural nets that operate similarly. This process will be rather like peeling an onion as each layer of human intelligence is revealed.
This is essentially what Synaptics has done in its chip that mimics mammalian neural‐image processing. This is also what Grinvald, Hubener, and their associates plan to do with their visual‐cortex scans. And there are dozens of other contemporary projects designed to scan portions of the brain and apply the resulting insights to the design of intelligent systems.
Within a region, the brainʹs circuitry is highly repetitive, so only a small portion of a region needs to be fully scanned. The computationally relevant activity of a neuron or group of neurons is sufficiently straightforward that we can understand and model these methods by examining them. Once the structure and topology of the neurons, the organization of the interneuronal wiring, and the sequence of neural firing in a region have been observed, recorded, and analyzed, it becomes feasible to reverse engineer that regionʹs parallel algorithms. After the algorithms of a region are understood, they can be refined and extended prior to being implemented in synthetic neural equivalents.
The methods can certainly be greatly sped up given that electronics is already more than a million times faster than neural circuitry.
We can combine the revealed algorithms with the methods for building intelligent machines that we already understand. We can also discard aspects of human computing that may not be useful in a machine. Of course, weʹll
have to be careful that we donʹt throw the baby out with the bathwater.
Downloading Your Mind to Your Personal Computer
A more challenging but also ultimately feasible scenario will be to scan someoneʹs brain to map the locations, interconnections, and contents of the somas, axons, dendrites, presynaptic vesicles, and other neural components. Its entire organization could then be re‐created on a neural computer of sufficient capacity, including the contents of its memory.
This is harder in an obvious way than the scanning‐the‐brain‐to‐understand‐it scenario. In the former, we need only sample each region until we understand the salient algorithms. We can then combine those insights with knowledge we already have. In this—scanning the brain to download it—scenario, we need to capture every little detail.
On the other hand, we donʹt need to understand all of it; we need only to literally copy it, connection by connection, synapse by synapse, neurotransmitter by neurotransmitter. It requires us to understand local brain processes, but not necessarily the brainʹs global organization, at least not in full. It is likely that by the time we can do this, we will understand much of it, anyway.
To do this right, we do need to understand what the salient information‐processing mechanisms are. Much of a neuronʹs elaborate structure exists to support its own structural integrity and life processes and does not directly contribute to its handling of information. We know that the neuron‐computing process is based on hundreds of different neurotransmitters and that different neural mechanisms in different regions allow for different types of computing. The early vision neurons, for example, are good at accentuating sudden color changes to facilitate finding the edges of objects. Hippocampus neurons are likely to have structures for enhancing the long‐term retention of memories. We also know that neurons use a combination of digital and analog computing that needs to be accurately modeled. We need to identify structures capable of quantum computing, if any. All of the key features that affect information processing need to be recognized if we are to copy them accurately.
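The mixture of analog and digital behavior mentioned above is nicely captured by the classic leaky integrate‐and‐fire model: the membrane potential accumulates input continuously (analog), but the output is an all‐or‐nothing spike (digital). A deliberately simplified sketch, with illustrative constants rather than measured neural parameters:

```python
def integrate_and_fire(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron.

    The membrane potential decays by `leak` each time step (analog
    integration); when it crosses `threshold`, the neuron emits an
    all-or-nothing spike and resets (digital output).
    """
    v, spikes = 0.0, []
    for current in inputs:
        v = leak * v + current      # continuous (analog) accumulation
        if v >= threshold:
            spikes.append(1)        # discrete (digital) spike
            v = 0.0                 # reset after firing
        else:
            spikes.append(0)
    return spikes

print(integrate_and_fire([0.5, 0.5, 0.5, 0.0, 1.2]))   # [0, 0, 1, 0, 1]
```

Real neurons are far richer than this, of course; the point is only that both the analog accumulation and the digital spiking must be captured in any faithful copy.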
How well will this work? Of course, like any new technology, it wonʹt be perfect at first, and initial downloads will be somewhat imprecise. Small imperfections wonʹt necessarily be immediately noticeable because people are always changing to some degree. As our understanding of the mechanisms of the brain improves and our ability to
accurately and noninvasively scan these features improves, reinstantiating (reinstalling) a personʹs brain should alter that personʹs mind no more than it changes from day to day.
What Will We Find When We Do This?
We have to consider this question on both the objective and subjective levels. ʺObjectiveʺ means everyone except me, so letʹs start with that. Objectively, when we scan someoneʹs brain and reinstantiate their personal mind file into a suitable computing medium, the newly emergent ʺpersonʺ will appear to other observers to have very much the same
personality, history, and memory as the person originally scanned. Interacting with the newly instantiated person will feel like interacting with the original person. The new person will claim to be that same old person and will have a memory of having been that person, having grown up in Brooklyn, having walked into a scanner here, and woken
up in the machine there. Heʹll say, ʺHey, this technology really works.ʺ
There is the small matter of the ʺnew personʹsʺ body. What kind of body will a reinstantiated personal mind file
have: the original human body, an upgraded body, a synthetic body, a nanoengineered body, a virtual body in a virtual environment? This is an important question, which I will discuss in the next chapter.
Subjectively, the question is more subtle and profound. Is this the same consciousness as the person we just scanned? As we saw in chapter 3, there are strong arguments on both sides. The position that fundamentally we are
our ʺpatternʺ (because our particles are always changing) would argue that this new person is the same because their patterns are essentially identical. The counterargument, however, is the possible continued existence of the person who was originally scanned. If he—Jack—is still around, he will convincingly claim to represent the continuity of his consciousness. He may not be satisfied to let his mental clone carry on in his stead. Weʹll keep bumping into this issue as we explore the twenty‐first century.
But once over the divide, the new person will certainly think that he was the original person. There will be no ambivalence in his mind as to whether or not he committed suicide when he agreed to be transferred into a new computing substrate leaving his old slow carbon‐based neural‐computing machinery behind. To the extent that he wonders at all whether or not he is really the same person that he thinks he is, heʹll be glad that his old self took the plunge, because otherwise he wouldnʹt exist.
Is he—the newly installed mind—conscious? He certainly will claim to be. And being a lot more capable than his
old neural self, heʹll be persuasive and effective in his position. Weʹll believe him. Heʹll get mad if we donʹt.
A Growing Trend
In the second half of the twenty‐first century, there will be a growing trend toward making this leap. Initially, there will be partial porting—replacing aging memory circuits, extending pattern‐recognition and reasoning circuits through neural implants. Ultimately, and well before the twenty‐first century is completed, people will port their entire mind file to the new thinking technology.
There will be nostalgia for our humble carbon‐based roots, but there is nostalgia for vinyl records also. Ultimately, we did copy most of that analog music to the more flexible and capable world of transferable digital information. The leap to port our minds to a more capable computing medium will happen gradually but inexorably nonetheless.
As we port ourselves, we will also vastly extend ourselves. Remember that $1,000 of computing in 2060 will have
the computational capacity of a trillion human brains. So we might as well multiply memory a trillion fold, greatly extend recognition and reasoning abilities, and plug ourselves into the pervasive wireless‐communications network.
While we are at it, we can add all human knowledge—as a readily accessible internal database as well as already processed and learned knowledge using the human type of distributed understanding.
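The trillion‐brain figure is just doubling arithmetic. Assuming, as earlier chapters estimate, that $1,000 of computing matches one human brain around 2020 and that price‐performance then doubles roughly every year, forty doublings take us to 2060:

```python
# Doubling arithmetic behind the trillion-brain estimate (illustrative;
# the 2020 parity assumption comes from the book's earlier projections).
base_year, target_year = 2020, 2060
doublings = target_year - base_year      # one doubling per year
brain_equivalents = 2 ** doublings       # 2^40
print(f"{brain_equivalents:.2e}")        # 1.10e+12, about a trillion
```

Since 2^40 is about 1.1 x 10^12, $1,000 of 2060 computing lands almost exactly on a trillion human‐brain equivalents.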
THE AGE OF NEURAL IMPLANT HAS ALREADY ARRIVED
The patients are wheeled in on stretchers. Suffering from an advanced stage of Parkinson's disease, they are
like statues, their muscles frozen, their bodies and faces totally immobile. Then in a dramatic demonstration
at a French clinic, the doctor in charge throws an electrical switch. The patients suddenly come to life, get up,
walk around, and calmly and expressively describe how they have overcome their debilitating symptoms. This
is the dramatic result of a new neural implant therapy that is approved in Europe, and still awaits FDA
approval in the United States.
The diminished levels of the neurotransmitter dopamine in a Parkinson's patient cause overactivation of two tiny regions in the brain: the ventral posterior nucleus and the subthalamic nucleus. This overactivation in
turn causes the slowness, stiffness, and gait difficulties of the disease, and ultimately results in total paralysis and death. Dr. A. L. Benebid, a French physician at Fourier University in Grenoble, discovered that stimulating
these regions with a permanently implanted electrode paradoxically inhibits these overactive regions and
reverses the symptoms. The electrodes are wired to a small electronic control unit placed in the patient's
chest. Through radio signals, the unit can be programmed, even turned on and off. When switched off, the
symptoms immediately return. The treatment has the promise of controlling the most devastating symptoms
of the disease. [24]
Similar approaches have been used with other brain regions. For example, by implanting an electrode in the ventral lateral thalamus, the tremors associated with cerebral palsy, multiple sclerosis, and other tremor-causing conditions can be suppressed.
"We used to treat the brain like soup, adding chemicals that enhance or suppress certain
neurotransmitters," says Rick Trosch, one of the American physicians helping to perfect "deep brain
stimulation" therapies. "Now we're treating it like circuitry." [25]
Increasingly, we are starting to combat cognitive and sensory afflictions by treating the brain and nervous
system like the complex computational system that it is. Cochlear implants together with electronic speech
processors perform frequency analysis of sound waves, similar to that performed by the inner ear. About 10
percent of the formerly deaf persons who have received this neural replacement device are now able to hear
and understand voices well enough that they can hold conversations using a normal telephone.
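The frequency analysis these speech processors perform can be sketched as splitting sound into energy per frequency band, loosely analogous to the electrode channels stimulating different regions of the cochlea. A minimal illustration (the band edges and channel count are made up for the example; real processors use many carefully designed filter channels):

```python
import numpy as np

def band_energies(signal, sample_rate, bands):
    """Return the energy of `signal` in each (low_hz, high_hz) band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

# A pure 440 Hz tone should concentrate its energy in the band containing 440 Hz.
rate = 8000
t = np.arange(rate) / rate                        # one second of samples
tone = np.sin(2 * np.pi * 440 * t)
energies = band_energies(tone, rate, [(0, 300), (300, 600), (600, 1200)])
print(max(range(3), key=lambda i: energies[i]))   # 1: the 300-600 Hz band
```

In an implant, each band's energy would drive one electrode, so the auditory nerve receives roughly the place-coded frequency map the damaged inner ear can no longer provide.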
Dr. Joseph Rizzo, a neurologist and ophthalmologist at Harvard Medical School, and his colleagues have
developed an experimental retina implant. Rizzo's neural implant is a small solar-powered computer that
communicates to the optic nerve. The user wears special glasses with tiny television cameras that
communicate to the implanted computer by laser signal. [26]
Researchers at Germany's Max Planck Institute for Biochemistry have developed special silicon devices that
can communicate with neurons in both directions. Directly stimulating neurons with an electrical current is not
the ideal approach since it can cause corrosion to the electrodes and create chemical by-products that damage
the cells. In contrast, the Max Planck Institute devices are capable of triggering an adjacent neuron to fire
without a direct electrical link. The Institute scientists demonstrated their invention by controlling the
movements of a living leech from their computer.
Going in the opposite direction—from neurons to electronics—is a device called a "neuron transistor," [27]
which can detect the firing of a neuron. The scientists hope to apply both technologies to the control of
artificial human limbs by connecting spinal nerves to computerized prostheses. The Institute's Peter Fromherz
says, "These two devices join the two worlds of information processing: the silicon world of the computer and
the water world of the brain."
Neurobiologist Ted Berger and his colleagues at Hedco Neurosciences and Engineering have built integrated
circuits that precisely match the properties and information processing of groups of animal neurons. The chips
exactly mimic the digital and analog characteristics of the neurons they have analyzed. They are currently
scaling up their technology to systems with hundreds of neurons. [28] Professor Carver Mead and his
colleagues at the California Institute of Technology have also built digital-analog integrated circuits that match
the processing of mammalian neural circuits comprising hundreds of neurons. [29]
The age of neural implants is under way, albeit at an early stage. Directly enhancing the information
processing of our brain with synthetic circuits is focusing at first on correcting the glaring defects caused by
neurological and sensory diseases and disabilities. Ultimately we will all find the benefits of extending our
abilities through neural implants difficult to resist.
The New Mortality
Actually there wonʹt be mortality by the end of the twenty‐first century. Not in the sense that we have known it. Not if you take advantage of the twenty‐first centuryʹs brain‐porting technology. Up until now, our mortality was tied to the longevity of our hardware. When the hardware crashed, that was it. For many of our forebears, the hardware gradually deteriorated before it disintegrated. Yeats lamented our dependence on a physical self that was ʺbut a paltry thing, a tattered coat upon a stick.ʺ [30] As we cross the divide to instantiate ourselves into our computational technology, our identity will be based on our evolving mind file. We will be software, not hardware.
And evolve it will. Today, our software cannot grow. It is stuck in a brain of a mere 100 trillion connections and
synapses. But when the hardware is trillions of times more capable, there is no reason for our minds to stay so small.
They can and will grow.
As software, our mortality will no longer be dependent on the survival of the computing circuitry. There will still
be hardware and bodies, but the essence of our identity will switch to the permanence of our software. Just as, today, we donʹt throw our files away when we change personal computers (we transfer them, at least the ones we want to keep), so, too, we wonʹt throw our mind file away when we periodically port ourselves to the latest, ever more capable, ʺpersonalʺ computer. Of course, computers wonʹt be the discrete objects they are today. They will be deeply embedded in our bodies, brains, and environment. Our identity and survival will ultimately become independent of
the hardware and its survival.
Our immortality will be a matter of being sufficiently careful to make frequent backups. If weʹre careless about this, weʹll have to load an old backup copy and be doomed to repeat our recent past.
‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
LETʹS JUMP TO THE OTHER SIDE OF THIS COMING CENTURY. YOU SAID THAT BY 2099 A PENNY OF
COMPUTING WILL BE EQUAL TO A BILLION TIMES THE COMPUTING POWER OF ALL HUMAN BRAINS
COMBINED. SOUNDS LIKE HUMAN THINKING IS GOING TO BE PRETTY TRIVIAL.
Unassisted, thatʹs true.
SO HOW WILL WE HUMAN BEINGS FARE IN THE MIDST OF SUCH COMPETITION?
First, we have to recognize that the more powerful technology—the technologically more sophisticated civilization—
always wins. That appears to be what happened when our Homo sapiens sapiens subspecies met the Homo sapiens neanderthalensis and other nonsurviving subspecies of Homo sapiens. That is what happened when the more technologically advanced Europeans encountered the indigenous peoples of the Americas. This is happening today as
the more advanced technology is the key determinant of economic and military power.
SO WEʹRE GOING TO BE SLAVES TO THESE SMART MACHINES?
Slavery is not a fruitful economic system to either side in an age of intellect. We would have no value as slaves to machines. Rather, the relationship is starting out the other way.
ITʹS TRUE THAT MY PERSONAL COMPUTER DOES WHAT I ASK IT TO DO—SOMETIMES! MAYBE I SHOULD
START BEING NICER TO IT.
No, it doesnʹt care how you treat it, not yet. But ultimately our native thinking capacities will be no match for the all-encompassing technology weʹre creating.
MAYBE WE SHOULD STOP CREATING IT.
We canʹt stop. The Law of Accelerating Returns forbids it! Itʹs the only way to keep evolution going at an accelerating pace.
HEY, CALM DOWN. ITʹS FINE WITH ME IF EVOLUTION SLOWS DOWN A TAD. SINCE WHEN HAVE WE
ADOPTED YOUR ACCELERATION LAW AS THE LAW OF THE LAND?
We donʹt have to. Stopping computer technology, or any fruitful technology, would mean repealing basic realities of
economic competition, not to mention our quest for knowledge. Itʹs not going to happen. Furthermore, the road weʹre
going down is a road paved with gold. Itʹs full of benefits that weʹre never going to resist—continued growth in economic prosperity, better health, more intense communication, more effective education, more engaging
entertainment, better sex.
UNTIL THE COMPUTERS TAKE OVER.
Look, this is not an alien invasion. Although it sounds unsettling, the advent of machines with vast intelligence is not necessarily a bad thing.
I GUESS IF WE CANʹT BEAT THEM, WEʹLL HAVE TO JOIN THEM.
Thatʹs exactly what weʹre going to do. Computers started out as extensions of our minds, and they will end up extending our minds. Machines are already an integral part of our civilization, and the sensual and spiritual machines of the twenty‐first century will be an even more intimate part of our civilization.
OKAY, IN TERMS OF EXTENDING MY MIND, LETʹS GET BACK TO IMPLANTS FOR MY FRENCH LIT CLASS. IS
THIS GOING TO BE LIKE IʹVE READ THIS STUFF? OR IS IT JUST GOING TO BE LIKE A SMART PERSONAL
COMPUTER THAT I CAN COMMUNICATE WITH QUICKLY BECAUSE IT HAPPENS TO BE LOCATED IN MY
HEAD?
Thatʹs a key question, and I think it will be controversial. It gets back to the issue of consciousness. Some people will feel that what goes in their neural implants is indeed subsumed by their consciousness. Others will feel that it remains outside of their sense of self. Ultimately, I think that we will regard the mental activity of the implants as part of our own thinking. Consider that even without implants, ideas and thoughts are constantly popping into our heads,
and we have little idea of where they came from, or how they got there. We nonetheless consider all the mental phenomena that we become aware of as our own thoughts.
SO IʹLL BE ABLE TO DOWNLOAD MEMORIES OF EXPERIENCES IʹVE NEVER HAD?
Yes, but someone has probably had the experience. So why not have the ability to share it?
I SUPPOSE FOR SOME EXPERIENCES, IT MIGHT BE SAFER TO JUST DOWNLOAD THE MEMORIES OF IT.
Less time‐consuming also.
DO YOU REALLY THINK THAT SCANNING A FROZEN BRAIN IS FEASIBLE TODAY?
Sure, just stick your head in my freezer here.
GEE, ARE YOU SURE THIS IS SAFE?
Absolutely.
WELL, I THINK IʹLL WAIT FOR FDA APPROVAL.
Okay, then youʹll have to wait a long time.
THINKING AHEAD, I STILL HAVE THIS SENSE THAT WEʹRE DOOMED. I MEAN, I CAN UNDERSTAND HOW A
NEWLY INSTANTIATED MIND, AS YOU PUT IT, WILL BE HAPPY THAT SHE WAS CREATED AND WILL
THINK THAT SHE HAD BEEN ME PRIOR TO MY HAVING BEEN SCANNED AND IS STILL ME IN A SHINY
NEW BRAIN. SHEʹLL HAVE NO REGRETS AND WILL BE ON THE OTHER SIDE. BUT I DONʹT SEE HOW I CAN
GET ACROSS THE HUMAN‐MACHINE DIVIDE. AS YOU POINTED OUT, IF IʹM SCANNED, THAT NEW ME
ISNʹT ME BECAUSE IʹM STILL HERE IN MY OLD BRAIN.
Yes, thereʹs a little glitch in this regard. But Iʹm sure weʹll figure out how to solve this thorny problem with a little more consideration.
‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐

C H A P T E R S E V E N
. . . AND BODIES
THE IMPORTANCE OF HAVING A BODY
Letʹs start by taking a quick look at my readerʹs diary.
NOW WAIT JUST A MINUTE.
Is there a problem?
FIRST OF ALL, I HAVE A NAME.
Yes, it would be a good idea to introduce you by name at this point.
IʹM MOLLY.
Thank you, is there something else?
YES. IʹM NOT SURE IʹM PREPARED TO SHARE MY DIARY WITH YOUR OTHER READERS.
Most writers donʹt let their readers participate at all. Anyway, youʹre my creation, so I should be able to share your personal reflections if it serves a purpose here.
I MAY BE YOUR CREATION, BUT REMEMBER WHAT YOU SAID IN CHAPTER 2 ABOUT ONEʹS CREATIONS
EVOLVING TO SURPASS THEIR CREATORS.
True enough, so maybe I should be more sensitive to your needs.
GOOD IDEA—LETʹS START BY ALLOWING ME TO VET THOSE ENTRIES YOUʹRE SELECTING.
Very well. Here are some extracts from Mollyʹs diary, suitably edited:
Iʹve switched to nonfat muffins. This has two distinct benefits. First of all, they have half the number of calories. Secondly, they taste awful. That way Iʹm less tempted to eat them. But I wish people would stop
shoving food in my face. . . . Iʹm going to have trouble at this potluck dorm party tomorrow. I feel like I
have to try everything, and I kind of lose track of what Iʹm eating.
Iʹve got to drop at least half a dress size. A full size would be better. Then I could breathe more easily in
this new dress. That reminds me, I should stop at the health club on my way home. Maybe that new
trainer will notice me. Actually I did catch him looking at me, but I was being kind of spastic with those
new machines, and he looked the other way. . . . Iʹm not crazy about the neighborhood this place is in, I
donʹt really feel safe walking back to my car when itʹs late. Okay, hereʹs an idea—Iʹll ask that trainer—got
to get his name—to walk me to my car. Always a good idea to be safe, right?
. . . Iʹm a little nervous about this bump on my toe. But the doctor said that toe bumps are almost always benign. But he still wants to remove it and send it to a lab. He said I wonʹt feel a thing. Except, of
course, for the Novocain—I hate needles!
. . . It was a little strange seeing my old boyfriend, but Iʹm glad weʹre still friends. It did feel good when
he gave me a hug . . .
Thank you, Molly. Now consider: How many of Mollyʹs entries would make sense if she didnʹt have a body? Most
of Mollyʹs mental activities are directed toward her body and its survival, security, nutrition, image, not to mention related issues of affection, sexuality, and reproduction. But Molly is not unique in this regard. I invite my other readers to look at their own diaries. And if you donʹt have one, consider what you would write in it if you did. How many of your entries would make sense if you didnʹt have a body?
Our bodies are important in many ways. Most of those goals I spoke about at the beginning of the previous chapter—the ones we attempt to solve using our intelligence—have to do with our bodies: protecting them, providing
them with fuel, making them attractive, making them feel good, providing for their myriad needs, not to mention desires.
Some philosophers—professional artificial‐intelligence critic Hubert Dreyfus, for one—maintain that achieving human‐level intelligence is impossible without a body. [1] Certainly, if weʹre going to port a humanʹs mind to a new computational medium, weʹd better provide a body. A disembodied mind will quickly get depressed.
‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
TWENTY‐FIRST CENTURY BODIES
What makes a soul? And if machines ever have souls, what will be the equivalent of psychoactive drugs? Of pain? Of the physical/emotional high I get from having a clean office?
—Esther Dyson
What a strange machine man is. You fill him with bread, wine, fish, and radishes, and out come sighs, laughter and dreams.
—Nikos Kazantzakis
So what kind of bodies will we provide for our twenty‐first‐century machines? Later on, the question will become: What sort of bodies will they provide for themselves?
Letʹs start with the human body, since itʹs the body weʹre used to. It evolved along with its brain, so the human brain is
well suited to provide for its needs. The human brain and body kind of go together.
The likely scenario is that both body and brain will evolve together, will become enhanced together, will migrate
together toward new modalities and materials. As I discussed in the previous chapter, porting our brains to new computational mechanisms will not happen all at once. We will enhance our brains gradually through direct connection with machine intelligence until such time that the essence of our thinking has fully migrated to the far more capable and reliable new machinery. Again, if we find this notion troublesome, a lot of this uneasiness has to do with our concept of the word machine. Keep in mind that our concept of this word will evolve along with our minds.
In terms of transforming our bodies, we are already further along in this process than we are in advancing our minds. We have titanium devices to replace our jaws, skulls, and hips. We have artificial skin of various kinds. We have artificial heart valves. We have synthetic vessels to replace arteries and veins, along with expandable stents to provide structural support for weak natural vessels. We have artificial arms, legs, feet, and spinal implants. We have all kinds of joints: jaws, hips, knees, shoulders, elbows, wrists, fingers, and toes. We have implants to control our bladders. We are developing machines—some made of artificial materials, others combining new materials with cultured cells—that will ultimately be able to replace organs such as the liver and pancreas. We have penile prostheses with little pumps to simulate erections. And we have long had implants for teeth and breasts.
Of course, the notion of completely rebuilding our bodies with synthetic materials, even if superior in certain ways, is not immediately compelling. We like the softness of our bodies. We like bodies to be supple and cuddly and
warm. And not a superficial warmth, but the deep and intimate heat drawn from their trillions of living cells.
So letʹs consider enhancing our bodies cell by cell. We have started down that road as well. We have written down
a portion of the entire genetic code that describes our cells, and weʹve started the process of understanding it. In the near future, we hope to design genetic therapies to improve our cells, to correct such defects as the insulin resistance associated with Type II diabetes, and the loss of control over self‐replication associated with cancer. An early method of delivering gene therapies was to infect a patient with special viruses containing the corrective DNA. A more effective method developed by Dr. Clifford Steer at the University of Minnesota utilizes RNA molecules to deliver the desired DNA directly. [2] High on researchersʹ list for future cellular improvements through genetic engineering is to counteract our genes for cellular suicide. These strands of genetic beads, called telomeres, get shorter every time a cell divides. When the telomere beads count down to zero, a cell is no longer able to divide, and destroys itself. Thereʹs a long list of diseases, aging conditions, and limitations that we intend to address by altering the genetic code that controls our cells.
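The telomere countdown described above can be sketched as a toy model. The numbers below are illustrative assumptions, not biological measurements, and the function name is invented for this sketch:

```python
# Toy model of the telomere "countdown": each cell division trims the
# telomere by a fixed amount, and a cell whose telomere can no longer
# pay that cost stops dividing. All numbers are illustrative only.

def divisions_until_senescence(telomere_units: int, loss_per_division: int) -> int:
    """Count how many times a cell can divide before its telomere runs out."""
    divisions = 0
    while telomere_units >= loss_per_division:
        telomere_units -= loss_per_division
        divisions += 1
    return divisions

# A telomere of 10,000 units losing 100 per division allows 100 divisions
# in this toy model; real cells senesce well before the telomere is gone.
print(divisions_until_senescence(10_000, 100))  # → 100
```

The point of the sketch is simply that the countdown is linear: genetic interventions that slow the loss per division, or restore telomere length, directly extend the number of divisions available.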
But there is only so far we can go with this approach. Our DNA‐based cells depend on protein synthesis, and while protein is a marvelously diverse substance, it suffers from severe limitations. Hans Moravec, one of the first serious thinkers to realize the potential of twenty‐first‐century machines, points out that ʺprotein is not an ideal material. It is stable only in a narrow temperature and pressure range, is very sensitive to radiation, and rules out many construction techniques and components. . . . A genetically engineered superhuman would be just a second‐rate
kind of robot, designed under the handicap that its construction can only be by DNA‐guided protein synthesis. Only
in the eyes of human chauvinists would it have an advantage.ʺ [3]
One of evolutionʹs ideas that is worth keeping, however, is building our bodies from cells. This approach would
retain many of our bodiesʹ beneficial qualities: redundancy, which provides a high degree of reliability; the ability to regenerate and repair themselves; and softness and warmth. But just as we will eventually relinquish the extremely slow speed of our neurons, we will ultimately be forced to abandon the other restrictions of our protein‐based chemistry.
To reinvent our cells, we look to one of the twenty‐first centuryʹs primary technologies: nanotechnology.
NANOTECHNOLOGY:
REBUILDING THE WORLD, ATOM BY ATOM
The problems of chemistry and biology can be greatly helped if . . . doing things on an atomic level is ultimately developed—a development which I think cannot be avoided.
—Richard Feynman, 1959
Suppose someone claimed to have a microscopically exact replica (in marble, even) of Michelangeloʹs David in his home. When you go to see this marvel, you find a twenty‐foot‐tall, roughly rectilinear hunk of pure white marble standing in his living room. ʺI havenʹt gotten around to unpacking it yet,ʺ he says, ʺbut I know itʹs in there.ʺ
—Douglas Hofstadter
What advantages will nanotoasters have over conventional macroscopic toaster technology? First, the savings in counter space will be substantial. One philosophical point that must not be overlooked is that the creation of the worldʹs smallest toaster implies the existence of the worldʹs smallest slice of bread. In the quantum limit we must necessarily encounter fundamental toast particles, which we designate here as croutons.
—Jim Cser, Annals of Improbable Research, edited by Marc Abrahams
Humankindʹs first tools were found objects: sticks used to dig up roots and stones used to break open nuts. It took our forebears tens of thousands of years to invent a sharp blade. Today we build machines with finely designed intricate mechanisms, but viewed on an atomic scale, our technology is still crude. ʺCasting, grinding, milling, and even lithography move atoms in great thundering statistical herds,ʺ says Ralph Merkle, a leading nanotechnology theorist at Xeroxʹs Palo Alto Research Center. He adds that current manufacturing methods are ʺlike trying to make
things out of Legos with boxing gloves on. . . . In the future, nanotechnology will let us take off the boxing gloves.ʺ [4]
Nanotechnology is technology built on the atomic level: building machines one atom at a time. ʺNanoʺ refers to a
billionth of a meter, which is about the width of five carbon atoms. We have one existence proof of the feasibility of nanotechnology: life on Earth. Little machines in our cells called ribosomes build organisms such as humans one molecule—that is, one amino acid—at a time, following digital templates coded in another molecule called DNA. Life on Earth has mastered the ultimate goal of nanotechnology, which is self‐replication.
But as mentioned above, Earthly life is limited by the particular molecular building block it has selected. Just as
our human‐created computational technology will ultimately exceed the capacity of natural computation (electronic
circuits are already millions of times faster than human neural circuits), our twenty‐first‐century physical technology will also greatly exceed the capabilities of the amino acid‐based nanotechnology of the natural world.
The concept of building machines atom by atom was first described in a 1959 talk at Caltech titled ʺThereʹs Plenty of Room at the Bottom,ʺ by physicist Richard Feynman, the same guy who first suggested the possibility of quantum
computing. [5] The idea was developed in some detail by Eric Drexler twenty years later in his book Engines of Creation. [6] The book actually inspired the cryonics movement of the 1980s, in which people had their heads (with or without bodies) frozen in the hope that a future time would possess the molecule‐scale technology to overcome their
mortal diseases, as well as undo the effects of freezing and defrosting. Whether a future generation would be motivated to revive all these frozen brains was another matter.
After publication of Engines of Creation, the response to Drexlerʹs ideas was skeptical and he had difficulty filling out his MIT Ph.D. committee despite Marvin Minskyʹs agreement to supervise it. Drexlerʹs dissertation, published in
1992 as a book titled Nanosystems: Molecular Machinery, Manufacturing, and Computation, provided a comprehensive proof of concept, including detailed analyses and specific designs. [7] A year later, the first nanotechnology conference attracted only a few dozen researchers. The fifth annual conference, held in December 1997, boasted 350
scientists who were far more confident of the practicality of their tiny projects. Nanothinc, an industry think tank, estimated in 1997 that the field already produces $5 billion in annual revenues for nanotechnology‐related technologies, including micromachines, microfabrication techniques, nanolithography, nanoscale microscopes, and others. This figure has been more than doubling each year. [8]
The Age of Nanotubes
One key building material for tiny machines is, again, nanotubes. Although built on an atomic scale, the hexagonal patterns of carbon atoms are extremely strong and durable. ʺYou can do anything you damn well want with these tubes and theyʹll just keep on truckinʹ,ʺ says Richard Smalley, one of the chemists who received the Nobel Prize for discovering the buckyball molecule. [9] A car made of nanotubes would be stronger and more stable than a car made
with steel, but would weigh only fifty pounds. A spacecraft made of nanotubes could be of the size and strength of
the U.S. space shuttle, but weigh no more than a conventional car. Nanotubes handle heat extremely well, far better
than the fragile amino acids that people are built out of. They can be assembled into all kinds of shapes: wirelike strands, sturdy girders, gears, etcetera. Nanotubes are formed of carbon atoms, which are in plentiful supply in the natural world.
As I mentioned earlier, the same nanotubes can be used for extremely efficient computation, so both the structural
and computational technology of the twenty‐first century will likely be constructed from the same stuff. In fact, the same nanotubes used to form physical structures can also be used for computation, so future nanomachines can have
their brains distributed throughout their bodies.
The best‐known examples of nanotechnology to date, while not altogether practical, are beginning to show the feasibility of engineering at the atomic level. IBM created its corporate logo using individual atoms as pixels. [10] In 1996, Texas Instruments built a chip‐sized device with half a million moveable mirrors to be used in a tiny high-resolution projector. [11] TI sold $100 million worth of their nanomirrors in 1997.
Chih‐Ming Ho of UCLA is designing flying machines whose surfaces are covered with microflaps that control the flow of air much as conventional flaps do on a full‐sized airplane. [12] Andrew Berlin at Xeroxʹs Palo Alto Research Center is designing a printer that uses microscopic air valves to move paper documents precisely. [13]
Cornell graduate student and rock musician Dustin Carr built a realistic‐looking but microscopic guitar with strings only fifty nanometers in diameter. Carrʹs creation is a fully functional musical instrument, but his fingers are too large to play it. Besides, the strings vibrate at 10 million vibrations per second, far beyond the twenty‐thousand-cycles‐per‐second limit of human hearing. [14]
The Holy Grail of Self‐Replication:
Little Fingers and a Little Intelligence
Tiny fingers represent something of a holy grail for nanotechnologists. With little fingers and computation, nanomachines would have in their Lilliputian world what people have in the big world: intelligence and the ability to manipulate their environment. Then these little machines could build replicas of themselves, achieving the fieldʹs key objective.
The reason that self‐replication is important is that it is too expensive to build these tiny machines one at a time.
To be effective, nanometer‐sized machines need to come in the trillions. The only way to achieve this economically is through combinatorial explosion: let the machines build themselves.
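The arithmetic behind this exponential growth is easy to check: a population that doubles each generation reaches the trillions in only a few dozen doublings. A minimal sketch (the function name is invented for illustration):

```python
import math

# Why self-replication matters economically: if each nanobot builds a
# copy of itself, the population doubles each generation, so a single
# machine reaches a target population in log2(target) doublings.

def generations_to_reach(target_population: int) -> int:
    """Doublings needed for one self-replicating machine to reach the target."""
    return math.ceil(math.log2(target_population))

print(generations_to_reach(10**12))  # → 40
```

Forty doublings turn one machine into a trillion, which is also why the final requirement above, knowing when to stop, matters so much: the same arithmetic that makes manufacturing cheap makes an unchecked population explosion fast.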
Drexler, Merkle (a coinventor of public key encryption, the primary method of encrypting messages), and others
have convincingly described how such a self‐replicating nanorobot—a nanobot—could be constructed. The trick is to provide the nanobot with sufficiently flexible manipulators—arms and hands—so that it is capable of building a copy
of itself. It needs some means for mobility so that it can find the requisite raw materials. It requires some intelligence so that it can solve the little problems that will arise when each nanobot goes about building a complicated little machine like itself. Finally, a really important requirement is that it needs to know when to stop replicating.
Morphing in the Real World
Self‐replicating machines built at the atomic level could truly transform the world we live in. They could build extremely inexpensive solar cells, allowing the replacement of messy fossil fuels. Since solar cells require a large surface area to collect sufficient sunlight, they could be placed in orbit, with the energy beamed down to Earth.
Nanobots launched into our bloodstreams could supplement our natural immune system and seek out and
destroy pathogens, cancer cells, arterial plaque, and other disease agents. In the vision that inspired the cryonics enthusiasts, diseased organs can be rebuilt. We will be able to reconstruct any or all of our bodily organs and systems, and do so at the cellular level. I talked in the last chapter about reverse engineering and emulating the salient computational functionality of human neurons. In the same way, it will become possible to reverse engineer and replicate the physical and chemical functionality of any human cell. In the process we will be in a position to greatly extend the durability, strength, temperature range, and other qualities and capabilities of our cellular building blocks.
We will then be able to grow stronger, more capable organs by redesigning the cells that constitute them and building them with far more versatile and durable materials. As we go down this road, weʹll find that some redesign
of the body makes sense at multiple levels. For example, if our cells are no longer vulnerable to the conventional pathogens, we may not need the same kind of immune system. But we will need new nanoengineered protections for
a new assortment of nanopathogens.
Food, clothing, diamond rings, buildings could all assemble themselves molecule by molecule. Any sort of product could be instantly created when and where we need it. Indeed, the world could continually reassemble itself
to meet our changing needs, desires, and fantasies. By the late twenty‐first century, nanotechnology will permit objects such as furniture, buildings, clothing, even people, to change their appearance and other characteristics—
essentially to change into something else—in a split second.
These technologies will emerge gradually (I will attempt to delineate the different gradations of nanotechnology
as I talk about each of the decades of the twenty‐first century in Part III of this book). There is a clear incentive to go down this path. Given a choice, people will prefer to keep their bones from crumbling, their skin supple, their life systems strong and vital. Improving our lives through neural implants on the mental level, and nanotechnology-enhanced bodies on the physical level, will be popular and compelling. It is another one of those slippery slopes—
there is no obvious place to stop this progression until the human race has largely replaced the brains and bodies that evolution first provided.
A Clear and Future Danger
Without self‐replication, nanotechnology is neither practical nor economically feasible. And therein lies the rub. What happens if a little software problem (inadvertent or otherwise) fails to halt the self‐replication? We may have more nanobots than we want. They could eat up everything in sight.
The movie The Blob (of which there are two versions) was a vision of nanotechnology run amok. The movieʹs villain was this intelligent self‐replicating gluttonous stuff that fed on organic matter. Recall that nanotechnology is likely to be built from carbon‐based nanotubes, so, like the Blob, it will build itself from organic matter, which is rich in carbon. Unlike mere animal‐based cancers, an exponentially exploding nanomachine population would feed on any carbon‐based matter. Tracking down all of these bad nanointelligences would be like trying to find trillions of microscopic needles—rapidly moving ones at that—in at least as many haystacks. There have been proposals for nanoscale immunity technologies: good little antibody machines that would go after the bad little machines. The nanoantibodies would, of course, have to scale up at least as quickly as the epidemic of marauding nanomiscreants.
There could be a lot of collateral damage as these trillions of machines battle it out.
Now that I have raised this specter, I will try, unconvincingly perhaps, to put the peril in perspective. I believe that it will be possible to engineer self‐replicating nanobots in such a way that an inadvertent, undesired population explosion would be unlikely. I realize that this may not be completely reassuring, coming from a software developer
whose products (like those of my competitors) crash once in a while (but rarely—and when they do, itʹs the fault of
the operating system!). There is a concept in software development of ʺmission criticalʺ applications. These are software programs that control a process on which people are heavily dependent. Examples of mission‐critical software include life‐support systems in hospitals, automated surgical equipment, autopilot flying and landing systems, and other software‐based systems that affect the well‐being of a person or organization. It is feasible to create extremely high levels of reliability in these programs. There are examples of complex technology in use today in which a mishap would severely imperil public safety. A conventional explosion in an atomic power plant could spray deadly plutonium across heavily populated areas. Despite a near meltdown at Chernobyl, this apparently has
only occurred twice in the decades that we have had hundreds of such plants operating, both incidents involving recently acknowledged reactor calamities in the Chelyabinsk region of Russia. [15] There are tens of thousands of nuclear weapons, and none has ever exploded in error.
I admit that the above paragraph is not entirely convincing. But the bigger danger is the intentional hostile use of nanotechnology. Once the basic technology is available, it would not be difficult to adapt it as an instrument of war or terrorism. It is not the case that someone would have to be suicidal to use such weapons. The nanoweapons could easily be programmed to replicate only against an enemy; for example, only in a particular geographical area. Nuclear weapons, for all their destructive potential, are at least relatively local in their effects. The self‐replicating nature of nanotechnology makes it a far greater danger.
VIRTUAL BODIES
We donʹt always need real bodies. If we happen to be in a virtual environment, then a virtual body will do just fine.
Virtual reality started with the concept of computer games, particularly ones that provided a simulated environment.
The first was Spacewar, written by early artificial‐intelligence researchers to pass the time while waiting for programs to compile on their slow 1960s computers. [16] The synthetic space surroundings were easy to render on low‐resolution monitors: Stars and other space objects were just illuminated pixels.
Computer games and computerized video games have become more realistic over time, but you cannot
completely immerse yourself in these imagined worlds, not without some imagination. For one thing, you can see the
edges of the screen, and the all too real world that you have never left is still visible beyond these borders.
If weʹre going to enter a new world, we had better get rid of traces of the old. In the 1990s the first generation of virtual reality was introduced, in which you don a special visual helmet that takes over your entire visual field.
The key to virtual reality is that when you move your head, the scene instantly repositions itself so that you are now looking at a different region of a three‐dimensional scene. The intention is to simulate what happens when you turn
your real head in the real world: The images captured by your retinas rapidly change. Your brain nonetheless understands that the world has remained stationary and that the image is sliding across your retinas only because your head is rotating.
Like most first‐generation technologies, virtual reality has not been fully convincing. Because rendering a new scene requires a lot of computation, there is a lag in producing the new perspective. Any noticeable delay tips off your brain that the world youʹre looking at is not entirely real. The resolution of virtual reality displays has also been inadequate to create a fully satisfactory illusion. Finally, contemporary virtual reality helmets are bulky and uncomfortable.
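The lag problem has a simple budget behind it: tracking delay plus rendering time plus display delay together determine how stale the image is by the time it reaches your eyes. The numbers below are illustrative assumptions, not measurements of any particular system:

```python
# A rough "motion to photon" latency budget for head-tracked VR,
# illustrating why slow rendering dominates in early systems.
# All timings here are illustrative assumptions, not measurements.

def motion_to_photon_ms(tracking_ms: float, render_fps: float, display_ms: float) -> float:
    """Worst-case delay: tracking + one full frame of rendering + display delay."""
    frame_ms = 1000.0 / render_fps  # time to render one frame
    return tracking_ms + frame_ms + display_ms

# A 1990s-class system rendering at 30 frames per second: the frame
# time alone (about 33 ms) exceeds a plausible noticeability threshold.
print(motion_to_photon_ms(tracking_ms=10, render_fps=30, display_ms=15))  # ≈ 58 ms
```

Raising the rendering rate shrinks the dominant term, which is why the fix described next is simply faster computers.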
Whatʹs needed to remove the rendering delay and to boost display resolution is yet faster computers, which we
know are always on the way. By 2007, high‐quality virtual reality with convincing artificial environments, virtually instantaneous rendering, and high‐definition displays will be comfortable to wear and available at computer game prices.
That takes care of two of our senses—visual and auditory. Another high‐resolution sense organ is our skin, and
ʺhapticʺ interfaces to provide a virtual tactile interface are also evolving. One available today is the Microsoft force-feedback joystick, derived from 1980s research at the MIT Media Lab. A force‐feedback joystick adds some tactile realism to computer games, so you feel the rumble of the road in a car‐driving game or the pull of the line in a fishing simulation. Emerging in late 1998 is the ʺtactile mouse,ʺ which operates like a conventional mouse but allows the user to feel the texture of surfaces, objects, even people. One company that I am involved in, Medical Learning Company,
is developing a simulated patient to help train doctors, as well as enable nonphysicians to play doctor. It will include a haptic interface so that you can feel a knee joint for a fracture or a breast for lumps. [17]
A force‐feedback joystick in the tactile domain is comparable to conventional monitors in the visual domain. The
force‐feedback joystick provides a tactile interface, but it does not totally envelop you. The rest of your tactile world is still reminding you of its presence. In order to leave the real world, at least temporarily, we need a tactile environment that takes over your sense of touch.
So letʹs invent a virtual tactile environment. Weʹve seen aspects of it in science fiction films (always a good source for inventing the future). We can build a body suit that will detect your own movements as well as provide high resolution tactile stimulation. The suit will also need to provide sufficient force‐feedback to actually prevent your movements if you are pressing against a virtual obstacle in the virtual environment. If you are giving a virtual companion a hug, for example, you donʹt want to move right through his or her body. This will require a force-feedback structure outside the suit, although obstacle resistance could be provided by the suit itself. And since your body inside the suit is still in the real world, it would make sense to put the whole contraption in a booth so that your movements in the virtual world donʹt knock down lamps and people in your ʺrealʺ vicinity. Such a suit could also provide a thermal response and thereby allow the simulation of feeling a moist surface—or even immersing your hand or your whole body in water—which is indicated by a change in temperature and a decrease in surface tension.
Finally, we can provide a platform consisting of a rotating treadmill device for you to stand (or sit or lie) on, which will allow you to walk or move around (in any direction) in your virtual environment.
So with the suit, the outer structure, the booth, the platform, the goggles, and the earphones, we just about have
the means to totally envelop your senses. Of course, we will need some good virtual reality software, but thereʹs certain to be hot competition to provide a panoply of realistic and fantastic new environments as the requisite hardware becomes available.
Oh yes, there is the sense of smell. A completely flexible and general interface for our fourth sense will require a reasonably advanced nanotechnology to synthesize the wide variety of molecules that we can detect with our olfactory sense. In the meantime, we could provide the ability to diffuse a variety of aromas in the virtual reality booth.
Once we are in a virtual reality environment, our own bodies—at least the virtual versions—can change as well.
We can become a more attractive version of ourselves, a hideous beast, or any creature real or imagined as we interact with the other inhabitants in each virtual world we enter.
Virtual reality is not a (virtual) place you need go to alone. You can interact with your friends there (who would
be in other virtual reality booths, which may be geographically remote). You will have plenty of simulated companions to choose from as well.
Directly Plugging In
Later in the twenty‐first century, as neural implant technologies become ubiquitous, we will be able to create and interact with virtual environments without having to enter a virtual reality booth. Your neural implants will provide the simulated sensory inputs of the virtual environment—and your virtual body—directly in your brain. Conversely,
your movements would not move your ʺrealʺ body, but rather your perceived virtual body. These virtual environments would also include a suitable selection of bodies for yourself. Ultimately, your experience would be highly realistic, just like being in the real world. More than one person could enter a virtual environment and interact with each other. In the virtual world, you will meet other real people and simulated people—eventually, there wonʹt
be much difference.
This will be the essence of the Web in the second half of the twenty‐first century. A typical ʺweb siteʺ will be a perceived virtual environment, with no external hardware required. You ʺgo thereʺ by mentally selecting the site and then entering that world. Debate Benjamin Franklin on the war powers of the presidency at the history society site.
Ski the Alps at the Swiss Chamber of Commerce site (while feeling the cold spray of snow on your face). Hug your
favorite movie star at the Columbia Pictures site. Get a little more intimate at the Penthouse or Playgirl site. Of course, there may be a small charge.
Real Virtual Reality
In the late twenty‐first century, the ʺrealʺ world will take on many of the characteristics of the virtual world through the means of nanotechnology ʺswarms.ʺ Consider, for example, Rutgers University computer scientist J. Storrs Hallʹs
concept of ʺUtility Fog.ʺ [18] Hallʹs conception starts with a little robot called a Foglet, which consists of a human‐cell-sized device with twelve arms pointing in all directions. At the end of the arms are grippers so that the Foglets can grasp one another to form larger structures. These nanobots are intelligent and can merge their computational capacities with each other to create a distributed intelligence. A space filled with Foglets is called Utility Fog and has some interesting properties.
First of all, the Utility Fog goes to a lot of trouble to simulate its not being there. Hall describes a detailed scenario that lets a real human walk through a room filled with trillions of Foglets and not notice a thing. When desired (and itʹs not entirely clear who is doing the desiring), the Foglets can quickly simulate any environment by creating all sorts of structures. As Hall puts it, ʺFog city can look like a park, or a forest, or ancient Rome one day and Emerald City the next.ʺ
The Foglets can create arbitrary wave fronts of light and sound in any direction to create any imaginary visual and
auditory environment. They can exert any pattern of pressure to create any tactile environment. In this way, Utility Fog has all the flexibility of a virtual environment, except it exists in the real physical world. The distributed intelligence of the Utility Fog can simulate the minds of scanned (Hall calls them ʺuploadedʺ) people who are re-created in the Utility Fog as ʺFog people.ʺ In Hallʹs scenario, ʺa biological human can walk through Fog walls, and a Fog (uploaded) human can walk through dumb‐matter walls. Of course Fog people can walk through Fog walls, too.ʺ
The physical technology of Utility Fog is actually rather conservative. The Foglets are much bigger machines than
most nanotechnology conceptions. The software is more challenging, but ultimately feasible. Hall needs a bit of work on his marketing angle: Utility Fog is a rather dull name for such versatile stuff.
There are a variety of proposals for nanotechnology swarms, in which the real environment is constructed from interacting multitudes of nanomachines. In all of the swarm conceptions, physical reality becomes a lot like virtual reality. You can be sleeping in your bed one moment, and have the room transform into your kitchen as you awake.
Actually, change that to a dining room as thereʹs no need for a kitchen. Related nanotechnology will instantly create whatever meal you desire. When you finish eating, the room can transform into a study, or a game room, or a swimming pool, or a redwood forest, or the Taj Mahal. You get the idea.
Mark Yim has built a large‐scale model of a small swarm showing the feasibility of swarm interaction. [19] Joseph
Michael has actually received a U.K. patent on his conception of a nanotechnology swarm, but it is unlikely that his design will be commercially realizable in the twenty‐year life of his patent. [20]
It may seem that we will have too many choices. Today, we have only to choose our clothes, makeup, and destination when we go out. In the late twenty‐first century, we will have to select our body, our personality, our environment—so many difficult decisions to make! But donʹt worry—weʹll have intelligent swarms of machines to guide us.
THE SENSUAL MACHINE
Made double by his lust
he sounds a womanʹs groans.
A figment of his flesh.
—from Barry Spacksʹs poem ʺThe Solitary at Seventeenʺ
I can predict the future by assuming that money and male hormones are the driving forces for new technology.
Therefore, when virtual reality gets cheaper than dating, society is doomed.
—Dogbert
The first book printed from a moveable type press may have been the Bible, but the century following Gutenbergʹs epochal invention saw a lucrative market for books with more prurient topics. [21] New communication
technologies—the telephone, motion pictures, television, videotape—have always been quick to adopt sexual themes.
The Internet is no exception, with 1998 market estimates of adult online entertainment ranging from $185 million by
Forrester Research to $1 billion by Inter@active Week. These figures are for customers, mostly men, paying to view and interact with performers—live, recorded, and simulated. One 1998 estimate cited 28,000 web sites that offer sexual entertainment. [22] These figures do not include couples who have expanded their phone sex to include moving pictures via online video conferencing.
CD‐ROMs and DVD disks constitute another technology that has been exploited for erotic entertainment.
Although the bulk of adult‐oriented disks are used as a means for delivering videos with a bit of interactivity thrown in, a new genre of CD‐ROM and DVD provides virtual sexual companions that respond to some mouse‐administered
fondling. [23] Like most first‐generation technologies, the effect is less than convincing, but future generations will eliminate some of the kinks, although not the kinkiness. Developers are also working to exploit the force‐feedback mouse so that you can get some sense of what your virtual partner feels like.
Late in the first decade of the twenty‐first century, virtual reality will enable you to be with your lover—romantic partner, sex worker, or simulated companion—with full visual and auditory realism. You will be able to do anything
you want with your companion except touch, admittedly an important limitation.
Virtual touch has already been introduced, but the all‐enveloping, highly realistic, visual‐auditory‐tactile virtual environment will not be perfected until the second decade of the twenty‐first century. At this point, virtual sex becomes a viable competitor to the real thing. Couples will be able to engage in virtual sex regardless of their physical proximity. Even when proximate, virtual sex will be better in some ways and certainly safer. Virtual sex will provide sensations that are more intense and pleasurable than conventional sex, as well as physical experiences that currently do not exist. Virtual sex is also the ultimate in safe sex, as there is no risk of pregnancy or transmission of disease.
Today, lovers may fantasize their partners to be someone else, but users of virtual sex communication will not need as much imagination. You will be able to change the physical appearance and other characteristics of both yourself and your partner. You can make your lover look and feel like your favorite star without your partnerʹs permission or knowledge. Of course, be aware that your partner may be doing the same to you.
Group sex will take on a new meaning in that more than one person can simultaneously share the experience of
one partner. Since multiple real people cannot all control the movements of one virtual partner, there needs to be a way of sharing the decision making about what the one virtual body is doing. Each participant sharing a virtual body would have the same visual, auditory, and tactile experience of the shared virtual body, with shared control (perhaps the one virtual body will reflect a consensus of the attempted movements of the multiple participants). A whole audience of people—who may be geographically dispersed—could share one virtual body while engaged in a
sexual experience with one performer.
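The "consensus of attempted movements" suggested above can be sketched in a few lines. The vector representation of a movement and the plain-averaging rule are my own illustrative assumptions, not anything specified in the text:

```python
# A toy sketch of the shared-virtual-body idea: several participants each
# attempt a movement of the one shared body, and the body performs the
# average of the attempts. Averaging is one simple consensus rule; the
# text leaves the actual mechanism open.

def consensus_movement(attempts: list[tuple[float, float, float]]) -> tuple[float, float, float]:
    """Average the attempted 3-D movement vectors of all participants."""
    n = len(attempts)
    return tuple(sum(axis) / n for axis in zip(*attempts))

# Three participants nudge the shared arm in slightly different directions;
# opposing sideways pulls cancel out in the consensus.
attempts = [(1.0, 0.0, 0.0), (0.5, 0.25, 0.0), (0.75, -0.25, 0.0)]
print(consensus_movement(attempts))  # (0.75, 0.0, 0.0)
```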
Prostitution will be free of health risks, as will virtual sex in general. Using wireless, very‐high‐bandwidth communication technologies, neither sex workers nor their patrons need leave their homes. Virtual prostitution is likely to be legally tolerated, at least to a far greater extent than real prostitution is today, as the virtual variety will be impossible to monitor or control. With the risks of disease and violence having been eliminated, there will be far less rationale for proscribing it.
Sex workers will have competition from simulated—computer generated—partners. In the early stages, ʺrealʺ
human virtual partners are likely to be more realistic than simulated virtual partners, but that will change over time.
Of course, once the simulated virtual partner is as capable, sensual, and responsive as a real human virtual partner, whoʹs to say that the simulated virtual partner isnʹt a real, albeit virtual, person?
Is virtual rape possible? In the purely physical sense, probably not. Virtual reality will have a means for users to immediately terminate their experience. Emotional and other means of persuasion and pressure are another matter.
How will such an extensive array of sexual choices and opportunities affect the institution of marriage and the concept of commitment in a relationship? The technology of virtual sex will introduce an array of slippery slopes, and the definition of a monogamous relationship will become far less clear. Some people will feel that access to intense sexual experiences at the click of a mental button will destroy the concept of a sexually committed relationship.
Others will argue, as proponents of sexual entertainment and services do today, that such diversions are healthy outlets and serve to maintain healthy relationships. Clearly, couples will need to reach their own understandings, but drawing clear lines will become difficult with the level of privacy that this future technology affords. It is likely that society will accept practices and activities in the virtual arena that it frowns on in the physical world, as the consequences of virtual activities are often (although not always) easier to undo.
In addition to direct sensual and sexual contact, virtual reality will be a great place for romance in general. Stroll with your lover along a virtual Champs‐Elysees, take a walk along a virtual Cancun beach, mingle with the animals
in a simulated Mozambique game reserve. Your whole relationship can be in Cyberland.
Virtual reality using an external visual‐auditory‐haptic interface is not the only technology that will transform the nature of sexuality in the twenty‐first century. Sexual robots—sexbots—will become popular by the beginning of the
third decade of the new century. Today, the idea of intimate relations with a robot or doll is not generally appealing because robots and dolls are so, well, inanimate. But that will change as robots gain the softness, intelligence, pliancy, and passion of their human creators. (By the end of the twenty‐first century, there wonʹt be a clear difference between humans and robots. What, after all, is the difference between a human who has upgraded her body and brain using
new nanotechnology and computational technologies, and a robot who has gained an intelligence and sensuality surpassing her human creators?)
By the fourth decade, we will move to an era of virtual experiences through internal neural implants. With this technology, you will be able to have almost any kind of experience with just about anyone, real or imagined, at any
time. Itʹs just like todayʹs online chat rooms, except that you donʹt need any equipment thatʹs not already in your head, and you can do a lot more than just chat. You wonʹt be restricted by the limitations of your natural body as you and your partners can take on any virtual physical form. Many new types of experiences will become possible: A man
can feel what it is like to be a woman, and vice versa. Indeed, thereʹs no reason why you canʹt be both at the same time, making real, or at least virtually real, our solitary fantasies.
And then, of course, in the last half of the century, there will be the nanobot swarms—good old sexy Utility Fog,
for example. The nanobot swarms can instantly take on any form and emulate any sort of appearance, intelligence, and personality that you or it desires: the human form, say, if thatʹs what turns you on.
THE SPIRITUAL MACHINE
We are not human beings trying to be spiritual. We are spiritual beings trying to be human.
—Jacquelyn Small
Body and soul are twins. God only knows which is which.
—Algernon Charles Swinburne
Weʹre all lying in the gutter, but some of us are gazing at the stars.
—Oscar Wilde
Sexuality and spirituality are two ways that we transcend our everyday physical reality. Indeed, there are links between our sexual and our spiritual passions, as the ecstatic rhythmic movements associated with some varieties of
spiritual experience suggest.
Mind Triggers
We are discovering that the brain can be directly stimulated to experience a wide variety of feelings that we originally thought could only be gained from actual physical or mental experience. Take humor, for example. In the journal Nature, Dr. Itzhak Fried and his colleagues at UCLA tell how they found a neurological trigger for humor. They were
looking for possible causes for a teenage girlʹs epileptic seizures and discovered that applying an electric probe to a specific point in the supplementary motor area of her brain caused her to laugh. Initially, the researchers thought that the laughter must be just an involuntary motor response, but they soon realized they were triggering the genuine perception of humor, not just forced laughter. When stimulated in just the right spot of her brain, she found everything funny. ʺYou guys are just so funny—standing aroundʺ was a typical comment. [24]
Triggering a perception of humor without circumstances we normally consider funny is perhaps disconcerting (although personally, I find it humorous). Humor involves a certain element of surprise. Blue elephants. The last two words were intended to be surprising, but they probably didnʹt make you laugh (or maybe they did). In addition to
surprise, the unexpected event needs to make sense from an unanticipated but meaningful perspective. And there are
some other attributes that humor requires that we donʹt understand just yet. The brain apparently has a neural net that detects humor from our other perceptions. If we directly stimulate the brainʹs humor detector, then an otherwise ordinary situation will seem pretty funny.
The same appears to be true of sexual feelings. In experiments with animals, stimulating a specific small area of
the hypothalamus with a tiny injection of testosterone causes the animals to engage in female sexual behavior, regardless of gender. Stimulating a different area of the hypothalamus produces male sexual behavior.
These results suggest that once neural implants are commonplace, we will have the ability to produce not only virtual sensory experiences but also the feelings associated with these experiences. We can also create some feelings not ordinarily associated with the experience. So you will be able to add some humor to your sexual experiences, if
desired (of course, for some of us humor may already be part of the picture).
The ability to control and to reprogram our feelings will become even more profound in the late twenty‐first century when technology moves beyond mere neural implants and we fully install our thinking processes into a new
computational medium—that is, when we become software.
We work hard to achieve feelings of humor, pleasure, and well‐being. Being able to call them up at will may seem
to rob them of their meaning. Of course, many people use drugs today to create and enhance certain desirable feelings, but the chemical approach comes bundled with many undesirable effects. With neural implant technology,
you will be able to enhance your feelings of pleasure and well‐being without the hangover. Of course, the potential
for abuse is even greater than with drugs. When psychologist James Olds provided rats with the ability to press a button and directly stimulate a pleasure center in the limbic system of their brains, the rats pressed the button endlessly, as often as five thousand times an hour, to the exclusion of everything else, including eating. Only falling asleep caused them to stop temporarily. [25]
Nonetheless, the benefits of neural implant technology will be compelling. As just one example, millions of people
suffer from an inability to experience sufficiently intense feelings of sexual pleasure, which is one important aspect of impotence. People with this disability will not pass up the opportunity to overcome their problem through neural implants, which they may already have in place for other purposes. Once a technology is developed to overcome a disability, there is no way to restrict its use from enhancing normal abilities, nor would such restrictions necessarily be desirable. The ability to control our feelings will be just another one of those twenty‐first‐century slippery slopes.
So What About Spiritual Experiences?
The spiritual experience—a feeling of transcending oneʹs everyday physical and mortal bounds to sense a deeper reality—plays a fundamental role in otherwise disparate religions and philosophies. Spiritual experiences are not all of the same sort but appear to encompass a broad range of mental phenomena. The ecstatic dancing of a Baptist revival appears to be a different phenomenon than the quiet transcendence of a Buddhist monk. Nonetheless, the notion of the spiritual experience has been reported so consistently throughout history, and in virtually all cultures and religions, that it represents a particularly brilliant flower in the phenomenological garden.
Regardless of the nature and derivation of a mental experience, spiritual or otherwise, once we have access to the
computational processes that give rise to it, we have the opportunity to understand its neurological correlates. With the understanding of our mental processes will come the opportunity to capture our intellectual, emotional, and spiritual experiences, to call them up at will, and to enhance them.
Spiritual Experience Through Brain Generated Music
There is already one technology that appears to generate at least one aspect of a spiritual experience. This experimental technology is called Brain Generated Music (BGM), pioneered by Neurosonics, a small company in Baltimore, Maryland, of which I am a director. BGM is a brain‐wave biofeedback system capable of evoking an experience called the Relaxation Response, which is associated with deep relaxation. The BGM user attaches three disposable leads to her head. A personal computer then monitors the userʹs brain waves to determine her unique alpha wavelength. Alpha waves, which are in the range of eight to thirteen cycles per second (cps), are associated with a deep meditative state, as compared to beta waves (in the range of thirteen to twenty‐eight cps), which are associated with routine conscious thought. Music is then generated by the computer, according to an algorithm that
transforms the userʹs own brain wave signal.
The BGM algorithm is designed to encourage the generation of alpha waves by producing pleasurable harmonic
combinations upon detection of alpha waves, and less pleasant sounds and sound combinations when alpha detection
is low. In addition, synchronizing the sounds to the userʹs own alpha wavelength creates a resonance with the alpha rhythm that further encourages alpha production.
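The feedback loop just described can be sketched roughly as follows. This is only an illustration of the general idea (estimate alpha-band power, then reward it with consonant sound), using a synthetic signal; the actual Neurosonics algorithm is not given in the text, and the thresholds and interval names here are invented:

```python
# Sketch of a BGM-style biofeedback loop: estimate the fraction of EEG
# power in the alpha band (8-13 Hz), then steer the generated music toward
# pleasant harmonic combinations when alpha is strong and less pleasant
# ones when it is weak. The EEG here is synthetic.
import numpy as np

SAMPLE_RATE = 128  # samples per second

def band_power(signal: np.ndarray, low_hz: float, high_hz: float) -> float:
    """Fraction of total spectral power falling in [low_hz, high_hz]."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    band = spectrum[(freqs >= low_hz) & (freqs <= high_hz)].sum()
    return band / spectrum.sum()

def choose_interval(alpha_fraction: float) -> str:
    """Map alpha strength to a consonant or dissonant musical interval."""
    if alpha_fraction > 0.5:
        return "perfect fifth"   # strong alpha: consonant reward
    elif alpha_fraction > 0.2:
        return "major third"
    return "minor second"        # little alpha: dissonant nudge

# One second of synthetic EEG dominated by a 10 Hz (alpha) rhythm,
# with a weaker 20 Hz (beta) component mixed in.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)

alpha = band_power(eeg, 8, 13)
print(choose_interval(alpha))  # a consonant interval for this alpha-rich signal
```

In a real system this loop would run continuously, with each new window of EEG reshaping the next moments of sound.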
Dr. Herbert Benson, formerly the director of the hypertension section of Bostonʹs Beth Israel Hospital and now at
New England Deaconess Hospital in Boston, and other researchers at the Harvard Medical School and Beth Israel, discovered the neurological‐physiological mechanism of the Relaxation Response, which is described as the opposite
of the ʺfight or flight,ʺ or stress response. [27] The Relaxation Response is associated with reduced levels of epinephrine (adrenaline) and norepinephrine (noradrenaline), blood pressure, blood sugar, breathing, and heart rates. Regular elicitation of this response is reportedly able to produce permanently lowered blood‐pressure levels (to the extent that hypertension is caused by stress factors) and other health benefits. Benson and his colleagues have catalogued a number of techniques that can elicit the Relaxation Response, including yoga and a number of forms of
meditation.
I have had experience with meditation, and in my own use of BGM, as well as in observing others, BGM does appear to evoke the Relaxation Response. The music itself feels as if it is being generated from inside your mind.
Interestingly, if you listen to a tape recording of your own brain‐generated music when you are not hooked up to the computer, you do not experience the same sense of transcendence. Although the recorded BGM is based on your personal alpha wavelength, the recorded music was synchronized to the brain waves that were produced by your brain when the music was first generated, not to the brain waves that are produced while listening to the recording.
You need to listen to ʺliveʺ BGM to achieve the resonance effect.
Conventional music is generally a passive experience. Although a performer may be influenced in subtle ways by
her audience, the music we listen to generally does not reflect our response. Brain Generated Music represents a new modality of music that enables the music to evolve continually based on the interaction between it and our own mental responses to it.
Is BGM producing a spiritual experience? Itʹs hard to say. The feelings produced while listening to ʺliveʺ BGM are
similar to the deep transcendent feelings I can sometimes achieve with meditation, but they appear to be more reliably produced by BGM.
The God Spot
Neuroscientists from the University of California at San Diego have found what they call the God module, a tiny locus of nerve cells in the frontal lobe that appears to be activated during religious experiences. They discovered this neural machinery while studying epileptic patients who have intense mystical experiences during seizures.
Apparently the intense neural storms during a seizure stimulate the God module. Tracking surface electrical activity in the brain with highly sensitive skin monitors, the scientists found a similar response when very religious nonepileptic persons were shown words and symbols evoking their spiritual beliefs. A neurological basis for spiritual experience has long been postulated by evolutionary biologists, because of the social utility of religious belief. In response to reports of the San Diego research, Richard Harries, the Bishop of Oxford, said through a spokesman that
ʺit would not be surprising if God had created us with a physical facility for belief.ʺ [28]
When we can determine the neurological correlates of the variety of spiritual experiences that our species is capable of, we are likely to be able to enhance these experiences in the same way that we will enhance other human
experiences. With the next stage of evolution creating a new generation of humans that will be trillions of times more capable and complex than humans today, our ability for spiritual experience and insight is also likely to gain in power and depth.
Just being—experiencing, being conscious—is spiritual, and reflects the essence of spirituality. Machines, derived
from human thinking and surpassing humans in their capacity for experience, will claim to be conscious, and thus to
be spiritual. They will believe that they are conscious. They will believe that they have spiritual experiences. They will be convinced that these experiences are meaningful. And given the historical inclination of the human race to anthropomorphize the phenomena we encounter, and the persuasiveness of the machines, weʹre likely to believe them when they tell us this.
Twenty‐first‐century machines—based on the design of human thinking—will do as their human progenitors
have done—going to real and virtual houses of worship, meditating, praying, and transcending—to connect with their spiritual dimension.
‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐
LETʹS JUST GET ONE THING STRAIGHT: THEREʹS NO WAY IʹM GOING TO HAVE SEX WITH A COMPUTER.
Hey, letʹs not jump to conclusions. You should keep an open mind.
IʹLL TRY TO HAVE AN OPEN MIND. AN OPEN BODY IS ANOTHER MATTER. THE IDEA OF GETTING
INTIMATE WITH SOME GADGET, NO MATTER HOW CLEVER, IS NOT VERY APPEALING.
Have you ever spoken to a phone?
TO A PHONE? I MEAN I TALK TO PEOPLE USING A PHONE.
Okay, so a computer circa 2015—in the form of a visual‐auditory‐tactile virtual reality communication device—is just a telephone for you and your lover. But you can do more than just talk.
I LIKE TO TALK TO MY LOVER—WHEN I HAVE ONE—BY PHONE. AND LOOKING AT EACH OTHER WITH A
PICTURE PHONE, OR EVEN A FULL VIRTUAL REALITY SYSTEM, SOUNDS PRETTY COZY. AS FOR YOUR
TACTILE IDEA, HOWEVER, I THINK IʹLL STICK TO TOUCHING MY FRIENDS AND LOVERS WITH REAL
FINGERS.
You can use real fingers with virtual reality, or at least real virtual fingers. But what about when you and your lover are separated?
YOU KNOW, DISTANCE MAKES THE HEART GROW FONDER. ANYWAY, WE DONʹT HAVE TO TOUCH ALL
THE TIME, I MEAN IʹLL BE ABLE TO WAIT UNTIL I GET BACK FROM MY BUSINESS TRIP, WHILE HEʹS
TAKING CARE OF THE KIDS!
When virtual reality does evolve into a convincing, all‐encompassing tactile interface, are you going to go out of your way to avoid any physical contact?
I SUPPOSE IT WOULDNʹT HURT TO KISS GOODNIGHT.
Ah‐ha—the slippery slope! So why stop there?
OKAY, TWO KISSES.
Sure, like I just said, keep an open mind.
SPEAKING OF AN OPEN MIND, YOUR DESCRIPTION OF THE ʺGOD SPOTʺ SEEMS TO TRIVIALIZE THE
SPIRITUAL EXPERIENCE.
I wouldnʹt overreact to this one piece of research. Clearly, somethingʹs going on in the brains of people who are having a spiritual experience. Whatever the neurological process is, once we capture and understand it, we should be able to enhance the spiritual experiences in a re‐created brain running in its new computational medium.
SO THESE RE‐CREATED MINDS WILL REPORT HAVING SPIRITUAL EXPERIENCES. AND I SUPPOSE THEY
WILL ACT IN THE SAME SORT OF TRANSCENDENT, RAPTUROUS WAY THAT PEOPLE DO TODAY WHEN
REPORTING SUCH EXPERIENCES. BUT WILL THESE MACHINES REALLY BE TRANSCENDING, AND
EXPERIENCING THE FEELING OF GODʹS PRESENCE? WHAT WILL THEY BE EXPERIENCING, ANYWAY?
We keep coming back to the issue of consciousness. Machines in the twenty‐first century will report the same range of experiences that humans do. In accordance with the Law of Accelerating Returns, they will report an even broader range. And they will be very convincing when they speak of their experiences. But what will they really be feeling?
As I said earlier, thereʹs just no way to truly penetrate another entityʹs subjective experience, at least not in a scientific way. I mean, we can observe the patterns of neural firings, and so forth, but thatʹs still just an objective observation.
WELL, THATʹS JUST THE LIMITATION OF SCIENCE.
Yes, thatʹs where philosophy and religion are supposed to take over. Of course, itʹs hard enough to get agreement on scientific issues.
THAT OFTEN APPEARS TO BE TRUE. NOW, ANOTHER THING IʹM NOT TOO HAPPY ABOUT IS THESE
PILLAGING NANOBOTS THAT ARE GOING TO MULTIPLY WITHOUT END. WEʹLL END UP WITH A HUGE
SEA OF NANOBOTS. WHEN THEYʹRE DONE WITH US, THEYʹLL START EATING EACH OTHER.
There is that danger. But if we write the software carefully . . .
OH SURE, LIKE MY OPERATING SYSTEM. ALREADY I HAVE LITTLE SOFTWARE VIRUSES THAT MULTIPLY
THEMSELVES UNTIL THEY CLOG UP MY HARD DRIVE.
I still think the bigger danger is in their intentional hostile use.
I KNOW YOU SAID THAT, BUT THATʹS NOT EXACTLY REASSURING. AGAIN, WHY DONʹT WE JUST NOT GO
DOWN THIS PARTICULAR ROAD?
Okay, you tell that to the old woman whose crumbling bones will be effectively treated using a nanotechnology-based treatment, or the cancer patient whose cancer is destroyed by little nanobots that swim through his blood vessels.
I REALIZE THERE ARE A LOT OF POTENTIAL BENEFITS, BUT THE EXAMPLES YOU JUST GAVE CAN ALSO BE
ADDRESSED THROUGH OTHER, MORE CONVENTIONAL, TECHNOLOGIES, LIKE BIOENGINEERING.
Iʹm glad you mentioned bioengineering, because we see a very similar problem with bioengineered weapons. Weʹre
very close to the point where the knowledge and equipment in a typical graduate‐school biotechnology program will
be sufficient to create self‐replicating pathogens. Whereas a nanoengineered weapon could replicate across any matter, living and dead, a bioengineered weapon would only replicate across living matter, probably just its human
targets. I understand thatʹs not much comfort. In either case, the potential for uncontrolled self‐replication greatly multiplies the danger.
But youʹre not going to stop bioengineering—itʹs the cutting edge of our medical research. It has already greatly
contributed to the AIDS treatments we have today; diabetic patients use bioengineered forms of human insulin; there
are effective cholesterol‐lowering drugs; there are promising new cancer treatments; and the list of advances is rapidly growing. There is genuine optimism among otherwise skeptical scientists that we will make dramatic gains against cancer and other scourges with bioengineered treatments.
SO HOW ARE WE GOING TO PROTECT OURSELVES FROM BIOENGINEERED WEAPONS?
With more bioengineering—antiviral drugs, for example.
AND NANOENGINEERED WEAPONS?
Same thing—more nanotechnology.
I HOPE THE GOOD NANOBOTS PREVAIL, BUT JUST WONDER HOW WEʹRE GOING TO TELL THE GOOD
NANOBOTS FROM THE BAD ONES.
Itʹs going to be hard to tell, particularly since the nanobots are too small to see.
EXCEPT BY OTHER NANOBOTS, RIGHT?
Good point.
‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐

C H A P T E R E I G H T
1999
THE DAY THE COMPUTERS STOPPED
The digitization of information in all of its forms will probably be known as the most fascinating development of the twentieth century.
—An Wang
Economics, sociology, geopolitics, art, religion all provide powerful tools that have sufficed for centuries to explain the essential surfaces of life. To many observers, there seems nothing truly new under the sun—no need for a deep understanding of manʹs new tools—no requirement to descend into the microcosm of modern electronics in order to
comprehend the world. The world is all too much with us.
—George Gilder
If all the computers in 1960 stopped functioning, few people would have noticed. A few thousand scientists would have seen a delay in getting printouts from their last submission of data on punch cards. Some business reports would have been held up. Nothing to worry about.
Circa 1999 is another matter. If all computers stopped functioning, society would grind to a halt. First of all, electric power distribution would fail. Even if electrical power continued (which it wouldnʹt), virtually everything would still break down. Most motorized vehicles have embedded microprocessors; the only cars that would still run would be quite old ones. There would be almost no functioning trucks, buses, railroads, subways, or airplanes. There would be no electronic communication: Telephones, radio, television, fax machines, pagers, e‐mail, and of course the Web would
all cease functioning. You wouldnʹt get your paycheck. You couldnʹt cash it if you did. You wouldnʹt be able to get
your money out of your bank. Business and government would operate at only the most primitive level. And if all the
data in all the computers vanished, then weʹd really be in trouble.
There has been substantial concern with Y2K (the Year 2000 problem): the worry that at least some computer processes will be disrupted as we approach the year 2000. Y2K primarily concerns software developed one or more decades ago in which date fields used only two digits, which will cause these programs to behave erratically when the year becomes
ʺ00.ʺ I am more sanguine than some about this particular issue. Y2K is causing the urgent rewriting of old business
programs that needed to be dusted off and redesigned anyway. There will be some disruptions (and a lot of litigation), but in my view Y2K is unlikely to cause the massive economic problems that are feared. [1]
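The two-digit failure mode is easy to demonstrate in a few lines. This is a minimal sketch; the function names are hypothetical, not taken from any real legacy system:

```python
# Illustrative only: how arithmetic on a two-digit year field breaks at
# the century rollover.

def years_elapsed_2digit(start_yy: int, end_yy: int) -> int:
    """Legacy-style subtraction on two-digit year fields."""
    return end_yy - start_yy

# An account opened in 1995 ("95"), checked in 1999 ("99"): correct.
assert years_elapsed_2digit(95, 99) == 4

# The same account checked in 2000 ("00"): the program concludes the
# account is minus-95 years old, and downstream logic goes haywire.
assert years_elapsed_2digit(95, 0) == -95

# The four-digit fix that Y2K remediation projects applied:
def years_elapsed_4digit(start: int, end: int) -> int:
    return end - start

assert years_elapsed_4digit(1995, 2000) == 5
```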
In less than forty years, we have gone from manual methods of controlling our lives and civilization to becoming
totally dependent on the continued operation of our computers. Many people are comforted by the fact that we still
have our hand on the ʺplug,ʺ that we can turn our computers off if they get too uppity. In actuality, itʹs the computers that have their figurative hands on our plug. (Give them a couple more decades and their hands wonʹt be so figurative.)
There is little concern about this today—computers circa 1999 are dependable, docile, and dumb. The
dependability (albeit not perfect) is likely to remain. The dumbness will not. It will be the humans, at least the nonupdated ones, who will seem dumb several decades from now. The docility will not remain, either.
For a rapidly increasing array of specific tasks the intelligence of contemporary computers appears impressive, even formidable, but machines today remain narrow‐minded and brittle. In contrast, we humans have softer landings
when we wander outside our own narrow areas of expertise. Unlike Deep Blue, Garry Kasparov is not incompetent in
matters outside of chess.
Computers are rapidly moving into increasingly diverse realms. I could fill a dozen books with examples of the
intellectual prowess of computers circa end of the twentieth century, but I have only a contract for one, so letʹs take a look at a few artful examples.
THE CREATIVE MACHINE
At a time like ours, in which mechanical skill has attained unsuspected perfection, the most famous works may be heard as easily as one may drink a glass of beer and it only costs ten centimes, like the automatic weighing machines.
Should we not fear this domestication of sound, this magic that anyone can bring from a disk at will? Will it not bring to waste the mysterious force of an art which one might have thought indestructible?
—Claude Debussy

Collaboration with machines!
What is the difference between manipulation of the machine and collaboration with it? . . . Suddenly, a window would open into a vast field of possibilities; the time limits would vanish, and the machines would seem to become humanized components of the interactive network now consisting of oneself and the machine still obedient but full of suggestions of the master controls of the imagination.
—Vladimir Ussachevsky
Somebody was saying to Picasso that he ought to make pictures of things the way they are—objective pictures. He
mumbled he wasnʹt quite sure what that would be. The person who was bullying him produced a photograph of his
wife from his wallet and said, ʺThere, you see, that is a picture of how she really is.ʺ Picasso looked at it and said, ʺShe is rather small, isnʹt she? And flat?ʺ
—Gregory Bateson
The age of the cybernetic artist has begun, although it is at an early stage. As with human artists, you never know
what these creative systems are going to do next. To date, however, none of them has cut off an ear or run naked through the streets. They donʹt yet have bodies to demonstrate that sort of creativity.
The strength of these systems is reflected by an often startling originality in a turn of a phrase, shape, or musical line. Their weakness has to do, again, with context, or the lack thereof. Since these creative computers are deficient in the real‐world experience of their human counterparts, they often lose their train of thought and ramble off into incoherence. Perhaps the most successful in terms of maintaining thematic consistency throughout a work of art is Harold Cohenʹs robotic painter named Aaron, which I discuss below. The primary reason Aaron is so successful is the
thoroughness of its extensive knowledge base, which Cohen has been building, rule by rule, for three decades.
Jamming with Your Computer
The frequent originality of these systems makes them great collaborators with human artists, and in this manner, computers have already had a transforming effect on the arts. This trend is furthest along in the musical arts. Music has always used the most advanced technologies available: the cabinet‐making crafts of the eighteenth century, the metalworking industries of the nineteenth century, and the analog electronics of the 1960s. Today, virtually all commercial music—recordings, movie and television soundtracks—is created on computer music workstations,
which synthesize and process the sounds, record and manipulate the note sequences, generate notation, even automatically generate rhythmic patterns, walking bass lines, and melodic progressions and variations.
Up until recently, instrument‐playing technique was inextricably linked to the sounds created. If you wanted violin sounds, you had to play the violin. The playing techniques derived from the physical requirements of creating the sounds. Now that link has been broken. If you like flute‐playing technique, or just happen to have learned it, you can now use an electronic wind controller that plays just like an acoustic flute yet creates the sounds not only of a variety of flutes, but also of virtually any other instrument, acoustic or electronic. There are now controllers that emulate the playing technique of most popular acoustic instruments, including piano, violin, guitar, drums, and a variety of wind instruments. Since we are no longer limited by the physics of creating sounds acoustically, a new generation of controllers is emerging that bears no resemblance to any conventional acoustic instruments, but instead attempts to optimize the human factors of creating music with our fingers, arms, feet, mouth, and head. All sounds
can now be played polyphonically and can be layered (played simultaneously) and sequenced with one another.
Also, it is no longer necessary to play music in real time—music can be performed at one speed and played back at
another, without changing the pitch or other characteristics of the notes. All sorts of age‐old limitations have been overcome, allowing a teenager in her bedroom to sound like a symphony orchestra or rock band.
A Musical Turing Test
In 1997, Steve Larson, a University of Oregon music professor, arranged a musical variation of the Turing Test by having an audience attempt to determine which of three pieces of music had been written by a computer and which
one of the three had been written two centuries ago by a human named Johann Sebastian Bach. Larson was only slightly insulted when the audience voted that his own piece was the computer composition, but he felt somewhat vindicated when the audience selected the piece written by a computer program named EMI (Experiments in Musical
Intelligence) to be the authentic Bach composition. Douglas Hofstadter, a longtime observer of (and contributor to) the progression of machine intelligence, calls EMI, created by the composer David Cope, ʺthe most thought-provoking project in artificial intelligence that I have ever come across.ʺ [2]
Perhaps even more successful is a program called Improvisor, written by Paul Hodgson, a British jazz saxophone
player. Improvisor can emulate styles ranging from Bach to jazz greats Louis Armstrong and Charlie Parker. The program has attracted its own following. Hodgson himself says, ʺIf I was new in town and heard someone playing like Improvisor, Iʹd be happy to join in.ʺ [3]
The weakness of todayʹs computerized composition is, again, a weakness of context. ʺIf I turn on three seconds of
EMI and ask myself, ʹWhat was that?ʹ I would say Bach,ʺ says Hofstadter. Longer passages are not always so successful. Often ʺitʹs like listening to random lines from a Keats sonnet. You wonder what was happening to Keats
that day. Was he completely drunk?ʺ
The Literary Machine
Hereʹs a question for you: What kind of murderer has fiber?
The answer: A cereal killer.
I hasten to admit that I did not make up this pun myself. It was written by a computer program called JAPE (Joke
Analysis and Production Engine), created by Kim Binsted. JAPE is the state of the art in the automatic writing of bad puns. Unlike EMI, JAPE did not pass a modified Turing Test when it was recently paired with human comedian Steve
Martin. The audience preferred Martin. [4]
The literary arts lag behind the musical arts in the use of technology. This may have to do with the depth and complexity of even routine prose, a quality which Turing recognized when he based his Turing Test on the ability of
humans to generate convincing written language. Computers are nonetheless of significant practical benefit to those
of us who create written works. Of greatest impact is the simple word processor. Not an artificial intelligence technology per se, word processing was derived from the text editors developed during the 1960s at the AI labs at MIT and elsewhere.
This book certainly benefited from the availability of linguistic databases, spell checkers, online dictionaries, not to mention the vast research resources of the World Wide Web. Much of this book was dictated to my personal computer using a continuous speech‐recognition program called Voice Xpress Plus from the dictation division of Lernout & Hauspie (formerly Kurzweil Applied Intelligence), which became available in the middle of my writing the book. With regard to automatic grammar and style checkers, I was forced to turn that particular Microsoft Word feature off, as it seemed to dislike most of my sentences. Iʹll leave the stylistic criticism of this book to my human readers (at least this time around).
A variety of programs help writers brainstorm. Paramind, for example, ʺproduces new ideas from your ideas,ʺ according to its own literature. [5] Other programs allow writers to track the complex histories, characterizations, and interactions of characters in such extended works of fiction as long novels, series of novels, and television drama series.
Programs that write completely original works are particularly challenging because human readers are keenly aware of the myriad syntactic and semantic requirements for sensible written language. Musicians, cybernetic or otherwise, can get away with a bit more inconsistency than authors.
With that in mind, consider the following:
A Story of Betrayal
Dave Striver loved the university. He loved its ivy‐covered clock towers, its ancient and sturdy brick, and its sun-splashed verdant greens and eager youth. He also loved the fact that the university is free of the stark unforgiving trials of the business world—only this isnʹt a fact: Academia has its own tests, and some are as merciless as any in the marketplace. A prime example is the dissertation defense: To earn the Ph.D., to become a doctor, one must pass an oral examination on oneʹs dissertation. This was a test Professor Edward Hart enjoyed giving.
Dave wanted desperately to be a doctor. But he needed the signatures of three people on the first page of his dissertation, the priceless inscriptions which, together, would certify that he had passed his defense. One of the signatures had to come from Professor Hart, and Hart had often said—to others and to himself—that he was honored
to help Dave secure his well‐earned dream.
Well before the defense, Striver gave Hart a penultimate copy of his thesis. Hart read it and told Dave that it was
absolutely first‐rate, and that he would gladly sign it at the defense. They even shook hands in Hartʹs book‐lined office. Dave noticed that Hartʹs eyes were bright and trustful, and his bearing paternal.
At the defense, Dave thought that he eloquently summarized chapter 3 of his dissertation. There were two questions, one from Professor Rogers and one from Dr. Meteer; Dave answered both, apparently to everyoneʹs satisfaction. There were no further objections.
Professor Rogers signed. He slid the tome to Meteer; she too signed, and then slid it in front of Hart. Hart didnʹt
move.
ʺEd?ʺ Rogers said.
Hart still sat motionless. Dave felt slightly dizzy. ʺEdward, are you going to sign?ʺ
Later, Hart sat alone in his office, in his big leather chair, saddened by Daveʹs failure. He tried to think of ways he could help Dave achieve his dream.
Okay, thatʹs the end. Admittedly the story kind of peters out, ending with a whimper rather than a bang. Seattle
writer and editor Susan Mulcahy called the story ʺamateurish,ʺ criticizing the authorʹs grammar and word choice. But Mulcahy was nonetheless surprised and impressed when she learned the author was a computer. The program that
wrote the story, named BRUTUS.1, was created by Selmer Bringsjord, Dave Ferrucci, and a team of software engineers
at Rensselaer Polytechnic Institute. Apparently, BRUTUS.1 is an expert on betrayal, a concept that Bringsjord and Ferrucci spent eight years painstakingly teaching the computer. The researchers acknowledge that their program needs to learn about other issues besides betrayal. ʺInterestingness really combines all of the emotions,ʺ say Bringsjord and Ferrucci, and that is something the cybernetic authors are not yet capable of achieving. [6]
The Cybernetic Poet
Another example of a computerized author is a computer program that I designed called Ray Kurzweilʹs Cybernetic
Poet (RKCP). RKCP is a computer‐generated poetry system, which uses language‐modeling techniques to
automatically generate completely original poetry based on poems that it has ʺread.ʺ [7]
RKCP reads a selection of poems by a particular author or authors (preferably an extensive selection) and then creates a ʺlanguage modelʺ of that authorʹs work based on Markov models, a mathematical cousin of neural nets.
RKCP can then write original poems from that model. As I discussed earlier, RKCP uses a recursive poetry-generation algorithm to achieve the language style, rhythm patterns, and poem structure of the original authors whose poems were analyzed. There are also algorithms to maintain thematic consistency through the poem. The poems are in a similar style to the author(s) originally analyzed but are completely original new poetry. The system even has rules to discourage itself from plagiarizing.
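The general technique is easy to sketch. The following is a minimal illustration of Markov-model text generation, not RKCP itself (the function names, and the absence of RKCP's rhythm, structure, and anti-plagiarism machinery, are my own simplifications): the model records which words follow each run of words in the source poems, then walks those statistics to produce new text.

```python
import random
from collections import defaultdict

def build_model(words, order=2):
    """Map each run of `order` consecutive words to the words seen to follow it."""
    model = defaultdict(list)
    for i in range(len(words) - order):
        model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=12, seed=None):
    """Walk the model, choosing each next word among those observed in the corpus."""
    rng = random.Random(seed)
    order = len(next(iter(model)))          # infer `order` from a key
    out = list(rng.choice(list(model.keys())))
    while len(out) < length:
        followers = model.get(tuple(out[-order:]))
        if not followers:                   # dead end: the corpus never continued here
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Because each step draws only from continuations actually observed, short outputs echo the source author's phrasing while longer walks drift, which is exactly the context weakness described above.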
You can download a free copy of RKCP at www.kurzweiltech.com.
Following is a small sampling of poems written by Ray Kurzweilʹs Cybernetic Poet after it had ʺreadʺ poems by
several famous poets, as well as lesser‐known contemporary poets.
Poems by Ray Kurzweilʹs Cybernetic Poet
PAGE
A haiku written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
Wendy Dennis
Sashay down the page
through the lioness
nestled in my soul.
IMAGINE NOW AND SING
A poem written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
Wendy Dennis and Ray Kurzweil and love poems by various authors
Imagine now and sing,
creating myths
forming jewels from the falling snow.
SOUL
A haiku written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
John Keats and Wendy Dennis
You broke my soul
the juice of eternity,
the spirit of my lips.
I THINK IʹLL CRASH
A poem written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
Ray Kurzweil, Robert Frost, and Wendy Dennis
I think Iʹll crash.
Just for myself with God
peace on a curious sound
for myself in my heart?
And life is weeping
From a bleeding heart
of boughs bending
such paths of them,
of boughs bending
such paths of breeze
knows weʹve been there
THE STIFLING STUFFY
A haiku written by Ray Kurzweilʹs Cybernetic Poet after reading poems
by Patricia Camarena Rose and Wendy Dennis
The stifling stuffy
Catholic schoolroom,
where I cannot be real.
WONDERED
A poem written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
Dave Gitomer
today i wondered
if i mused
today i saw you
i learned
in awe and you
if i mused
today i had one wish
if i saw you
if i saw you
if i had one wish
MOON CHILD
A haiku written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
Kathleen Frances Wheeler
Crazy moon child
Hide from your coffin
To spite your doom.
ANGEL
A poem written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
Patricia Camarena Rose and Sue Klapes Kelly
Beams of the dawn at the angel
with a calm, silent sea
with a hundred times we write,
with a chance we can open up
a steady rhythm in his face
silent room
desolate beach,
Scattering remains of love.
TOMCAT
A haiku written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
Randi and Kathryn Lynn
An old yellow tomcat
lies sleeping content,
he rumbles a heart
LONG YEARS HAVE PASSED
A poem written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
Randi and Kathryn Lynn
Long years have passed.
I think of goodbye.
Locked tight in the night
I think of passion;
Drawn to for blue, the night
During the page
My shattered pieces of life
watching the joy
shattered pieces of love
My shattered pieces of love
gone stale.
SANDALS
A haiku written by Ray Kurzweilʹs Cybernetic Poet after reading poems by
Kimberly Mclauchlin and Ray Kurzweil
Scattered sandals
a call back to myself,
so hollow I would echo.
The Visual Arts
The visual arts lie in between the musical arts and literary arts in terms of the impact of computers. A great deal of visual art is created using paint and illustrator programs, which can simulate the effects of conventional materials such as paint strokes as well as implement a wide range of techniques that could only be executed on a computer.
Recently, computers have also taken over most video and film editing.
The Web is filled with the artistic musings of cybernetic artists. A popular technique is the evolutionary algorithm, which allows the computer to evolve a picture by redoing it hundreds or thousands of times. Humans would find this
approach difficult; they would waste a lot of paint, for one thing. Mutator, the creation of sculptor William Latham and software engineer Stephen Todd at IBM in Winchester, England, uses the evolutionary approach, as does a program written by Karl Sims, an artist and scientist at Genetic Arts, in Cambridge, Massachusetts. [8]
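The shape of such an evolutionary algorithm can be sketched in a few lines. This is a generic illustration, not Mutator's or Sims's actual code (the function names and the mutation scheme are assumptions): a population of numeric ʺgenomesʺ is repeatedly mutated and culled, with the fittest surviving each generation. In the artistic setting, the fitness judgment would be a human choosing the images he or she likes, rather than a formula.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=200, seed=None):
    """Evolve a population of numeric genomes: select the fittest, mutate, repeat."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # best genomes first
        survivors = pop[:pop_size // 2]
        # each survivor produces one slightly mutated offspring
        children = [[g + rng.gauss(0, 0.1) for g in parent] for parent in survivors]
        pop = survivors + children                   # elitism: survivors are kept
    return max(pop, key=fitness)
```

A genome here is just a list of numbers; in an art system it would encode shapes, colors, or growth rules, and ʺredoing the picture hundreds or thousands of timesʺ corresponds to the generations loop.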
Probably the leading practitioner of computer‐generated visual art is Harold Cohen. His computerized robot named Aaron has been evolving and creating drawings and paintings for twenty years. These works of visual art are
completely original, created entirely by the computer, and rendered with real paint. Cohen has spent more than three decades endowing his program with a knowledge of many aspects of the artistic process, including composition, drawing, perspective, and color, as well as a variety of styles. While Cohen wrote the program, the pictures created are nonetheless always a surprise to him.


Cohen is frequently asked who should be given credit for the results of his enterprise, which have been displayed
in museums around the world. [9] Cohen is happy to take the credit, and Aaron has not been programmed to complain. Cohen boasts that he will be the first artist in history who will be able to have a posthumous exhibition of completely original works. [10]
Paintings by Aaron
These five original paintings were painted by Aaron, a computerized robot built and programmed by Harold Cohen.
These color paintings are reproduced here in black and white. You can see the color versions on this bookʹs web site, at www.penguinputnam.com/kurzweil. [11]
Full‐size versions of these, along with 11 more Aaron paintings, can be downloaded at
http://www.kurzweilcyberart.com/aaron/static.html –ed.



PREDICTIONS OF THE PRESENT
With the impending millennium change there is no shortage of anticipations of what the next century will be like.
Futurism has a long history, but not a particularly impressive one. One of the problems with predictions of the future is that by the time itʹs clear that they have had little resemblance to actual events, itʹs too late to get your money back.
Perhaps the problem is that we let just anyone make predictions. Maybe we should require futurism certification
to be allowed to prognosticate. One of the requirements could be that in retrospect, at least half of your ten‐or‐more-year‐ahead predictions have not been completely embarrassing. Such a certification program would be a slow process, however, and I suspect unconstitutional.
To see why futurism has such a spotty reputation, here is a small sample of predictions from some otherwise intelligent people:
ʺThe telephone has too many shortcomings to be seriously considered as a means of communication.ʺ
—Western Union executive, 1876
ʺHeavier‐than‐air flying machines are not possible.ʺ
—Lord Kelvin, 1895
ʺThe most important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplemented by new discoveries is exceedingly remote.ʺ
—Albert Abraham Michelson, 1903
ʺAirplanes have no military value.ʺ
—Professor Marshal Foch, 1912
ʺI think there is a world market for maybe five computers.ʺ
—IBM Chairman Thomas Watson, 1943
ʺComputers in the future may weigh no more than 1.5 tons.ʺ
—Popular Mechanics, 1949
ʺIt would appear that we have reached the limits of what is possible to achieve with computer technology, although one should be careful with such statements, as they tend to sound pretty silly in five years.ʺ
—John von Neumann, 1949
ʺThereʹs no reason for individuals to have a computer in their home.ʺ
—Ken Olson, 1977
ʺ640,000 bytes of memory ought to be enough for anybody.ʺ
—Bill Gates, 1981
ʺLong before the year 2000, the entire antiquated structure of college degrees, majors and credits will be a shambles.ʺ
—Alvin Toffler
ʺThe Internet will catastrophically collapse in 1996.ʺ
—Robert Metcalfe (inventor of Ethernet), who, in 1997,
ate his words (literally) in front of an audience
Now I get to toot my own horn, and can share with you those predictions of mine that worked out particularly well. But in looking back at the many predictions Iʹve made over the past twenty years, I will say that I havenʹt found any that I find particularly embarrassing (except, maybe, for a few early business plans).
The Age of Intelligent Machines, which I wrote in 1987 through 1988, as well as other articles and speeches I wrote in the late 1980s, contained a lot of my predictions about the 1990s, which included the following: [12]
• Prediction: A computer will defeat the human world chess champion around 1998, and weʹll think less of
chess as a result.
What Happened: As I mentioned, this one was a year off. Sorry.
• Prediction: There will be a sustained decline in the value of commodities (that is, material resources) with most new wealth being created in the knowledge content of products and services, leading to sustained economic growth and prosperity.
What Happened: As predicted, everything is coming up roses (except, as also predicted, for long‐term investors in commodities, which are down 40 percent over the past decade). Even the approval ratings of
politicians from the president to the Congress are at an all‐time high. But the strong economy has more to
do with the Bill in the west coast Washington than the Bill in the east coast Washington. Not that Mr. Gates
deserves primary credit, but the driving economic force in the world today is information, knowledge, and
related computer technologies. Federal Reserve Chairman Alan Greenspan recently acknowledged that
todayʹs unprecedented sustained prosperity and economic expansion is due to the increased efficiency
provided by information technology. But thatʹs only half right. Greenspan ignores the fact that most of the
new wealth that is being created is itself comprised of information and knowledge—a trillion dollars in Silicon Valley alone. Increased efficiency is only part of the story. The new wealth in the form of the market
capitalization of computer‐related (primarily software) companies is real and substantial and is lifting all boats.
The U.S. House Subcommittee on Banking reported that in the eight‐year period between 1989 and 1997, the
total value of U.S. real estate and durable goods increased only 33 percent, from $9.1 trillion to $12.1 trillion.
The value of bank deposits and credit market instruments increased only 27 percent, from $4.5 trillion to $5.7 trillion. The value of equity shares, however, increased a staggering 239 percent, from $3.4 trillion to $11.4 trillion! The primary engine of this increase is the rapidly increasing knowledge content of products
and services, as well as the increased efficiencies fostered by information technology. This is where new wealth is being created.
Information and knowledge are not limited by the availability of material resources, and in accordance
with the Law of Accelerating Returns will continue to grow exponentially. The Law of Accelerating Returns
includes financial returns. Thus a key implication of the law is continuing economic growth.
As this book is being written, there has been considerable attention on an economic crisis in Japan and
other countries in Asia. The United States has been pressing Japan to stimulate its economy with tax cuts and government spending. Little attention is being paid, however, to the root cause of the crisis, which is
the state of the software industry in Asia, and the need for effective entrepreneurial institutions that promote the creation of software and other forms of knowledge. These include venture and angel capital,
[13] widespread distribution of employee‐stock options, and incentives that encourage and reward risk
taking. Although Asia has been moving in this direction, these new economic imperatives have grown more
rapidly than most observers expected (and their importance will continue to escalate in accordance with the
Law of Accelerating Returns).
• Prediction: A worldwide information network linking almost all organizations and tens of millions of individuals will emerge (admittedly, not by the name World Wide Web).
What Happened: The Web emerged in 1994 and took off in 1995 through 1996. The Web is truly a worldwide
phenomenon, and products and services in the form of information swirl around the globe oblivious to
borders of any kind. A 1998 report by the U.S. Commerce Department credited the Internet as a key factor
in spurring economic growth and curbing inflation. It predicted that commerce on the Internet will surpass
$300 billion by 2000. Industry reports put the figure at around $1 trillion, when all business‐to‐business transactions conducted over the Web are taken into consideration.
• Prediction: There will be a national movement to wire our classrooms.
What Happened: Most states (with the exception, unfortunately, of my own state of Massachusetts) have $50
to $100 million annual budgets to wire classrooms and install related computers and software. It is a national priority to provide computer and Internet access to all students. Many teachers remain relatively computer illiterate, but the kids are providing much of the needed expertise.
• Prediction: In warfare, there will be almost total reliance on digital imaging, pattern recognition, and other software‐based technologies. The side with the smarter machines will win. ʺA profound change in military
strategy will arrive in the early 1990s. The more developed nations will increasingly rely on ʹsmart
weapons,ʹ which incorporate electronic copilots, pattern‐recognition techniques, and advanced technologies
for tracking, identification, and destruction.ʺ
What Happened: Several years after I wrote The Age of Intelligent Machines, the Gulf War was the first to clearly establish this paradigm. Today, the United States has the most advanced computer‐based weaponry
and remains unchallenged in its status as a military superpower.
• Prediction: The vast majority of commercial music will be created on computer‐based synthesizers.
What Happened: Most of the musical sounds you hear on television, in the movies, and in recordings are
now created on digital synthesizers, along with computer‐based sequencers and sound processors.
• Prediction: Reliable person identification, using pattern‐recognition techniques applied to visual and speech patterns, will replace locks and keys in many instances.
What Happened: Person‐identification technologies that use speech patterns and facial appearance have
begun to be used today in check‐cashing machines and to control entry into secure buildings and sites. [14]
• Prediction: With the advent of widespread electronic communication in the Soviet Union, uncontrollable political forces will be unleashed. These will be ʺmethods far more powerful than the copiers the authorities
have traditionally banned.ʺ The authorities will be unable to control it. Totalitarian control of information
will have been broken.
What Happened: The attempted coup against Gorbachev in August 1991 was undone primarily by cellular telephones, fax machines, electronic mail, and other forms of widely distributed and previously unavailable
electronic communication. Overall, decentralized communication contributed significantly to the crumbling
of centralized totalitarian political and economic government control in the former Soviet Union.
• Prediction: Many documents never exist on paper because they incorporate information in the form of audio and video pieces.
What Happened: Web documents routinely include audio and video pieces, which can only exist in their web
form.
• Prediction: Around the year 2000, chips with more than a billion components will emerge.
What Happened: Weʹre right on schedule.
• Prediction: The technology for the ʺcybernetic chauffeurʺ (self‐driving cars using special sensors in the roads) will become available by the end of the 1990s with implementation on major highways feasible
during the first decade of the twenty‐first century.
What Happened: Self‐driving cars are being tested in Los Angeles, London, Tokyo, and other cities. There were extensive successful tests on Interstate 15 in southern California during 1997. City planners now realize that automated driving technologies will greatly expand the capacity of existing roads. Installing the
requisite sensors on a highway costs only about $10,000 per mile, compared to $1 to $10 million per mile for
building new highways. Automated highways and self‐driving cars will also eliminate most accidents on
these roads. The U.S. National Automated Highway System (NAHS) consortium is predicting
implementation of these systems during the first decade of the twenty‐first century.
• Prediction: Continuous speech recognition (CSR) with large vocabularies for specific tasks will emerge in the early 1990s.
What Happened: Whoops. Large‐vocabulary domain‐specific CSR did not emerge until around 1996. By late
1997 and early 1998, large‐vocabulary CSR without a domain limitation for dictating written documents
(like this book) was commercially introduced. [16]
• Prediction: The three technologies required for a translating telephone (where you speak and listen in one language such as English, and your caller hears you and replies in another language such as German)—
speaker‐independent (not requiring training on a new speaker), continuous, large‐vocabulary speech
recognition; language translation; and speech synthesis—will each exist in sufficient quality for a first generation system by the late 1990s. Thus, we can expect ʺtranslating telephones with reasonable levels of
performance for at least the more popular languages early in the first decade of the twenty‐first century.ʺ
What Happened: Effective, speaker‐independent speech recognition, capable of handling continuous speech
and a large vocabulary, has been introduced. Automatic language translation, which rapidly translates web
sites from one language to another, is available directly from your web browser. Text‐to‐speech synthesis for a wide variety of languages has been available for many years. All of these technologies run on personal
computers. At Lernout & Hauspie (which acquired my speech‐recognition company, Kurzweil Applied
Intelligence, in 1997), we are putting together a technology demonstration of a translating telephone. We expect such a system to be commercially available early in the first decade of the twenty‐first century. [17]
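The translating telephone is architecturally a straightforward chain of the three technologies. A toy sketch, in which every stage function is a hypothetical stub standing in for a real engine (the names, and the trivial dictionary, are mine):

```python
def recognize_speech(audio: str) -> str:
    """Speaker-independent continuous speech recognition (stub).
    A real engine would take audio samples; here the 'audio' is already text."""
    return audio

def translate(text: str, source: str, target: str) -> str:
    """Language translation (stub: a toy word-for-word dictionary)."""
    lexicon = {("en", "de"): {"hello": "hallo", "world": "welt"}}
    table = lexicon.get((source, target), {})
    return " ".join(table.get(w, w) for w in text.lower().split())

def synthesize(text: str) -> str:
    """Text-to-speech synthesis (stub: returns the text to be spoken aloud)."""
    return text

def translating_telephone(audio: str, source: str = "en", target: str = "de") -> str:
    """Chain the three stages: recognize, then translate, then synthesize."""
    return synthesize(translate(recognize_speech(audio), source, target))
```

The point of the sketch is the composition: each stage existed separately by the late 1990s, so building the telephone is a matter of connecting them with adequate quality at every link.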
MY LIFE WITH MACHINES: SOME HIGHLIGHTS