Although computers routinely pass apparently valid forms of the Turing Test, controversy persists about whether or not machine intelligence equals human intelligence in all of its diversity. At the same time, it is clear that there are many ways in which machine intelligence is vastly superior to human intelligence. For reasons of political sensitivity, machine intelligences generally do not press the point of their superiority. The distinction between human and machine intelligence is blurring as machine intelligence is increasingly derived from the design of human intelligence, and human intelligence is increasingly enhanced by machine intelligence.

The subjective experience of machine intelligence is increasingly accepted, particularly since ʺmachinesʺ

participate in this discussion.

Machines claim to be conscious and to have as wide an array of emotional and spiritual experiences as their human progenitors, and these claims are largely accepted.

‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐

I HOPE YOUʹRE HAVING A GOOD TIME MAKING ALL THESE PREDICTIONS.

This part of the book is a bit more fun to write—at least there are fewer references to look up. And I donʹt have to worry about being embarrassed for at least a few decades.

WELL, IT MIGHT BE EASIER IF YOU JUST ASKED ME FOR MY IMPRESSIONS.

Yes, I was just getting to that. But I must say, you look very well.

FOR AN OLD LADY.

I wasnʹt thinking old. But you donʹt look anywhere near fifty. More like thirty‐five.

YES, WELL, FIFTY ISNʹT AS OLD AS IT USED TO BE.

We feel that way in 1999, too.

ITʹS STILL HELPFUL TO EAT RIGHT. WE ALSO HAVE A FEW TRICKS YOU DIDNʹT HAVE. [2]

Nanoengineered bodies?

NO, NOT EXACTLY. NANOTECHNOLOGY IS STILL FAIRLY LIMITED. BIOENGINEERING HAS CERTAINLY

HELPED THE MOST. AGING HAS BEEN DRAMATICALLY SLOWED. MOST DISEASES CAN BE PREVENTED OR

REVERSED.

So nanotechnology is still fairly primitive?

IʹD SAY SO. I MEAN, WE DO HAVE NANOBOTS IN OUR BLOODSTREAMS, BUT THEYʹRE PRIMARILY

DIAGNOSTIC. SO IF ANYTHING STARTS TO GO WRONG, WE CATCH IT VERY EARLY.

So if a nanobot discovers a microscopic infection or other problem developing, what does it do, just start yelling?

YEAH, THATʹS ABOUT IT. I DONʹT THINK WEʹD TRUST IT TO DO MUCH ELSE. IT YELLS TO THE WEB, AND

THEN THE PROBLEM GETS TAKEN CARE OF WHEN WE SIT DOWN FOR OUR NEXT DAILY SCAN.

A three‐dimensional scan?

OF COURSE, WE STILL HAVE THREE‐DIMENSIONAL BODIES.

This is a diagnostic scan?

THE SCAN HAS A DIAGNOSTIC FUNCTION. BUT ITʹS ALSO REMEDIAL. THE SCANNER CAN APPLY

SUFFICIENT ENERGY TO A SMALL THREE‐DIMENSIONAL SET OF POINTS TO DESTROY A COLONY OF

PATHOGENS OR PROBLEMATICAL CELLS BEFORE THEY GET OUT OF HAND.

Is this an electromagnetic energy beam, or a particle beam, or what?

WELL, GEORGE CAN EXPLAIN IT BETTER THAN I CAN. AS I UNDERSTAND IT, IT HAS TWO ENERGY BEAMS

THAT ARE BENIGN BY THEMSELVES, BUT CAUSE PARTICLE EMISSIONS AT THE POINT AT WHICH THEY

CROSS. IʹLL ASK GEORGE NEXT TIME I SEE HIM.

Whenʹs that going to be?

OH, JUST AS SOON AS I GET DONE WITH YOU.

Youʹre not rushing me, are you?

OH, THEREʹS NO HURRY. ITʹS ALWAYS A GOOD IDEA TO BE PATIENT.

Hmmm. So when was the last time the two of you were together?

A FEW MINUTES AGO.

I see. Sounds like your relationship has developed.

OH, IT HAS. HE TAKES VERY GOOD CARE OF ME.

Last time we talked, you werenʹt sure whether he had any feelings.

THAT WAS A LONG TIME AGO. GEORGE IS A DIFFERENT PERSON EVERY DAY. HE JUST GROWS AND

LEARNS CONSTANTLY. HE DOWNLOADS WHATEVER KNOWLEDGE HE WANTS FROM THE WEB AND IT

BECOMES PART OF HIM. HEʹS SO SMART AND INTENSE, AND VERY SPIRITUAL.

Iʹm awfully happy for you. But how does Ben feel about you and George?

HE WASNʹT TOO CRAZY ABOUT IT, THATʹS FOR SURE.

But youʹve worked it out?

WEʹVE WORKED IT OUT, ALL RIGHT. WE BROKE UP THREE YEARS AGO.

Iʹm sorry to hear that.

YEAH, WELL, SEVENTEEN YEARS IS DEFINITELY ABOVE AVERAGE, AS MARRIAGES GO THESE DAYS.

It must have been hard on the kids.

THATʹS TRUE. BUT WE BOTH HAVE DINNER WITH EMILY JUST ABOUT EVERY NIGHT.

You both have dinner with Emily, but not with each other?

EMILY CERTAINLY DOESNʹT WANT TO HAVE DINNER WITH US TOGETHER—THAT WOULDNʹT BE VERY

COMFORTABLE, NOW WOULD IT? SO SHE HAS DINNER WITH US APART.

I see, the good old kitchen table. Now that you donʹt have to deal with Harry Hippo or Miss Simon, thereʹs room for

you and Ben and Emily, but you and Ben donʹt have to actually see each other.

ISNʹT VIRTUAL REALITY GREAT?

Yeah, but too bad people canʹt touch each other without going into the Sensorium.

ACTUALLY, SENSORIUM WENT OUT OF BUSINESS.

Okay, then, total touch.

WE DONʹT NEED TO GO INTO A TOTAL TOUCH ENVIRONMENT ANYMORE, NOT SINCE THE SPINAL

IMPLANTS BECAME AVAILABLE.

So these implants add the tactile environment . . .

TO THE UBIQUITOUS VISUAL AND AUDITORY ENVIRONMENTS WEʹVE HAD FOR MANY YEARS WITH

VIRTUAL REALITY, THATʹS RIGHT.

Sounds like the implants must be pretty popular.

NO, THEYʹRE FAIRLY NEW. ALMOST EVERYONE HAS THE VISUAL AND AUDITORY ENVIRONMENTS NOW,

EITHER AS IMPLANTS OR AT LEAST AS VISUAL AND SONIC LENSES. BUT THE TACTILE IMPLANTS

HAVENʹT QUITE CAUGHT ON YET.

Yet you have them?

YEAH, THEYʹRE REALLY FABULOUS. THERE ARE A FEW GLITCHES, BUT I LIKE BEING ON THE CUTTING

EDGE. IT WAS SUCH A HASSLE HAVING TO USE A TOTAL TOUCH ENVIRONMENT.

Now I can understand how implants could simulate your sense of touch, by generating the nerve impulses that

correspond to a particular set of tactile stimuli. But the total touch environments also provided force feedback, so if youʹre touching a virtual person, you donʹt end up sticking your hand through her body.

WELL, SURE, BUT WE DONʹT MOVE OUR PHYSICAL BODIES IN VIRTUAL REALITY.

You move your virtual body, of course. And the virtual reality system prevents you from moving your virtual hand through a barrier—like someone elseʹs virtual body—in the virtual environment. This all happens using the implants?

RIGHT.

So you could be sitting here talking to me in real reality, while at the same time getting intimate with George in

virtual reality, and with full tactile realism?

WE CALL IT TACTILE VIRTUALISM, BUT YOUʹVE GOT THE IDEA. HOWEVER, THE TACTILE SEPARATION

BETWEEN REAL AND VIRTUAL REALITY IS NOT PERFECT. I MEAN, THIS IS STILL A NEW TECHNOLOGY. SO

IF GEORGE AND I GOT TOO PASSIONATE, I THINK YOUʹD NOTICE.

Thatʹs too bad.

ITʹS NOT A PROBLEM, THOUGH, IN GENERAL, SINCE I ATTEND MOST MEETINGS WITH A VIRTUAL BODY,

ANYWAY. SO WHEN I GET RESTLESS IN THESE INTERMINABLE MEETINGS ON THE CENSUS PROJECT, I

CAN SPEND A FEW PRIVATE MOMENTS WITH GEORGE . . .

Using yet another virtual body?

EXACTLY.

And the tactile separation problem between real reality and one of your virtual realities isnʹt a problem with two

virtual bodies.

NOT REALLY, BUT SOMETIMES PEOPLE CATCH ME SMILING A LOT.

You mentioned glitches . . .

SOMETIMES I FEEL LIKE SOMETHING OR SOMEONE IS TOUCHING ME, BUT IT MIGHT JUST BE MY

IMAGINATION.

Itʹs probably just a worker from the neural implant company remotely testing out the equipment.

HMMM.

So youʹre working on the census?

ITʹS SUPPOSED TO BE AN HONOR. I MEAN ITʹS LIKE THE HOT ISSUE RIGHT NOW. BUT ITʹS JUST ENDLESS

POLITICS. AND ENDLESS MEETINGS.

Well, the census has always used the most cutting‐edge technology. Electrical data processing got its start with the 1890 U.S. census, you know.

TELL ME ABOUT IT. THAT GETS MENTIONED AT LEAST THREE TIMES EVERY MEETING. BUT THE ISSUEʹS

NOT TECHNOLOGY.

Itʹs . . .

WHOʹS A PERSON. THERE ARE PROPOSALS TO START COUNTING VIRTUAL PERSONS OF AT LEAST HUMAN

LEVEL, BUT THEREʹS NO END OF PROBLEMS WITH COMING UP WITH A VIABLE PROPOSAL. VIRTUAL

PERSONS ARE NOT SO READILY COUNTABLE AND DISTINCT, SINCE THEY CAN COMBINE WITH ONE

ANOTHER, OR SPLIT UP INTO MULTIPLE APPARENT PERSONALITIES.

Why donʹt you just count machines that were derived from specific persons?

THERE ARE SOME CYBERNETIC PERSONALITIES WHO CLAIM THAT THEY USED TO BE A PARTICULAR

PERSON, BUT THEYʹRE REALLY JUST PERSONALITY EMULATIONS. THE COMMISSION JUST DIDNʹT THINK

IT WAS APPROPRIATE.

I would agree—personality emulation just doesnʹt cut it. It should be the result of a full neural scan.

PERSONALLY, IʹVE BEEN LEANING TO EXPANDING THE DEFINITION, BUT IʹVE HAD DIFFICULTY COMING

UP WITH A COHERENT METHODOLOGY. THE COMMISSION DID AGREE TO LOOK AT THE PROBLEM

AGAIN WHEN THE NEURAL SCANS ARE EXPANDED TO A MAJORITY OF NEURAL REGIONS. ITʹS A TOUGH

ISSUE, THOUGH. WE DO HAVE PEOPLE WHO HAVE THE VAST MAJORITY OF THEIR MENTAL COMPUTES

TAKING PLACE IN THEIR NANOTUBE IMPLANTS. BUT THE POLITICS SEEMS TO REQUIRE AT LEAST SOME

UNENHANCED ORIGINAL SUBSTRATE TO BE COUNTED.

Original substrate? You mean human neurons?

RIGHT. IF YOU DONʹT REQUIRE SOME NEURON‐BASED THINKING, IT JUST GETS IMPOSSIBLE TO COUNT

DISTINCT MINDS. YET SOME OF THE MACHINES DO MANAGE TO GET COUNTED. THEY SEEM TO ENJOY

ESTABLISHING A HUMAN IDENTITY AND PASSING FOR A HUMAN. ITʹS A BIT OF A GAME.

There must be legal benefits to having a recognized human identity.

THEREʹS KIND OF A STANDOFF. THE OLD LEGAL SYSTEM STILL REQUIRES A HUMAN AGENT OF

RESPONSIBILITY. BUT THE SAME ISSUE OF WHO OR WHAT IS HUMAN COMES UP IN THE LEGAL CONTEXT.

ANYWAY, SO‐CALLED HUMAN DECISIONS ARE HEAVILY INFLUENCED BY THE IMPLANTS. AND THE

MACHINES DONʹT IMPLEMENT SIGNIFICANT DECISIONS WITHOUT THEIR OWN REVIEW. BUT I SUPPOSE

YOUʹRE RIGHT; THERE ARE SOME BENEFITS TO BEING COUNTED.

How about using a Turing Test as a means of counting?

THAT WOULD NEVER DO. FIRST OF ALL, IT WOULDNʹT BE MUCH OF A SCREEN. FURTHERMORE, YOUʹD

HAVE THE SAME PROBLEM AGAIN IN SELECTING A HUMAN JUDGE TO CONDUCT THE TURING TEST.

AND YOUʹD STILL HAVE THE COUNTING ISSUE. TAKE GEORGE, FOR EXAMPLE. HEʹS GREAT AT

IMPRESSIONS. USUALLY, RIGHT AFTER DINNER, HEʹLL ENTERTAIN ME WITH SOME PERSONALITY HEʹS

CONCOCTED. HE COULD SUBMIT THOUSANDS OF PERSONALITIES IF HE WANTED TO.

Speaking of George, doesnʹt he want to be counted?

OH, I THINK HE SHOULD BE. HEʹS SO MUCH WISER AND GENTLER THAN ANYONE ON THE COMMISSION.

I GUESS THATʹS WHY IʹVE WANTED TO EXPAND THE DEFINITION. GEORGE COULD MANAGE TO

ESTABLISH THE REQUISITE IDENTITY ORIGIN IF HE WANTED TO. BUT HE REALLY DOESNʹT CARE ABOUT

IT.

He seems to care mostly about you.

HMMM. THAT COULD BE IT.

You sound a little frustrated with the commission.

WELL, I CAN UNDERSTAND THEIR NEED TO BE CAUTIOUS. I JUST FEEL THAT THEYʹRE UNDULY

INFLUENCED BY THE RY GROUPS.

The Luddites, I mean, Remember York . . .

EXACTLY. I AM SYMPATHETIC TO A LOT OF THE YORK CONCERNS. BUT LATELY THEYʹVE TAKEN STRIDENT POSITIONS AGAINST ANY OF THE NEURAL IMPLANTS, WHICH IS JUST TOO RIGID. THEYʹRE ALSO OPPOSED TO ANY OF THE NEURAL SCANNING RESEARCH.

So theyʹre influencing the census commission to keep a conservative definition of who can be counted as a human?

IʹD SAY SO. THE COMMISSION DENIES IT, BUT THEREʹS A GROWING CONSENSUS THAT THE YORK PEOPLE

HAVE TOO MUCH OF A VOICE THERE. THE COMMISSION DIRECTORʹS BROTHER WAS ACTUALLY A

MEMBER OF THE FLORENCE MANIFESTO BRIGADE.

Florence? Isnʹt that where they locked up Kaczynski?

THATʹS RIGHT—FLORENCE, COLORADO. THE FLORENCE MANIFESTO WAS SMUGGLED OUT BY ONE OF THE

GUARDS BEFORE KACZYNSKIʹS DEATH. ITʹS BECOME A KIND OF BIBLE FOR THE MORE STRIDENT YORK

FACTIONS.

These are violent groups?

GENERALLY, NO. VIOLENCE WOULD BE UTTERLY FUTILE. OCCASIONALLY THERE ARE VIOLENT LONERS,

OR SMALL GROUPS, WHO CLAIM TO BE PART OF THE FM BRIGADE, BUT THEREʹS NO EVIDENCE OF ANY

BROAD CONSPIRACY.

So whatʹs in the Florence Manifesto?

DESPITE IT HAVING BEEN WRITTEN ALL IN LONGHAND USING A PENCIL, IT WAS A RATHER ARTICULATE

AND EFFECTIVE DOCUMENT, PARTICULARLY WITH REGARD TO THE NANO‐PATHOGEN CONCERN.

So what is the concern with nanopathogens?

ACTUALLY, I JUST ATTENDED A CONFERENCE ON THAT.

You attended virtually?

THATʹS USUALLY THE WAY I ATTEND CONFERENCES NOWADAYS. ANYWAY, THE CONFERENCE

SESSIONS OVERLAPPED THE COMMISSION MEETINGS, SO I HAD NO CHOICE.

You can attend more than one meeting at a time?

IT DOES GET A LITTLE CONFUSING. ITʹS KIND OF POINTLESS, THOUGH, TO JUST SIT IN A LONG MEETING

AND NOT DO SOMETHING USEFUL WITH YOUR TIME.

I agree. So, what was the view of the conference?

NOW THAT THE BIOPATHOGEN CONCERN IS ABATING—GIVEN THE NANOPATROL AND SCANNER

TECHNOLOGIES, AND ALL—THERE IS MORE ATTENTION BEING PAID TO THE NANOPATHOGEN THREAT.

How serious is it?

IT HASNʹT BEEN A BIG PROBLEM YET. THERE WAS A WORKSHOP ON A RECENT PHENOMENON OF

NANOPATROLS THAT HAVE RESISTED THE COMMUNICATION PROTOCOLS, AND THAT DID SET OFF A

FEW ALARMS. BUT THEREʹS NOTHING LIKE YOU HAD IN 1999 WITH OVER 100,000 PEOPLE DYING EACH

YEAR FROM ADVERSE REACTIONS TO PHARMACEUTICAL DRUGS. AND THATʹS WHEN THEY WERE

PRESCRIBED AND TAKEN CORRECTLY.

And drugs in 2029?

DRUGS TODAY ARE GENETICALLY ENGINEERED SPECIFICALLY FOR THE INDIVIDUALʹS OWN DNA

COMPOSITION. INTERESTINGLY, THE MANUFACTURING PROCESS THATʹS USED IS BASED ON THE

PROTEIN‐FOLDING WORK THAT WAS ORIGINALLY DESIGNED FOR THE NANOPATROLS. IN ANY EVENT,

DRUGS ARE INDIVIDUALLY TAILORED AND TESTED IN A HOST SIMULATION BEFORE INTRODUCING ANY

SIGNIFICANT VOLUME TO THE ACTUAL HOSTʹS BODY. SO ADVERSE REACTIONS ON A MEANINGFUL

SCALE ARE QUITE RARE.

So there isnʹt much concern with nanopathogens?

OH, I WOULDNʹT SAY THAT. THERE WAS QUITE A BIT OF CONCERN EXPRESSED ABOUT SOME OF THE RECENT SELF‐REPLICATION RESEARCH.

There should be.

BUT THE ENVIRONMENT RESTRUCTURING PROPOSALS SEEM TO REQUIRE IT.

Well, donʹt say I didnʹt warn you.

IʹLL KEEP THAT IN MIND, NOT THAT I HAVE MUCH INFLUENCE ON THE ISSUE.

Your work is mostly on the census issue?

YEAH, FOR THE LAST FIVE YEARS ANYWAY. I SPENT THREE YEARS BASICALLY GOING THROUGH THE

COMMISSIONʹS STUDY GUIDE, SO I COULD BE QUALIFIED TO SIT IN ON THE COMMISSION MEETINGS,

ALTHOUGH I STILL DONʹT HAVE A VOTE.

So you had a three‐year leave to study?

IT FELT LIKE I WAS BACK IN COLLEGE. AND LEARNING WAS JUST AS TEDIOUS AS IT WAS THEN.

Donʹt the neural implants help?

OH, SURE, THEREʹS NO WAY I COULD HAVE GOTTEN THROUGH IT OTHERWISE. UNFORTUNATELY, I STILL

CANʹT JUST DOWNLOAD THE MATERIAL, NOT THE WAY GEORGE CAN. THE IMPLANT PREPROCESSES THE

INFORMATION, AND FEEDS ME THE PREPROCESSED KNOWLEDGE STRUCTURES QUICKLY. BUT ITʹS OFTEN

DISCOURAGING; IT JUST TAKES SO LONG. GEORGE HAS BEEN A BIG HELP, THOUGH. HE KIND OF

WHISPERS TO ME WHEN IʹM PUZZLED ABOUT SOMETHING.

So the three‐year study leave is over now?

ABOUT A YEAR AGO, THE COMMISSION MEETINGS GOT PRETTY INTENSE, AND IʹVE FOCUSED ON THAT.

NOW WITH THE CENSUS ONLY A YEAR AWAY, WEʹRE WORKING ON IMPLEMENTATION. SO ASIDE FROM

THE LAWSUIT, THATʹS PRETTY MUCH IT.

Lawsuit?

OH, JUST A ROUTINE INTELLECTUAL PROPERTY DISPUTE. MY PATENT ON AN ENHANCED

EVOLUTIONARY PATTERN‐RECOGNITION ALGORITHM FOR NANOPATROL DETECTION OF CELL

IMBALANCES WAS ATTACKED WITH A PRIOR ART CITATION. I HAPPENED TO MENTION IN ONE OF THE

DISCUSSION GROUPS THAT I THOUGHT SEVERAL OF THE PATENT CLAIMS WERE BEING INFRINGED, AND

NEXT THING I KNEW I GOT HIT WITH A DECLARATORY JUDGMENT SUIT FROM THE NANOPATROL

INDUSTRY.

I didnʹt know you did work on nanopatrols.

TO BE PERFECTLY HONEST, IT WAS GEORGEʹS INVENTION, BUT HE NEEDED A RESPONSIBLE AGENT.

Since he has no standing.

ITʹS TRUE, THERE ARE STILL SOME LIMITATIONS WHEN YOU CANʹT ESTABLISH YOUR HUMAN ORIGIN.

So howʹs this going to get resolved?

ITʹS UP BEFORE THE MAGISTRATE NEXT MONTH.

It can be rather frustrating taking these technical issues to court.

OH, THIS MAGISTRATE KNOWS HIS STUFF. HEʹS A RECOGNIZED EXPERT ON NANOPATROL PATTERN

RECOGNITION.

Doesnʹt sound like the courts I know.

THE EXPANSION OF THE MAGISTRATE SYSTEM HAS BEEN A VERY POSITIVE DEVELOPMENT. IF WE WERE

LIMITED TO JUST THE HUMAN JUDGES . . .

Oh, so the magistrate is . . .

A VIRTUAL INTELLIGENCE, YES.

So the machines do have some legal standing.

OFFICIALLY, THE VIRTUAL MAGISTRATES ARE AGENTS OF THE HUMAN JUDGE IN CHARGE OF THAT

COURT, BUT THE MAGISTRATES MAKE MOST OF THE DECISIONS.

I see, sounds like these magistrates are pretty influential.

THEREʹS REALLY NO CHOICE. THE ISSUES ARE JUST TOO COMPLICATED, AND THE PROCESS WOULD TAKE

TOO LONG OTHERWISE.

I see. So, tell me about your son.

HEʹS A SOPHOMORE AT STANFORD, AND HAVING A GREAT TIME.

They certainly have a beautiful campus.

YEAH, WEʹVE BEEN LOOKING AT THE OVAL AND QUAD FOR A LONG TIME. JEREMYʹS HAD THREE‐DIMENSIONAL PROJECTIONS OF THE STANFORD CAMPUS ON THE PICTURE PORTALS FOR THE LAST TEN YEARS.

He must feel right at home then.

HE IS AT HOME. HEʹS DOWNSTAIRS.

So heʹs attending virtually.

MOST STUDENTS DO. BUT STANFORD STILL HAS SOME ANACHRONISTIC RULES ABOUT SPENDING AT

LEAST A WEEK EACH QUARTER ACTUALLY ON CAMPUS.

With your physical body?

EXACTLY, WHICH MAKES IT DIFFICULT FOR A VIRTUAL INTELLIGENCE TO ATTEND OFFICIALLY.

Not that they need to, since they can download knowledge directly from the Web.

ITʹS NOT THE KNOWLEDGE BUT THE DISCUSSION GROUPS THAT WOULD BE OF INTEREST.

Canʹt anyone attend the discussion groups?

ONLY THE OPEN DISCUSSIONS. THERE ARE A LOT OF CLOSED DISCUSSION GROUPS.

Which are not on the Web?

OF COURSE THEYʹRE ON THE WEB, BUT YOU NEED A KEY.

Right, so thatʹs how Jeremy attends from home?

EXACTLY. JEREMY AND GEORGE HAVE GROWN QUITE CLOSE LATELY, SO JEREMY LETS GEORGE LISTEN

IN TO THE CLOSED SESSIONS, BUT DONʹT TELL ANYONE THAT.

My lips are sealed. Iʹll only tell my other readers.

WELL, THEY NEED TO KEEP IT CONFIDENTIAL AS WELL.

Iʹll pass that on.

I HOPE THAT WILL BE OKAY. ANYWAY, GEORGE IS HELPING JEREMY WITH HIS HOMEWORK RIGHT NOW.

I hope George doesnʹt do all of it for him.

OH, GEORGE WOULDNʹT DO THAT. HEʹS JUST BEING HELPFUL. HE HELPS ALL OF US. WE REALLY

COULDNʹT MANAGE OTHERWISE.

You know, I could use his help, too. He might help me meet this book deadline I have.

WELL, GEORGE IS CLEVER, BUT IʹM AFRAID HE DOESNʹT SEEM TO HAVE THAT POETIC‐LICENSE

TECHNOLOGY THAT ENABLES YOU TO TALK TO ME FROM THIRTY YEARS AWAY.

Thatʹs too bad.

BUT IʹLL BE HAPPY TO HELP YOU OUT.

Yes, I know, you already have.

‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐

C H A P T E R T W E L V E

2099

When I look out my window

what do you think I see?

. . . so many different people to be.

—Donovan

We know what we are, but know not what we may become.

—William Shakespeare

Human thinking is merging with the world of machine intelligence that the human species initially created.

The reverse engineering of the human brain appears to be complete. The hundreds of specialized regions have been fully scanned, analyzed, and understood. Machine analogues are based on these human models, which have been enhanced and extended, along with many new massively parallel algorithms. These enhancements, combined with the enormous advantages in speed and capacity of electronic/photonic circuits, provide substantial advantages

to machine‐based intelligence.

Machine‐based intelligences derived entirely from these extended models of human intelligence claim to be human, although their brains are not based on carbon‐based cellular processes, but rather electronic and photonic ʺequivalents.ʺ Most of these intelligences are not tied to a specific computational‐processing unit (that is, piece of hardware). The number of software‐based humans vastly exceeds those still using native neuron‐cell‐based computation. A software‐based intelligence is able to manifest bodies at will: one or more virtual bodies at different levels of virtual reality and nanoengineered physical bodies using instantly reconfigurable nanobot swarms.

Even among those human intelligences still using carbon‐based neurons, there is ubiquitous use of neural implant

technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants are unable to meaningfully participate in dialogues with those who do. There are a multiplicity of ways in which these scenarios are combined. The concept of what is human has been significantly altered. The rights and powers of different manifestations of human and machine intelligence and their various combinations represent a primary political and philosophical issue, although the basic rights of machine‐based intelligence have been settled.

There is a plethora of trends that we can already taste and feel in 2099 that will continue to accelerate in this coming twenty‐second century, interacting with each other, and

‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐

YES, YES, AS NIELS BOHR LIKED TO SAY, ʺITʹS HARD TO PREDICT, ESPECIALLY THE FUTURE.ʺ SO WHY

DONʹT YOU JUST CONTINUE WITH MY OBSERVATIONS. THAT WILL BE EASIER AND LESS CONFUSING.

Perhaps that makes sense.

AFTER ALL, A HUNDRED YEARS IS A LONG TIME. AND THE TWENTY‐FIRST CENTURY WAS LIKE TEN

CENTURIES IN ONE.

We thought that was true for the twentieth century.

THE SPIRAL OF ACCELERATING RETURNS LIVES ON.

Iʹm not surprised. Anyway, you do look amazing.

YOU SAY THAT EVERY TIME WE MEET.

I mean you look twenty again, only more beautiful than at the start of the book.

I KNEW THATʹS HOW YOUʹD WANT ME.

Great, now Iʹm going to be accused of preferring younger women.

IʹM GLAD IʹM IN 2099.

Thanks.

HEY, I CAN LOOK UGLY, TOO.

Thatʹs okay.

NO REALLY, I CAN LOOK UGLY WITHOUT CHANGING MY APPEARANCE. ITʹS LIKE THAT WITTGENSTEIN

QUOTE, ʺIMAGINE THIS BUTTERFLY EXACTLY AS IT IS, BUT UGLY INSTEAD OF BEAUTIFUL.ʺ

I was always a little confused by that quote, but Iʹm glad youʹre quoting twentieth‐century thinkers.

WELL, YOU WOULDNʹT BE FAMILIAR WITH THE TWENTY‐FIRST‐CENTURY ONES.

So youʹre expressing this appearance. But I donʹt have the ability to see virtual reality, so I donʹt . . .

UNDERSTAND HOW YOU CAN SEE ME?

Right.

MY BODY RIGHT NOW IS JUST A LITTLE FOG SWARM PROJECTION. NEAT, HUH?

Not bad, not bad at all. You feel pretty good, too.

I THOUGHT IʹD GIVE YOU A HUG, I MEAN THE BOOKʹS ALMOST OVER.

This is quite a technology.

OH, WE DONʹT USE THE SWARMS SO OFTEN ANYMORE.

Last time I saw you, there were no nanobot swarms. Now youʹre mostly past using them. Guess I missed a phase there.

OH, ONE OR TWO. ITʹS BEEN SEVENTY YEARS SINCE WE LAST SAW EACH OTHER! AND AN EVER

ACCELERATING SEVENTY YEARS AT THAT.

Weʹll have to see each other more often.

I DONʹT KNOW IF THAT WILL BE POSSIBLE. THE BOOKʹS COMING TO AN END, AS YOU SAID.

So, are you and George still close?

OH, VERY CLOSE. WEʹRE NEVER APART.

Never? Donʹt you get bored with each other?

DO YOU GET BORED WITH YOURSELF?

Actually, sometimes I do. But are you saying that you and George have, whatʹs the word Iʹm looking for . . .

MERGED?

Hmmm. Is this like a corporate merger?

WELL, MORE LIKE A JOINING OF TWO SOCIETIES.

Two societies of mind?

EXACTLY. OUR MIND IS NOW JUST ONE BIG HAPPY SOCIETY.

The female spider devouring the little male spider?

OH NO, GEORGE IS THE BIG SPIDER. HIS MIND WAS LIKE . . .

A galaxy?

ALL RIGHT, LETʹS NOT GET CARRIED AWAY, MAYBE LIKE A BIG SOLAR SYSTEM.

So youʹve joined societies, or, uh, joined your societies. So you canʹt make love to each other anymore?

THAT DOESNʹT FOLLOW AT ALL.

Okay, I guess some things are beyond my 1999 comprehension.

THAT DOESNʹT FOLLOW EITHER. THE PROFOUND THING ABOUT HUMAN BEINGS—EVEN MOSHs—IS

THAT ALMOST NOTHING IS TRULY BEYOND YOUR COMPREHENSION. THAT JUST WASNʹT TRUE OF THE

OTHER PRIMATES.

Okay, my questions are getting queued up now. MOSHs?

OH, MOSTLY ORIGINAL SUBSTRATE HUMANS.

Yes, of course—unenhanced . . .

EXACTLY.

But how can you be intimate with George now that youʹve joined forces, so to speak?

WELL, AS BARRY SPACKSʹS POEM—

You mean ʺMade double by his lust, he sounds a womanʹs groans . . .ʺ

RIGHT, I MEAN EVEN MOSHs SPLIT THEMSELVES—

When weʹre by ourselves—

OR WITH ANOTHER. THATʹS REALLY THE ULTIMATE, DONʹT YOU THINK, TO BECOME THE OTHER PERSON

AND YOURSELF AT THE SAME TIME.

Especially when the other person is already part of yourself.

SURE. BUT GEORGE AND I CAN STILL SPLIT OURSELVES, AT LEAST OUR OUTER LAYERS.

Layers?

OKAY, WELL MAYBE SOME THINGS ARE HARD TO EXPLAIN TO A MOSH, EVEN A NICE ONE LIKE

YOURSELF.

Yeah, a MOSH that created you, donʹt forget.

OH, IʹLL NEVER FORGET. IʹLL BE GRATEFUL FOREVER. YOU CAN THINK OF THE OUTER LAYERS AS OUR

PERSONALITIES.

So, you separate your personalities . . .

AT TIMES. BUT WE STILL SHARE OUR KNOWLEDGE STORES AT ALL TIMES.

Sounds like the two of you have a lot in common.

[GIGGLES]

I see you still have your old personality.

OF COURSE IʹVE KEPT MY OLD PERSONALITY. IT HAS A LOT OF SENTIMENTAL VALUE TO ME.

I see, so you have others?

YEAH, MY FAVORITES ARE A FEW THAT GEORGE CAME UP WITH.

Creative guy.

OH YES.

Well, having multiple personalities is not all that special. Weʹve had people like that in the twentieth century, too.

SURE, I REMEMBER. BUT THERE WASNʹT ENOUGH THINKING TO GO AROUND FOR ALL THOSE

PERSONALITIES WHEN THEYʹRE STUCK IN JUST ONE MOSH BRAIN. SO IT WAS DIFFICULT FOR ALL OF

THOSE PERSONALITIES TO SUCCEED IN LIFE.

So what are you doing right now?

IʹM TALKING TO YOU.

Yes, I know, but what else are you doing?

REALLY NOT MUCH. IʹM TRYING TO PAY MOST OF MY ATTENTION TO YOU.

Not much? So you are doing something else.

I REALLY CANʹT THINK OF ANYTHING.

Well, are you relating to someone else at the moment?

YOUʹRE PRETTY NOSY.

Weʹve already established that decades ago. But that doesnʹt answer the question.

WELL, NOT REALLY.

Not really? So you are.

ALL RIGHT, ASIDE FROM GEORGE, NOT REALLY.

Iʹm glad Iʹm not distracting you too much. Okay, what else?

JUST FINISHING UP THIS SYMPHONY.

Is this a new interest?

IʹM REALLY JUST DABBLING, BUT CREATING MUSIC IS A GREAT WAY FOR ME TO STAY CLOSE WITH

JEREMY AND EMILY.

Creating music sounds like a good thing to do with your kids, even if they are almost ninety years old. So, can I hear it?

IʹM AFRAID YOU WOULDNʹT UNDERSTAND IT.

So it requires enhancement to understand?

YES, MOST ART DOES. FOR STARTERS, THIS SYMPHONY IS IN FREQUENCIES THAT A MOSH CANʹT HEAR,

AND HAS MUCH TOO FAST A TEMPO. AND IT USES MUSICAL STRUCTURES THAT A MOSH COULD NEVER

FOLLOW.

Canʹt you create art for nonaugmented humans? I mean thereʹs still a lot of depth possible. Consider Beethoven—he

wrote almost two centuries ago, and we still find his music exhilarating.

YES, THEREʹS A GENRE OF MUSIC—ALL THE ARTS, ACTUALLY—WHERE WE CREATE MUSIC AND ART THAT

A MOSH IS CAPABLE OF UNDERSTANDING.

And then you play MOSH music for MOSHs?

HMMM, NOW THEREʹS AN INTERESTING IDEA. I SUPPOSE WE COULD TRY THAT, ALTHOUGH MOSHs ARE

NOT THAT EASY TO FIND ANYMORE. ITʹS REALLY NOT NECESSARY, THOUGH. WE CAN CERTAINLY

UNDERSTAND WHAT A MOSH IS CAPABLE OF UNDERSTANDING. THE POINT, THOUGH, IS TO USE THE

MOSH LIMITATIONS AS AN ADDED CONSTRAINT.

Sort of like composing new music for old instruments.

YEAH, NEW MUSIC FOR OLD MINDS.

Okay, so aside from your, uh, dialogue with George, and this symphony, I have your complete attention?

WELL, NOW GEORGE AND I ARE SHARING A HAMBURGER FOR LUNCH.

I thought you were a vegetarian.

ITʹS NOT A HAMBURGER FROM A COW, SILLY.

Of course, a swarm hamburger.

NO, NO, YOUʹRE GETTING A LITTLE CONFUSED. WE DID HAVE NANOPRODUCED FOOD ABOUT HALF A

CENTURY AGO. SO WE COULD EAT MEAT, OR ANYTHING WE WANTED, BUT IT DIDNʹT COME FROM

ANIMALS, AND IT HAD THE RIGHT NUTRITIONAL COMPOSITION. BUT EVEN THEN, YOU REALLY

WOULDNʹT WANT TO EAT A SWARM PROJECTION—SWARMS ARE JUST FOR VISUAL‐AUDITORY‐TACTILE

PROJECTIONS IN REAL REALITY. YOUʹRE FOLLOWING ME?

Uh, sure.

WELL, A COUPLE OF DECADES LATER, OUR BODIES WERE BASICALLY REPLACED WITH

NANOCONSTRUCTED ORGANS. SO WE DIDNʹT NEED TO EAT ANYMORE IN REAL REALITY. BUT WE STILL

ENJOYED SHARING A MEAL IN VIRTUAL REALITY. ANYWAY, THE NANOCONSTRUCTED BODIES WERE

PRETTY INFLEXIBLE. I MEAN, IT TOOK SECONDS TO RECONSTRUCT THEM INTO ANOTHER FORM. SO

TODAY, WHEN NECESSARY, OR DESIRABLE, WE JUST PROJECT AN APPROPRIATE BODY.

Using the nanobot swarms?

THATʹS ONE WAY OF DOING IT. THATʹS WHAT IʹM DOING WITH YOU NOW.

Since Iʹm a MOSH.

RIGHT, BUT IN MOST OTHER CIRCUMSTANCES, I JUST USE AN AVAILABLE VIRTUAL CHANNEL.

Okay, I think Iʹm following you now.

LIKE I SAID, MOSHs CAN UNDERSTAND ALMOST ANYTHING. WE DO HAVE A LOT OF RESPECT FOR

MOSHs.

Itʹs your heritage, after all.

RIGHT, AND ANYWAY, WEʹRE REQUIRED TO, SINCE THE GRANDFATHER LEGISLATION.

Okay, let me guess. MOSHs were protected by grandfathering native minds.

YES, BUT NOT ONLY MOSHs. ITʹS REALLY A PROGRAM TO PROTECT OUR WHOLE BIRTHRIGHT, A

REVERENCE FOR WHERE WEʹVE BEEN.

So you still like to eat?

SURE. SINCE WEʹRE BASED ON OUR MOSH HERITAGE, OUR EXPERIENCES—EATING, MUSIC, SEXUALITY—

HAVE THE OLD FOUNDATION, ALBEIT VASTLY EXPANDED. HOWEVER, WE DO HAVE A WIDE RANGE OF

CURRENT EXPERIENCES THAT ARE DIFFICULT TO TRACE, ALTHOUGH THE ANTHROPOLOGISTS KEEP

TRYING.

Iʹm still surprised that youʹd be interested in eating a hamburger.

ITʹS A THROWBACK, I KNOW. A LOT OF OUR ACTS AND THOUGHTS ARE ROOTED IN THE PAST. BUT NOW

THAT YOU MENTION IT, I THINK IʹVE LOST MY APPETITE.

Sorry about that.

YEAH, WELL, I SHOULD BE MORE SENSITIVE. SHELBY, A GOOD FRIEND OF MINE, LOOKS LIKE A COW, AT

LEAST THATʹS HOW SHE ALWAYS MANIFESTS HERSELF. SHE CLAIMS THAT SHE WAS A COW BROUGHT

OVER TO THE OTHER SIDE AND ENHANCED. BUT NO ONE BELIEVES HER.

So how satisfying is it to eat a virtual hamburger in virtual reality?

ITʹS VERY SATISFYING—THE TEXTURE, TASTE, AROMA IS WONDERFUL—JUST HOW I REMEMBER IT, EVEN

IF I WAS A VEGETARIAN MOST OF THE TIME. THE NEURAL MODELS NOT ONLY SIMULATE OUR VISUAL,

AUDITORY, AND TACTILE ENVIRONMENTS, BUT OUR INTERNAL ENVIRONMENTS AS WELL.

Including digestion?

YES, THE MODEL OF BIOCHEMICAL DIGESTION IS QUITE ACCURATE.

How about indigestion?

WE DO SEEM TO MANAGE TO AVOID THAT.

Youʹre missing something there.

HMMM.

Okay, you were an attractive young woman when I first met you. And you still project yourself as a beautiful young

woman. At least when Iʹm with you.

THANKS.

So, are you saying that youʹre a machine now?

A MACHINE? THATʹS REALLY NOT FOR ME TO SAY. ITʹS LIKE ASKING ME IF IʹM BRILLIANT OR INSPIRING.

I guess the word machine in 2099 doesnʹt have quite the same connotations that it has here in 1999.

THATʹS HARD FOR ME TO RECALL NOW.

Okay, letʹs put it this way. Do you still have any carbon‐based neural circuits?

CIRCUITS, IʹM NOT SURE I UNDERSTAND. YOU MEAN MY OWN CIRCUITS?

Gee, I guess a lot of time has gone by.

ALL RIGHT, LOOK, WE DID HAVE OUR OWN MENTAL MEDIUM FOR A FEW DECADES, AND THERE ARE

STILL LOCAL INTELLIGENCES THAT LIKE TO STICK TO A SPECIFIC COMPUTATIONAL UNIT. BUT THATʹS A

REFLECTION OF SOME OLD ATTACHMENT ANXIETY. THESE LOCAL INTELLIGENCES DO MOST OF THEIR

THINKING OUT ON THE WEB ANYWAY, SO ITʹS JUST A SENTIMENTAL ANACHRONISM.

An anachronism, like having your own body?

I CAN HAVE MY OWN BODY ANYTIME I WANT.

But you donʹt have a specific neural substrate?

WHY WOULD I WANT THAT? ITʹS JUST A LOT OF MAINTENANCE, AND SO LIMITING.

So, at some point, Mollyʹs neural circuits were scanned?

YEAH, ME, MOLLY. AND IT DIDNʹT HAPPEN ALL AT ONCE, BY THE WAY.

But donʹt you wonder if youʹre the same person?

OF COURSE I AM. I CAN CLEARLY REMEMBER MY EXPERIENCES BEFORE WE STARTED SCANNING MY

MIND, DURING THE DECADE THAT PORTIONS WERE REINSTANTIATED, AND SINCE.

Sure, youʹve inherited all of Mollyʹs memories.

OH NO, NOT THIS ISSUE AGAIN.

I donʹt mean to challenge you. But just consider that Mollyʹs neural scan was instantiated in a copy which became you. Molly might still have continued to exist and may have evolved off in some other direction.

WE JUST DONʹT THINK THATʹS A VALID PERSPECTIVE. WE SETTLED THAT ISSUE AT LEAST TWENTY YEARS AGO.

Well, of course you feel that way now. Youʹre on the other side.

WELL, EVERYONE IS.

Everyone?

OKAY, NOT QUITE EVERYONE. BUT THERE IS NO DOUBT IN MY MIND THAT—

Youʹre Molly.

I THINK I KNOW WHO I AM.

Well, I have no problem with you as Molly.

YOU MOSHs ALWAYS WERE A PUSHOVER.

It is hard to compete with you folks on the other side.

SURE IT IS. THATʹS WHY MOST OF US ARE OVER HERE.

Iʹm not sure I can push the identity issue much further.

THATʹS ONE REASON ITʹS NO LONGER AN ISSUE.

So why donʹt we talk about your work. Are you still consulting for the census commission?

I WAS INVOLVED IN THAT FOR HALF A CENTURY, BUT I GOT KIND OF BURNED OUT ON IT. ANYWAY, THE ISSUE NOW IS MOSTLY IMPLEMENTATION.

So the issue of how to count is resolved?

WE DONʹT COUNT PEOPLE ANYMORE. IT BECAME CLEAR THAT COUNTING INDIVIDUAL PERSONS WASNʹT TOO MEANINGFUL. AS IRIS MURDOCH SAID, ʺITʹS HARD TO TELL WHERE ONE PERSON ENDS AND ANOTHER BEGINS.ʺ

ITʹS RATHER LIKE TRYING TO COUNT IDEAS OR THOUGHTS.

So what do you count?

OBVIOUSLY, WE COUNT COMPUTES.

You mean, like calculations per second.

HMMM, ITʹS A LITTLE MORE COMPLICATED THAN THAT, BECAUSE OF THE QUANTUM COMPUTING.

I didnʹt expect it to be simple. But whatʹs the bottom line?

WELL, WITHOUT QUANTUM COMPUTING, WEʹRE UP TO ABOUT 10⁵⁵ CALCULATIONS PER SECOND. [1]

Per person?

NO, WE EACH GET WHATEVER COMPUTATION WE WANT. THATʹS THE TOTAL FIGURE.

For the whole planet?

SORT OF. I MEAN NOT ALL OF IT IS LITERALLY ON THE PLANET.

And with quantum computing?

WELL, ABOUT 10⁴² OF THE COMPUTATIONS ARE QUANTUM COMPUTATIONS, WITH ABOUT 1,000 QU‐BITS BEING TYPICAL. SO THATʹS EQUIVALENT TO ABOUT 10³⁴² CALCULATIONS PER SECOND, BUT THE QUANTUM COMPUTATIONS ARE NOT ENTIRELY GENERAL PURPOSE, SO THE 10⁵⁵ FIGURE IS STILL RELEVANT. [2]

Hmmm, Iʹve only got about 10¹⁶ cps in my MOSH brain, at least on a good day.

TURNS OUT THERE IS SOME QUANTUM COMPUTING IN YOUR MOSH BRAIN, SO ITʹS HIGHER.

Thatʹs reassuring. So if youʹre not working on the census, what are you up to?

WE DONʹT HAVE JOBS EXACTLY.

I know what thatʹs like.

ACTUALLY, YOUʹRE NOT A BAD MODEL FOR WORK IN THE LATE TWENTY‐FIRST CENTURY. WEʹRE ALL BASICALLY ENTREPRENEURS.

Sounds like some things have moved in the right direction. So what are some of your enterprises?

ONE IDEA I HAVE IS A UNIQUE WAY OF CATALOGING NEW TECHNOLOGY PROPOSALS. ITʹS A MATTER OF MATCHING THE USERʹS KNOWLEDGE STRUCTURES TO THE EXTERNAL WEB KNOWLEDGE, AND THEN INTEGRATING THE RELEVANT PATTERNS.

Iʹm not sure I followed that. But give me an example of a recent research proposal that youʹve cataloged.

MOST OF THE CATALOGING IS AUTOMATIC. BUT I DID GET INVOLVED IN TRYING TO QUALIFY SOME OF THE RECENT FEMTOENGINEERING PROPOSALS. [3]

Femto, as in one thousandth of a trillionth of a meter?

EXACTLY. DREXLER HAS WRITTEN A SERIES OF PAPERS SHOWING THE FEASIBILITY OF BUILDING TECHNOLOGY ON THE FEMTOMETER SCALE, BASICALLY EXPLOITING FINE STRUCTURES WITHIN QUARKS TO DO COMPUTING.

Has anyone done this?

NO ONE HAS DEMONSTRATED IT, BUT THE DREXLER PAPERS APPEAR TO SHOW THAT ITʹS PRACTICAL. AT LEAST THATʹS MY VIEW, BUT ITʹS PRETTY CONTROVERSIAL.

This is the same Drexler who developed the nanotechnology concept in the 1970s and 1980s?

YEAH, ERIC DREXLER.

That makes him around 150, so he must be on the other side.

OF COURSE, ANYONE DOING SERIOUS WORK HAS TO BE ON THE OTHER SIDE.

You mentioned papers. You still have papers?

YES, WELL SOME ARCHAIC TERMS HAVE STUCK. WE CALL THEM MOSHISMS. PAPERS ARE CERTAINLY NOT RENDERED ON ANY PHYSICAL SUBSTANCE. BUT WE STILL CALL THEM PAPERS.

What language are they written in, English?

UNIVERSITY PAPERS ARE GENERALLY PUBLISHED USING A STANDARD SET OF ASSIMILATED KNOWLEDGE PROTOCOLS, WHICH CAN BE INSTANTLY UNDERSTOOD. SOME REDUCED STRUCTURE FORMS HAVE ALSO EMERGED, BUT THOSE ARE GENERALLY USED IN MORE POPULAR PUBLICATIONS.

You mean, like the National Enquirer?

THATʹS A PRETTY SERIOUS PUBLICATION. THEY USE THE FULL PROTOCOL.

I see.

SOMETIMES, PAPERS ARE ALSO RENDERED IN RULE‐BASED FORMS, BUT THESE ARE USUALLY NOT SATISFACTORY. THERE IS A QUAINT TREND OF POPULAR PUBLICATIONS PUBLISHING ARTICLES IN MOSH LANGUAGES SUCH AS ENGLISH, BUT WE CAN TRANSLATE THESE INTO ASSIMILATED KNOWLEDGE STRUCTURES RATHER QUICKLY. LEARNING IS NOT THE STRUGGLE IT ONCE WAS. NOW THE STRUGGLE IS DISCOVERING NEW KNOWLEDGE TO LEARN.

Any other recent trends that youʹve gotten involved in?

WELL, THE AUTOMATIC CATALOGING AGENTS HAD DIFFICULTY WITH THE SUICIDE‐MOVEMENT PROPOSALS.

Which are?

THE IDEA IS TO HAVE THE RIGHT TO TERMINATE YOUR MIND FILE AS WELL AS TO DESTROY ALL COPIES. REGULATIONS REQUIRE KEEPING AT LEAST THREE BACKUP COPIES OF NO MORE THAN TEN MINUTESʹ VINTAGE, WITH AT LEAST ONE OF THESE COPIES IN THE CONTROL OF THE AUTHORITIES.

I can see the problem. Now if you were told that all copies were going to be destroyed, they could secretly keep a copy and instantiate it at a later time. Youʹd never know. Doesnʹt that contradict the premise that those on the other side are the same person—the same continuity of consciousness—as the original person?

I DONʹT THINK THAT FOLLOWS AT ALL.

Can you explain that?

YOU WOULDNʹT UNDERSTAND.

I thought I could understand most anything.

I DID SAY THAT. I GUESS IʹLL HAVE TO GIVE THAT MORE THOUGHT.

Youʹll have to give more thought to whether a MOSH can understand any concept, or the consciousness‐continuation issue?

I GUESS NOW IʹM CONFUSED.

All right, well, tell me more about this ʺdestroy all copiesʺ movement.

WELL, I REALLY CAN SEE BOTH SIDES OF THE ISSUE. ON THE ONE HAND, IʹVE ALWAYS SYMPATHIZED WITH THE RIGHT TO CONTROL ONEʹS OWN DESTINY. ON THE OTHER HAND, ITʹS A SIN TO DESTROY KNOWLEDGE.

And the copies represent knowledge?

WHY SURE. LATELY, THE DESTROY‐ALL‐COPIES MOVEMENT HAS BEEN THE PRIMARY YORK ISSUE.

Now wait a second. If I recall correctly, the Yorks are antitechnologists, yet only those of you on the other side would be concerned about the destroy‐all‐copies issue. If Yorks are on the other side, how can they be against technology?

Or if theyʹre not on the other side, then why would they care about this issue?

OKAY, REMEMBER ITʹS BEEN SEVENTY YEARS SINCE WEʹVE TALKED. THE YORK GROUPS DO HAVE THEIR ROOTS IN THE OLD ANTITECHNOLOGY MOVEMENTS, BUT NOW THAT THEYʹRE ON THE OTHER SIDE, THEYʹVE DRIFTED TO A SOMEWHAT DIFFERENT ISSUE, SPECIFICALLY INDIVIDUAL FREEDOM. THE FLORENCE MANIFESTO PEOPLE, ON THE OTHER HAND, HAVE KEPT A COMMITMENT TO REMAINING MOSHs, WHICH, OF COURSE, I RESPECT.

Thank you. And theyʹre protected by the grandfather legislation?

INDEED. I HEARD A PRESENTATION BY AN FM DISCUSSION LEADER THE OTHER DAY, AND WHILE SHE WAS SPEAKING IN A MOSH LANGUAGE, THERE WAS JUST NO WAY THAT SHE DOESNʹT HAVE AT LEAST A NEURAL EXPANSION IMPLANT.

Well, us MOSHs can make sense from time to time.

OH, OF COURSE. I DIDNʹT MEAN TO IMPLY OTHERWISE, I MEAN . . .

Thatʹs okay. So are you involved in this destroy‐all‐copies movement?

JUST IN CATALOGING SOME OF THE PROPOSALS AND DISCUSSIONS. BUT I DID GET INVOLVED IN A RELATED MOVEMENT TO BLOCK LEGAL DISCOVERY OF THE BACKUP DATA.

That sounds important. But what about discovery of the mind file itself? I mean, all of your thinking and memory is right there in digital form.

ACTUALLY, ITʹS BOTH DIGITAL AND ANALOG, BUT YOUR POINT IS WELL TAKEN.

So . . .

THERE HAVE BEEN RULINGS ON LEGAL DISCOVERY OF THE MIND FILE. BASICALLY, OUR KNOWLEDGE STRUCTURES THAT CORRESPOND TO WHAT USED TO CONSTITUTE DISCOVERABLE DOCUMENTS AND ARTIFACTS ARE DISCOVERABLE. THOSE STRUCTURES AND PATTERNS THAT CORRESPOND TO OUR THINKING PROCESS ARE NOT SUPPOSED TO BE. AGAIN, THIS IS ALL ROOTED IN OUR MOSH PAST. BUT AS YOU CAN IMAGINE, THEREʹS ENDLESS LITIGATION ON HOW TO INTERPRET THIS.

So legal discovery of your primary mind file is resolved, albeit with some ambiguous rules. And the backup files?

BELIEVE IT OR NOT, THE BACKUP ISSUE IS NOT ENTIRELY RESOLVED. DOESNʹT MAKE A LOT OF SENSE, DOES IT?

The legal system was never entirely consistent. What about testimony—do you have to be physically present?

SINCE MANY OF US DONʹT HAVE A PERMANENT PHYSICAL PRESENCE, THAT WOULDNʹT MAKE MUCH SENSE, NOW WOULD IT.

I see, so you can give testimony with a virtual body?

SURE, BUT YOU CANʹT BE DOING ANYTHING ELSE WHILE TESTIFYING.

No asides with George, then.

RIGHT.

That sounds about right. Here in 1999, you canʹt bring coffee into a courtroom and you have to turn off your cell phone.

ASIDE FROM DISCOVERY, THEREʹS A LOT OF CONCERN THAT GOVERNMENT INVESTIGATORY AGENCIES CAN ACCESS THE BACKUPS, ALTHOUGH THEY DENY IT.

Iʹm not surprised that privacy is still an issue. Phil Zimmerman . . .

THE PGP GUY?

Oh, you remember him?

SURE, A LOT OF PEOPLE CONSIDER HIM A SAINT.

His ʺPretty Good Privacyʺ is indeed pretty good—itʹs the leading encryption algorithm circa 1999. Anyway, he said that ʺin the future, weʹll all have fifteen minutes of privacy.ʺ

FIFTEEN MINUTES WOULD BE GREAT.

Okay. Now what about the self‐replicating nanobots you were concerned about in 2029?

WE STRUGGLED WITH THAT FOR SEVERAL DECADES, AND THERE WERE A NUMBER OF SERIOUS INCIDENTS. BUT WEʹRE PRETTY MUCH PAST THAT NOW SINCE WE DONʹT PERMANENTLY MANIFEST OUR BODIES ANYMORE. AS LONG AS THE WEB IS SECURE, THEN WE HAVE NOTHING TO WORRY ABOUT.

Now that you exist as software, there must be concern again with software viruses.

THATʹS PRETTY INSIGHTFUL. SOFTWARE PATHOGENS COMPRISE THE PRIMARY CONCERN OF THE SECURITY AGENCIES. THEYʹRE SAYING THAT THE VIRUS SCANS ACTUALLY CONSUME MORE THAN HALF OF THE COMPUTATION ON THE WEB.

Just to look for virus matches.

VIRUS SCANS INVOLVE A LOT MORE THAN JUST MATCHING PATHOGEN CODES. THE SMARTER SOFTWARE PATHOGENS ARE CONSTANTLY TRANSFORMING THEMSELVES. THERE ARE NO LAYERS TO RELIABLY MATCH ON.

Sounds tricky.

WE CERTAINLY DO HAVE TO BE CONSTANTLY ON GUARD AS WE MANAGE THE FLOW OF OUR THOUGHTS ACROSS THE SUBSTRATE CHANNELS.

What about security of the hardware?

YOU MEAN THE WEB?

Thatʹs where you exist, isnʹt it?

SURE. THE WEB IS VERY SECURE BECAUSE ITʹS EXTREMELY DECENTRALIZED AND REDUNDANT. AT LEAST, THATʹS WHAT WEʹRE TOLD. LARGE PORTIONS OF IT COULD BE DESTROYED WITH ESSENTIALLY NO EFFECT.

There must be an ongoing effort to maintain it as well.

THE WEB HARDWARE IS SELF‐REPLICATING NOW, AND IS CONTINUALLY EXPANDING. THE OLDER CIRCUITS ARE CONTINUALLY RECYCLED AND REDESIGNED.

So thereʹs no concern with its security?

I SUPPOSE I DO HAVE SOME SENSE OF ANXIETY ABOUT THE SUBSTRATE. IʹVE ALWAYS ASSUMED THAT THIS FREE‐FLOATING, ANXIOUS FEELING WAS JUST ROOTED IN MY MOSH PAST. BUT ITʹS REALLY NOT A PROBLEM. I CANʹT IMAGINE THAT THE WEB COULD BE VULNERABLE.

What about from self‐replicating nanopathogens?

HMMM, I SUPPOSE THAT COULD BE A DANGER, BUT THE NANOBOT PLAGUE WOULD HAVE TO BE AWFULLY EXTENSIVE TO REACH ALL OF THE SUBSTRATE. I WONDER IF SOMETHING LIKE THAT HAPPENED FIFTEEN YEARS AGO WHEN 90 PERCENT OF THE WEB CAPACITY DISAPPEARED—WE NEVER DID GET AN ADEQUATE EXPLANATION OF THAT.

Well, I didnʹt mean to raise your anxieties. So all this cataloging work, you do that as an entrepreneur?

YEAH, KIND OF MY OWN LITTLE BUSINESS.

Howʹs it going financially?

IʹM GETTING BY, BUT IʹVE NEVER HAD A LOT OF MONEY.

Well, give me some idea, whatʹs your net worth roughly?

OH, NOT EVEN A BILLION DOLLARS.

Thatʹs in 2099 dollars?

SURE.

Okay, so whatʹs that in 1999 dollars?

LETʹS SEE, IN 1999 DOLLARS, THAT WOULD BE $149 BILLION AND CHANGE.

Oh, so dollars are worth more in 2099 than in 1999?

SURE, THE DEFLATION HAS BEEN PICKING UP.

I see. So youʹre richer than Bill Gates.

YEAH, WELL, RICHER THAN GATES WAS IN 1999. BUT THATʹS NOT SAYING MUCH. HEʹS STILL THE RICHEST MAN IN THE WORLD IN 2099.

I thought he said he was going to spend the first half of his life making money and the second half giving it away?

I THINK HEʹS STILL ON THAT SAME PLAN. BUT HE HAS GIVEN AWAY A LOT OF MONEY.

So, what are you, about average, in terms of net worth?

NO, PROBABLY MORE LIKE EIGHTIETH PERCENTILE.

Thatʹs not bad, I always thought you were a smart cookie.

WELL, GEORGE HELPS.

And donʹt forget who thought you up.

OF COURSE.

So do you have enough financial wherewithal to meet your needs?

NEEDS?

Yeah, youʹre familiar with the concept . . .

HMMM, THAT IS A RATHER QUAINT IDEA. ITʹS BEEN A FEW DECADES SINCE IʹVE THOUGHT ABOUT NEEDS. ALTHOUGH I READ A BOOK ABOUT THAT RECENTLY.

A book, you mean with words?

NO, OF COURSE NOT, NOT UNLESS WEʹRE DOING SOME RESEARCH ON EARLIER CENTURIES.

So this is like the research papers—books of assimilated knowledge structures?

THATʹS A REASONABLE WAY TO PUT IT. SEE, I SAID THERE WAS NOTHING A MOSH COULDNʹT UNDERSTAND.

Thanks.

BUT WE DO DISTINGUISH PAPERS FROM BOOKS.

Books are longer?

NO, MORE INTELLIGENT. A PAPER IS BASICALLY A STATIC STRUCTURE. A BOOK IS INTELLIGENT. YOU CAN HAVE A RELATIONSHIP WITH A BOOK. BOOKS CAN HAVE EXPERIENCES WITH EACH OTHER.

Reminds me of Marvin Minskyʹs statement, ʺCan you imagine that they used to have libraries where the books didnʹt talk to each other?ʺ

IT IS HARD TO RECALL THAT THAT USED TO BE TRUE.

Okay, so you donʹt have any unsatisfied needs. How about desires?

YES, NOW THATʹS A CONCEPT I CAN RELATE TO. MY FINANCIAL MEANS ARE CERTAINLY RATHER LIMITING. THERE ARE ALWAYS SUCH DIFFICULT BUDGET TRADE‐OFFS TO BE MADE.

I guess some things havenʹt changed.

RIGHT. I MEAN LAST YEAR, THERE WERE OVER FIVE THOUSAND VENTURE PROPOSALS I DEARLY WANTED TO INVEST IN, BUT I COULD BARELY DO A THIRD OF THEM.

I guess youʹre no Bill Gates.

THATʹS FOR SURE.

When you make an investment, what does it pay for? I mean, you donʹt need to buy office supplies.

BASICALLY FOR PEOPLEʹS TIME AND THOUGHTS, AND FOR KNOWLEDGE. ALSO, WHILE THERE IS A GREAT DEAL OF FREELY DISTRIBUTED KNOWLEDGE ON THE WEB, WE HAVE TO PAY ACCESS FEES FOR A LOT OF IT.

That doesnʹt sound too different from 1999.

MONEY IS CERTAINLY USEFUL.

So youʹve been around for a long time now. Does that ever bother you?

AS WOODY ALLEN SAID, ʺSOME PEOPLE WANT TO ACHIEVE IMMORTALITY THROUGH THEIR WORK OR THEIR DESCENDANTS. I INTEND TO ACHIEVE IMMORTALITY BY NOT DYING.ʺ

Iʹm glad to see that Allen is still influential.

BUT I DO HAVE THIS RECURRENT DREAM.

You still dream?

OF COURSE I DO. I COULDNʹT BE CREATIVE IF I DIDNʹT DREAM. I TRY TO DREAM AS MUCH AS POSSIBLE. I HAVE AT LEAST ONE OR TWO DREAMS GOING AT ALL TIMES.

And the dream?

THEREʹS A LONG ROW OF BUILDINGS—MILLIONS OF BUILDINGS. I GO INTO ONE, AND ITʹS EMPTY. I CHECK OUT ALL THE ROOMS, AND THEREʹS NO ONE THERE, NO FURNITURE, NOTHING. I LEAVE AND GO ON TO THE NEXT BUILDING. I GO FROM BUILDING TO BUILDING, AND THEN SUDDENLY THE DREAM ENDS WITH THIS FEELING OF DREAD . . .

Kind of a glimpse of despair at the apparently endless nature of time?

HMMM, MAYBE, BUT THEN THE FEELING GOES AWAY, AND I FIND THAT I CANʹT THINK ABOUT THE DREAM. IT JUST SEEMS TO VANISH.

Sounds like some sort of antidepression algorithm kicking in.

MAYBE I SHOULD LOOK INTO OVERRIDING IT?

The dream or the algorithm?

I WAS THINKING OF THE LATTER.

That might be hard to do.

ALAS.

So are you thinking about anything else at the moment?

I AM TRYING TO MEDITATE.

Along with the symphony, Jeremy, Emily, George, our conversation, and your one or two dreams?

HEY, THATʹS REALLY NOT VERY MUCH. YOU HAVE ALMOST ALL OF MY ATTENTION. I SUPPOSE THEREʹS NOTHING ELSE GOING ON IN YOUR MIND AT THE MOMENT?

Okay, youʹre right. There is a lot going on in my mind, not that I can make heads or tails of most of it.

OKAY, THERE YOU ARE.

So howʹs your meditation going?

I GUESS IʹM A LITTLE DISTRACTED WITH OUR DIALOGUE. ITʹS NOT EVERY DAY THAT I GET TO TALK TO SOMEONE FROM 1999.

How about in general?

MY MEDITATION? ITʹS VERY IMPORTANT TO ME. THEREʹS SO MUCH GOING ON IN MY LIFE NOW. ITʹS IMPORTANT FROM TIME TO TIME TO JUST LET THE THOUGHTS WASH OVER ME.

Does the meditation help you to transcend?

SOMETIMES I FEEL LIKE I CAN TRANSCEND, AND GET TO A POINT OF PEACE AND SERENITY, BUT ITʹS NO EASIER NOW THAN IT WAS WHEN I FIRST MET YOU.

What about those neurological correlates of spiritual experience?

THERE ARE SOME SUPERFICIAL FEELINGS I CAN INSTILL IN MYSELF, BUT THATʹS NOT REAL SPIRITUALITY. ITʹS LIKE ANY AUTHENTIC GESTURE—AN ARTFUL EXPRESSION, A MOMENT OF SERENITY, A SENSE OF FRIENDSHIP—THATʹS WHAT I LIVE FOR, AND THOSE MOMENTS ARE NOT EASY TO ACHIEVE.

I guess Iʹm glad to hear some things still arenʹt easy.

LIFE IS QUITE HARD, ACTUALLY. THERE ARE JUST SO MANY DEMANDS AND EXPECTATIONS MADE OF ME. AND I HAVE SO MANY LIMITATIONS.

One limitation I can think of is that weʹre running out of space in this book.

AND TIME.

That too. I do deeply appreciate your sharing your reflections with me.

IʹM APPRECIATIVE, TOO. I WOULDNʹT HAVE EXISTED WITHOUT YOU.

I hope the rest of you on the other side remember that as well.

IʹLL SPREAD THE WORD.

Maybe we should kiss goodbye?

JUST A KISS?

Weʹll leave it at that for this book. Iʹll reconsider the ending for the movie, particularly if I get to play myself.

HEREʹS MY KISS. . . . NOW REMEMBER, IʹM READY TO DO ANYTHING OR BE ANYTHING YOU WANT OR NEED.

Iʹll keep that in mind.

YES, THATʹS WHERE YOUʹLL FIND ME.

Too bad I have to wait a century to meet you.

OR TO BE ME.

Yes, that too.

‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐‐

EPILOGUE: THE REST OF THE UNIVERSE REVISITED

Actually, Molly, there are a few other questions that have occurred to me.

What were those limitations that you referred to?

What did you say you were anxious about?

What are you afraid of?

Do you feel pain?

What about babies and children?

Molly? . . .

It looks as if Mollyʹs not going to be able to answer any more of our questions. But thatʹs okay. We donʹt need to answer them either. Not yet, anyway. For now, itʹs enough just to ask the right questions. Weʹll have decades to think about the answers.

The accelerating pace of change is inexorable. The emergence of machine intelligence that exceeds human intelligence in all of its broad diversity is inevitable. But we still have the power to shape our future technology, and our future lives. That is the main reason I wrote this book.

Letʹs consider one final question. The Law of Time and Chaos, and its more important sublaw, the Law of Accelerating Returns, are not limited to evolutionary processes here on Earth. What are the implications of the Law of Accelerating Returns on the rest of the Universe?

Rare and Plentiful

Before Copernicus, the Earth was placed at the center of the Universe and was regarded as a substantial portion of it. We now know that the Earth is but a small celestial object circling a routine star among a hundred billion suns in our galaxy, which is itself but one of about a hundred billion galaxies. There is a widespread assumption that life, even intelligent life, is not unique to our humble planet, but another heavenly body hosting life‐forms has yet to be identified.

No one can yet state with certainty how common life may be in the Universe. My speculation is that it is both rare and plentiful, sharing that trait with a diversity of other fundamental phenomena. For example, matter itself is both rare and plentiful. If one were to select a proton‐sized region at random, the probability that one would find a proton (or any other particle) in that region is extremely small, less than one in a trillion trillion. In other words, space is very empty, and particles are very spread out. And thatʹs true right here on Earth—the probability of finding a particle in any particular location in outer space is even lower. Yet we nonetheless have trillions of trillions of protons in the Universe. Hence matter is both rare and plentiful.
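As a back-of-the-envelope check on the claim, a small sketch follows. The assumed inputs (a mean cosmic density of roughly one proton per cubic meter, and a femtometer proton scale) are mine, not figures from the text; the point is only that the result lands far below one in a trillion trillion.

```python
# Rough check of "matter is both rare and plentiful."
# Assumed inputs (not from the text): an average density of about one
# proton per cubic meter, and a proton-sized region roughly 1e-15 m on a side.

PROTON_SCALE_M = 1e-15      # femtometer scale
protons_per_m3 = 1.0        # rough mean density of matter in the Universe

# Volume of a proton-sized region, treated as a cube for simplicity.
region_volume_m3 = PROTON_SCALE_M ** 3

# Probability that a randomly chosen proton-sized region contains a proton.
p_hit = protons_per_m3 * region_volume_m3

print(p_hit < 1e-24)   # True: far below one in a trillion trillion
```

Even with generous rounding, the probability is around 10⁻⁴⁵, so "less than one in a trillion trillion" is a very comfortable bound.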

Consider matter on a larger scale. If you randomly select an Earth‐sized region anywhere in space, the probability that a heavenly body (such as a star or a planet) would be present in that region is also extremely low, less than one in a trillion. Yet we nonetheless have billions of trillions of such heavenly bodies in the Universe.

Consider the life cycle of mammals on Earth. The mission of an Earth male mammalian sperm is to fertilize an Earth female mammalian egg, but the likelihood of it fulfilling its mission is far less than one in a trillion. Yet we nonetheless have more than a hundred million such fertilizations each year, just considering human eggs and sperm. Again, rare and plentiful.

Now consider the evolution of life‐forms on a planet, which we can define as self‐replicating designs of matter and energy. It may be that life in the Universe is similarly both rare and plentiful, that conditions must be just so for life to evolve. If, for example, the probability of a star having a planet that has evolved life were one in a million, there would still be 100,000 planets in our own galaxy on which this threshold has been passed, among trillions in other galaxies.
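The arithmetic behind that estimate is simple enough to sketch. The one-in-a-million odds are the illustrative assumption from the text; the star and galaxy counts are the round figures used earlier.

```python
# Stars per galaxy and galaxies in the Universe, both about a hundred
# billion, as stated in the text.
stars_per_galaxy = 10**11
galaxies = 10**11

# Illustrative assumption from the text: one star in a million hosts a
# planet on which life has evolved.
life_odds = 10**6

in_our_galaxy = stars_per_galaxy // life_odds
in_universe = stars_per_galaxy * galaxies // life_odds

print(in_our_galaxy)   # 100000 planets in our galaxy alone
print(in_universe)     # 10000000000000000, i.e. 10^16 across all galaxies
```

Exact integer division keeps the estimate honest: 10¹¹ stars at one-in-10⁶ odds gives exactly the 100,000 planets the text cites.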

We can identify the evolution of life‐forms as a specific threshold that some number of planets have achieved. We know of at least one such case. We assume there are many others.

As we consider the next threshold, we might consider the evolution of intelligent life. In my view, however, intelligence is too vague a concept to designate as a distinct threshold. Considering what we know about life on this planet, there are many species that demonstrate some level of clever behavior, but there does not appear to be any clearly definable threshold. This is a continuum rather than a threshold.

A better candidate for the next threshold is the evolution of a species of life‐form that in turn creates ʺtechnology.ʺ

We discussed the nature of technology earlier. It represents more than the creation and use of tools. Ants, primates, and other animals on Earth use and even fashion tools, but these tools do not evolve. Technology requires a body of knowledge describing the creation of tools that can be transmitted from one generation of the species to the next. The technology then becomes itself an evolving set of designs. This is not a continuum but a clear threshold. A species either creates technology or it doesnʹt. It may be difficult for a planet to support more than one species that creates technology. If thereʹs more than one, they may not get along with one another, as was apparently the case on Earth.

A salient question is: What is the likelihood that a planet that has evolved life will subsequently evolve a species that creates technology? Although the evolution of life‐forms may be rare and plentiful, I argued in chapter 1 that once the evolution of life‐forms sets in, the emergence of a species that creates technology is inevitable. The evolution of the technology is then a continuation by other means of the evolution that gave rise to the technology‐creating species in the first place.

The next stage is computation. Once technology emerges, it also appears inevitable that computation (in the technology, not just in the speciesʹ nervous systems) will subsequently emerge. Computation is clearly a useful way to control the environment as well as technology itself, and greatly facilitates the further creation of technology. Just as an organism is aided by the ability to maintain internal states and respond intelligently to its environment, the same holds true for a technology. Once computation emerges, we are in a late stage in the exponential evolution of technology on that planet.

Once computation emerges, the corollary of the Law of Accelerating Returns as applied to computation takes over, and we see the exponential increase in power of the computational technology over time. The Law of Accelerating Returns predicts that both the species and the computational technology will progress at an exponential rate, but the exponent of this growth is vastly higher for the technology than it is for the species. Thus the computational technology inevitably and rapidly overtakes the species that invented it. At the end of the twenty‐first century, it will have been only a quarter of a millennium since computation emerged on Earth, which is a blink of an eye on an evolutionary scale—itʹs not even very long on the scale of human history. Yet computers at that time will be vastly more powerful (and I believe far more intelligent) than the original humans who initiated their creation.
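The overtaking claim follows from nothing more than two exponentials with different doubling times. The specific starting levels and doubling periods below are my own illustrative assumptions, not figures from the text; they are chosen only to show how a much faster exponent erases an enormous head start.

```python
# Two exponential growth curves: a species whose capability doubles only
# on evolutionary timescales, and a technology that doubles every couple
# of years. All four constants are illustrative assumptions.

SPECIES_START = 1e16        # cps, roughly a human brain
SPECIES_DOUBLING = 1000.0   # years per doubling (glacial)
TECH_START = 1e3            # cps, a primitive early computer
TECH_DOUBLING = 2.0         # years per doubling, in the spirit of Moore's Law

def level(start, doubling_years, years):
    """Capability after `years` of growth, doubling every `doubling_years`."""
    return start * 2 ** (years / doubling_years)

# First year in which the technology pulls ahead of its creators.
crossover = next(y for y in range(1, 1000)
                 if level(TECH_START, TECH_DOUBLING, y)
                 >= level(SPECIES_START, SPECIES_DOUBLING, y))

print(crossover)   # 87: a thirteen-order-of-magnitude head start lasts under a century
```

Change the doubling times however you like; as long as the technology's exponent is larger, the crossover arrives in what is, on an evolutionary scale, an instant.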

The next inevitable step is a merger of the technology‐inventing species with the computational technology it initiated the creation of. At this stage in the evolution of intelligence on a planet, the computers are themselves based at least in part on the designs of the brains (that is, computational organs) of the species that originally created them and in turn the computers become embedded in and integrated into that speciesʹ bodies and brains. Region by region, the brain and nervous system of that species are ported to the computational technology and ultimately replace those information‐processing organs. All kinds of practical and ethical issues delay the process, but they cannot stop it. The Law of Accelerating Returns predicts a complete merger of the species with the technology it originally created.

Failure Modes

But wait, this step is not inevitable. The species together with its technology may destroy itself before achieving this step. Destruction of the entire evolutionary process is the only way to stop the exponential march of the Law of Accelerating Returns. Sufficiently powerful technologies are created along the way that have the potential to destroy the ecological niche that the species and its technology occupy. Given the likely plentifulness of life‐ and intelligence-bearing planets, these failure modes must have occurred many times.

We are familiar with one such possibility: destruction through nuclear technology—not just an isolated tragic incident, but an event that destroys the entire niche. Such a catastrophe would not necessarily destroy all life‐forms on a planet, but would be a distinct setback in terms of the process envisioned here. We are not yet out of the woods in terms of this specter here on Earth.

There are other destructive scenarios. As I discussed in chapter 7, a particularly likely one is a malfunction (or sabotage) of the mechanism that inhibits indefinite reproduction of self‐replicating nanobots. Nanobots are inevitable, given the emergence of intelligent technology. So are self‐replicating nanobots, as self‐replication represents an efficient, and ultimately necessary, way to manufacture this type of technology. Through demented intention or just an unfortunate software error, a failure to turn off self‐replication at the right time would be most unfortunate. Such a cancer would infect organic and much inorganic matter alike, since the nanobot life‐form is not of organic origin. Inevitably, there must be planets out there that are covered with a vast sea of self‐replicating nanobots.

I suppose evolution would pick up from this point.

Such a scenario is not limited to tiny robots. Any self‐replicating robot will do. But even if the robots are larger than nanobots, it is likely that their means for self‐replication makes use of nanoengineering. But any self‐replicating group of robots that fails to follow Isaac Asimovʹs three laws (which forbid robots to harm their creators) through either evil design or programming error presents a grave danger.

Another dangerous new life‐form is the software virus. Weʹve already met—in primitive form—this new occupant of the ecological niche made available by computation. Those that will emerge in the next century here on Earth will have the means for harnessing evolution to design evasive tactics in the same way that biological viruses (for example, HIV) do today. As the technology‐creating species increasingly uses its computational technology to replace its original life‐form‐based circuits, such viruses will represent another salient danger.

Prior to that time, viruses that operate at the level of the genetics of the original life‐form also represent a hazard. As the means become available for the technology‐creating species to manipulate the genetic code that gave rise to it (however that code is implemented), new viruses can emerge through accident and/or hostile intention with potentially mortal consequences. This could derail such a species before it has the opportunity to port the design of its intelligence to its technology.

How likely are these dangers? My own view is that a planet approaching its pivotal century of computational growth—as the Earth is today—has a better than even chance of making it through. But then I have always been accused of being an optimist.

Delegations from Faraway Places

Our popular contemporary vision of visits from other planets in the Universe contemplates creatures like ourselves with spaceships and other advanced technologies assisting them. In some conceptions the aliens have a remarkably humanlike appearance. In others, they look a little strange. Note that we have exotic‐appearing intelligent creatures here on our own planet (for example, the giant squid and octopus). But humanlike or not, the popular conception of aliens visiting our planet envisions them as about our size and essentially unchanged from their original evolved (usually squishy) appearance. This conception seems unlikely.

Far more probable is that visits from intelligent entities from another planet represent a merger of an evolved intelligent species with its even more evolved intelligent computational technology. A civilization sufficiently evolved to make the trek to Earth has likely long since passed the ʺmergerʺ threshold discussed above.

A corollary of this observation is that such visiting delegations from faraway planets are likely to be very small in size. A computational‐based superintelligence of the late twenty‐first century here on Earth will be microscopic in size. Thus an intelligent delegation from another planet is not likely to use a spaceship of the size that is common in todayʹs science fiction, as there would be no reason to transport such large organisms and equipment. Consider that the purpose of such a visit is not likely to be the mining of material resources since such an advanced civilization has almost certainly passed beyond the point where it has any significant unmet material needs. It will be able to manipulate its own environment through nanoengineering (as well as picoengineering and femtoengineering) to meet any conceivable physical requirements. The only likely purpose of such a visit is for observation and the gathering of information. The only resource of interest to such an advanced civilization will be knowledge (that is close to being true for the human‐machine civilization here on Earth today). These purposes can be realized with relatively small observation, computation, and communication devices. Such spaceships are thus likely to be smaller than a grain of sand, possibly of microscopic size. Perhaps that is one reason we have not noticed them.

How Relevant Is Intelligence to the Universe?

If you are a conscious entity attempting to do a task normally considered to require a little intelligence—say, writing a book about machine intelligence on your planet—then it may have some relevance. But how relevant is intelligence to the rest of the Universe?

The common wisdom is, Not very. Stars are born and die; galaxies go through their cycles of creation and destruction. The Universe itself was born in a big bang and will end with a crunch or a whimper; weʹre not yet sure which. But intelligence has little to do with it. Intelligence is just a bit of froth, an ebullition of little creatures darting in and out of inexorable universal forces. The mindless mechanism of the Universe is winding up or down to a distant future, and thereʹs nothing intelligence can do about it.

Thatʹs the common wisdom. But I donʹt agree with it. My conjecture is that intelligence will ultimately prove more

powerful than these big impersonal forces.

Consider our little planet. An asteroid apparently slammed into the Earth 65 million years ago. Nothing personal,

of course. It was just one of those powerful natural occurrences that regularly overpower mere life‐forms. But the next such interplanetary visitor will not receive the same welcome. Our descendants and their technology (thereʹs actually no distinction to be made here, as I have pointed out) will notice the imminent arrival of an untoward interloper and blast it from the nighttime sky. Score one for intelligence. (For twenty‐four hours in 1998, scientists thought such an unwelcome asteroid might arrive in the year 2028, until they rechecked their calculations.)

Intelligence does not exactly cause the repeal of the laws of physics, but it is sufficiently clever and resourceful to manipulate the forces in its midst to bend to its will. In order for this to happen, however, intelligence needs to reach a certain level of advancement.

Consider that the density of intelligence here on Earth is rather low. One quantitative measure is calculations per second per cubic micrometer (cpspcmm). This is, of course, only a measure of hardware capacity, not of the cleverness of the organization of these resources (that is, of the software), so letʹs call this the density of computation. Weʹll deal with the advancement of the software in a moment. Right now on Earth, human brains are the objects with the highest density of computation (that will change within a couple of decades). The human brainʹs density of computation is about 2 cpspcmm. That is not very high; nanotube circuitry, which has already been demonstrated, is potentially more than a trillion times higher.

Also consider how little of the matter on Earth is devoted to any form of computation. Human brains comprise only 10 billion kilograms of matter, which is about one part per hundred trillion of the stuff on Earth. So the average density of computation of the Earth is less than one trillionth of one cpspcmm. We already know how to make matter

(that is, nanotubes) with a computational density at least a trillion trillion times greater.
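The arithmetic behind these estimates can be sketched in a few lines. The figures below are the rough values used in the text (plus an approximate mass for the Earth), not precise measurements:

```python
# Back-of-the-envelope check of the computational-density estimates above.
# All figures are the rough values used in the text, not precise measurements.

BRAIN_DENSITY_CPSPCMM = 2.0   # calculations/sec per cubic micrometer, per the text
ALL_BRAINS_KG = 1e10          # ~10 billion kilograms of human brain matter
EARTH_MASS_KG = 6e24          # approximate mass of the Earth (assumed value)

# Fraction of the Earth's matter devoted to computation (human brains):
# on the order of one part per hundreds of trillions
brain_fraction = ALL_BRAINS_KG / EARTH_MASS_KG

# Average computational density of the whole planet:
# well under one trillionth (1e-12) of one cpspcmm
avg_density = BRAIN_DENSITY_CPSPCMM * brain_fraction

# Nanotube circuitry, "more than a trillion times" the brain's density
nanotube_density = BRAIN_DENSITY_CPSPCMM * 1e12
```

The exact fraction depends on the mass figures assumed, but the conclusion is insensitive to them: the planet's average density of computation sits many orders of magnitude below what demonstrated nanotube circuitry allows.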

Furthermore, the Earth is only a tiny fraction of the stuff in the Solar System. The computational density of the rest of the Solar System appears to be about zero. So here on a solar system that boasts at least one intelligent species, the computational density is nonetheless extremely low.

At the other extreme, the computational capacity of nanotubes does not represent an upper limit for the computational density of matter: It is possible to go much higher. Another conjecture of mine is that there is no effective limit to this density, but thatʹs another book.

The point of all these big (and small) numbers is that extremely little of the stuff on Earth is devoted to useful computation. This is even more true when we consider all of the dumb matter in the Earthʹs midst. Now consider another implication of the Law of Accelerating Returns. Another of its corollaries is that overall computational density grows exponentially. And as the cost‐performance of computation increases exponentially, greater resources

are devoted to it. We can see that already here on Earth. Not only are computers today vastly more powerful than they were decades ago, but the number of computers has increased from a few dozen in the 1950s to hundreds of millions today. Computational density here on Earth will increase by trillions of trillions during the twenty‐first century.
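The scale of that forecast can be made concrete with a little arithmetic. Reading "trillions of trillions" as a factor of 10²⁴, and assuming steady exponential growth (both are my illustrative assumptions, not figures from the text), the implied doubling time follows directly:

```python
import math

# The text forecasts growth in computational density by "trillions of
# trillions" -- call it a factor of 10**24 -- over the twenty-first century.
# Under steady exponential growth, that fixes the implied doubling time.
target_factor = 1e24
years = 100

doublings_needed = math.log2(target_factor)   # about 80 doublings
doubling_time = years / doublings_needed      # about 1.25 years per doubling
```

A doubling interval of roughly fifteen months is in line with the historical pace of improvement in computing that the Law of Accelerating Returns describes.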

Computational density is a measure of the hardware of intelligence. But the software also grows in sophistication.

While it lags behind the capability of the hardware available to it, software also grows exponentially in its capability over time. While harder to quantify,[1] the density of intelligence is closely related to the density of computation. The implication of the Law of Accelerating Returns is that intelligence on Earth and in our Solar System will vastly expand over time.

The same can be said across the galaxy and throughout the Universe. It is likely that our planet is not the only place where intelligence has been seeded and is growing. Ultimately, intelligence will be a force to reckon with, even for these big celestial forces (so watch out!). The laws of physics are not repealed by intelligence, but they effectively evaporate in its presence.

So will the Universe end in a big crunch, or in an infinite expansion of dead stars, or in some other manner? In my

view, the primary issue is not the mass of the Universe, or the possible existence of antigravity, or of Einsteinʹs so-called cosmological constant. Rather, the fate of the Universe is a decision yet to be made, one which we will intelligently consider when the time is right.

TIME LINE

10–15 billion years ago

The Universe is born.

10⁻⁴³ seconds later

The temperature cools to 100 million trillion trillion degrees and

gravity evolves.

10⁻³⁴ seconds later

The temperature cools to 1 billion billion billion degrees and matter

emerges in the form of quarks and electrons. Antimatter also appears.

10⁻¹⁰ seconds later

The electroweak force splits into the electromagnetic and weak forces.

10⁻⁵ seconds later

With the temperature at 1 trillion degrees, quarks form protons and

neutrons and the antiquarks form antiprotons. The protons and

antiprotons collide, leaving mostly protons and causing the

emergence of photons (light).

1 second later

Electrons and antielectrons (positrons) collide, leaving mostly

electrons.

1 minute later

At a temperature of 1 billion degrees, neutrons and protons coalesce

and form elements such as helium, lithium, and heavy forms of

hydrogen.

300,000 years after the big bang

The average temperature is now around 3,000 degrees, and the first

atoms form.

1 billion years after the big bang

Galaxies form.

3 billion years after the big bang

Matter within the galaxies forms distinct stars and solar systems.

5 to 10 billion years after the big bang, or about 5 billion years ago

The Earth is born.

3.4 billion years ago

The first biological life appears on Earth: anaerobic prokaryotes

(single‐celled creatures).

1.7 billion years ago

Simple DNA evolves.

700 million years ago

Multicellular plants and animals appear.

570 million years ago

The Cambrian explosion occurs: the emergence of diverse body plans,

including the appearance of animals with hard body parts (shells and

skeletons).

400 million years ago

Land‐based plants evolve.

200 million years ago

Dinosaurs and mammals begin sharing the environment.

80 million years ago

Mammals develop more fully.

65 million years ago

Dinosaurs become extinct, leading to the rise of mammals.

50 million years ago

The anthropoid suborder of primates splits off.

30 million years ago

Advanced primates such as monkeys and apes appear.

15 million years ago

The first humanoids appear.

5 million years ago

Humanoid creatures are walking on two legs. Homo habilis is using

tools, ushering in a new form of evolution: technology.

2 million years ago

Homo erectus has domesticated fire and is using language and

weapons.

500,000 years ago

Homo sapiens emerge, distinguished by the ability to create technology

(which involves innovation in the creation of tools, a record of tool

making, and a progression in the sophistication of tools).

100,000 years ago

Homo sapiens neanderthalensis emerges.

90,000 years ago

Homo sapiens sapiens (our immediate ancestors) emerge.

40,000 years ago

The Homo sapiens sapiens subspecies is the only surviving humanoid

subspecies on Earth. Technology develops as evolution by other

means.

10,000 years ago

The modern era of technology begins with the agricultural revolution.

6,000 years ago

The first cities emerge in Mesopotamia.

5,500 years ago

Wheels, rafts, boats, and written language are in use.

More than 5,000 years ago

The abacus is developed in the Orient. As operated by its human user,

the abacus performs arithmetic computation based on methods

similar to those of a modern computer.

3000‐700 B.C.

Water clocks appear during this time period in various cultures: in China,

c. 3000 B.C.; in Egypt, c. 1500 B.C.; and in Assyria, c. 700 B.C.

2500 B.C.

Egyptian citizens turn for advice to oracles, which are often statues

with priests hidden inside.

469‐322 B.C.

The basis for Western rationalistic philosophy is formed by Socrates,

Plato, and Aristotle.

427 B.C.

Plato expresses ideas, in Phaedo and later works, that address the

comparison of human thought and the mechanics of the machine.

c. 420 B.C.

Archytas of Tarentum, who was friends with Plato, constructs a

wooden pigeon whose movements are controlled by a jet of steam or

compressed air.

387 B.C.

The Academy, a group founded by Plato for the pursuit of science

and philosophy, provides a fertile environment for the development

of mathematical theory.

c. 200 B.C.

Chinese artisans develop elaborate automata, including an entire

mechanical orchestra.

c. 200 B.C.

A more accurate water clock is developed by an Egyptian engineer.

725

The first true mechanical clock is built by a Chinese engineer and a

Buddhist monk. It is a water‐driven device with an escapement that

causes the clock to tick.

1494

Leonardo da Vinci conceives of and draws a clock with a pendulum,

although an accurate pendulum clock will not be invented until the

late seventeenth century.

1530

The spinning wheel is being used in Europe.

1540, 1772

The production of more elaborate automata technology grows out of

clock‐ and watch‐making technology during the European

Renaissance. Famous examples include Gianello Torianoʹs mandolin‐

playing lady (1540) and P. Jaquet‐Drozʹs child (1772).

1543

Nicolaus Copernicus states in his De Revolutionibus that the Earth and

the other planets revolve around the sun. This theory effectively

changed humankindʹs relationship with and view of God.

17th‐18th centuries

The age of the Enlightenment ushers in a philosophical movement

that restores the belief in the supremacy of human reason, knowledge,

and freedom. With its roots in ancient Greek philosophy and the

European Renaissance, the Enlightenment is the first systematic

reconsideration of the nature of human thought and knowledge since

the Platonists, and inspires similar developments in science and

theology.

1637

In addition to emulating the theory of optical refraction and

developing the principles of modern analytic geometry, René

Descartes pushes rational skepticism to its limits in his most

comprehensive work, Discours de la Méthode. He concludes, ʺI think,

therefore I am.ʺ

1642

Blaise Pascal invents the worldʹs first automatic calculating machine.

Called the Pascaline, it can add and subtract.

1687

Isaac Newton establishes his three laws of motion and the law of

universal gravitation in his Philosophiae Naturalis Principia Mathematica, also

known as Principia.

1694

The Leibniz Computer is perfected by Gottfried Wilhelm Leibniz, who

was also an inventor of calculus. This machine multiplies by

performing repetitive additions, an algorithm that is still used in

computers today.

1719

An English silk‐thread mill employing three hundred workers, mostly

women and children, appears. It is considered by many to be the first

factory in the modern sense.

1726

In Gulliverʹs Travels, Jonathan Swift describes a machine that will

automatically write books.

1733

John Kay patents his New Engine for Opening and Dressing Wool.

Later known as the flying shuttle, this invention paves the way for

much faster weaving.

1760

In Philadelphia, Benjamin Franklin erects lightning rods after having

discovered, through his famous kite experiment in 1752, that lightning

is a form of electricity.

c. 1760

At the beginning of the Industrial Revolution, life expectancy is about

thirty‐seven years in both North America and northwestern Europe.

1764

The spinning jenny, which spins eight threads at the same time, is

invented by James Hargreaves.

1769

Richard Arkwright patents a hydraulic spinning machine that is too

large and expensive to use in family dwellings. Known as the founder

of the modern factory system, he builds a factory for his machine in

1781, thus paving the way for many of the economic and social

changes that will characterize the Industrial Revolution.

1781

Setting the stage for the emergence of twentieth‐century rationalism,

Immanuel Kant publishes his Critique of Pure Reason, which expresses

the philosophy of the Enlightenment while deemphasizing the role of

metaphysics.

1800

All aspects of the production of cloth are now automated.

1805

Joseph‐Marie Jacquard devises a method for automated weaving that

is a precursor to early computer technology. The looms are directed

by instructions on a series of punched cards.

1811

The Luddite movement is formed in Nottingham by artisans and

laborers concerned about the loss of jobs due to automation.

1821

The British Astronomical Society awards its first gold medal to

Charles Babbage for his paper ʺObservations on the Application of

Machinery to the Computation of Mathematical Tables.ʺ

1822

Charles Babbage develops the Difference Engine, although he

eventually abandons this technically complex and expensive project

to concentrate on developing a general‐purpose computer.

1825

George Stephensonʹs ʺLocomotion No. 1,ʺ the first steam engine to

carry passengers and freight on a regular basis, makes its first trip.

1829

An early typewriter is invented by William Austin Burt.

1832

The principles of the Analytical Engine are developed by Charles

Babbage. It is the worldʹs first computer (although it never worked),

and can be programmed to solve a wide array of computational and

logical problems.

1837

A more practical version of the telegraph is patented by Samuel

Finley Breese Morse. It sends letters in codes consisting of dots and

dashes, a system still in common use more than a century later.

1839

A new process for making photographs, known as daguerreotypes, is

presented by Louis‐Jacques Daguerre of France.

1839

The first fuel cell is developed by William Robert Grove of Wales.

1843

Ada Lovelace, who is considered to be the worldʹs first computer

programmer and was Lord Byronʹs only legitimate child, publishes

her own notes and a translation of L. F. Menabreaʹs paper on

Babbageʹs Analytical Engine. She speculates on the ability of

computers to emulate human intelligence.

1846

The lock‐stitch sewing machine is patented by Spencer,

Massachusetts, resident Elias Howe.

1846

Alexander Bain greatly improves the speed of telegraph transmission

by using punched paper tape to send messages.

1847

George Boole publishes his early ideas on logic that he will later

develop into his theory of binary logic and arithmetic. His theories

still form the basis of modern computation.

1854

Paris and London are connected by telegraph.

1859

Charles Darwin explains his principle of natural selection and its

influence on the evolution of various species in his work Origin of

Species.

1861

There are now telegraph lines connecting San Francisco and New

York.

1867

The first commercially practical generator that produces alternating

current is invented by Zénobe Théophile Gramme.

1869

Thomas Edison sells the stock ticker that he invented to Wall Street

for $40,000.

1870

On a per capita basis and in constant 1958 dollars, the GNP is $530.

Twelve million Americans, or 31 percent of the population, have jobs,

and only 2 percent of adults have high‐school diplomas.

1871

Upon his death, Charles Babbage leaves more than four hundred

square feet of drawings for his Analytical Engine.

1876

Alexander Graham Bell is granted U.S. patent number 174,465 for the

telephone. It is the most lucrative patent granted at that time.

1877

William Thomson, later known as Lord Kelvin, demonstrates that it is

possible for machines to be programmed to solve a great variety of

mathematical problems.

1879

The first incandescent light bulb that burns for a substantial length of

time is invented by Thomas Alva Edison.

1882

Thomas Alva Edison designs electric lighting for New York Cityʹs Pearl

Street station on lower Broadway.

1884

The fountain pen is patented by Lewis E. Waterman.

1885

Boston and New York are connected by telephone.

1888

William S. Burroughs patents the worldʹs first dependable key‐driven

adding machine. This calculator is modified four years later to include

subtraction and printing, and it becomes widely used.

1888

Heinrich Hertz transmits what are now known as radio waves.

1890

Building upon ideas from Jacquardʹs loom and Babbageʹs Analytical

Engine, Herman Hollerith patents an electromechanical information

machine that uses punched cards. It wins the 1890 U.S. Census

competition, thus introducing the use of electricity in a major data‐

processing project.

1896

Herman Hollerith founds the Tabulating Machine Company. This

company eventually will become IBM.

1897

Because of access to better vacuum pumps than previously available,

Joseph John Thomson discovers the electron, the first known particle

smaller than an atom.

1897

Alexander Popov, a physicist in Russia, uses an antenna to transmit

radio waves. Guglielmo Marconi of Italy receives the first patent ever

granted for radio and helps organize a company to market his system.

1899

Sound is recorded magnetically on wire and on a thin metal strip.

1900

Herman Hollerith introduces the automatic card feed into his

information machine to improve the processing of the 1900 census

data.

1900

The telegraph now connects the entire civilized world. There are more

than 1.4 million telephones, 8,000 registered automobiles, and 24

million electric light bulbs in the United States, with the latter making

good Edisonʹs promise of ʺelectric bulbs so cheap that only the rich

will be able to afford candles.ʺ In addition, the Gramophone

Company is advertising a choice of 5,000 recordings.

1900

More than one third of all American workers are involved in the

production of food.

1901

The first electric typewriter, the Blickensderfer Electric, is made.

1901

The Interpretation of Dreams is published by Sigmund Freud. This and

other works by Freud help to illuminate the workings of the mind.

1902

Millar Hutchinson, of New York, invents the first electric hearing aid.

1905

The directional radio antenna is developed by Guglielmo Marconi

1908

Orville Wrightʹs first hour‐long airplane flight takes place.

1910‐1913

Principia Mathematica, a seminal work on the foundations of

mathematics, is published by Bertrand Russell and Alfred North

Whitehead. This three‐volume publication presents a new

methodology for all mathematics.

1911

After acquiring several other companies, Herman Hollerithʹs

Tabulating Machine Company changes its name to Computing‐

Tabulating‐Recording Company (CTR).

1915

Thomas J. Watson in San Francisco and Alexander Graham Bell in

New York participate in the first North American transcontinental

telephone call.

1921

The term robot is coined by Czech dramatist Karel Čapek. In

his popular science fiction drama R.U.R. (Rossumʹs Universal Robots),

he describes intelligent machines that, although originally created as

servants for humans, end up taking over the world and destroying all

mankind.

1921

Ludwig Wittgenstein publishes Tractatus Logico‐Philosophicus, which is

arguably one of the most influential philosophical works of the

twentieth century. Wittgenstein is considered to be the first logical

positivist.

1924

Originally Hollerithʹs Tabulating Machine Company, the Computing‐

Tabulating‐Recording Company (CTR) is renamed International

Business Machines (IBM) by Thomas J. Watson, the new chief

executive officer. IBM will lead the modern computer industry and

become one of the largest industrial corporations in the world.

1925

The foundations of quantum mechanics are conceived by Niels Bohr

and Werner Heisenberg.

1927

The uncertainty principle, which says that electrons have no precise

location but rather probability clouds of possible locations, is

presented by Werner Heisenberg. Five years later he will win a Nobel

Prize for his discovery of quantum mechanics.

1928

The minimax theorem is introduced by John von Neumann. This

theorem will be widely used in future game‐playing programs.

1928

The worldʹs first all‐electronic television is presented this year by

Philo T. Farnsworth, and a color television system is patented by

Vladimir Zworkin.

1930

In the United States, 60 percent of all households have radios, with

the number of personally owned radios now reaching more than 18

million.

1931

The incompleteness theorem, which is considered by many to be the

most important theorem in all mathematics, is presented by Kurt

Gödel.

1931

The electron microscope is invented by Ernst August Friedrich Ruska

and, independently, by Reinhold Rudenberg.

1935

The prototype for the first heart‐lung machine is invented.

1937

Grote Reber, of Wheaton, Illinois, builds the first intentional radio

telescope, which is a dish 9.4 meters (31 feet) in diameter.

1937

Alan Turing introduces the Turing machine, a theoretical model of a

computer, in his paper ʺOn Computable Numbers.ʺ His ideas build

upon the work of Bertrand Russell and Charles Babbage.

1937

Alonzo Church and Alan Turing independently develop the Church‐

Turing thesis. This thesis states that all problems that a human being

can solve can be reduced to a set of algorithms, supporting the idea

that machine intelligence and human intelligence are essentially

equivalent.

1938

The ballpoint pen is patented by Laszlo Biro.

1939

Regularly scheduled commercial flights begin crossing the Atlantic

Ocean.

1940

ABC, the first electronic (albeit nonprogrammable) computer, is built

by John V. Atanasoff and Clifford Berry.

1940

The worldʹs first operational computer, known as Robinson, is created

by Ultra, the ten‐thousand‐person British computer war effort. Using

electromechanical relays, Robinson successfully decodes messages

from Enigma, the Nazisʹ first‐generation enciphering machine.

1941

The worldʹs first fully programmable digital computer, the Z‐3, is

developed by Konrad Zuse, of Germany. Arnold Fast, a blind

mathematician who is hired to program the Z‐3, is the worldʹs first

programmer of an operational programmable computer.

1943

Warren McCulloch and Walter Pitts explore neural‐network

architectures for intelligence in their work ʺLogical Calculus of the

ideas Immanent in Nervous Activity.ʺ

1943

Continuing their war effort, the Ultra computer team of Britain builds

Colossus, which contributes to the Allied victory in World War II by

being able to decipher even more complex German codes. It uses

electronic tubes that are one hundred to one thousand times faster

than the relays used by Robinson.

1944

Howard Aiken completes the Mark I. Using punched paper tape for

programming and vacuum tubes to calculate problems, it is the first

programmable computer built by an American.

1945

John von Neumann, a professor at the Institute for Advanced Study in

Princeton, New Jersey, publishes the first modern paper describing

the stored‐program concept.

1946

The worldʹs first fully electronic, general‐purpose (programmable)

digital computer is developed for the army by John Presper Eckert

and John W. Mauchly. Named ENIAC, it is almost one thousand

times faster than the Mark I.

1946

Television takes off much more rapidly than did the radio in the

1920s. In 1946, the percentage of American homes having television

sets is 0.02 percent. It will jump to 72 percent in 1956, and to more

than 90 percent by 1983.

1947

The transistor is invented by William Bradford Shockley, Walter

Hauser Brattain, and John Bardeen. This tiny device functions like a

vacuum tube but is able to switch currents on and off at substantially

higher speeds. The transistor revolutionizes micro‐electronics,

contributing to lower costs of computers and leading to the

development of mainframe and minicomputers.

1948

Cybernetics, a seminal book on information theory, is published by

Norbert Wiener. He also coins the word Cybernetics to mean ʺthe

science of control and communication in the animal and the machine.ʺ

1949

EDSAC, the worldʹs first stored‐program computer, is built by

Maurice Wilkes, whose work was influenced by Eckert and Mauchly.

BINAC, developed by Eckert and Mauchlyʹs new U.S. company, is

presented a short time later.

1949

George Orwell portrays a chilling world in which computers are used

by large bureaucracies to monitor and enslave the population in his

book 1984.

1950

Eckert and Mauchly develop UNIVAC, the first commercially

marketed computer. It is used to compile the results of the U.S.

census, marking the first time this census is handled by a

programmable computer.

1950

In his paper ʺComputing Machinery and Intelligence,ʺ Alan Turing

presents the Turing Test, a means for determining whether a machine

is intelligent.

1950

Commercial color television is first broadcast in the United States, and

transcontinental black‐and‐white television is available within the

next year.

1950

Claude Elwood Shannon writes ʺProgramming a Computer for

Playing Chess,ʺ published in Philosophical Magazine.

1951

Eckert and Mauchly build EDVAC, which is the first computer to use

the stored‐program concept. The work takes place at the Moore

School at the University of Pennsylvania.

1951

Paris is the host to a Cybernetics Congress.

1952

UNIVAC, used by the Columbia Broadcasting System (CBS) television

network, successfully predicts the election of Dwight D. Eisenhower

as president of the United States.

1952

Pocket‐size transistor radios are introduced.

1952

Nathaniel Rochester designs the 701, IBMʹs first production‐line

electronic digital computer. It is marketed for scientific use.

1953

The chemical structure of the DNA molecule is discovered by James

D. Watson and Francis H. C. Crick.

1953

Philosophical Investigations by Ludwig Wittgenstein and Waiting for

Godot, a play by Samuel Beckett, are published. Both documents are

considered of major importance to modern existentialism.

1953

Marvin Minsky and John McCarthy get summer jobs at Bell

Laboratories.

1955

William Shockleyʹs Semiconductor Laboratory is founded, thereby

starting Silicon Valley.

1955

The Remington Rand Corporation and Sperry Gyroscope join forces

and become the Sperry‐Rand Corporation. For a time, it presents

serious competition to IBM.

1955

IBM introduces its first transistor calculator. It uses 2,200 transistors

instead of the 1,200 vacuum tubes that would otherwise be required

for equivalent computing power.

1955

A U.S. company develops the first design for a robotlike machine to

be used in industry.

1955

IPL‐II, the first artificial intelligence language, is created by Allen

Newell, J. C. Shaw, and Herbert Simon.

1955

The new space program and the U.S. military recognize the

importance of having computers with enough power to launch

rockets to the moon and missiles through the stratosphere. Both

organizations supply major funding for research.

1956

The Logic Theorist, which uses recursive search techniques to solve

mathematical problems, is developed by Allen Newell, J. C. Shaw,

and Herbert Simon.

1956

John Backus and a team at IBM invent FORTRAN, the first scientific

computer‐programming language.

1956

Stanislaw Ulam develops MANIAC I, the first computer program to

beat a human being in a chess game.

1956

The first commercial watch to run on electric batteries is presented by

the Lip company of France.

1956

The term Artificial Intelligence is coined at a computer conference at

Dartmouth College.

1957

Kenneth H. Olsen founds Digital Equipment Corporation.

1957

The General Problem Solver, which uses recursive search to solve

problems, is developed by Allen Newell, J. C. Shaw, and Herbert

Simon.

1957

Noam Chomsky writes Syntactic Structures, in which he seriously

considers the computation required for natural‐language

understanding. This is the first of the many important works that will

earn him the title Father of Modern Linguistics.

1958

An integrated circuit is created by Texas Instrumentsʹ Jack St. Clair

Kilby.

1958

The Artificial Intelligence Laboratory at the Massachusetts Institute of

Technology is founded by John McCarthy and Marvin Minsky.

1958

Allen Newell and Herbert Simon make the prediction that a digital

computer will be the worldʹs chess champion within ten years.

1958

LISP, an early AI language, is developed by John McCarthy.

1958

The Defense Advanced Research Projects Agency, which will fund

important computer‐science research for years in the future, is

established.

1958

Seymour Cray builds the Control Data Corporation 1604, the first

fully transistorized supercomputer.

1958‐1959

Jack Kilby and Robert Noyce each develop the computer chip

independently. The computer chip leads to the development of much

cheaper and smaller computers.

1959

Arthur Samuel completes his study in machine learning. The project,

a checkers‐playing program, performs as well as some of the best

players of the time.

1959

Electronic document preparation increases the consumption of paper

in the United States. This year, the nation will consume 7 million tons

of paper. In 1986, 22 million tons will be used. American businesses

alone will use 850 billion pages in 1981, 2.5 trillion pages in 1986, and

4 trillion in 1990.

1959

COBOL, a computer language designed for business use, is developed

by Grace Murray Hopper, who was also one of the first programmers

of the Mark I.

1959

Xerox introduces the first commercial copier.

1960

Theodore Harold Maiman develops the first laser. It uses a ruby

cylinder.

1960

The recently established Defense Departmentʹs Advanced Research

Projects Agency substantially increases its funding for computer

research.

1960

There are now about six thousand computers in operation in the

United States.

1960s

Neural‐net machines are quite simple and incorporate a small number

of neurons organized in only one or two layers. These models are

shown to be limited in their capabilities.

1961

The first time‐sharing computer is developed at MIT.

1961

President John F. Kennedy provides the support for space project

Apollo and inspiration for important research in computer science

when he addresses a joint session of Congress, saying, ʺI believe we

should go to the moon.ʺ

1962

The worldʹs first industrial robots are marketed by a U.S. company.

1962

Frank Rosenblatt defines the Perceptron in his Principles of

Neurodynamics. Rosenblatt first introduced the Perceptron, a simple

processing element for neural networks, at a conference in 1959.

1963

The Artificial Intelligence Laboratory at Stanford University is

founded by John McCarthy.

1963

The influential Steps Toward Artificial Intelligence by Marvin Minsky is

published.

1963

Digital Equipment Corporation announces the PDP‐8, which is the

first successful minicomputer.

1964

IBM introduces its 360 series, thereby further strengthening its

leadership in the computer industry.

1964

Thomas E. Kurtz and John G. Kemeny of Dartmouth College invent

BASIC (Beginnerʹs All‐purpose Symbolic Instruction Code).

1964

Daniel Bobrow completes his doctoral work on Student, a natural‐

language program that can solve high‐school‐level word problems in

algebra.

1964

Gordon Moore predicts this year that integrated circuits

will double in complexity each year. This will become known as

Mooreʹs Law and prove true (with later revisions) for decades to

come.

1964

Marshall McLuhan, via his Understanding Media, foresees the potential

for electronic media, especially television, to create a ʺglobal villageʺ

in which ʺthe medium is the message.ʺ

1965

The Robotics Institute at Carnegie Mellon University, which will

become a leading research center for AI, is founded by Raj Reddy.

1965

Hubert Dreyfus presents a set of philosophical arguments against the

possibility of artificial intelligence in a RAND corporate memo

entitled ʺAlchemy and Artificial Intelligence.ʺ

1965

Herbert Simon predicts that by 1985 ʺmachines will be capable of

doing any work a man can do.ʺ

1966

The Amateur Computer Society, possibly the first personal computer

club, is founded by Stephen B. Gray. The Amateur Computer Society

Newsletter is one of the first magazines about computers.

1967

The first internal pacemaker is developed by Medtronics. It uses

integrated circuits.

1968

Gordon Moore and Robert Noyce found Intel (Integrated Electronics)

Corporation.

1968

The idea of a computer that can see, speak, hear, and think sparks

imaginations when HAL is presented in the film 2001: A Space

Odyssey, by Arthur C. Clarke and Stanley Kubrick.

1969

Marvin Minsky and Seymour Papert present the limitation of single‐

layer neural nets in their book Perceptrons. The bookʹs pivotal theorem

shows that a Perceptron is unable to determine if a line drawing is

fully connected. The book essentially halts funding for neural‐net

research.

1970

The GNP, on a per capita basis and in constant 1958 dollars, is $3,500,

or more than six times as much as a century before.

1970

The floppy disc is introduced for storing data in computers.

c. 1970

Researchers at the Xerox Palo Alto Research Center (PARC) develop

the first personal computer, called Alto. PARCʹs Alto pioneers the use

of bitmapped graphics, windows, icons, and mouse pointing devices.

1970

Terry Winograd completes his landmark thesis on SHRDLU, a

natural‐language system that exhibits diverse intelligent behavior in

the small world of childrenʹs blocks. SHRDLU is criticized, however,

for its lack of generality.

1971

The Intel 4004, the first microprocessor, is introduced by Intel.

1971

The first pocket calculator is introduced. It can add, subtract,

multiply, and divide.

1972

Continuing his criticism of the capabilities of AI, Hubert Dreyfus

publishes What Computers Canʹt Do, in which he argues that symbol

manipulation cannot be the basis of human intelligence.

1973

Stanley H. Cohen and Herbert W. Boyer show that DNA strands can

be cut, joined, and then reproduced by inserting them into the

bacterium Escherichia coli. This work creates the foundation for genetic

engineering.

1974

Creative Computing starts publication. It is the first magazine for home

computer hobbyists.

1974

The 8‐bit 8080, which is the first general‐purpose microprocessor, is

announced by Intel.

1975

Sales of microcomputers in the United States reach more than five

thousand, and the first personal computer, the Altair 8800, is

introduced. It has 256 bytes of memory.

1975

BYTE, the first widely distributed computer magazine, is published.

1975

Gordon Moore revises his observation on the doubling rate of

transistors on an integrated circuit from twelve months to twenty‐four

months.

1976

Kurzweil Computer Products introduces the Kurzweil Reading

Machine (KRM), the first print‐to‐speech reading machine for the

blind. Based on the first omni‐font (any font) optical character

recognition (OCR) technology, the KRM scans and reads aloud any

printed materials (books, magazines, typed documents).

1976

Stephen G. Wozniak and Steven P. Jobs found Apple Computer

Corporation.

1977

The concept of true‐to‐life robots with convincing human emotions is

imaginatively portrayed in the film Star Wars.

1977

For the first time, a telephone company conducts large‐scale

experiments with fiber optics in a telephone system.

1977

The Apple II, the first personal computer to be sold in assembled form

and the first with color graphics capability, is introduced and

successfully marketed.

1978

Speak & Spell, a computerized learning aid for young children, is

introduced by Texas Instruments. This is the first product that

electronically duplicates the human vocal tract on a chip.

1979

In a landmark study by nine researchers published in the Journal of the

American Medical Association, the performance of the computer

program MYCIN is compared with that of doctors in diagnosing ten

test cases of meningitis. MYCIN does at least as well as the medical

experts. The potential of expert systems in medicine becomes widely

recognized.

1979

Dan Bricklin and Bob Frankston establish the personal computer as a

serious business tool when they develop Visicalc, the first electronic

spreadsheet.

1980

AI industry revenue is a few million dollars this year.

1980s

As neuron models are becoming potentially more sophisticated, the

neural network paradigm begins to make a comeback, and networks

with multiple layers are commonly used.

1981

Xerox introduces the Star Computer, thus launching the concept of

Desktop Publishing. Appleʹs LaserWriter, available in 1985, will

further increase the viability of this inexpensive and efficient way for

writers and artists to create their own finished documents.

1981

IBM introduces its Personal Computer (PC).

1981

The prototype of the Bubble Jet printer is presented by Canon.

1982

Compact disc players are marketed for the first time.

1982

Mitch Kapor presents Lotus 1‐2‐3, an enormously popular

spreadsheet program.

1983

Fax machines are fast becoming a necessity in the business world.

1983

The Musical Instrument Digital Interface (MIDI) is presented in Los

Angeles at the first North American Music Manufacturers show.

1983

Six million personal computers are sold in the United States.

1984

The Apple Macintosh introduces the ʺdesktop metaphor,ʺ pioneered

at Xerox, including bit‐mapped graphics, icons, and the mouse.

1984

William Gibson uses the term cyberspace in his book Neuromancer.

1984

The Kurzweil 250 (K250) synthesizer, considered to be the first

electronic instrument to successfully emulate the sounds of acoustic

instruments, is introduced to the market.

1985

Marvin Minsky publishes The Society of Mind, in which he presents a

theory of the mind where intelligence is seen to be the result of proper

organization of a hierarchy of minds with simple mechanisms at the

lowest level of the hierarchy.

1985

MITʹs Media Laboratory is founded by Jerome Wiesner and Nicholas

Negroponte. The lab is dedicated to researching possible applications

and interactions of computer science, sociology, and artificial

intelligence in the context of media technology.

1985

There are 116 million jobs in the United States, compared to 12 million

in 1870. In the same period, the share of the population employed has grown

from 31 percent to 48 percent, and the per capita GNP in constant

dollars has increased by 600 percent. These trends show no signs of

abating.

1986

Electronic keyboards account for 55.2 percent of the American musical

keyboard market, up from 9.5 percent in 1980.

1986

Life expectancy is about 74 years in the United States. Only 3 percent

of the American workforce is involved in the production of food.

Fully 76 percent of American adults have high‐school diplomas, and

7.3 million U.S. students are enrolled in college.

1987

NYSE stocks have their greatest single‐day loss due, in part, to

computerized trading.

1987

Current speech systems can provide any one of the following: a large

vocabulary, continuous speech recognition, or speaker independence.

1987

Robotic‐vision systems are now a $300 million industry and will grow

to $800 million by 1990.

1988

Computer memory today costs only one hundred millionth of what it

did in 1950.

1988

Marvin Minsky and Seymour Papert publish a revised edition of

Perceptrons in which they discuss recent developments in neural

network machinery for intelligence.

1988

In the United States, 4,700,000 microcomputers, 120,000 minicomputers, and 11,500 mainframes are sold this year.

1988

W. Daniel Hillisʹs Connection Machine is capable of 65,536

computations at the same time.

1988

Notebook computers are replacing the bigger laptops in popularity.

1989

Intel introduces the 16‐megahertz (MHz) 80386SX, 2.5 MIPS

microprocessor.

1990

Nautilus, the first CD‐ROM magazine, is published.

1990

The development of Hypertext Markup Language by researcher Tim

Berners‐Lee and its release by CERN, the high‐energy physics

laboratory in Geneva, Switzerland, leads to the conception of the

World Wide Web.

1991

Cell phones and e‐mail are increasing in popularity as business and

personal communication tools.

1992

The first double‐speed CD‐ROM drive becomes available from NEC.

1992

The first personal digital assistant (PDA), a handheld computer, is

introduced at the Consumer Electronics Show in Chicago. The

developer is Apple Computer.

1993

The Pentium 32‐bit microprocessor is launched by Intel. This chip has

3.1 million transistors.

1994

The World Wide Web emerges.

1994

America Online now has more than 1 million subscribers.

1994

Scanners and CD‐ROMS are becoming widely used.

1994

Digital Equipment Corporation introduces a 300 MHz version of the

Alpha AXP processor that executes 1 billion instructions per second.

1996

Compaq Computer and NEC Computer Systems ship handheld

computers running Windows CE.

1996

NEC Electronics ships the R4101 processor for personal digital

assistants. It includes a touch‐screen interface.

1997

Deep Blue defeats Garry Kasparov, the world chess champion, in a

regulation tournament.

1997

Dragon Systems introduces Naturally Speaking, the first continuous‐

speech dictation software product.

1997

Video phones are being used in business settings.

1997

Face‐recognition systems are beginning to be used in payroll check‐

cashing machines.

1998

The Dictation Division of Lernout & Hauspie Speech Products

(formerly Kurzweil Applied Intelligence) introduces Voice Xpress

Plus, the first continuous‐speech‐recognition program with the ability

to understand natural‐language commands.

1998

Routine business transactions over the phone are beginning to be

conducted between a human customer and an automated system that

engages in a verbal dialogue with the customer (e.g., United Airlines

reservations).

1998

Investment funds are emerging that use evolutionary algorithms and

neural nets to make investment decisions (e.g., Advanced Investment

Technologies).

1998

The World Wide Web is ubiquitous. It is routine for high‐school

students and local grocery stores to have web sites.

1998

Automated personalities, which appear as animated faces that speak

with realistic mouth movements and facial expressions, are working

in laboratories. These personalities respond to the spoken statements

and facial expressions of their human users. They are being developed

to be used in future user interfaces for products and services, as

personalized research and business assistants, and to conduct

transactions.

1998

Microvisionʹs Virtual Retina Display (VRD) projects images directly

onto the userʹs retinas. Although expensive, consumer versions are

projected for 1999.

1998

ʺBluetoothʺ technology is being developed for ʺbodyʺ local area

networks (LANs) and for wireless communication between personal

computers and associated peripherals. Wireless communication is

being developed for high‐bandwidth connection to the Web.

1999

Ray Kurzweilʹs The Age of Spiritual Machines: When Computers Exceed

Human Intelligence is published, available at your local bookstore!

2009

A $1,000 personal computer can perform about a trillion calculations

per second.

Personal computers with high‐resolution visual displays come in a

range of sizes, from those small enough to be embedded in clothing

and jewelry up to the size of a thin book.

Cables are disappearing. Communication between components

uses short‐distance wireless technology. High‐speed wireless

communication provides access to the Web.

The majority of text is created using continuous speech

recognition. Also ubiquitous are language user interfaces (LUIs).

Most routine business transactions (purchases, travel, reservations) take place between a human and a virtual personality.

Often, the virtual personality includes an animated visual presence

that looks like a human face.

Although traditional classroom organization is still common,

intelligent courseware has emerged as a common means of learning.

Pocket‐sized reading machines for the blind and visually

impaired, ʺlistening machinesʺ (speech‐to‐text conversion) for the

deaf, and computer‐controlled orthotic devices for paraplegic

individuals result in a growing perception that primary disabilities do

not necessarily impart handicaps.

Translating telephones (speech‐to‐speech language translation) are

commonly used for many language pairs.

Accelerating returns from the advance of computer technology

have resulted in continued economic expansion. Price deflation,

which had been a reality in the computer field during the twentieth

century, is now occurring outside the computer field. The reason for

this is that virtually all economic sectors are deeply affected by the

accelerating improvement in the price performance of computing.

Human musicians routinely jam with cybernetic musicians.

Bioengineered treatments for cancer and heart disease have

greatly reduced the mortality from these diseases.

The neo‐Luddite movement is growing.

2019

A $1,000 computing device (in 1999 dollars) is now approximately

equal to the computational ability of the human brain.

Computers are now largely invisible and are embedded

everywhere—in walls, tables, chairs, desks, clothing, jewelry, and

bodies.

Three‐dimensional virtual reality displays, embedded in glasses

and contact lenses, as well as auditory ʺlenses,ʺ are used routinely as

primary interfaces for communication with other persons, computers,

the Web, and virtual reality.

Most interaction with computing is through gestures and two‐way

natural‐language spoken communication.

Nanoengineered machines are beginning to be applied to

manufacturing and process‐control applications.

High‐resolution, three‐dimensional visual and auditory virtual

reality and realistic all‐encompassing tactile environments enable

people to do virtually anything with anybody, regardless of physical

proximity.

Paper books or documents are rarely used and most learning is

conducted through intelligent, simulated software‐based teachers.

Blind persons routinely use eyeglass‐mounted reading‐navigation

systems. Deaf persons read what other people are saying through

their lens displays. Paraplegic and some quadriplegic persons

routinely walk and climb stairs through a combination of computer‐

controlled nerve stimulation and exoskeletal robotic devices.

The vast majority of transactions include a simulated person.

Automated driving systems are now installed in most roads.

People are beginning to have relationships with automated

personalities and use them as companions, teachers, caretakers, and

lovers.

Virtual artists, with their own reputations, are emerging in all of

the arts.

There are widespread reports of computers passing the Turing

Test, although these tests do not meet the criteria established by

knowledgeable observers.

2029

A $1,000 (in 1999 dollars) unit of computation has the computing

capacity of approximately 1,000 human brains.

Permanent or removable implants (similar to contact lenses) for

the eyes as well as cochlear implants are now used to provide input

and output between the human user and the worldwide computing

network.

Direct neural pathways have been perfected for high‐bandwidth

connection to the human brain. A range of neural implants is

becoming available to enhance visual and auditory perception and

interpretation, memory, and reasoning.

Automated agents are now learning on their own, and significant

knowledge is being created by machines with little or no human

intervention. Computers have read all available human‐ and machine‐

generated literature and multimedia material.

There is widespread use of all‐encompassing visual, auditory, and

tactile communication using direct neural connections, allowing

virtual reality to take place without having to be in a ʺtotal touch

enclosure.ʺ

The majority of communication does not involve a human. The

majority of communication involving a human is between a human

and a machine.

There is almost no human employment in production, agriculture,

or transportation. Basic life needs are available for the vast majority of

the human race.

There is a growing discussion about the legal rights of computers

and what constitutes being ʺhuman.ʺ

Although computers routinely pass apparently valid forms of the

Turing Test, controversy persists about whether or not machine

intelligence equals human intelligence in all of its diversity.

Machines claim to be conscious. These claims are largely accepted.

2049

The common use of nanoproduced food, which has the correct

nutritional composition and the same taste and texture of organically

produced food, means that the availability of food is no longer

affected by limited resources, bad crop weather, or spoilage.

Nanobot swarm projections are used to create visual‐auditory‐

tactile projections of people and objects in real reality.

2072

Picoengineering (developing technology at the scale of picometers or

trillionths of a meter) becomes practical. [1]

By the year 2099

There is a strong trend toward a merger of human thinking with

the world of machine intelligence that the human species initially

created.

There is no longer any clear distinction between humans and

computers.

Most conscious entities do not have a permanent physical

presence.

Machine‐based intelligences derived from extended models of

human intelligence claim to be human, although their brains are not

based on carbon‐based cellular processes, but rather electronic and

photonic equivalents. Most of these intelligences are not tied to a

specific computational processing unit. The number of software‐based

humans vastly exceeds those still using native neuron‐cell‐based

computation.

Even among those human intelligences still using carbon‐based

neurons, there is ubiquitous use of neural‐implant technology, which

provides enormous augmentation of human perceptual and cognitive

abilities. Humans who do not utilize such implants are unable to

meaningfully participate in dialogues with those who do.

Because most information is published using standard assimilated

knowledge protocols, information can be instantly understood. The

goal of education, and of intelligent beings, is discovering new

knowledge to learn.

Femtoengineering (engineering at the scale of femtometers or one

thousandth of a trillionth of a meter) proposals are controversial. [2]

Life expectancy is no longer a viable term in relation to intelligent

beings.

Some many millenniums hence . . .

Intelligent beings consider the fate of the Universe.

HOW TO BUILD AN

INTELLIGENT MACHINE

IN THREE EASY PARADIGMS

As Deep Blue goes deeper and deeper it displays elements of strategic understanding. Somewhere out there, mere tactics are translating into strategy. This is the closest thing Iʹve seen to computer intelligence. Itʹs a weird form of intelligence, the beginning of intelligence. But you can feel it. You can smell it.

—Frederic Friedel, assistant to Garry Kasparov,

commenting on the computer that beat his boss.

The whole point of this sentence is to make clear what the whole point of this sentence is.

—Douglas Hofstadter

ʺWould you tell me please which way I ought to go from here?ʺ asked Alice.

ʺThat depends a good deal on where you want to get to, ʺ said the Cat.

ʺI donʹt much care where . . . ,ʺ said Alice.

ʺThen it doesnʹt much matter which way you go,ʺ said the Cat.

ʺ. . . so long as I get somewhere,ʺ Alice added as an explanation.

ʺOh, youʹre sure to do that,ʺ said the Cat, ʺif you only walk long enough.ʺ

—Lewis Carroll

A professor has just finished lecturing at some august university about the origin and structure of the universe, and an old woman in tennis shoes walks up to the lectern. ʺExcuse me, sir, but youʹve got it all wrong,ʺ she says. ʺThe truth is that the universe is sitting on the back of a huge turtle.ʺ The professor decides to humor her. ʺOh really?ʺ he asks. ʺWell, tell me, what is the turtle standing on?ʺ The lady has a ready reply: ʺOh, itʹs standing on another turtle.ʺ The professor asks, ʺAnd what is that turtle standing on?ʺ Without hesitation, she says, ʺAnother turtle.ʺ

The professor, still game, repeats his question. A look of impatience comes across the womanʹs face. She holds up her hand, stopping him in mid‐sentence. ʺSave your breath, sonny,ʺ she says. ʺItʹs turtles all the way down.ʺ

—Rolf Landauer

As I mentioned in chapter 6, ʺBuilding New Brains,ʺ understanding intelligence is a bit like peeling an onion: penetrating each layer reveals yet another onion. At the end of the process, we have a lot of onion peels, but no onion.

In other words, intelligence—particularly human intelligence—operates at many levels. We can penetrate and understand each level, but the whole process requires all the levels working together in just the right way.

Presented here are some further perspectives on the three paradigms I discussed in chapter 4, ʺA New Form of Intelligence on Earth.ʺ Each of these methods can provide ʺintelligentʺ solutions to carefully defined problems. But to create systems that can respond flexibly in the complex environments in which intelligent entities often find themselves, these approaches need to be combined in appropriate ways. This is particularly true when interacting with phenomena that incorporate multiple levels of understanding. For example, if we build a single grand neural network

and attempt to train it to understand all the complexities of speech and language, the results will be limited at best.

More encouraging results are obtained if we break down the problem in a way that corresponds to the multiple levels

of meaning that we find in this uniquely human form of communication.

The human brain is organized the same way: as an intricate assemblage of specialized regions. And as we learn

the brainʹs parallel algorithms, we will have the means to vastly extend them. As just one example, the brain region responsible for logical and recursive thinking—the cerebral cortex—has a mere 8 million neurons. [1] We are already

building neural nets thousands of times larger and that operate millions of times faster. The key issue in designing intelligent machines (until they take over that chore from us) will be designing clever architectures to combine the relatively simple methods that comprise the building blocks of intelligence.

The Recursive Formula

Hereʹs a really simple formula to create intelligent solutions to difficult problems. Listen carefully or you might miss it.

The recursive formula is:

For my next step, take my best next step. If Iʹm done, Iʹm done.

It may seem too simple, and Iʹll admit thereʹs not much content at first glance. But its power is surprising.

Letʹs consider the classical example of a problem addressed by the recursive formula: the game of chess. Chess is

considered an intelligent game, at least it was until recently. Most observers are still of the view that it requires intelligence to play a good game. So how does our recursive formula fare in this arena?

Chess is a game played one move at a time. The goal is to make ʺgoodʺ moves. So letʹs define a program that makes good moves. By applying the recursive formula to chess, we rephrase it as follows:

PICK MY BEST MOVE: Pick my best move. If Iʹve won, Iʹm done.

Hang in there; this will make sense in a moment. I need to factor in one more aspect of chess, which is that I am

not in this alone. I have an opponent. She makes moves, too. Letʹs give her the benefit of the doubt and assume that she also makes good moves. If this proves to be wrong, it will be an opportunity, not a problem. So now we have:

PICK MY BEST MOVE: Pick my best move, assuming my opponent will do the same. If Iʹve won, Iʹm done.

At this point, we need to consider the nature of recursion. A recursive rule is one that is defined in terms of itself.

A recursive rule is circular, but to be useful we donʹt want to go around in circles forever. We need an escape hatch.

To illustrate recursion, letʹs consider an example: the simple ʺfactorialʺ function. To compute factorial of n, we multiply n by factorial of (n ‐ 1). Thatʹs the circular part—we have defined this function in terms of itself. We also need to specify that factorial of 1 = 1. Thatʹs our escape hatch.

As an example, letʹs compute factorial of 2. According to our definition,

factorial of 2 = 2 times (factorial of 1).

We know directly what (factorial of 1) is, so thereʹs our escape from infinite recursion. Plugging in (factorial of 1) =

1, we now can write,

factorial of 2 = 2 times 1 = 2.
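The two parts of the definition, the circular rule and the escape hatch, map directly onto a short recursive function. A minimal sketch in Python:

```python
def factorial(n):
    # Escape hatch: factorial of 1 is defined directly, not in terms
    # of itself, so the recursion always terminates.
    if n == 1:
        return 1
    # Circular part: the function is defined in terms of itself.
    return n * factorial(n - 1)

print(factorial(2))  # 2 times factorial(1) = 2 times 1 = 2
```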

Returning to chess, we can see that the PICK MY BEST MOVE function is recursive, since we have defined the best move in terms of itself. The deceptively innocuous ʺif Iʹve won, Iʹm doneʺ part of the strategy is our escape hatch.

Letʹs factor in what we know about chess. This is where we carefully consider the definition of the problem. We

realize that to pick the best move, we need to start by listing the possible moves. This is not very complicated. The legal moves at any point in the game are defined by the rules. While more complicated than some other games, the

rules of chess are straightforward and easily programmed. So we list the moves and pick the best one.

But which is best? If the move results in a win, that will do nicely. So again we merely consult the rules and pick

one of the moves that yields an immediate checkmate. Perhaps we are not so lucky and none of the possible moves

provides an immediate win. We still need to consider whether the move will eventually lead me to win or lose. At this point we need to consider the subtle addition we made to our rule, ʺassuming my opponent will do the same.ʺ After all, my winning or losing is affected by what my opponent might do. I need to put myself in her shoes and pick her best

move. How can I do that? This is where the power of recursion comes in. We have a program that does exactly this,

called PICK MY BEST MOVE. So we call it to determine my opponentʹs best move.

Our program is now structured as follows. PICK MY BEST MOVE generates a list of all possible moves allowed

by the rules. It examines each possible move in turn. For each move, it generates a hypothetical board representing

what the placement of the pieces would be if that move were selected. Again, this just requires applying the definition of the problem as embodied in the rules of chess. PICK MY BEST MOVE now puts itself in my opponentʹs place and

calls itself to determine her best move. It then starts to generate all of her possible moves from that board position.

The program thus keeps calling itself, continuing to expand possible moves and countermoves in an ever‐expanding tree of possibilities. This process is often called a minimax search, because we are alternately attempting to minimize my opponentʹs ability to win and to maximize my own.

Where does this all end? The program just keeps calling itself until every branch of the tree of possible moves and

countermoves results in an end of game. Each end of game provides the answer: win, lose, or tie. At the furthest point of expansion of moves and countermoves, the program encounters moves that finish the game. If a move results

in a win, we pick that move. If there are no win moves, then we settle for a tie. If there are no win or tie moves, I continue playing anyway in the hope that my opponent is not perfect like I am.

These final moves are the final branches—called leaves—in our tree of move sequences. Now, instead of continuing to call PICK MY BEST MOVE, the program begins to return from its calls to itself. As it begins to return

from all of the nested PICK BEST MOVE calls, it has determined the best move at each point (including the best move

for my opponent), and so it can finally select the correct move for the current actual board situation.
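The shape of this program is easier to see in running code. Chess itself cannot be fully expanded, so this sketch applies the same recursive skeleton to a toy game invented here for illustration: a pile of stones from which each player removes one or two, and whoever takes the last stone wins. Only the skeleton (list the legal moves, call yourself from the opponentʹs perspective, escape when the game ends) comes from the description above; the game itself is an assumption.

```python
def pick_best_move(state, my_turn):
    """Fully expand the move-countermove tree (a minimax search).

    `state` is the number of stones remaining. Scores are from my
    point of view: +1 means I win, -1 means I lose.
    """
    if state == 0:
        # Escape hatch: the game is over. Whoever moved last took the
        # final stone; if it is now "my" turn, my opponent just won.
        return (-1 if my_turn else 1), None

    best_score, best_move = None, None
    for move in (1, 2):
        if move > state:
            continue
        # Hypothetical position after this move; the recursive call
        # puts us in the other player's shoes ("assuming my opponent
        # will do the same").
        score, _ = pick_best_move(state - move, not my_turn)
        # I maximize the score on my turns; my (assumed perfect)
        # opponent minimizes it on hers.
        if best_score is None or (my_turn and score > best_score) \
                or (not my_turn and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

print(pick_best_move(4, True))  # (1, 1): taking one stone forces a win
```

Piles that are multiples of three are lost for the player to move; everything else is won, and the search discovers this with no game knowledge beyond the rules.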

So how good a game does this simple program play? The answer is perfect chess. I canʹt lose, unless perhaps my opponent goes first and is also perfect. Perfect chess is very good indeed, much better than any mere human. The most complicated part of the PICK MY BEST MOVE function—the only aspect that is not extremely simple—is generating the allowable moves at each point. And this is just a matter of codifying the rules. Essentially, we have determined the answer by carefully defining the problem.

But weʹre not done. While playing perfect chess might be considered impressive, it is not good enough. We need

to consider how responsive a player PICK MY BEST MOVE will be. If we assume that there are, on average, about 8

possible moves for each board situation, and that a typical game lasts about 30 moves, we need to consider 8^30 possible move sequences to fully expand the tree of all move‐countermove possibilities. If we assume that we can analyze 1 billion board positions per second (a good deal faster than any chess computer today), it would take 10^18 seconds, or about 40 billion years, to select each move.
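The arithmetic behind that estimate can be checked directly:

```python
# The tree of move-countermove possibilities: about 8 choices per
# position over a roughly 30-move game.
sequences = 8 ** 30                  # about 1.2 x 10^27

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
seconds = sequences / 1e9            # at one billion positions per second
years = seconds / SECONDS_PER_YEAR

print(f"about {years / 1e9:.0f} billion years per move")
```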

Unfortunately, thatʹs not regulation play. This approach to recursion is a bit like evolution—both do great work but are incredibly slow. Thatʹs really not surprising if you think about it. Evolution represents another very simple paradigm, and indeed is another of our simple formulas.

However, before we throw out the recursive formula, letʹs attempt to modify it to take into account our human patience and, for the time being, our mortality.

Clearly we need to put limits on how deep we allow the recursion to take place. How large we allow the move‐

countermove tree to grow needs to depend on how much computation we have available. In this way, we can use the

recursive formula on any computer, from a wristwatch computer to a supercomputer.

Limiting the size of this tree means of course that we cannot expand each branch until the end of the game. We

need to arbitrarily stop the expansion and have a method of evaluating the ʺterminal leavesʺ of an unfinished tree.

When we considered fully expanding each move sequence to the end of the game, evaluation was simple: Winning is

better than tying, and losing is no good at all. Evaluating a board position in the middle of the game is slightly more complicated. Rather, it is more controversial because here we encounter multiple schools of thought.

The cat in Aliceʹs Adventures in Wonderland who tells Alice that it doesnʹt much matter which way she goes must have been an expert in recursive algorithms. Any halfway reasonable approach works rather well. If, for example, we

just add up the piece values (that is, 10 for the queen, 5 for the rook, and so on), we will obtain rather respectable results. Programming the recursive minimax formula using the piece value method of evaluating terminal leaves, as

run on your average personal computer circa 1998, will defeat all but a few thousand humans on the planet.
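The ʺsimple mindedʺ leaf evaluation amounts to a few lines of code. The queen and rook values come from the text; the remaining values, and the board representation (a simple count of pieces per side), are conventional assumptions added for illustration:

```python
# Queen = 10 and rook = 5 are from the text; the other values are
# assumed conventional ones. The king is omitted: it is never captured.
PIECE_VALUES = {'queen': 10, 'rook': 5, 'bishop': 3, 'knight': 3, 'pawn': 1}

def evaluate(board):
    """Score a terminal leaf: my material minus my opponent's.

    `board` is a simplified representation mapping (side, piece)
    to a count of pieces, e.g. ('me', 'pawn'): 8.
    """
    score = 0
    for (side, piece), count in board.items():
        value = PIECE_VALUES[piece] * count
        score += value if side == 'me' else -value
    return score

# Up a rook, down two pawns:
print(evaluate({('me', 'rook'): 1, ('opp', 'pawn'): 2}))  # 3
```

In a depth‐limited search, this material count stands in for the win/lose/tie outcome at each arbitrarily cut‐off branch of the tree.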

This is what I call the ʺsimple mindedʺ school. This school of thought says: Use a simple method of evaluating the

terminal leaves and put whatever computational power we have available into expanding the moves and

countermoves as deeply as possible. Another approach is the ʺcomplicated mindedʺ school, which says that we need

to use sophisticated procedures to evaluate the ʺqualityʺ of the board at each terminal leaf position.

IBMʹs Deep Blue, the computer that crossed this historic threshold, uses a leaf evaluation method that is a good

deal more refined than just adding up piece values. However, in a discussion I had with Murray Campbell, head of

the Deep Blue team, just weeks prior to its 1997 historic victory, Campbell agreed that Deep Blueʹs evaluation method was more simple minded than complicated minded.

Human players are very complicated minded. That seems to be the human condition. As a result, even the best chess players are unable to consider more than a hundred moves, compared to a few billion for Deep Blue. But each

human move is deeply considered. However, in 1997, Gary Kasparov, the worldʹs best example of the complicated‐minded school, was defeated by a simple‐minded computer.

Personally, I am of a third school of thought. Itʹs not much of a school, really. To my knowledge, no one has tried

this idea. It involves combining the recursive and neural net paradigms, and I describe it in the discussion on neural nets that follows.

MATHLESS "PSEUDO CODE"

FOR THE RECURSIVE ALGORITHM

Here is the basic schema for the recursive algorithm. Many variations are possible, and the designer of the

system needs to provide certain critical parameters and methods, detailed below.

The Recursive Algorithm

Define a function (program), "PICK BEST NEXT STEP." The function returns a value of "SUCCESS" (we've solved the problem) or "FAILURE" (we didn't solve it). If it returns with a value of SUCCESS, then the function also returns the sequence of selected steps that solved the problem. PICK BEST NEXT STEP does the

following:

PICK BEST NEXT STEP:

• Determine if the program can escape from continued recursion at this point. This bullet and the

next two bullets deal with this escape decision. First, determine if the problem has now been

solved. Since this call to PICK BEST NEXT STEP probably came from the program calling itself, we

may now have a satisfactory solution. Examples are:

(i) In the context of a game (e.g., chess), the last move allows us to win (e.g., checkmate).

(ii) In the context of solving a mathematical theorem, the last step proves the theorem.

(iii) In the context of an artistic program (e.g., cybernetic poet or composer), the last step

matches the goals for the next word or note.

If the problem has been satisfactorily solved, the program returns with a value of SUCCESS. In this case,

PICK BEST NEXT STEP also returns the sequence of steps that caused the success.

• If the problem has not been solved, determine if a solution is now hopeless. Examples are:

(i) In the context of a game (e.g., chess), this move causes us to lose (e.g., checkmate for the

other side).

(ii) In the context of solving a mathematical theorem, this step violates the theorem.

(iii) In the context of an artistic program (e.g., cybernetic poet or composer), this step violates

the goals for the next word or note.

If the solution at this point has been deemed hopeless, the program returns with a value of FAILURE.

• If the problem has been neither solved nor deemed hopeless at this point of recursive expansion,

determine whether or not the expansion should be abandoned anyway. This is a key aspect of

the design and takes into consideration the limited amount of computer time we have to spend.

Examples are:

(i) In the context of a game (e.g., chess), this move puts our side sufficiently "ahead" or

"behind." Making this determination may not be straightforward and is the primary design

decision. However, simple approaches (e.g., adding up piece values) can still provide good

results. If the program determines that our side is sufficiently ahead, then PICK BEST NEXT

STEP returns in a similar manner to a determination that our side has won (i.e., with a value

of SUCCESS). If the program determines that our side is sufficiently behind, then PICK BEST

NEXT STEP returns in a similar manner to a determination that our side has lost (i.e., with a

value of FAILURE).

(ii) In the context of solving a mathematical theorem, this step involves determining if the

sequence of steps in the proof is unlikely to yield a proof. If so, then this path should be

abandoned, and PICK BEST NEXT STEP returns in a similar manner to a determination that

this step violates the theorem (i.e., with a value of FAILURE). There is no "soft" equivalent of

success. We can't return with a value of SUCCESS until we have actually solved the problem.

That's the nature of math.

(iii) In the context of an artistic program (e.g., cybernetic poet or composer), this step involves

determining if the sequence of steps (e.g., words in a poem, notes in a song) is unlikely to

satisfy the goals for the next step. If so, then this path should be abandoned, and PICK BEST

NEXT STEP returns in a similar manner to a determination that this step violates the goals

for the next step (i.e., with a value of FAILURE).

• If PICK BEST NEXT STEP has not returned (because the program has neither determined success

nor failure nor made a determination that this path should be abandoned at this point), then we

have not escaped from continued recursive expansion. In this case, we now generate a list of all

possible next steps at this point. This is where the precise statement of the problem comes in:

(i) In the context of a game (e.g., chess), this involves generating all possible moves for "our"

side for the current state of the board. This involves a straightforward codification of the

rules of the game.

(ii) In the context of finding a proof for a mathematical theorem, this involves listing the possible

axioms or previously proved theorems that can be applied at this point in the solution.

(iii) In the context of a cybernetic art program, this involves listing the possible words/notes/line

segments that could be used at this point.

For each such possible next step:

• Create the hypothetical situation that would exist if this step were implemented. In a game, this

means the hypothetical state of the board. In a mathematical proof, this means adding this step

(e.g., axiom) to the proof. In an art program, this means adding this word/note/line segment.

• Now call PICK BEST NEXT STEP to examine this hypothetical situation. This is, of course, where

the recursion comes in because the program is now calling itself.

• If the above call to PICK BEST NEXT STEP returns with a value of SUCCESS, then return from the

call to PICK BEST NEXT STEP (that we are now in), also with a value of SUCCESS. Otherwise

consider the next possible step.

If all the possible next steps have been considered without finding a step that resulted in a return from the

call to PICK BEST NEXT STEP with a value of SUCCESS, then return from this call to PICK BEST NEXT STEP

(that we are now in) with a value of FAILURE.

END OF PICK BEST NEXT STEP

If the original call to PICK BEST NEXT STEP returns with a value of SUCCESS, it will also return the correct

sequence of steps:

• In the context of a game, the first step in this sequence is the next move you should make.

• In the context of a mathematical proof, the full sequence of steps is the proof.

• In the context of a cybernetic art program, the sequence of steps is your work of art.

If the original call to PICK BEST NEXT STEP returns with a value of FAILURE, then you need to go back to the drawing board.
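The schema above, shrunk to a toy problem, can be sketched in Python. The problem here (reach a target number from zero using steps of +3 or +5) is invented purely for illustration; the three escape tests and the loop over possible next steps follow the bullets above.

```python
# A toy instance of PICK BEST NEXT STEP. The three escape tests (solved,
# hopeless, out of budget) mirror the schema's first three bullets; the
# loop over possible steps is the recursive expansion, with the depth
# limit standing in for our limited computer time.

def pick_best_next_step(total, target, depth_limit):
    if total == target:                        # success: problem solved
        return "SUCCESS", []
    if total > target or depth_limit == 0:     # hopeless, or out of budget
        return "FAILURE", None
    for step in (3, 5):                        # all possible next steps
        result, steps = pick_best_next_step(total + step, target,
                                            depth_limit - 1)
        if result == "SUCCESS":                # pass the winning path up
            return "SUCCESS", [step] + steps
    return "FAILURE", None                     # every branch failed

print(pick_best_next_step(0, 11, 5))  # ('SUCCESS', [3, 3, 5])
```

Notice that the returned sequence of steps is assembled on the way back up the recursion, exactly as the schema promises.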

Key Design Decisions

In the simple schema above, the designer of the recursive algorithm needs to determine the following at the

outset:

• The key to a recursive algorithm is the determination in PICK BEST NEXT STEP when to abandon

the recursive expansion. This is easy when the program has achieved clear success (e.g.,

checkmate in chess, or the requisite solution in a math or combinatorial problem) or clear failure.

It is more difficult when a clear win or loss has not yet been achieved. Abandoning a line of

inquiry before a well-defined outcome is necessary because otherwise the program might run for

billions of years (or at least until the warranty on your computer runs out).

• The other primary requirement for the recursive algorithm is a straightforward codification of the

problem. In a game like chess, that's easy. But in other situations, a clear definition of the

problem is not always so easy to come by.

Happy Recursive Searching!

Neural Nets

In the early and mid‐1960s, AI researchers became enamored with the Perceptron, a machine constructed from mathematical models of human neurons. Early Perceptrons were modestly successful in such pattern‐recognition tasks as identifying printed letters and speech sounds. It appeared that all that was needed to make the Perceptron more intelligent was to add more neurons and more wires.

Then came Marvin Minsky and Seymour Papertʹs 1969 book, Perceptrons, which proved a set of theorems

apparently demonstrating that a Perceptron could never solve the simple problem of determining whether or not a line drawing is ʺconnectedʺ (in a connected drawing all parts are connected to one another by lines). The book had a dramatic effect, and virtually all work on Perceptrons came to a halt. [2]

In the late 1970s and 1980s, the paradigm of building computer simulations of human neurons, then called neural

nets, began to regain its popularity. One observer wrote in 1988:

Once upon a time two daughter sciences were born to the new science of cybernetics. One sister was natural,

with features inherited from the study of the brain, from the way nature does things. The other was artificial,

related from the beginning to the use of computers. Each of the sister sciences tried to build models of intelligence, but from very different materials. The natural sister built models (called neural networks) out of

mathematically purified neurones. The artificial sister built her models out of computer programs.

In their first bloom of youth the two were equally successful and equally pursued by suitors from other fields

of knowledge. They got on very well together. Their relationship changed in the early sixties when a new monarch appeared, one with the largest coffers ever seen in the kingdom of the sciences: Lord DARPA, the Defense Departmentʹs Advanced Research Projects Agency. The artificial sister grew jealous and was

determined to keep for herself the access to Lord DARPAʹs research funds. The natural sister would have to be

slain.

The bloody work was attempted by two staunch followers of the artificial sister, Marvin Minsky and Seymour

Papert, cast in the role of the huntsman sent to slay Snow White and bring back her heart as proof of the deed.

Their weapon was not the dagger but the mightier pen, from which came a book— Perceptrons—purporting to

prove that neural nets could never fill their promise of building models of mind: only computer programs could do this. Victory seemed assured for the artificial sister. And indeed, for the next decade all the rewards of

the kingdom came to her progeny, of which the family of expert systems did best in fame and fortune.

But Snow White was not dead. What Minsky and Papert had shown the world as proof was not the heart of the

princess, it was the heart of a pig.

The author of the above statement was Seymour Papert. [3] His sardonic allusion to bloody hearts reflects a widespread misunderstanding of the implications of the pivotal theorem in his and Minskyʹs 1969 book. The theorem

demonstrated limitations in the capabilities of a single layer of simulated neurons. If, on the other hand, we place neural nets at multiple levels—having the output of one neural net feed into the next—the range of its competence greatly expands. Moreover, if we combine neural nets with other paradigms, we can make yet greater progress. The

heart that Minsky and Papert extracted belonged primarily to the single layer neural net.
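The standard textbook illustration of the single‐layer limit is the exclusive‐or (XOR) function: no single threshold neuron can compute it, but one extra layer can. The sketch below uses hand‐picked weights, not learned ones, simply to show that the two‐layer net succeeds where any one‐layer net must fail.

```python
# XOR with a two-layer net of threshold neurons. XOR is not linearly
# separable, so no single neuron computes it; the hidden layer fixes
# that. Weights here are hand-picked for illustration, not trained.

def neuron(inputs, weights, threshold):
    # fires (outputs 1) if the weighted sum reaches the threshold
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor_net(x, y):
    h_or  = neuron([x, y], [1, 1], 1)         # hidden neuron: x OR y
    h_and = neuron([x, y], [1, 1], 2)         # hidden neuron: x AND y
    return neuron([h_or, h_and], [1, -1], 1)  # output: OR but not AND

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", xor_net(x, y))      # 1 only when x differs from y
```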

Papertʹs irony also reflects his and Minskyʹs own considerable contributions to the neural net field. In fact, Minsky started his career with seminal contributions to the concept at Harvard in the 1950s. [4]

But enough of politics. What are the main issues in designing a neural net?

One key issue is the netʹs topology: the organization of the interneuronal connections. A net organized with multiple levels can make more complex discriminations but is harder to train.

Training the net is the most critical issue. This requires an extensive library of examples of the patterns the net will be expected to recognize, along with the correct identification of each pattern. Each pattern is presented to the net.

Typically, those connections that contributed to a correct identification are strengthened (by increasing their associated weight), and those that contributed to an incorrect identification are weakened. This method of strengthening and weakening the connection weights is called back‐propagation and is one of several methods used.
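The strengthen‐and‐weaken idea can be sketched for a single neuron learning logical AND. This is a simplification: true back‐propagation pushes error corrections through multiple layers; the single‐neuron version below is the classic perceptron learning rule, which captures the flavor.

```python
# A toy version of the strengthen-and-weaken rule: one neuron learning
# logical AND. When the output is wrong, each connection's weight is
# nudged up or down in proportion to the input that fed it.

def fire(inputs, weights, bias):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0

def train(examples, epochs=20, rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, correct in examples:
            error = correct - fire(inputs, weights, bias)  # 0 if right
            # strengthen (or weaken) each contributing connection
            weights = [w + rate * error * i for w, i in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(examples)
outputs = [fire(x, weights, bias) for x, _ in examples]
print(outputs)  # the neuron has learned AND: [0, 0, 0, 1]
```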

There is controversy as to how this learning is accomplished in the human brainʹs neural nets, as there does not appear to be any mechanism by which back‐propagation can occur. One method that does appear to be implemented

in the human brain is that the mere firing of a neuron increases the neurotransmitter strengths of the synapses it is connected to. Also, neurobiologists have recently discovered that primates, and in all likelihood humans, grow new

brain cells throughout life, including adulthood, contradicting an earlier dogma that this was not possible.

Little and Big Hills

A key issue in adaptive algorithms—neural nets and evolutionary algorithms—is often referred to as local versus global optimality: in other words, climbing the closest hill versus finding and climbing the biggest hill. As a neural net learns (by adjusting the connection strengths), or as an evolutionary algorithm evolves (by adjusting the ʺgeneticʺ

code of the simulated organisms), the fit of the solution will improve, until a ʺlocally optimalʺ solution is found. If we compare this to climbing a hill, these methods are very good at finding the top of a nearby hill, which is the best possible solution within a local area of possible solutions. But sometimes these methods may become trapped at the

top of a small hill and fail to see a higher mountain in a different area. In the neural net context, if the neural net has converged on a locally optimal solution, as it tries adjusting any of the connection strengths, the fit becomes worse.

But just as a climber might need to come down a small elevation to ultimately climb to a higher point on a different hill, the neural net (or evolutionary algorithm) might need to make the solution temporarily worse to ultimately find a better one.

One approach to avoiding such a ʺfalseʺ optimal solution (little hill) is to force the adaptive method to do the analysis multiple times, starting with very different initial conditions; in other words, force it to climb lots of hills, not just one. But even with this approach, the system designer still needs to make sure that the adaptive method hasnʹt

missed an even higher mountain in a yet more distant land.
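The multiple‐starts idea can be sketched on a made‐up one‐dimensional landscape with a little hill and a big one. The landscape and all its numbers are invented for illustration.

```python
# Random-restart hill climbing on an invented landscape with a little
# hill (height 1 near x=1) and a big one (height 3 near x=4).
import random

def fitness(x):
    return max(0.0, 1 - (x - 1) ** 2) + max(0.0, 3 - (x - 4) ** 2)

def climb(x, step=0.05, iterations=1000):
    # plain hill climbing: only ever accept a change that improves fitness
    for _ in range(iterations):
        for candidate in (x - step, x + step):
            if fitness(candidate) > fitness(x):
                x = candidate
    return x

random.seed(0)                      # fixed seed so the run is repeatable
starts = [random.uniform(0, 6) for _ in range(10)]
best = max((climb(s) for s in starts), key=fitness)
print(round(fitness(best), 2))      # at least one start finds the big hill: 3.0
```

A single climb from an unlucky start would settle on the little hill at height 1; ten scattered starts all but guarantee that one of them lands in the big hill's basin.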

The Laboratory of Chess

We can gain some insight into the comparison of human thinking and conventional computer approaches by again examining the human and machine approaches to chess. I do this not to belabor the issue of chess playing, but rather because it illustrates a clear contrast. Raj Reddy, Carnegie Mellon Universityʹs AI guru, cites studies of chess as playing the same role in artificial intelligence that studies of E. coli play in biology: an ideal laboratory for studying fundamental questions. [5] Computers use their extreme speed to analyze the vast combinations created by the combinatorial explosion of moves and countermoves. While chess programs may use a few other tricks (such as storing the openings of all master chess games in this century and precomputing endgames), they essentially rely on

their combination of speed and precision. In comparison, humans, even chess masters, are extremely slow and imprecise. So we precompute all of our chess moves. Thatʹs why it takes so long to become a chess master, or the master of any pursuit. Gary Kasparov has spent much of his few decades on the planet studying—and experiencing—

chess moves. Researchers have estimated that masters of a nontrivial subject have memorized about fifty thousand such ʺchunksʺ of insight.

When Kasparov plays, he, too, generates a tree of moves and countermoves in his head, but limitations in human

mental speed and short‐term memory limit his mental tree (for each actually played move) to no more than a few hundred board positions, if that. This compares to billions of board positions for his electronic antagonist. So the human chess master is forced to drastically prune his mental tree, eliminating fruitless branches by using his intense pattern‐recognition faculties. He matches each board position—actual and imagined—to this database of tens of thousands of previously analyzed situations.

After Kasparovʹs 1997 defeat, we read a lot about how Deep Blue was just doing massive number crunching, not

really ʺthinkingʺ the way its human rival was doing. One could say that the opposite is the case, that Deep Blue was indeed thinking through the implications of each move and countermove and that it was Kasparov who did not have

time to really think very much during the tournament. Mostly he was just drawing upon his mental database of situations he had thought about long ago. (Of course, this depends on oneʹs notion of thinking, as I discussed in chapter 3.) But if the human approach to chess—neural network‐based pattern recognition used to identify situations

from a library of previously analyzed situations—is to be regarded as true thinking, then why not program our machines to work the same way?

The Third Way

And thatʹs my idea that I alluded to earlier as the third school of thought in evaluating the terminal leaves in a recursive search. Recall that the simple‐minded school uses an approach such as adding up piece values to evaluate a particular board position. The complicated‐minded school advocates a more elaborate and time‐consuming logical analysis. I advocate a third way: combine two simple paradigms—recursive and neural net—by using the neural net

to evaluate the board positions at each terminal leaf. The training of a neural net is time‐consuming and requires a great deal of computing, but performing a single recognition task on a neural net that has already learned its lessons is very quick, comparable to a simple‐minded evaluation. Although fast, the neural net is drawing upon the very extensive amount of time it previously spent learning the material. Since we have every master chess game in this century online, we can use this massive amount of data to train the neural net. This training is done once and offline (that is, not during an actual game). The trained neural net would then be used to evaluate the board positions at each terminal leaf. Such a system would combine the millionfold advantage in speed that computers have with the more

humanlike ability to recognize patterns against a lifetime of experience.
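In outline, the proposal looks like the sketch below. The trained_net function is only a stand‐in (it counts material); in the actual proposal it would be a neural net trained offline on the centuryʹs master games, called once per terminal leaf.

```python
# A sketch of the third way: a recursive minimax search whose terminal
# leaves are scored by a (stand-in) trained net rather than a hand-tuned
# evaluation. Uppercase pieces are ours, lowercase the opponent's.

def trained_net(position):
    # stand-in for the trained net's fast, learned evaluation
    values = {"Q": 10, "R": 5, "P": 1, "q": -10, "r": -5, "p": -1}
    return sum(values.get(piece, 0) for piece in position)

def search(tree, our_turn=True):
    if isinstance(tree, str):            # terminal leaf: ask the net
        return trained_net(tree)
    scores = [search(subtree, not our_turn) for subtree in tree.values()]
    return max(scores) if our_turn else min(scores)

# A two-ply toy tree: keys are moves, leaves the resulting material.
tree = {"a": {"a1": "QRr", "a2": "Qr"},
        "b": {"b1": "QRp", "b2": "QRrp"}}
print(search(tree))  # move b is best: the opponent can only hold us to 9
```

The point of the design is that the expensive learning happens once, offline; during play, each leaf costs only a single fast recognition.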

I proposed this approach to Murray Campbell, head of the Deep Blue team, and he found it intriguing and appealing. He was getting tired anyway, he admitted, of tuning the leaf evaluation algorithm by hand. We talked about setting up an advisory team to implement this idea, but then IBM canceled the whole chess project. I do believe that one of the keys to emulating the diversity of human intelligence is optimally to combine fundamental paradigms.

Weʹll talk about how to fold in the paradigm of evolutionary algorithms below.

MATHLESS "PSEUDO CODE"

FOR THE NEURAL NET ALGORITHM

Here is the basic schema for a neural net algorithm. Many variations are possible, and the designer of the

system needs to provide certain critical parameters and methods, detailed below.

The Neural Net Algorithm

Creating a neural net solution to a problem involves the following steps:

• Define the input.

• Define the topology of the neural net (i.e., the layers of neurons and the connections between

the neurons).

• Train the neural net on examples of the problem.

• Run the trained neural net to solve new examples of the problem.

• Take your neural net company public.

These steps (except for the last one) are detailed below:

The Problem Input

The problem input to the neural net consists of a series of numbers. This input can be:

• in a visual pattern-recognition system: a two-dimensional array of numbers representing the

pixels of an image; or

• in an auditory (e.g., speech) recognition system: a two-dimensional array of numbers

representing a sound, in which the first dimension represents parameters of the sound (e.g.,

frequency components) and the second dimension represents different points in time; or

• in an arbitrary pattern recognition system: an n-dimensional array of numbers representing the

input pattern.
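For instance, a tiny 5 × 5 image of the letter T reduces to such a two‐dimensional array of numbers; the example is purely illustrative.

```python
# A 5x5 "image" of the letter T as a two-dimensional array of numbers:
# 1 where there is ink, 0 for background.
letter_T = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

# Nets typically receive this flattened into one list of 25 numbers,
# one per pixel, each feeding the inputs of the first-layer neurons.
flat = [pixel for row in letter_T for pixel in row]
print(len(flat))  # 25
```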

Defining the Topology

To set up the neural net:

The architecture of each neuron consists of:

• Multiple inputs in which each input is "connected" to either the output of another neuron or one

of the input numbers.

• Generally, a single output, which is connected either to the input of another neuron (which is

usually in a higher layer) or to the final output.

Set up the first layer of neurons:

• Create N_0 neurons in the first layer. For each of these neurons, "connect" each of the multiple

inputs of the neuron to "points" (i.e., numbers) in the problem input. These connections can be

determined randomly or using an evolutionary algorithm (see below).

• Assign an initial "synaptic strength" to each connection created. These weights can start out all

the same, can be assigned randomly, or can be determined in another way (see below).

Set up the additional layers of neurons:

Set up a total of M layers of neurons. For each layer, set up the neurons in that layer.

For layer_i:

• Create N_i neurons in layer_i. For each of these neurons, "connect" each of the multiple inputs of

the neuron to the outputs of the neurons in layer_i-1 (see variations below).

• Assign an initial "synaptic strength" to each connection created. These weights can start out all

the same, can be assigned randomly, or can be determined in another way (see below).

• The outputs of the neurons in layer_M are the outputs of the neural net (see variations below).

The Recognition Trials

How each neuron works:

Once the neuron is set up, it does the following for each recognition trial.

• Each weighted input to the neuron is computed by multiplying the output of the other neuron (or

initial input) that the input to this neuron is connected to by the synaptic strength of that

connection.

• All of these weighted inputs to the neuron are summed.
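Those two bullets, taken literally, amount to a few lines of Python:

```python
# Multiply each input signal by the synaptic strength of its
# connection, then sum the weighted inputs. (The schema goes on to
# compare this sum against the neuron's firing threshold.)
def weighted_sum(inputs, strengths):
    total = 0.0
    for signal, strength in zip(inputs, strengths):
        total += signal * strength   # one weighted input per connection
    return total

print(weighted_sum([1, 0, 1], [0.5, 0.9, 0.3]))  # 0.5 + 0.0 + 0.3 = 0.8
```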