Braille Monitor, June 2004


Access Technology and Disabilities in the Twenty-First Century

 by Ray Kurzweil

 

Jim Gashel, Ray Kurzweil, and Marc Maurer enjoy dinner together before Mr. Kurzweil's banquet remarks.

From the Editor: Raymond Kurzweil is an inventor, entrepreneur, author, and futurist. The Wall Street Journal calls him "a restless genius"; Forbes magazine refers to him as the "ultimate thinking machine." He has many firsts to his credit: developer of omni-font optical character recognition, inventor of the first text-to-speech reading machine for the blind, developer of the CCD flatbed scanner, inventor of the first text-to-speech synthesizer and of a music synthesizer that reproduces the grand piano and other orchestral instruments, and developer of the first large-vocabulary speech-recognition system.

Dr. Kurzweil is also the recipient of many awards and honors: the $500,000 Lemelson-MIT Prize, induction into the National Inventors Hall of Fame, and the National Medal of Technology. But more important to members of the blindness community around the world than all these honors is Ray Kurzweil's close, personal, and constant commitment to improving the lives of blind people. He awards scholarships to the most deserving blind students each year. He is also working closely with the National Federation of the Blind to develop a pocket-size reading machine that can be used anywhere. On April 8, 2004, Ray Kurzweil addressed the attendees at the first technology conference to be held at the NFB Jernigan Institute. Here are his after-dinner remarks:

This has been the most meaningful working relationship I have had. I started my inventing career with a reading machine for the blind and was gratified to get this very enthusiastic organization not only to back the effort but to work closely with me. We worked with a team of scientists and engineers from the National Federation of the Blind, to which I really attribute the success of that project. It's been a very rewarding effort, so I have kept a close involvement in this field for over thirty years. One gentleman who is here, Steve Baum, has been a tremendous contributor to that technology and has led the software effort for the last--fifteen years is it?--twenty-two years. Yes, time goes by quickly when you are having a good time.

As Jim mentioned, we are working together again. I'm working with the National Federation of the Blind, and we are going to be using Kurzweil educational software to create a pocket-size reading machine. I'll talk a little bit about the status of the project and what my goals are. But what I'd like to talk to you about is the future of technology. Really my interest in being a futurist stems from my interest in being an inventor, and that goes back to this first major project, the reading machine. I realized that my project had to make sense when I finished it, not when I began it, and invariably the world was a different place when we got the project done three or four years later. Everything changed--the technology, the enabling factors, the market, the distribution channels, and the development tools.

Mostly projects fail, not because the R and D department can't get the project to work, but because the timing is wrong--the projects are too late, or they are too early. It's kind of like surfing, catching the wave at the right time, and it's very hard to get that timing just right. Most often projects are too early, and not all the enabling factors are in place. So I became an ardent student of technology trends. I began to track trends very carefully, and this has taken on a life of its own. I have a team of ten people who gather data about all the different aspects of technology (computation, communications, biological technology, different kinds of electronic technologies), and I work with mathematicians to develop mathematical models. We then use those models to anticipate technology and to time projects so we can catch the wave at just the right time.

This has actually enabled me now to invent with the technologies of the future, and not just projects of three or four years from now, but to anticipate what technology will be ten years, twenty years, thirty years hence. While we can't build a circa-2020 product today, we can envision what it would be like and then we can contemplate what its impact will be on society.

So that's what I'd like to talk to you about: where technology is headed, what kind of capabilities we'll have twenty or thirty years from now, and what impact that will have on the world in general and in particular on disabilities and on blindness technology. This will affect everyone; it will have a profound effect on disabilities. I would say that already technology has been a great leveler in that it can really overcome the primary handicaps associated with disabilities, provided that the technology is designed correctly, that people know about the technology, that people have the right training, and that the technology really meets the needs of disabled people.

That's really the purpose of this research and training institute. Technology is accelerating, and I think Dr. Jernigan was very prescient to have the urgency he did despite a fatal illness. He realized there really was no time to wait, that this was the right time for this very daunting project. We do need a national leadership institute such as the Jernigan Institute to guide technology and also to train the world to use the technology effectively. I think the need for that will become more apparent as I go through some of the future trends.

The one trend that has impressed me most deeply from these models that I've developed is that the pace of technology itself is accelerating. It's not a constant. You might say, "Well okay, that's obvious; things are getting faster." It's remarkable that so few otherwise thoughtful observers really take this into consideration. I would say 95 percent of Nobel Prize winners don't factor this in.

I was just at a conference a few months ago, and we were talking about the promise and peril of technology--and I will try to touch on that--and we were talking about the dangers of nanotechnology. One Nobel Prize biologist said, "Oh, we're not going to have self-replicating nanotechnology for at least a hundred years."

I said, "That's actually a very good estimate of the amount of technical progress needed to achieve that milestone (that matches my own models) at today's rate of progress, but we're doubling the paradigm-shift rate, the rate of technical progress, every decade, so we'll make a hundred years of progress at today's rate of progress in twenty-five years." That is consensus in the nanotechnology field.

Generally scientists look at the work they've done recently. They'll have an intuition, "Okay, over the last year we've solved one percent of the problem." It might be hard to define exactly what that means, but their intuition is actually pretty good. Then they'll say, "Okay, we'll take ninety-nine years to do the other 99 percent," not realizing that the pace of progress is going to accelerate greatly because the tools get more and more powerful. That particular insight is very rarely taken into consideration.

I'd like to show you some of these trends. I'm going to talk through the screens for those who are visually impaired. Most of the screens just show a graph that goes up and up. These are what are called logarithmic graphs, which means that going up the chart represents multiplying some key value by a constant rather than just adding to it. For example, if you have something that doubles every year, a linear chart would show a curve whose slope gets more and more extreme as you move to the right. On a logarithmic chart that same data would form a straight line. So a straight line on a logarithmic chart means exponential growth, and all of these, except for one which I will point out, are logarithmic graphs. This first one is the growth of the phone industry over the last 110 years. Basically the only point here is that it took half a century for the telephone to be adopted by a quarter of the U.S. population. Here is a more recent technology, U.S. cell phones, and we see the same progression in only ten years. This is one example of the acceleration at all different levels of technology--not only the power of the technology, but also its adoption and impact.
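In symbols (the notation is illustrative): a quantity that doubles every \(\tau\) years,

\[
y(t) \;=\; y_0 \cdot 2^{\,t/\tau},
\qquad
\log y(t) \;=\; \log y_0 + \frac{\log 2}{\tau}\,t,
\]

traces a straight line on a logarithmic vertical axis, with slope set by the doubling time; on a linear axis the same data bend upward ever more steeply.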

If we put a lot of different communication technologies on a logarithmic graph (the telephone, radio, television), each took quite a few decades to be adopted by a quarter of the U.S. population. The World Wide Web took about five years, reflecting this ongoing acceleration. If we take a broader view of technology, in fact see technology itself as an outgrowth of the biological evolution that led to the technology-creating species in the first place, we actually see that this exponential growth of the rate of progress goes back billions of years to the beginning of life on this planet. The first paradigm shift, cells and DNA, took billions of years.

Evolution works through indirection. It creates a capability; then it uses that capability to create the next stage. That's why an evolutionary process like technology or like biological evolution accelerates. So once we had DNA, which is actually a little computer system that evolution devised to keep track of its experiments, the next stage, the Cambrian explosion when all the body plans of the animals were evolved, went relatively quickly. It only took about ten or twenty million years, which is hundreds of times faster. Biological evolution kept accelerating. Homo sapiens, our species, evolved in only a few hundred thousand years.

Then again, working through indirection, evolution used that creation, Homo sapiens, to bring in the next stage, which was human-directed cultural and technological evolution. That again was faster. The first stage took only tens of thousands of years. Fire, the wheel, and stone tools evolved much more quickly than our species did. Each new stage of technology was used to create the next stage. So a thousand years ago a paradigm shift like the printing press took about a century to be adopted, but recent major communication technologies, new paradigms, have been adopted in only a few years' time.

This next chart goes much further back in time. It is a doubly logarithmic chart showing the time each paradigm took to be adopted. The data form a straight line, really showing that technological evolution was an outgrowth and a continuation of biological evolution. The cutting edge of evolution on our planet is not biological evolution anymore. It's the evolution that we are creating, and we use each generation's tools to create the next.

The first generation of computers was designed by hand with pencils and straightedges and wired with screwdrivers and individual wires. Now a computer designer will sit at a computer station and specify some high-level parameters, and twelve different layers of intermediate design will be computed automatically. A very complex design can be done in a matter of hours rather than years.

This is some personal experience. When I was a student back at MIT in 1967, a computer that took up a room bigger than this auditorium cost a few million dollars. It was less powerful than the computer in your cell phone; it was a quarter of a MIPS [million instructions per second]. Today a notebook computer like the one I am using here, which costs $2,000--actually already less--is four thousand times faster. That's twenty-two doublings of price performance in thirty-six years. So every nineteen months the power per dollar of computers has doubled, and that's actually a pretty conservative statement.
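Those two figures are consistent (a quick check of the arithmetic, taking "a few million dollars" as roughly two million): the price-performance gain is the speed ratio times the cost ratio,

\[
4000 \times \frac{\$2{,}000{,}000}{\$2{,}000} \;=\; 4 \times 10^{6} \;\approx\; 2^{22},
\qquad
\frac{36\ \text{years}}{22\ \text{doublings}} \;\approx\; 1.6\ \text{years} \;\approx\; 19\ \text{months}.
\]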

Many of you, I'm sure, have heard of Moore's Law, which reflects the exponential growth of computation. While we're doubling the rate of progress every ten years, the actual power of the technology per unit cost is doubling, generally about once a year. That's actually a deflationary effect of 50 percent. The economists are now worried about deflation when we used to be worried about inflation. They are worried about deflation because we had deflation in the Depression, and they think deflation is a harbinger of depression. That deflation was because of a collapse in consumer demand and the money supply. This deflation is because of an improvement in price performance, and it's actually leading to economic prosperity. It leads to greater productivity; and, as I'll show you later, our actual economic output more than outpaces that 50-percent deflationary factor.

The key factor that has fueled Moore's Law is that on an integrated circuit we shrink the size of transistors, which are already microscopic, by 50 percent every two years. The transistors also run faster because the electrons have less distance to travel. We basically double the price performance of integrated circuits every twelve months.
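The geometry behind that twelve-month figure (a sketch of the arithmetic, on the reading that the 50 percent shrink applies to each linear dimension): halving each linear dimension every two years lets four times as many transistors fit in a given area,

\[
\left(\frac{1}{1/2}\right)^{2} = 4\ \text{per two years}
\;\Longrightarrow\; \text{density doubles every twelve months},
\]

and the speed gained from shorter electron paths pushes overall price performance faster still.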

Many observers have said, okay, we're going to run out of room in that particular paradigm--actually they keep pushing the date back, but now it's about twenty years out. Within fifteen to twenty years the key features of transistors will be a few atoms in width, and we won't be able to shrink them anymore. So will that be the end of Moore's Law? Well, the answer is yes; that paradigm won't work anymore. But the real question is, will that be the end of the exponential growth of computers and all the other things that stem from electronics, like communications and so on? It's really a key question as we consider the twenty-first century because of the profound impact that computation is going to have on many aspects of our lives.

So as one way to examine this question I put forty-nine famous computers on a logarithmic graph, which you see on this slide. This goes back more than a century: at the end of the nineteenth century, in the lower left-hand corner, we had the machine that automated the 1890 American census. It used those little punch-card machines. (I think those were subsequently shipped to the Florida Election Commission.) [laughter]

In 1942 we had a different type of computer, based on relays from the telephone system, which Alan Turing and his colleagues put together to crack the Nazi Enigma code. It provided Churchill and Roosevelt with transcriptions of the German messages. Actually this was quite a dilemma for the leadership, because they didn't want to use the information too freely, or they would tip off the enemy that they had cracked the code. So Churchill knew that Coventry was to be bombed and wasn't able to warn the city. They tried various ruses to convince the Nazis that they had gotten the information in some other way. If they knew a convoy of ships was coming, they would send over a lone flyer, and the Germans would say, "Oh, we've been spotted." In fact the English knew all along where the convoy was coming from, having cracked the code. But then in the Battle of Britain they used the information without reservation. Despite the fact that the RAF was greatly outnumbered, England won that battle, giving us a launching pad for the D-Day invasion.

In the 1950's a different type of circuit came in--vacuum tubes--completely different from relays. CBS used such a machine to predict the election of a president for the first time, President Eisenhower. Engineers kept shrinking vacuum tubes, making them smaller and smaller to keep this exponential growth of electronics going. Finally that paradigm hit a wall: they couldn't shrink the tubes anymore and still maintain the vacuum. Then a completely different paradigm came out of left field--transistors, which are not small tubes at all but a different technique entirely, and it kept the exponential growth going. Then integrated circuits came.

So every time one paradigm ran out of the ability to keep this exponential growth going, another paradigm emerged. Generally, when the end of a paradigm's ability to produce exponential growth could be seen ahead of time, that created pressure on research and development to produce the next paradigm. That's happening right now, even though we're fifteen to twenty years away from the end of the current paradigm, which is the fifth, not the first, paradigm to provide exponential growth in computing.

Now that we can see that we will eventually be unable to shrink transistors on an integrated circuit, we've already been doing extensive work on the sixth paradigm, which is three-dimensional molecular computing--building computing devices at the molecular level in three dimensions. When I wrote about this five years ago in the book, The Age of Spiritual Machines, it was considered a controversial notion and was not a mainstream view. There has since been a real sea change in attitude among mainstream scientists as to the feasibility of three-dimensional molecular circuits. There has been so much progress, really in the last two years, that it's now a mainstream view: why, of course we'll have three-dimensional molecular circuits, and they are already working on a small scale. My favorite, which five years ago I said would most likely work, is carbon nanotubes--hexagonal arrays of carbon atoms that are extremely strong. They're fifty times stronger than steel, and they are extremely fast in terms of computation. A one-inch cube of nanotube circuitry would be a million times more powerful than the computational ability of the human brain. I will come back to that, the human brain.

We'll be able to keep this trend going really through the twenty-first century by going into the third dimension. Chips today are very dense, but they are flat. Our brain is organized in three dimensions. Even though our brain actually uses a very cumbersome and inefficient signaling system (it uses an electrochemical computational method that is a million times slower than today's electronic circuits), it's organized in three dimensions, and by using the third dimension, which we might as well do since we live in a three-dimensional world, we'll be able to compete with the human brain. I want to come back and talk more about that.

This curve up here on this chart is not a straight line; it's itself curving upward, meaning there is actually exponential growth in the rate of exponential growth. It took us three years to double the price performance of computation at the beginning of the twentieth century. We are now doubling it every year. That's going to continue as well.

These graphs are all different exponential charts. I won't dwell on them because I want to talk more about the implications, but they are different ways of measuring the exponential growth of electronics. Here are different Intel processors, doubling every 1.8 years. Here's the average transistor price: in 1968 you could buy one transistor for a dollar. I remember in the 1960's hanging around the surplus electronics shops on Canal Street in New York--they're still there. I could buy the equivalent of a transistor, which at that time was a relay with support circuitry, a large device about the size of a small toaster. It cost about forty dollars. Today you can buy ten million transistors for a dollar. The price has come down by half every 1.6 years, so you can buy twice as many transistors for the same money every 1.6 years. Unlike Gertrude Stein's roses, it's not the case that a transistor is a transistor. As we make them smaller, they're actually better: by being smaller, they run faster, because electrons have less distance to travel. So the actual price performance of electronics is doubling every 1.1 years.
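Those transistor-price numbers are consistent with the quoted halving time (a quick check):

\[
\frac{10^{7}\ \text{transistors per dollar today}}{1\ \text{transistor per dollar in 1968}} = 10^{7} \approx 2^{23.3},
\qquad
\frac{36\ \text{years}}{23.3\ \text{halvings}} \approx 1.5\ \text{years},
\]

close to the 1.6-year figure; the extra speed from smaller transistors accounts for the faster 1.1-year doubling of overall price performance.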

The next question is, is there something special about electronics? Some people have said that it's a self-fulfilling prophecy--that the electronics industry has noticed this so-called law, so all the companies know where they need to be at different points in time, and it kind of perpetuates itself. So I examined other areas where people have not talked about Moore's Law or exponential growth. Here is a completely different type of technology--magnetic data storage. This is not packing transistors on an integrated circuit; it is packing magnetic bits on a substrate, a different technical problem. But we find exactly the same exponential growth, doubling every fifteen months.

DNA sequencing is a different technical problem again. This chart shows how much biological information you can sequence per dollar, which has doubled every year. The price performance, speed, and bandwidth of DNA sequencing have all doubled annually, and this has fueled another profound revolution with enormous implications. We are coming to understand biology, life, aging processes, and disease in terms of information. We are understanding the software of cancer, of heart disease, of diabetes, of the fifteen processes that underlie aging. We're actually figuring out how to change those so that we can reverse aging. There are, I would say, hundreds of different developments and methods and drugs in the pipeline that very precisely target the different steps in the progression of these diseases.

I believe we will largely eliminate cancer and heart disease over the next ten years. I mean, there are drugs right now in the pipeline for approval (there was an announcement just this morning) that will wipe out heart disease over the next three or four years if you take advantage of these methods, now that we are really understanding these processes in information terms. This technology is also accelerating. This graph is a logarithmic plot of the amount of DNA information we have sequenced; basically we are doubling the amount of information we have about these processes every year.

So, if you remember, it took three years to sequence the HIV (AIDS) virus; SARS took two weeks. This is a good example, and it gives me a lot of comfort about our ability to deal with biological viruses. A lot of concern has been expressed about the potential for biological warfare and biological terrorism. Here we had a new virus, much more dangerous than HIV. HIV is hard to spread; you have to really work at it. SARS spreads in the air; it's very communicable, and it's much more deadly. HIV is dangerous, but with SARS 20 percent of the patients died, and the other 80 percent are not doing well. It is a very dangerous virus. We were able to contain it very quickly with a combination of modern technologies: the Internet, where information about it spread very quickly, and the ability to sequence it in two weeks and then develop testing methods in a matter of days. We also used some ancient methods, physically separating people who had the disease and so on. But it does give me some comfort that we were able to understand this new outbreak, about which we had no information, and contain it very quickly.

This is a profound revolution, now that we have the intersection of biology and information science, and it's subject to the same law of accelerating returns. Communication is another such technology. I won't dwell on these charts, but it's the same thing. It's not Moore's Law; this is a different kind of technology, and there are thirty different ways to measure it--wired, wireless, fiber optics, modems, ISPs--all doubling every year.

Here's the Internet. When I wrote my first book in the mid-1980's, The Age of Intelligent Machines, I had only a little piece of this chart. It wasn't called the Internet then; it was called the ARPANET. ARPA is the Advanced Research Projects Agency of the Department of Defense. We had gone from twenty thousand nodes serving a few thousand scientists to forty thousand in one year. The next year it was eighty thousand. Very few people had heard of this phenomenon. It was clear to me that the exponential trend would continue, and if you do the math and keep doubling, we would get to ten million, then twenty million, then forty million nodes in the mid-1990's. So I predicted that by the mid-1990's this would be on everybody's radar screen and we'd have a universal communication network spanning the globe, and that's exactly what happened. You can see it if you look at the exponential trend.
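The projection is nothing more than repeated doubling; a minimal sketch in Python (the exact starting year is an assumption for illustration, and the counts are the figures quoted above):

    # Project annual doubling of network nodes from the ARPANET figures
    # quoted above (the starting year is assumed for illustration).
    nodes, year = 20_000, 1985
    while nodes < 40_000_000:   # the mid-1990's figure mentioned above
        nodes *= 2              # the observed trend: nodes double every year
        year += 1
    print(year, nodes)          # -> 1996 40960000: tens of millions by the mid-1990's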

This next chart is the only one I'm going to show you that's on a linear graph. Of course, when you look at the Internet on a linear graph, it looks as if nothing is happening, and then it suddenly explodes in the mid-1990's, coming out of nowhere. That is how we experienced the Internet; that is how we experience technology, because we live in a linear world. But if you look at these trends exponentially, you can see them coming. Exponential growth is very seductive because at first it looks like nothing is happening; it looks completely flat. It's like living on a pond with lily pads, which grow exponentially. The pond's owner doesn't want to leave on vacation because he doesn't want the lily pads to take over his pond; but there seem to be hardly any lily pads, so he waits until the very end of the season, takes off, and comes back two weeks later to find that the lily pads have covered the pond completely. It's that last burst of exponential growth that really takes over.

In 1990 the chess champion Garry Kasparov looked at the very best chess machines and said, "They're pathetic; they are never going to touch me. They are very crude; they don't have human ways of performing." People told him that they were improving exponentially, but he didn't understand what that meant. He just looked at the best machines of the day, and it seemed they would never be able to perform. But in 1997 they soared past him and defeated him. We'll see that in one area after another.

Here is another very important trend. This chart shows the same thing in terms of miniaturization--things getting smaller at an exponential rate. We are shrinking the size of technology by a factor of four or five per linear dimension per decade. That exponential rate of shrinking holds for electronic technology, and it's also true of mechanical systems. The state of the art is that we can create tiny machines using the same technology we use to create our chips. There are already four major conferences on building blood-cell-sized devices to go inside the human bloodstream to keep us healthy and perform diagnosis and therapy--right now such devices are being tested in animals.

One scientist actually cured type 1 diabetes in rats using this type of device. It's a little computerized device that releases insulin in a controlled fashion and monitors the amount of insulin. It's a very clever device that resides in the bloodstream. It's nanoengineered; the features are measured in billionths of a meter. The same technique will work in humans because it's the same mechanism of diabetes. There are dozens of projects like this already on the drawing boards, so putting intelligent devices in our bloodstream to keep us healthy (there are a number of even more exciting applications, which I will talk about in a few moments) is not so futuristic. This is already working in animals.

This whole field began in the mid-1980's at my alma mater, MIT. This is a little animation of a design by Eric Drexler, who founded this field of nanotechnology. Nanotechnology is building little machines at the nanoscale, which is billionths of a meter. It really means building them out of atoms and molecular fragments. He had these theoretical designs, and they have since been simulated on supercomputers. These are little machines that work at the molecular level, and we already have machines like this working. I would say the golden age of nanotechnology, where we really can build very intelligent machines at this level, will be in the 2020's.

One of the implications is that we will really marry these trends of miniaturization, our understanding of biology in information terms, computation, and communication; they will all come together. We'll have little devices that go inside our bodies and bloodstream and greatly enhance the human potential. They'll have profound implications for disabilities, which I want to talk about in a few moments, but it really will advance our health and longevity.

I believe radical life extension will be coming within a couple of decades. I think it's already feasible for baby boomers like me to essentially postpone aging indefinitely. I have a book coming out on that this fall, How to Live Long Enough to Live Forever. It talks about three bridges: bridge one is the knowledge we have today, which can slow down aging to such a degree that we can remain alive, healthy, and viable until the full blossoming of the second bridge, which is biotechnology. That will extend our lifespan to the point that takes us to the third bridge, the full blossoming of the nanotechnology revolution, when we can really enhance our biological systems with nanoengineered devices.

So we really do have the means today to dramatically change the nature of human aging and the human lifespan. This brings up natural controversies, but my view is that what is unique about the human species is that we seek to extend beyond our horizons and limitations. We don't celebrate our limitations; we celebrate our ability to overcome them and extend beyond them. We didn't stay on the ground. We didn't stay on the planet. And we are not staying within the limitations of our biology--which, incidentally, gave us a life expectancy of thirty-seven in 1800, and that's pretty recent in terms of evolution or even human history. Most of us in this room wouldn't be here if we had not extended human potential through our technology.

Here is one design I know of: the brilliant nanotechnology theorist Rob Freitas has actually designed a robotic red blood cell, called a respirocyte. It's actually a pretty simple device; the natural red blood cell just gathers oxygen and lets it out at prescribed times. Detailed studies of these respirocytes indicate that, if you replaced 10 percent of your red blood cells with them, you could do an Olympic sprint for fifteen minutes without taking a breath. You could sit at the bottom of your pool for four hours. We are actually very limited when it comes to breathing; we have to be in the vicinity of breathable air all the time, which is a pretty big limitation. It will be interesting to see what impact these devices have on our Olympic contests. Presumably we'll ban them, assuming we can detect them, but then we will have the specter of kids in their junior high school gymnasiums routinely outperforming Olympic athletes.

It's actually not that complicated a device, and it points up a key insight into the design of our biology. You commonly hear people say how remarkable the design of our biology is, how intricate the designs are, how clever evolution is, and how perfect our biological systems are. On the one hand, they are remarkably intricate; on the other hand, they are very suboptimal. Once we reverse-engineer our biological systems--that is to say, understand how they work, which we are increasingly doing now (that process is also accelerating)--we can re-engineer these systems in our bodies to be thousands of times more capable. These respirocytes, a pretty conservative design, are about a thousand times more powerful than our red blood cells.

The most important example is our thinking. Thinking takes place in the interneuronal connections, the connections between our neurons. We have about a hundred billion neurons. There's an average fan-out of a thousand to one, so we have a hundred trillion connections, dendrites and axons. That's where the bulk of our thinking takes place. It uses an electrochemical signaling system that is literally a million times slower than our conventional electronics. So once we can build electronics in three dimensions, we'll be able to build circuits that are millions of times more powerful than our interneuronal connections.
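In round numbers:

\[
10^{11}\ \text{neurons} \times 10^{3}\ \text{connections per neuron} = 10^{14}\ \text{connections},
\]

a hundred trillion, each signaling electrochemically at roughly a millionth the speed of electronic circuits.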

Basically our biology is very limited because biological evolution adopted certain techniques and then got stuck with those designs. I mean, everything is built out of protein. That's a very limited construction set: proteins are physically weak, and they don't signal very well. There are a lot of limitations. We already have means of engineering things that are far more powerful.

This picture shows another design, called a microbivore, which is basically a robotic white blood cell. Our white blood cells are very clever devices. These are the warriors, the soldiers of our immune system. They go out and detect pathogens of different kinds--viruses, fungal particles, or cancer cells--then sneak up, engulf them, and destroy them. Again, they are remarkable but also very limited. I've actually watched one of my own white blood cells through a microscope sneak up on a bacterium, engulf it, and destroy it. It was clever, but it was very slow and very boring to watch; it took about an hour and a half to finish the job. This device does the same thing, except it does it in a few seconds. It's a thousand times more capable, and it can download software from the Internet for any particular pathogen.

If that seems unusual, consider that we already have neural implants. I am talking about an FDA-approved device for Parkinson's patients that is implanted in the brain and replaces the biological neurons destroyed by the disease. The remaining neurons nearby are perfectly happy to receive signals from the electronic implant, just as if they were getting signals from the original biological neurons. So you have a hybrid of electronic and biological signaling that works perfectly well. The original devices were hardwired, but the recent version has downloadable software, so the patient gets the latest software upgrade to the neural implant from outside.

These microbivores will download software and will be able to destroy any kind of pathogen. They will basically eliminate cancer and any disease caused by a pathogen, to the extent that we have not already done that with biotechnology. So nanotechnology is going to be a kind of clean-up operation, overcoming whatever aging and disease processes we cannot address just by re-engineering our biological processes.

Coming back to that chart of the exponential growth of computing, this is the same chart you saw before. The left-hand side has the forty-nine famous computers, showing the exponential growth of computing power through the twentieth century projected through the twenty-first century using three-dimensional molecular computing. We can see that a thousand dollars of computation, like the computer I am using up here on the stage, is somewhere between an insect and a mouse brain but will intersect the capability of a human brain in terms of raw power, or computation, by around 2020. Then it will go on to greatly exceed that.

Does that mean that computers will have the intellectual capabilities of humans by 2020? No, it really answers only one aspect of the question: will we have the raw computational power? That too was a controversial notion when I wrote about it five years ago; I would say most scientists didn't agree with me. Today the mainstream view is, why of course we'll have plenty of computation to emulate the human brain. Now the challenge is, will we have the software, or will we just have extremely fast calculators that don't have the suppleness, subtlety, tremendous insight, and flexibility of human intelligence?

Now we turn to a different field, which is also, not surprisingly, growing exponentially--knowledge of the human brain. Our ability to see inside the brain, reverse-engineer it, and understand its principles of operation is also growing exponentially. I work in a field called artificial intelligence, where we try to teach computers to do things that otherwise require human intelligence. Up until now there has not been much contribution to AI (artificial intelligence) from understanding the human brain because our tools for seeing inside it have been very crude.

Suppose you were trying to reverse-engineer a computer, and you didn't know anything about it, and all you had were crude magnetic sensors that you could place outside the box. When it was storing something in the database, you'd pick up a little signal--ah, there's something going on over here. You would develop a theory: okay, this big circuit board is doing something to format the information. Then you'd hear some noise on another device and say, okay, this device that says "disk drive" on it must be storing the information. You would develop a theory like that. The theory is correct, but it's very crude. It doesn't really tell you how these processes take place.

That's pretty much been the state of the art in brain scanning and reverse-engineering: relatively crude ways of seeing what's going on inside the brain. But the spatial and temporal resolution, the precision, and the speed of brain scanning are also doubling every year. Some new techniques are now emerging--for example, a new scanning technique from the University of Pennsylvania that can actually see the individual signaling on interneuronal connections in a cluster of thousands of neurons in real time. So for the first time we can see exactly what's going on inside the brain in response to different tasks. If you asked an electrical engineer to reverse-engineer a computer, he or she would say, "Well, I want to place individual sensors on each of the wires, and I want to be able to track them at high enough speed." Of course, that's exactly what an electrical engineer does when reverse-engineering a competitor's product.

We are now getting to the point where we can do that with the human brain. It has led to some interesting insights. One insight is, not only does our brain create our thoughts, but our thoughts create our brain. We can actually see this in real time. When someone thinks about something, we can actually see new synaptic connections being created in real time from that activity.

There is an interesting study of violinists in which researchers saw that the part of the brain dealing with sensing the four fingers of the left hand--the fingering hand, if the musician is right-handed--was greatly enlarged. They figured maybe people with enlarged sensitivity in those fingers decide to become violinists. So they took people who were not violinists and taught them violin, and within three months the same enlargement appeared. Then, using high-speed scanning, they could actually see the creation of these new connections in real time. It has long been known from the study of Einstein's brain that the regions dealing with mathematics and the types of analytical skills needed for physics were greatly enlarged.

We really do create our own brains with our activities. But we're also learning exactly how those mechanisms work. This is a block diagram that I won't explain, but each of these little boxes represents a region of the brain. Based on the data we have from brain scans and other neurological sources, a group of scientists on the West Coast has created a mathematical model and a computer simulation of each of these regions--basically an artificial system that recreates these fifteen regions of the brain having to do with processing auditory information. Applying psychoacoustic tests to this computer simulation yields the same results as applying the same kinds of tests to human auditory perception, indicating that the model and the simulation are reasonably correct.

Another group has actually simulated the human cerebellum, which comprises nearly half of the neurons in our brains. That's where we learn skills like talking and walking. We have a computer simulation, again, which operates very similarly to the way the cerebellum works in the human brain. You might ask, how complicated is the human brain? Okay, we are making progress in certain areas and simulating certain regions, but isn't the whole thing vastly beyond our capability of understanding? Doug Hofstadter, who is a famous scientist and Pulitzer Prize winner, has been saying for many years, "Well, maybe our intelligence is just below the threshold needed to understand our own intelligence. If we were smarter, then our brain would have to be more complicated, and we still wouldn't be able to understand it, and we could never catch up." He compares us to a giraffe, whose brain is really not that different from ours. A giraffe is clearly not capable of understanding its own brain; maybe we're not capable of understanding our own brains either.

But we're finding that the brain comprises only several hundred regions, and we've already reverse-engineered about twenty of them. Once we get the data, we can develop these models and understand how they work. The brain itself has a lot of content in it. I mentioned the hundred trillion connections; there are thousands of trillions of bytes of information needed to characterize the state of all the neurotransmitters in our brain. But how complicated is the design? The design is captured in the genome. The genome is six billion bits, roughly eight hundred million bytes, and it's replete with redundancies; some sequences are repeated hundreds of thousands of times. If you just take out the redundancies and use data compression, there are about thirty million bytes of information in the genome. Two-thirds of that, twenty million bytes, describes the human brain. That's less than Microsoft Word.
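The byte counts follow directly from the bit count (the compression and two-thirds figures are those just quoted):

\[
6 \times 10^{9}\ \text{bits} \div 8 \approx 8 \times 10^{8}\ \text{bytes},
\qquad
\frac{2}{3} \times 30\ \text{million bytes compressed} \approx 20\ \text{million bytes}.
\]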

Now, you might say, how can that be? How could a twenty-million-byte design describe the brain, which I just said requires thousands of trillions of bytes to characterize--that is to say, millions of times more information than is contained in the genome? Well, the genome actually describes an evolutionary process--one that unfolds in the course of our lives and happens very rapidly in the first few years of life. We can simulate such evolutionary algorithms on our computers. We can have a very compact program that sets up millions of examples of itself, which then compete with each other in a simulated evolution. They ultimately create a much more complex system than the original design. That's in fact exactly how the brain works.
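A toy version of such a compact program can be sketched in a few lines (entirely illustrative: the bit-string "designs," the target, and all the parameters are invented for the example):

    import random

    TARGET = [1] * 40                       # a stand-in for a "fit" design

    def fitness(genome):
        # How well a candidate design matches the target pattern.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.02):
        # Copy with random variation, like "repeat with some random variation."
        return [1 - g if random.random() < rate else g for g in genome]

    # A small seed sets up a whole population of competing examples.
    population = [[random.randint(0, 1) for _ in range(40)] for _ in range(200)]

    for generation in range(100):
        # Selection: the fitter half survives and reproduces with variation.
        population.sort(key=fitness, reverse=True)
        survivors = population[:100]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(100)]

    print(fitness(population[0]), "of", len(TARGET))  # near-perfect after 100 generations

A few hundred bytes of program, run through selection and variation, end up producing far more organized structure than the seed itself spells out--the point being made about the genome and the brain.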

A very small amount of information in the genome describes the wiring of the cerebellum. It's very simple--perhaps a few hundred bytes of information in the genome describe how the cerebellum, which contains only four different types of neurons, is wired. It says, okay, repeat this ten billion times with some random variation. So you get this randomly wired cerebellum before birth. Then the newborn child starts interacting with a complex environment. It's learning skills, and the cerebellum and all the other parts of the brain begin to organize themselves. The brain is self-organizing and ultimately contains a lot of meaningful information. The actual design is relatively compact, and it has a level of complexity that we as humans are capable of understanding.

I will say just a few words about the economic impact because I want to get to the implications for disabilities. Our economy has been growing exponentially because of this exponential improvement in price performance and productivity. I mentioned that we have 50 percent deflation in electronics, and you might expect that to shrink the electronics industry in dollar terms, since you can buy the same stuff for half the money. But in fact the electronics industry has more than kept up: it has grown 18 percent per year in real dollars, adjusted for inflation, despite that 50 percent deflation, and that is really what is fueling economic growth today.

Let's talk about a few scenarios. By the end of this decade computers as we know them are going to disappear. They're going to be so small that they'll be in our clothing and in our eyeglasses, and we are going to be online all the time with wireless, extremely high-speed connections to the Internet. This will have a profound impact on visual impairment. We are working now, as Jim mentioned, to develop a handheld reading machine, and that project is going very well. We have a prototype using a standard camera, connected to a notebook computer, that can read a substantial diversity of material. The next step will be to eliminate the wire, so you will have a wireless connection to a notebook, and ultimately to use a very small computer that you can put in your pocket or carry on your belt, communicating wirelessly with the pocket-sized camera.

So these two very small devices will use standard electronics. We decided on that early on, rather than trying to build a specialized device which would be obsolete by the time we designed it. This way you can take advantage of the tremendous accelerating price performance of consumer electronics. It's always important to do that. We were talking at dinner about the fact that Braille displays have not enjoyed that kind of improvement because it's a small, orphan market of a certain number of Braille readers who are not able to take advantage of the price performance of consumer electronics.

We expect next year to have a major testing program involving hundreds of people, and we expect this to be a real consumer product by the year after that, 2006. Around that time it should really be just one device--it won't require the two devices. It uses standard reading-machine software from Kurzweil Education. But it requires solving some other problems. Standard reading-machine software will tolerate only about a 5-percent skew, because with a standard scanner you line the reading material up against the edge of the glass. If you are just holding a camera, our experiments show that you will have much more rotational skew than that, so we have to correct for things like that.

The illumination will also be very uneven. In a scanner you have a controlled environment in which the illumination is essentially perfect, but in the real world you encounter all kinds of strange lighting conditions. Formats are much more complicated too, because you may be taking a picture of text on a wall, or of a sign out in the world with lawn and trees behind it. So you have to deal with the vagaries of the many kinds of text in the real world. We are developing software, beyond the standard reading-machine software, that deals with these kinds of complexities.
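To give a flavor of those two corrections, here is a minimal sketch using the OpenCV library (an illustration of the general techniques, not the project's actual software; OpenCV's box-angle convention varies by version, so the sign of the correction may need flipping):

    import cv2
    import numpy as np

    def level_and_relight(image_path):
        """Even out lighting and undo rotational skew before OCR (sketch)."""
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)

        # Uneven illumination: divide by a heavily blurred copy, which
        # approximates the background lighting field, then rescale.
        background = cv2.GaussianBlur(gray, (51, 51), 0)
        flat = cv2.divide(gray, background, scale=255)

        # Rotational skew: estimate the angle of the dominant ink mass.
        binary = cv2.threshold(flat, 0, 255,
                               cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
        points = np.column_stack(np.where(binary > 0)).astype(np.float32)
        angle = cv2.minAreaRect(points)[-1]
        if angle > 45:              # fold the box angle into a small correction
            angle -= 90

        # Rotate the page back to level.
        h, w = flat.shape
        rotation = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        return cv2.warpAffine(flat, rotation, (w, h),
                              flags=cv2.INTER_CUBIC,
                              borderMode=cv2.BORDER_REPLICATE)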

Ultimately this type of device will also be able to describe things other than reading material. It will be able to identify what's in a room, describe people and types of objects, tell you where they are, and even recognize faces. There is already very good face-recognition software. For example, the Department of Homeland Security is using such software in airports to try to recognize terrorists. That's actually a very daunting problem, because they might be trying to recognize a thousand terrorists out of millions of people who walk by. Generally we have fewer people we want to recognize. We may have a few dozen friends whose faces we wish to identify.

Over time this reader will provide more and more sophisticated descriptions of real-world scenes. It can be integrated with GPS to provide information about what's around you, actually downloading information about specific objects and buildings and guiding you using GPS directions. It will evolve into a kind of intelligent sighted assistant that can guide you in the real world, getting more sophisticated and smaller and smaller over time. There are already some very high-resolution scanning cameras tiny enough to be built into a pair of eyeglasses or pinned to your lapel, looking in all directions. That is the direction in which this type of project is headed.

[Mr. Kurzweil then played a recording of computer software at work translating speech from one language to another.] That's using speech recognition that one of my companies developed. It is using the latest in speech synthesis, which is the synthesizer used in the Kurzweil 1000 reading system, and contemporary language translation. I actually held a conversation with a woman who spoke only German. I spoke in English; she heard me in German. She responded in German, and I heard her in English. We were able to converse quite well. I think you will see systems like this in the future. They will also be integrated into these reading systems, so if you take a picture of a sign while you are traveling in another country, you can hear it in the language of your choice.

If we go ahead twenty-five years and talk about the end of the 2020's, these trends of exponential growth in computation, communications, our understanding of biology and the human brain, reverse-engineering of the brain, and developing artificial intelligence will really have come to fruition. Keep in mind the power of exponential growth: the progress we make in the 2020's will be far greater than the progress we make in this decade, because the pace of advancement will keep compounding. By that time a thousand dollars of computation will be a thousand times more powerful than the human brain. We will have reverse-engineered the human brain, really understanding how it works, and we will be able to use those insights as templates of intelligence for our intelligent machines.

Computers will pass what's called the "Turing test," which is to say they'll have the suppleness and subtlety of human intelligence. Nonbiological intelligence will then combine the flexibility of human intelligence, which is really based on pattern recognition, with some of the ways in which machines are already far superior to us. I mean, this computer here can remember billions of things accurately, while we are hard pressed to remember a handful of phone numbers. Once a machine masters a skill, it can perform it very quickly, repetitively, and tirelessly.

Most important, machines can share their knowledge. If I read War and Peace or learn French, I can't just download that knowledge to you. Human beings do have the ability to communicate, but the channel we have for doing so--language--has very low bandwidth. At least we have language, which is itself a major technology; spoken language was the first technology, followed by written language, and it is what enables human beings to communicate at all. Another unique aspect of human beings is that we have a shared knowledge base that we pass down from generation to generation. The size of that knowledge base is, not surprisingly, growing exponentially--it's doubling every year--and language is the key to this growth. Other animals don't have a knowledge base; they don't have a language to embody it in.

Nonetheless, language is very slow. If we teach a subject to someone else, it takes months or years. Machines can share their knowledge instantly. We, for example, spent years training one computer to understand human speech. We taught it like a child, and it actually had self-organizing methods just like the human brain does. We exposed it to thousands of hours of recorded speech, and we automatically corrected its mistakes, and patiently over years it got better and better. Now finally it does a commercially good job of recognizing human speech.

If you want your computer to do the same job of recognizing speech, you don't have to go through those years of training it like we have to do with every human child. You can just load the evolved patterns of our research computer. It's called loading the software. So machines can actually share instantly the results of their learning. Once they can learn as well as humans can, they can go out and read all of human knowledge, which is increasingly out on the Web, and master all of human knowledge and share it among themselves.

But the key impact of all this, in my view, is not going to come from an alien invasion of intelligent machines competing with us, because they are not coming from over the horizon. They are emerging from within our human civilization, and they are already tools that expand the intelligence of our civilization. We're going to merge with this technology and become more capable. I've mentioned some early examples: we are already putting intelligent machines in our bloodstream, and we have a lot of human beings walking around with neural implants in their brains.

When we get to the 2020's, this is going to be a ubiquitous phenomenon, because we're going to be able to essentially merge with our technology noninvasively. We'll be able to have billions of nanobots in our bloodstream. They will reverse the aging process, stop disease, keep us healthy, and provide radical life extension. But they will also greatly expand our mental capabilities. These devices will be able to communicate with our biological neurons--we have already demonstrated that with today's neural implants. They'll be on the Internet, and we'll be able to communicate with each other over a wireless local area network.

Let's take one application of virtual reality from within the nervous system. You want to be in real reality: the nanobots don't do anything. You want to be in virtual reality: then the nanobots shut down the signals coming from your real senses--your ears, skin, whatever--and replace them with the signals you would be experiencing if you were in the virtual environment. Your brain feels just like it's in that virtual environment. You can be there by yourself or with other people. Some of these virtual environments will be recreations of earthly environments; some will be fantastic environments that have no earthly counterpart.

Design of new virtual environments will be a new art form. We will have what I call "experience beamers," people who put their whole flow of sensory experiences on the Internet the way people now beam images from their Webcams. You will be able to plug in and experience what it's like to be someone else, kind of like the plot concept of the film Being John Malkovich, and you will be able to relive archived experiences. Designing virtual experiences will be another new art form. But, most important, this will be a profound expansion of human intelligence, because right now we're limited to a mere hundred trillion connections. That might sound like a big number, but speaking for myself, I find it quite limiting. Many of you, like Dr. Maurer, send me books to read and Web sites to look at, and we have very limited human bandwidth; it can take a long time to read one book, and there are so many books we would like to read. We will ultimately be able to greatly expand our mental capacity. We could have a hundred trillion connections times a thousand or times a million, and these new connections could operate a million times faster. We would be able to greatly expand the kind of intelligence we have, while also having an intimate connection to new forms of nonbiological intelligence that will be very powerful.

What will all of this mean for disabilities? I mentioned that disabilities have already been greatly affected by technology. A major handicap associated with the disability of blindness is the inability to access ordinary print. Reading machines and screen readers have been major steps forward, but they still have limitations--you have to bring your reading material to the machine. Well, the portable machine will address that particular handicap. So bit by bit, as we identify each handicap associated with a disability, we really can overcome it.

Would we want to create artificial vision? That's actually a more complicated question than it might seem, and I think it's fortunate that we have an institute such as this one to answer these kinds of questions as the technologies come up. As I mentioned, our thoughts create our brain. A scientist named Broca, more than a hundred years ago, found that when particular regions of the brain were damaged, people lost particular skills. He developed a theory that the brain is hardwired--that particular regions of the brain deal with particular things. So one region of the brain dealt with processing vision, another with hearing, another with memory, another with emotion. Maps of brain function were devised.

We realize now that this is not the case, and it's fortunate that it's not, because we can all use all of our brain: our brain matter gets devoted, close to optimally, to whatever it is we think about. But what if someone hasn't been processing visual signals for ten years, or for a whole lifetime? That brain matter will have been doing other things. So you would have to develop a rehabilitation program to learn those skills. Is that a good use of resources? It's actually going to come down to detailed issues: what the technology really does, what other disabilities a particular person has, and the state of technology at any given time. It's going to be a complicated question, and there are going to be a lot of complex questions like that.

We want technology in general to be enabling rather than creating new barriers. We have certainly seen that. I mean, Windows was a step forward in some ways, but it created new barriers for quite a long time which we are still struggling with. We now have a profusion of electronic devices, all of which have displays, all of which are incompatible with each other. This has now created another barrier despite the fact that these technologies also enhance our lives in many ways.

All of these technologies are going to create many new questions, but ultimately they provide new ways of communicating information, which will give us many new strategies for overcoming the handicaps associated with disabilities. It is already the case today that, with the right training and right strategies, there are no real handicaps that cannot be overcome. We want to make that easier and easier. We want to avoid introducing new handicaps. That's really what this institute will do.

We will ultimately have new ways of communicating from one brain to another wirelessly that don't just rely on the conventional senses. So our expansion of human intelligence is something that will be open to everyone. I don't see us having to compete with machines. I think machines are something we are creating in our civilization to enhance our own potential.

I will show you one last slide and then be happy to take questions and have a dialogue about these issues. This slide looks like the other ones I have shown; it's a graph of human life expectancy, going back to the 1800's. In 1800 life expectancy was thirty-seven. In the eighteenth century we added a few days to human life expectancy every year. In the nineteenth century things really picked up; we improved sanitation and made other innovations, and we added a few weeks every year. In the twentieth century we had antibiotics and more powerful medical technologies. The curve is now starting to accelerate because we are in the early stages of this profound biotechnology revolution--the intersection of biology and information. We are now adding about a hundred and fifty days per year to human life expectancy. Many observers, including me, believe that within ten years, as we get to the more mature phase of this biotechnology revolution, we'll be adding more than a year per year to human life expectancy. So as you move forward a year, human life expectancy will move ahead of you by more than a year. If you can hang in there for another decade, we may get to appreciate the remarkable century ahead. Thank you very much. [applause]
