Raymond Kurzweil

Brave New World: Technology for the Blind in the 21st Century

by Raymond Kurzweil, Ph.D.

From the Editor: Ray Kurzweil has demonstrated his commitment to improving the lives of blind people through technology since the mid-seventies. He is a talented inventor and thinker, and he has keynoted all four of the U.S./Canada technology seminars. We found it necessary to summarize parts of his remarks as the keynote speaker on October 27 since a written transcription of his demonstration of his new language-translation program would have made very little sense.

It's a great pleasure to be here at the headquarters of the National Federation of the Blind, which I would honestly say is my favorite organization, and I see many of my favorite people here, whom I have kept running into over the past quarter century. This is a very rewarding field of work, and I think people who discover it never leave. So it's always the same group of people.

It started about a quarter century ago when I met Jim Gashel--who hasn't changed a bit--and he introduced me to Dr. Jernigan. We had this little project of a reading machine for the blind, which we were trying to interest people in, and a lot of people were interested in it and wished us well. But Dr. Jernigan, being the visionary and entrepreneurial person that he was, wanted to get involved and help us--help us in ways we hadn't expected, including helping us design the reading machine. We didn't realize we needed that help, but we did. Dr. Jernigan and Mr. Gashel organized a whole team of blind engineers and helped ensure that the reading machine would be really useful to blind people.

In my first session with Dr. Jernigan I didn't know a lot about blindness--I'm still learning, though I know more than I did a quarter of a century ago. He said that blindness could be just a characteristic, just a minor inconvenience, and that blind people could accomplish anything they wanted to, just like sighted people. At the time I wondered to myself to what extent that was really true--was this a goal or a political statement, or was it a reality? I want to come back to that as I talk about the role of technology because I think technology has one small part to play in realizing Dr. Jernigan's vision. I very quickly came to recognize that Dr. Jernigan's statement was a plain, realistic assessment, provided that you had an organization like the National Federation of the Blind to make some prerequisites of the vision a reality. Those prerequisites include training in the skills and knowledge to accomplish the things desired.

The right attitudes about what blind people can accomplish are important for blind and sighted people alike. And information accessibility in all forms must be encouraged at every level. Technology has one role to play, but the technology needs to be useful to blind people. It needs to have the right features. Blind people must be involved in its development. The technology and the skills needed to use it effectively must be available.

I want to come back to those issues, and I want to talk about how, in my view, technology will develop in general over the next century. I think we will be hearing a good bit about technology issues in the very near term at this conference. So I think it's appropriate to start out with a little more expansive view about where technology will go over the next several decades and how that will affect technology for the disabled, with particular regard to the visually impaired.

I would like to start with some contemporary technology. This is technology circa 1999--actually I should probably say circa 2001. I had to decide whether to show you some bullet-proof technology that would be reliable or share with you some really cutting-edge technology that's not so bullet-proof. I opted for the latter, so I hope you'll bear with me. This is a rather complicated assemblage of software components, which usually work well together, but this is only the second time I have given this demonstration. I gave it in a private meeting with Bill Gates about ten days ago because he likes to stay on the cutting edge. It actually worked pretty well. It did make one mistake, which I will share with you after I give you this demo.

[Dr. Kurzweil began by calibrating the system for the acoustical environment in which it would be working. He then said three times in a clear voice, "It is very good to be here comma." After a pause a female voice repeated the words in extremely understandable German. Speaking in short, clear phrases, he went on to say that this was a demonstration of a prototype of a translating telephone and that in several years anyone would be able to speak to anyone else regardless of the languages spoken by the parties. Each phrase was faithfully translated into excellently accented German.

After making a small alteration in his equipment, Dr. Kurzweil spoke again, and after one patch of gobbledygook, French replaced the German. The same female voice spoke French just as acceptably as it had German. Then Dr. Kurzweil spoke in French, and the system produced unaccented English. In fairness one should point out that the machine's French pronunciation was considerably better than the human being's; yet the machine understood it and did its job.]

This was a combination of three technologies running on a notebook computer: speech recognition (Version 4 of Voice Express, the Kurzweil voice-to-text technology I sold to Lernout & Hauspie two years ago); language translation, which can go back and forth among sixteen languages; and RealSpeak, which is a new speech synthesizer. This system uses a new version of Voice Express. I have another one which I used to dictate my book, but this is a fresh one that has heard me for only about ten minutes, so you can see that it is quite accurate.

As for RealSpeak, I've been watching speech synthesis for twenty-five years, since we developed the first full text-to-speech system for the Kurzweil Reading Machine. The early text-to-speech required some getting used to. Over time speech synthesis has gotten more understandable, but it has still sounded synthetic. RealSpeak is new technology. It's not quite out as a product, but it is coming out. That was not recorded speech; it was text-to-speech. [He then typed a sentence into the computer for the system to read back, proving that it really was producing high-quality synthetic speech.] The speed can be varied.

This full text-to-speech system will be in our reading machine as well, along with the language translation, so that you can read something in French and hear it translated in a human-sounding voice. A lot of the technology is actually devoted to the prosodics--understanding at least the grammar of the speech--so the inflection is fairly reasonable: not as intelligent as a human reading it, but pretty good. There will be other voices, and next year you will be able to record a sample of your own voice and have the machine speak in your voice or maybe someone else's voice that you like to listen to.

Let me now talk about where technology is going. We will be hearing a lot about the next few years, so I'll concentrate on the more distant future, as is, perhaps, fitting for a keynote. Then I will come back and address what the implications are for technology for blindness, which is something that has been important to me for twenty-five years, and I'm sure to all of you.

How many people here are familiar with Moore's law? [Virtually every hand went up.] I always ask that question, but now it is sometimes almost insulting to do so. It's like asking if you have heard of computers. But only two or three years ago relatively few hands went up in most audiences, even among people in the computer industry. So Moore's law has become more and more noticeable.

What is Moore's law? It says that transistors on an integrated circuit get smaller--take up about half as much space--every two years. This means that you can put twice as many transistors on an integrated circuit. And, because they are smaller and the electrons don't have to travel as far, they run twice as fast. That's actually a quadrupling of computer power for the same unit cost every two years. That's been going on for quite some time. Gordon Moore first noticed it in the 1960's. At first he said it was every twelve months; then he revised it to every twenty-four months in the 1970's. Where does Moore's Law come from? Why is this happening? Randy Isaacs from IBM Research says it's basically a set of industry expectations: the trend has been going on, so we know where we need to be at particular times in the future, and we target our research to be there. It is a self-fulfilling prophecy.
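[That quadrupling compounds quickly. A minimal Python sketch--the quadrupling period is from his description; the function and output format are merely illustrative--makes the arithmetic concrete:

def compute_power(years, period=2):
    """Relative computing power after `years`, quadrupling every `period` years."""
    return 4 ** (years / period)

for years in (2, 10, 20):
    print(f"After {years:2d} years: {compute_power(years):,.0f}x the power per unit cost")
# After 2 years: 4x; after 10 years: 1,024x; after 20 years: 1,048,576x
]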

But in examining where technology will go in the twenty-first century, it's important to understand this phenomenon in greater depth because that paradigm of the shrinking transistors is going to come to an end. There is some controversy as to whether it is in ten years or twenty years, but sometime during the teen years, 2010 to 2019, the key features of transistors will be so small that they will be only a few atoms in width, and we won't be able to shrink them anymore. So is that the end of Moore's Law? Well yes, but is it the end of the acceleration of computer power, the exponential growth of computing that we have seen in recent decades?

That is a very important question to answer because, depending on the outcome, either computer technology will continue to become more and more profound, or it will level off. So I have spent a lot of time examining that issue. Relatively little has been written about it. The first thing I did was to consider all of the computers over the past hundred years--forty-nine machines going back to 1900. I started with the computer that did the 1890 U.S. census and ran up to the Turing Robinson machine built out of telephone relays that cracked the German Enigma code.

That's actually an interesting story. A Polish spy had stolen a German Enigma machine, which had three coding wheels, and the Allies figured out how it coded. But they needed a computer to figure out every combination of the coding wheels in order to decode messages. The only problem was that they didn't have a computer. So Turing invented the computer and built the first functioning computer in 1942. It succeeded in breaking the German code, and Churchill had a complete transcription of all the German military messages.

He knew when the Nazis were going to bomb various English cities. He was under great pressure to warn city officials so that they could take necessary precautions, and he refused to do that because he figured that, if the Germans saw these precautions, they would realize that their code had been broken. He didn't really use this information until the Battle of Britain when suddenly the English planes just seemed to know at every moment where the German planes would be. Despite the fact that they were outnumbered, they won the Battle of Britain. And if it hadn't been for that, we wouldn't have had a place from which to launch our D-Day invasion.

Anyway, I have that machine on the chart in the early '40's. Then there was the vacuum-tube computer that CBS used to predict the election of Eisenhower in 1952. The notebook computer you bought your daughter for Christmas last year is on the chart also. I put the computers on a logarithmic graph, on which a straight line means exponential growth. The first thing I noticed was that the exponential growth of computers goes back a hundred years, long before we had any integrated circuits, long before Moore's Law was even applicable. So it turns out that Moore's Law is not the first but the fifth paradigm to provide exponential growth in computing, starting with electromechanical calculators, then relay-based computers, then vacuum-tube computers, then transistor-based computers, and finally integrated-circuit computers.

The other thing I noticed is that it's actually not a straight line. That graph is itself another exponential; the rate of exponential growth in computing has actually been growing exponentially. We doubled computing power every three years at the beginning of the century and every two years in the middle of the century, and now we are doubling it every year. So that rate continues to accelerate. This suggests that, when Moore's Law dies, there'll be another, sixth paradigm to continue the exponential growth of computing.
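[This "double exponential" is easy to simulate. In the toy model below the shrinking doubling times are his figures, but the era boundaries are our own guesses:

power = 1.0
for year in range(1900, 2000):
    if year < 1940:
        dt = 3.0   # doubling every three years early in the century
    elif year < 1980:
        dt = 2.0   # every two years mid-century
    else:
        dt = 1.0   # every year recently
    power *= 2 ** (1.0 / dt)

print(f"Total growth, 1900-2000, under this toy schedule: {power:.2e}x")
# 40/3 + 40/2 + 20/1, about 53 doublings: roughly a 1e16-fold increase
]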

We don't have to look far to figure out what that is. Despite the fact that they are very dense, integrated circuits are built in two dimensions; they're flat. Our brains, by contrast, are built in three dimensions. We live in a three-dimensional world; why not use the third dimension? That obviously will be the sixth paradigm. There are already chips with dozens of layers of circuitry; they are building some now with hundreds of layers of circuitry. And there is a new technology called nanotubes, which are basically hexagonal tubes of carbon atoms, and researchers have been able to arrange them in such a way that they can do every kind of electrical manipulation--emulate transistors and other types of electrical components. So they can actually build three-dimensional computing circuits at the atomic level using these molecular structures that are extremely strong and impervious to heat, which is the main problem in building two-dimensional circuits.

They have built small-scale circuits; they haven't yet built a full nanotube-based computer, but this is technology that we can touch and feel. We know that it works. A one-inch cube of nanotube circuitry would be a million times more powerful than the human brain. There are probably a dozen different three-dimensional types of circuitry being developed. We can't be sure which one will prevail, but I think we can have confidence that a sixth paradigm will be there when this fifth paradigm of Moore's Law runs out of steam, just as in the 1950's, when they were building vacuum-tube-based computers. They kept shrinking the vacuum tubes, making them smaller and smaller, and finally came to a fundamental limit where they just couldn't make them any smaller; then transistors came along. A transistor is not a small vacuum tube; it's a completely different paradigm.

As we look at the history of technology, we see that this exponential growth of a technical process is inherent in all of technology. Moore's Law is not the only example of exponential growth. Take the human genome scans, a completely different issue. We can sequence DNA at a certain speed. Twelve years ago the human genome project was announced, and it was greeted with a lot of skepticism because people pointed out that at the speed with which we could then scan the human genome, it would take 10,000 years to finish the project. Proponents of the project said, "Well, technology accelerates, so we'll figure out how to make this fast." And indeed, if you plot genome sequencing speeds, they have accelerated in the same way that computing speeds have. We are now going to finish that project within its fifteen-year schedule. In fact, it is going to finish years early.

Brain scanning used to be very crude, low-resolution, and slow, but it has also accelerated in the same way. We can make this basic observation about technology in general. Technology is an evolutionary process, and it accelerates. The first steps in technology took tens of thousands of years. It took thousands of years to figure out that, if you sharpened both sides of a stone, you created a sharp edge which made a useful tool. It also took tens of thousands of years to develop the other early steps in technology such as the wheel and using fire. But a key difference between the human species and other species is that we remembered these innovations. There are many examples of other species using tools, but they don't have a species-wide knowledge base that they pass down from generation to generation and to which they add on layers of innovation.

Humans, in contrast, have used the tools from one generation to create the tools of the next. So a thousand years ago paradigm shifts took only a few hundred years rather than tens of thousands of years. We accomplished more in the nineteenth century than in the ten centuries before it. We accomplished more in the first twenty years of the twentieth century than we did in all of the nineteenth. Today paradigm shifts take only a few years. The World Wide Web didn't exist in anything like its current form just a few years ago. So technology accelerates.

If we take an even broader view, we can say that any evolutionary process accelerates. Technology is just one example of that. Take the evolution of life forms. It took billions of years for the first cells to form. Then in the Cambrian explosion paradigm shifts took only tens of millions of years. Later on humanoids would evolve in only a few million years, then Homo sapiens in only a few hundred thousand years. At that point the accelerating pace of the evolution of life forms became too fast for DNA-guided protein synthesis to keep up with, and the cutting edge of evolution on Earth migrated from the evolution of life forms--from DNA-guided protein synthesis--to the evolution of technology.

Obviously DNA-guided biological evolution continues, but it is at such a slow pace that it is insignificant compared to the accelerating pace of technology. The key point is that technology in the twenty-first century will become so powerful that it will provide the next step in evolution.

If we view Moore's Law in this perspective, it's just one example of an accelerating technological process. It took us ninety years to achieve the first MIPS (million instructions per second) per thousand dollars. Now we add a MIPS per thousand dollars every day. So that process is accelerating. It is one of many accelerating processes in technology. Any particular innovation allows us to grow exponentially for a while, but then the paradigm eventually ends, and it's taken over by some other innovation. It is basically the process of human innovation and creativity that allows the exponential growth of a technology to continue. We can view the exponential growth of computing as an example of the exponential growth of any evolutionary process, and it goes back to the evolution of life on Earth. It's a multi-billion-year process which is now getting faster and faster.

There are many technologies waiting in the wings which will continue that process. Where will this take us in the twenty-first century? The human brain is immensely powerful in one way. It's remarkable that such an intricate, complex, rich, and deep-thinking entity could evolve through natural selection. On the other hand its design is limited and crude in certain respects. The tremendous power of the human brain comes from its massively parallel organization. We have a hundred billion neurons. Each of them has a thousand connections to other neurons. That's a hundred trillion connections. The calculations take place in the connections, so that's a hundred-trillion-fold parallelism.

This notebook computer I have up here does one thing at a time, and it does it very quickly. The human brain, by contrast, does a hundred trillion things at a time. That's a very different type of organization. On the other hand, the circuitry it uses is an electrochemical form of information-processing. It's both analog and digital. We can do analog processing with electronics--there's nothing unique there. But it's very slow. The brain's interneuronal connections calculate at about two hundred calculations per second, which is at least ten million times slower than electronic circuits. Neurons are quite big, clumsy objects compared to electronic circuits. Most of their complexity is devoted to maintaining their life processes and reproduction, not their information-processing capabilities. If we take that hundred trillion connections and multiply it by two hundred calculations per second, we get a capacity of about twenty million billion calculations per second, or about twenty billion MIPS, which is on the order of a million times more powerful than notebook computers today.
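[The back-of-envelope figures are his; spelled out in Python:

neurons = 100e9                 # a hundred billion neurons
connections_per_neuron = 1e3    # a thousand connections each
calcs_per_second = 200          # ~200 calculations/second per connection

connections = neurons * connections_per_neuron   # 1e14 connections
capacity = connections * calcs_per_second        # 2e16 calculations/second
mips = capacity / 1e6                            # 2e10 MIPS

print(f"{connections:.0e} connections")              # 1e+14
print(f"{capacity:.0e} calculations per second")     # 2e+16
print(f"{mips:.0e} MIPS, i.e., twenty billion MIPS") # 2e+10
]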

But, as I mentioned, electronics and computing are growing exponentially; human thinking is not. It is relatively fixed. Our human thinking is constrained to a mere hundred trillion calculations at a time. Nonbiological intelligence has no such constraint. I have developed a mathematical model of this double exponential growth, which matches different technological processes. (Another one, by the way, is miniaturization. You have certainly noticed in your lifetime how technology gets smaller and smaller. That's actually another predictable exponential process. Right now we are shrinking technology by a factor of 5.6 per linear dimension per decade.) So we can project where technology will be, at least in these types of quantitative terms, at different points in time. By 2019 a thousand-dollar computer--and they won't look like this rectangular box I have on the podium--will match that twenty million billion calculations per second. By 2030 a thousand-dollar computer will be a thousand times more powerful than the human brain. By 2050 a thousand dollars of computation will equal the thinking capacity of ten billion brains. I might be off a year or two on that.
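[For scale, a 5.6-fold linear shrink per decade compounds fast in three dimensions. The factor is his; the arithmetic below is ours:

linear_per_decade = 5.6
volume_per_decade = linear_per_decade ** 3    # shrink in all three dimensions

print(f"Linear shrink per decade: {linear_per_decade}x")
print(f"Volume shrink per decade: {volume_per_decade:.0f}x")   # ~176x
]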

I did actually predict in a book I wrote in the 1980's that by 1998 a computer would take the world chess championship, based on how many moves ahead I thought the computer would need to look to match the playing of a human grand master or chess champion. That was actually off by a year because it happened in 1997.

But by 2019 we will have the basic capacity of human thinking in nonbiological form. That's a necessary but not sufficient condition to recreate human intelligence. We could have a machine that's a million times more powerful than the human brain and have merely a very fast calculator that could calculate your spreadsheet in a billionth of a second. But we wouldn't necessarily have the richness, subtlety, suppleness, and flexibility of human intelligence. We wouldn't have the endearing qualities of human thought. How are we going to achieve that--what I would call the software of human intelligence, the knowledge and skills of human intelligence?

Before I address that question, let me say that, once nonbiological intelligence achieves the richness and capabilities of human intelligence and all the diverse ways that humans excel in thinking, it will necessarily soar past it for several reasons. For one thing machines can share their knowledge. If I spend years learning French, I can't download that knowledge to you. Humans can communicate in a way that other species cannot, building up a species-wide dialogue and a cultural and technological knowledge base, but we don't have quick downloading ports for our neurotransmitter concentrations.

If I learn French, where is that knowledge; what is it; what represents all of my knowledge and skills and personality and capabilities? It's a pattern of information; it's a pattern of interneuronal connections. Our brains do grow new connections between neurons. That's part of our skill and knowledge. It's a vast, intricate pattern of information that's in my brain--in everyone's brain--representing memories, knowledge, and skill. And we don't have a way of taking that pattern and quickly instantiating it into someone else's brain.

Machines do have that. Take this system I just demonstrated to you. We spent years teaching several research computers how to recognize human speech. We started with certain methods, which were imperfect. We had tens of thousands of hours of recorded human speech, annotated with accurate transcriptions. We had the speech-recognition system try to recognize it, and when it made mistakes, we corrected it. We've automated that teaching process, and patiently we have taught it to correct its errors. It adjusts its pattern of information to be able to do a better job. After years of this we have a system in our laboratory which does a very good job of recognizing human speech.
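[The teach-and-correct loop he describes is, in outline, ordinary supervised training. In the entirely hypothetical sketch below the toy "model" just memorizes corrections, where the real Kurzweil/Lernout & Hauspie system adjusts statistical models of sound:

class ToyRecognizer:
    def __init__(self):
        self.patterns = {}                     # its "pattern of information"

    def recognize(self, audio):
        return self.patterns.get(audio, "")    # best current guess

    def adjust(self, audio, truth):
        self.patterns[audio] = truth           # correct the stored pattern

corpus = [("utterance-1", "it is very good"), ("utterance-2", "to be here")]
model = ToyRecognizer()
for _ in range(2):                             # repeated passes over annotated speech
    for audio, truth in corpus:
        if model.recognize(audio) != truth:    # when it makes a mistake...
            model.adjust(audio, truth)         # ...we correct it

print(model.recognize("utterance-1"))          # -> "it is very good"
]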

Now, if you want your notebook computer to recognize human speech, you don't have to go through those years of training. You can quickly load the program, which is the pattern of information that we've evolved over a couple of decades of research, and you can load it in a matter of seconds. So computers can share their knowledge; they do have the means of loading these patterns quickly. As we build these nonbiological equivalents of our thinking process, we are not going to leave out quick downloading ports for interneuronal connection patterns and neurotransmitter concentration patterns.

Another advantage is that electronics is inherently faster--ten million times faster right now and continuing to get faster. Once we can build structures that are equivalent in three dimensions to the massively parallel processing of the human brain, they will be inherently faster than human thinking.

Machines have more accurate memories. We are all hard-pressed to remember a handful of phone numbers. Machines can remember billions of facts accurately and recall them very quickly. So, if we combine the subtlety and richness of human thinking with some of these advantages of knowledge-sharing, speed, and accuracy of memory, it will be a formidable combination. And the nonbiological forms of intelligence will continue to grow exponentially.

But how are we going to achieve that software of intelligence? All of this speed is just brute-force crunching of information. It's not the subtlety and richness of human intelligence. In my book [The Age of Spiritual Machines] I talk about a number of different scenarios, but I'll just address one, which I think is the most compelling. We have an example of an entity that has human-level intelligence: the human brain. We have several dozen examples in this room. It's not hidden from us.

It's not impossible to access that information. In fact, we are well down that path. We've been scanning the human brain, and, as I mentioned earlier, the speed and resolution of our ability to do that is continuing to accelerate as well. We have the ability today actually to scan the human brain with sufficient resolution and fineness of detail to see every single detail--all the neurotransmitter concentrations, the interneuronal connections--provided that the scanning tip is in close physical proximity to those neural features. So we take that scanning tip and move it around in the brain so that it's near every single interneuronal connection, every neurotransmitter concentration, every detail.

How are we going to do that without making a mess of things? We are going to do it in the following way. This is a scenario that we can touch and feel today. Everything I am going to describe is feasible today, except for cost and size. But those are aspects that we can readily predict because of the ongoing trends of the accelerating price performance of computing and diminishing size, or miniaturization. We simply develop what I call nanobots--nano-robots the size of blood cells--which are little computers with some robotic and scanning capability, and send them through the bloodstream. By the way, we already have early prototypes of nanobots, something called smart dust: extremely tiny specks that actually contain computers, scanning devices, and communication devices. They can actually fly; they have little wings. So we are already building tiny devices like these.

By 2030 we will be able to send billions of these little nanobots through the bloodstream. They will travel through every capillary in the brain, get into close physical proximity to every neural feature, and build up a big database of exactly how that human brain is organized. The result will be at least a data dump of the organization of a human brain. What are we going to do with that information? One thing is that we are going to learn how the human brain works and understand how those massively parallel analog algorithms work. That's already underway. We actually already have maps of the early auditory and visual cortex. This speech-recognition system, for example, has built into it the transformations that the human brain does on sound information. Without that, speech recognition wouldn't work very well. So we are already applying our insights into the human brain from these scanning projects to the design of intelligent software.

Another application of this kind of intelligent software is that we could reinstantiate the whole database into a neural computer of sufficient capacity. That wouldn't necessarily require us to understand all the methods. We would need to understand local brain processes, but not necessarily global brain processes. So if you scan my brain and reinstantiate it into a computer, you'd have a new Ray Kurzweil. He would claim to have grown up in Queens, New York, to have moved to Massachusetts to attend MIT, and then to have met Dr. Jernigan, developed a relationship with the National Federation of the Blind, and been involved with reading machines for a few decades. He would say, "I walked into the scanner over there and woke up in the machine here. This technology really works." He will have a memory of having been Ray Kurzweil and will believe that he is Ray Kurzweil.

Of course I'll still be here in my old carbon-cell-based body and brain, and I'll probably end up jealous of the new Ray Kurzweil because he'll be capable of things I could only dream of. Sometimes this scenario is presented as a road to immortality, but there are some philosophical issues that one has to contend with. For example, you could scan my brain while I am sleeping and reinstantiate it. I wouldn't even necessarily know about it. If you came to me in the morning and said, "Hey Ray, good news--we've successfully scanned and reinstantiated your brain; we don't need your old carbon-cell-based body and brain anymore," I might discover a flaw in that philosophical perspective.

We could talk for a long time about the philosophical conundrums of what consciousness is and whether these entities are conscious at all. I will say that these entities will certainly seem conscious; they will claim to be human even though they are based on nonbiological thinking processes. They will seem very human, and they will be very intelligent, so they will succeed in convincing us that they are intelligent. We will come to believe them; they will get mad if we don't believe them. Some philosophers will say, no, you cannot be conscious unless you squirt neurotransmitters, or you can't be conscious unless you are based on DNA-guided protein synthesis. Yes, they seem very conscious and they're compelling and they are funny and they get the joke and are emotional and they are very clever, but they don't squirt neurotransmitters, so they aren't conscious. At that point the nonbiological intelligence will crack a joke and will complain about being misunderstood, so we will come to accept that these are conscious entities.

But the more practical scenario we will see is that we will expand our own human intelligence through combining with this nonbiological intelligence. One way we will do this is with these nanobots. Today we have something called neuron transistors. These are little electrical devices, which, if they are in close physical proximity to a neuron, can communicate in both directions with that neuron. They can detect the firing of a neuron and can also cause that neuron to fire or suppress it from firing. That is two-way communication, noninvasively--the device doesn't have to stick a wire into the neuron; it just has to be next to it.

This technology is being used today. The whole era of neural implants has already started. I have a friend who, before he got his cochlear implant, was profoundly deaf. I can now talk to him on the telephone because of his neural implant. There are neural implants for people with Parkinson's Disease--Parkinson's scrambles a certain locus of cells--and this neural implant replaces that neural module with an electronic equivalent and communicates through this type of noninvasive, electronic interface. This was first developed about three years ago. In a dramatic demonstration of the technology, patients whose advanced Parkinson's had left them completely rigid were wheeled into the room. The doctor, who was controlling them noninvasively through wireless radio control--which is a little scary--flipped the switch, and suddenly they came alive. Their Parkinson's symptoms were eliminated as he activated their neural implants.

In my book I talk about an era of neural implants in which we will all use them to expand our thinking capability, not just to reverse diseases such as Parkinson's. People have challenged that, asking how many people are going to want to get a neural implant. Brain surgery is a pretty big step, a pretty formidable obstacle. The response is that we will be able to do this noninvasively. I just wrote a paper called "The Noninvasive, Surgery-Free, Reversible, Programmable, Distributed Neural Implant." It again uses these nanobots.

Remember that already today we have the means for electronic devices to communicate in both directions with the brain--to detect what is going on in the biological neural circuits and also to control them. So these nanobots go through the bloodstream and take up positions in millions or billions of different locations; they can basically expand the brain. They can create new interneuronal connections because they will all be on a wireless local area network. They will also all be plugged into the World Wide Web wirelessly, so they can expand all of our biological networks--our memory and learning capability. We will be able to download knowledge and skills. This will really happen. It will be gradually introduced in different ways. But as we go through the twenty-first century, we will be expanding our thinking capability through this intimate connection with nonbiological intelligence.

So let me come back to technology for the blind and just mention what we'll see as a few milestones. The very early part of the twenty-first century, the next several years, will see a rapid evolution of reading machines. They will take on new capabilities. They will sound human. They will translate languages. This is technology that will be introduced very soon. They will also get smaller. I have talked about my vision of hand-held reading machines for many years. We are really very close to having the technical means to have a digital camera that you can hold in the palm of your hand and instantly snap pages with sufficient resolution. We are also close to providing a pocket-sized reading machine that you can hold up to printed information in the real world, not necessarily on paper--road signs, LED displays, or other examples of real-world text.

If we look out ten to twenty years from now, computers as we know them are essentially going to disappear. They are not going to be in little boxes and palmtops that you can put into your pocket. They are going to become very small and discreet and be built into our clothing and into other little devices that we can carry around on our bodies. This again is all technology that we can touch and feel today. There are already tiny visual sensors the size of pins that provide very high-resolution imaging. In fact, the smart dust that I talked about has visual sensors. Part of the application for that is spying. One version of this is being developed by the U.S. military so that they can just drop millions of these in enemy territory. These tiny visual sensors will fly around and send back reports on what they see.

But we can also apply this type of technology to the visually impaired. We will have the means constantly to interpret that visual information and present it through other modalities such as whispering in your ear or providing tactile information or combinations thereof. There will be plenty of opportunity to develop the most appropriate means of doing that. It's probably something we can't fully describe today. But information can be presented in many different forms. The reading machine is one example of that.

These visual sensors, which will be looking around in all directions, will be interpreting that information and providing a constant stream of information for a visually impaired person. This would include reading: any kind of printed information could be spoken or translated by using reading machines, but they will also provide other interpretations of the visual world.

That's the scenario for 2010 to 2020. These devices will also be plugged into the World Wide Web through wireless communication. Everyone is going to walk around plugged into the World Wide Web at all times. Going to a Web site will mean entering a virtual-reality environment. We'll have the means of communicating with other people through that type of wireless communication at all times. These computing devices will be in and around our bodies and clothing within ten years.

As we go out to 2030 and beyond, the type of technology I described, which can be introduced inside our bodies and brains, will become a reality. Like every other type of technology, it won't initially provide every capability that one could imagine, but it will continue to evolve. The power of the computing substrate will continue to grow exponentially, so we will have the means of introducing knowledge and information into our brains in a more intimate way. This is a vision for everyone. Ultimately it will mean that we have many different ways of experiencing the world and expanding our knowledge.

Of course it will be important to develop and design this technology in ways that provide equal access for people with disabilities and help them overcome the handicaps associated with those disabilities. One lesson I have learned is the difference between the words "disability" and "handicap." Visual impairment, blindness, is a disability, and it may or may not be a handicap, depending on whether the person has the right set of skills and access to the right kind of technology. That's why organizations like the National Federation of the Blind and the Canadian National Institute for the Blind are vital, so that the power of this technology is applied to overcoming those handicaps.

One handicap is the inability to access ordinary print for material that isn't readily available in Braille or Talking Book form. Reading machines have the potential to overcome that, provided that they are designed in the right way, that people have access to them, that they are affordable and widely distributed, and that people learn how to use them. That's true for all technology. Overcoming handicaps is not necessarily an issue of technology. Sometimes simple technical solutions such as the fiberglass cane can overcome limitations in travel. But that's a matter of having the right set of skills, and again we need organizations like the NFB to make sure that those skills are available.

We will have many new tools in the future. These will provide opportunity, but there will also be challenges, as we saw with the graphical user interface--a new technology that suddenly made visual information from the computer harder for blind people to access. With concerted efforts over the past five to ten years we've made great progress in making GUI information available. But we are going to continue to have those kinds of challenges when new technologies that create new sources of information are introduced. It's important that we keep accessibility in mind and make sure that blind people have access to the information. But I think the technical tools will be there, provided that we develop them in the right way.

That's really the purpose of this conference: to deal with some of the near-term issues of new technology. That will continue to be the case as we go forward. But I think we will have the tools, provided that we develop them in the right ways, to continue the vision that Dr. Jernigan articulated. Twenty-five years ago I quickly decided that this vision was true for all the people I met coming out of the Iowa Commission and from the National Federation of the Blind, but it wasn't true for every blind person. Some didn't have the access, the training, and the attitude that information is available in many different forms and that there is nothing a blind person is unable to accomplish given access to the information and skills. That is the purpose of this conference. Technology has one role to play. I look forward to continuing to work on this. I've been involved with this field for twenty-five years, and I look forward to working with Dr. Maurer and Dr. Herie and other leaders of this field to continue that progress.