Intelligence is a broad word. If you want to check it out in Wikipedia, it is here. Of note is that although usually applied to humans, it is also used in conjunction with animals, plants, and increasingly machines (artificial intelligence). If we consider the word “smart” (perhaps a subset of intelligent), we seem to include everything with a chip in it (smart cars, smart houses, smart thermostats, smart toys, etc.).
Here in the heart of Silicon Valley, there seems to be increasing confidence that the computer and related information systems will continue to become more intelligent as time passes. In fact, a few hard-core folk believe that computers will surpass humans in intelligence. There is an educational venture in the Silicon Valley area that calls itself Singularity University and which studies innovation and so-called exponentially accelerating technologies. Singularity in this case refers to digital technology, and is defined in Wikipedia here. One of the co-founders, and now Chancellor of the University, is Ray Kurzweil, an inventor and technology visionary. He has written a book entitled The Singularity Is Near, believes this singularity will occur in 2045, and thinks we should prepare for this time when computers become more intelligent than humans and continue to advance more rapidly than we can.
Admittedly, I do not have the technical reputation that Kurzweil does, but I do not believe this at all. Computers have steadily taken over activities that humans used to do—calculation, recording and organizing information, controlling machinery, making decisions based on input, and so on. But I see no way they can take over the ship. For one reason, we humans, not computers, define intelligence, and we do so selfishly. Think of how much faith we place in the I.Q. test, which, after all, merely measures how well one does on a traditional “test” (according to traditional questions) rather than overall intelligence. Nevertheless we believe in it. But should computers outperform us at taking the test, we will change the test. As computers do better than humans at various tasks (Deep Blue beating Kasparov at chess, once thought to be a game requiring great intelligence, and speed at calculation, which got me good grades at Caltech), we seem to change our minds as to how much intelligence is necessary to accomplish these tasks. And there is always Dave’s approach to HAL, his misbehaving AI computer, in the 1968 movie 2001: A Space Odyssey—shut it down.
I first encountered the then large mainframe computer in the 1950s, when I worked first in the Air Force and then at the Jet Propulsion Laboratory. I was fond of these machines because they could do complicated and involved calculations that I really did not like to do, and practically never could have done, by hand. I have worked with computers ever since, and like most people, I consider them not only immensely useful but wondrously capable of speeds and data handling with which humans cannot compete. But I don’t think of them as particularly intelligent, any more than I thought of the Library of Congress, adding machines, or telephones as intelligent. I know that artificial intelligence (AI) approaches are being applied in an increasing number of ways. And the role of computers will change a great deal by 2045, which after all is 30 years from now. The digital age is still quite young. The invention of the internet was approximately 30 years ago, and the World Wide Web only 24 years ago. Computers will continue to amaze us, until we take the new miracle for granted, but we humans will continue to lead the movement.
I am a believer in artificial intelligence when humans are in the loop. When I joined the Stanford faculty, I worked a bit with the then Stanford Artificial Intelligence Laboratory, headed by John McCarthy, one of the early pioneers in AI. One of the graduate students was working on computer vehicle control, and was using a remotely controlled cart I had built for my Ph.D. thesis (remote control by a human with time delays in the loop) a few years before. It was clear to me then that computers would take over an increasing number of information processing tasks that people had traditionally done, but even then I was skeptical about how far they would go at being truly intelligent. At some level, we humans have always had to tell digital devices what to do and how to do it.
In the early days, AI was very interested in the workings of the human mind. But the human mind is beyond complex. Yesterday there was an article in the New York Times entitled “Will You Ever Be Able to Upload Your Brain?” by Kenneth Miller, Co-Director of the Center for Theoretical Neuroscience at Columbia University. It is here. He wrote it because of the attention being paid to an increasing number of people who seem to want their brain frozen and stored until science has progressed to the time when someone can thaw it out and start it up again. His answer is “maybe centuries from now.” I have another reaction, and that is “who would want to?” But the article does give an insight into how far away we are from truly understanding the human mind, despite a great amount of brain and behavior research.
AI researchers no longer think of duplicating the human mind on silicon, but the search for computer-based intelligence that mimics human intelligence is alive and well. In fact, as computers become better and better at crunching numbers and handling “higher” mathematics, and as they are equipped with better sensors and actuators and integrated into more and more networks, they become more and more useful to us (with a few exceptions, such as handling “big data” for marketing purposes and being involved in pranks and crimes). But the more time I spend screwing around with such things as new operating systems and applications, handling the increasing number of people who want to send me e-mails or texts or become my friend or contact on social networking sites, or finding things on the ever-growing internet, the dumber my computer seems at handling information for me, and I have never considered allowing it to make decisions for me.
The good news is that I don’t have to worry about Mr. Kurzweil’s singularity, because I will be dead by then. The bad news is that computers may hasten my death, but only through frustration, and not because they are more intelligent than I.