Sunday Night Journal — February 13, 2011
It's probably an indication of just how deeply materialism has been adopted by educated and partly-educated people that the ideas of Ray Kurzweil are generally accepted as fundamentally plausible, even if not as close to becoming reality as Kurzweil and others insist. In a nutshell, Kurzweil believes that computers will very soon--within a few decades--become more intelligent than humans, and that their intelligence will then increase very, very rapidly as they use that intelligence to make themselves more intelligent. He also believes that we will figure out how to transfer our consciousness--our very selves, our "I"--into computers, and thereby live forever.
He believes all kinds of wild stuff, actually, a lot more than I can quickly summarize. You can read all about it, as reported very excitedly by Time, in a piece titled, quite misleadingly, "2045: The Year Man Becomes Immortal." He is certainly a very intelligent man, with all sorts of technical achievements to his credit. But I think his ideas are taken more seriously than they deserve because most of us are so awed by scientific and technical expertise.
Science and engineering have been spectacularly successful in transforming the conditions of life in much of the world. Yet they are not at all well understood by most people, and so those who do understand them, and who discover the principles and craft the machines that make technological civilization what it is, take on some of the mystique that pre-technological cultures ascribe to practitioners of magic. And people tend to take them seriously when they're talking about things in which they really have no more expertise than anyone else. Those who might think that there is something fundamentally wrong in the expert's thinking are intimidated by the sheer power of intellect confronting them. If Einstein was smart enough to come up with the theory of relativity and all its abstruse mathematical justification (one thinks) while I can't even understand it, who am I to challenge him when he talks about religion? or politics? This is how the idea that "science" has somehow disproved "religion" gets its power.
In the case of Kurzweil, I'm both more and less intimidated than other people might be, and for the same reason: I know something about computers. I don't know anywhere near as much as he does; I'm just a sort of everyday journeyman programmer, and my ability in that line is to Kurzweil's as a beginning piano student's is to Yevgeny Sudbin's. On the other hand, I do know how computers work, and I know that there is no reason at all to suppose that computers will ever come to life, which is essentially what Kurzweil says is going to happen.
Like those who assume that "evolution" produces consciousness, Kurzweil erects his whole structure on the materialist assumption that the human mind (soul, the conscious self) is a purely material phenomenon, a sort of side effect of the brain. This seems obvious to the materialist. But it only seems obvious because he is a materialist. And it only seems persuasive to others who are not necessarily committed to materialism because the scientific rationalism of our time has disposed them to think this way. On the face of it, the idea that there are non-physical things which are just as real as physical ones is every bit as plausible. We experience them constantly, in the form of our own thoughts and ideas. What is, for instance, justice? It is very difficult and unpersuasive to try to define that word without appealing to some principle that is independent of the material. And we experience ourselves as somehow in our bodies, but not entirely identifiable with them. (I don't mean that the experience proves the immateriality of the soul, only that the idea should not seem strange to us.)
Let me describe, for those who don't know how computers work, a simple one, one that you use dozens of times a day without thinking much about it. You know it by the name "electric light." You walk into a room. The switch by the door is in the off position, and the light is off. You turn the switch to the on position, and the light comes on. You could say that the switch tells the light to turn on, and that when the switch is in one position the light knows that it should shine, and when it's in the other position the light knows it should stop shining. If you spoke of it this way, you would know you were speaking figuratively, and that the light doesn't "know" anything.
Well, that doesn't change if you postulate 10 billion or 10 trillion switches instead of one. A computer is only an extremely elaborate configuration of a very large number of on-off switches, with even more elaborate mechanisms for controlling them so as to represent and manipulate information. If you log in to a system and it displays a message along the lines of "Hello, Dave. It's nice to see you," the computer has not recognized you in the way that a person would; it is only retrieving a pattern of ones and zeroes which result in the words "Hello, Dave etc." appearing on your screen. This is true whether you provided it with a username and password, a thumbprint, or a DNA sample: it doesn't "know" you except in the way that a lock "knows" the key that unlocks it.
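The two paragraphs above can be put in the form of a small sketch. This is only an illustration of the point, not anything Kurzweil or anyone else has written; the username and bit pattern are invented for the example. It shows a one-position switch, and then the same idea scaled up into a login greeting that is nothing but pattern-matching:

```python
# A toy illustration: a "computer" is stored patterns of bits plus
# rules for matching and manipulating them. Nothing here "knows" anything.

# One switch: two positions, on or off.
def flip(switch: bool) -> bool:
    """Flipping the switch changes its state; the switch 'knows' nothing."""
    return not switch

light_on = False
light_on = flip(light_on)  # the light comes on

# The same idea, multiplied: a login greeting is only pattern retrieval.
# The stored credential is one pattern of bits; the greeting is another.
# (The name "dave" and the bit string are hypothetical.)
stored_credentials = {"dave": "0100100001101001"}

def greet(username: str, credential: str) -> str:
    # The machine "recognizes" you only as a lock recognizes a key:
    # the supplied pattern either matches the stored pattern or it doesn't.
    if stored_credentials.get(username) == credential:
        return f"Hello, {username.capitalize()}. It's nice to see you."
    return "Access denied."

print(greet("dave", "0100100001101001"))
# prints: Hello, Dave. It's nice to see you.
```

The greeting appears on the screen for exactly one reason: the bits supplied matched the bits stored. Whether the bits came from a password, a thumbprint, or a DNA sample changes nothing about the comparison itself.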
But if you leap over those facts and assume that when all these ones and zeroes and switches reach a certain level of complexity they will become conscious, you are free to invent anything and claim the authority of science for it.
I'm almost certain that I remember a prediction from the late 1960s or '70s that when computer memories reached a certain size and processors reached a certain speed, we would have true machine intelligence and consciousness. And I think the size and speed specified were achieved some years ago. I wish I could remember where I read it. At any rate, no one believes the PC on his desk is a sentient being (setting aside the occasional suspicion, apparently entertained by many people, that it is capable of hostility). The truly intelligent, to say nothing of conscious, computer remains off in the future somewhere.
As for the idea that a human soul can be converted into some electronic form that can be stored in a computer: the Time article was noted at Inside Catholic, and I'll just reproduce a comment I made there, in response to someone asking if anyone could explain how this might work:
No, because nobody has any idea how it might be done beyond a far-fetched theoretical conception which is based on huge assumptions. The supposition that it's even theoretically possible rests on the leap-of-faith assumption that our selves consist only of data held in the brain, and moreover that the data is represented in, or can be converted to, the same ones-and-zeroes system that computers use. Or if not, that we can invent some means of data storage that will do what the brain does. Really, it's hard to overstate just how far removed this stuff is from anything we actually know and can do. And, again, it's all 100% based on a materialist assumption, and if that's wrong the whole thing is not just far-fetched but nonsensical.
Kurzweil himself apparently hopes to keep himself alive long enough to save himself in this way, which is a pretty sad hope. Obviously what's at work here is a badly misdirected religious impulse, manifesting itself as a pseudo-scientific variation of the ancient Gnostic quest to escape from the body. He believes that we are heading toward something called the Singularity, when the arrival and immediate enormous expansion of conscious intelligent machines will fundamentally change the world, and then ourselves. You can read a long list of his predictions here. Not all of them are over-the-top; some of the less dramatic and purely technological ones may well come true.
Here's a prediction of my own: in the year 2100, man will still be pretty much what he always has been: a somewhat faulty union of matter and spirit, still wishing he could be something better. And neither Ray Kurzweil nor I will be here.