In Commentary, David Gelernter has a sharp critique of philosophical materialism as applied to the human mind that's very much worth reading. Materialism for many scientists has become a sort of religion, a dogma setting the bounds of permissible thinking. You can't really call yourself a Christian if you don't believe in God, and scientist-materialists would have it that you can't call yourself a scientist if you aren't a materialist--not, at least, if you allow non-materialist thinking to enter into your view of what constitutes objective reality. It's a frustrating position to argue against, because those who hold it believe that any fundamental disagreement is intrinsically irrational.
Yet it is hardly rational to attempt to account for reality while ignoring or explaining away the actual experience of being human, of being a being that wants, indeed desperately needs, to account for reality. It is an extremely mysterious experience, and people have been trying to figure it out for as long as there have been people. Our materialists are certainly not the first to deny that the experience has any meaning, but surely they are among the first to deny systematically that it even exists in any real sense.
Gelernter, who is not some vague-minded humanities type but a professor of computer science at Yale, focuses on the school of thought that takes the computer as an analog to the human being: hardware = body, software = mind. The actual experience of being that sort of being--i.e., human consciousness--is reduced to a more or less illusory epiphenomenon of the operation of the machine.
Gelernter's critique is extensive and deep (barring a passing not-all-that-well-informed reference to the Catholic Church and heretics), and you should read the whole thing. Here's an important passage:
In her book Absence of Mind, the novelist and essayist Marilynne Robinson writes that the basic assumption in every variant of “modern thought” is that “the experience and testimony of the individual mind is to be explained away, excluded from consideration.” She tells an anecdote about an anecdote. Several neurobiologists have written about an American railway worker named Phineas Gage. In 1848, when he was 25, an explosion drove an iron rod right through his brain and out the other side. His jaw was shattered and he lost an eye; but he recovered and returned to work, behaving just as he always had—except that now he had occasional rude outbursts of swearing and blaspheming, which (evidently) he had never had before.
Neurobiologists want to show that particular personality traits (such as good manners) emerge from particular regions of the brain. If a region is destroyed, the corresponding piece of personality is destroyed. Your mind is thus the mere product of your genes and your brain. You have nothing to do with it, because there is no subjective, individual you. “You” are what you say and do. Your inner mental world either doesn’t exist or doesn’t matter. In fact you might be a zombie; that wouldn’t matter either.
Robinson asks: But what about the actual man Gage? The neurobiologists say nothing about the fact that “Gage was suddenly disfigured and half blind, that he suffered prolonged infections of the brain,” that his most serious injuries were permanent. He was 25 years old and had no hope of recovery. Isn’t it possible, she asks, that his outbursts of angry swearing meant just what they usually mean—that the man was enraged and suffering? When the brain scientists tell this story, writes Robinson, “there is no sense at all that [Gage] was a human being who thought and felt, a man with a singular and terrible fate.”
Man is only a computer if you ignore everything that distinguishes him from a computer.
That last line recalls an exchange I had years ago: I was complaining about the artificial intelligence researchers who assume, with no actual evidence, that computers could ever think in the way that people do, and the person I was talking to replied: "You can only assume computers are people if you think people are computers."
That assumption is one thing that Gelernter doesn't dwell upon, and it needs to be stressed. It needs to be insisted upon. In science and philosophy, of all things, no one should be allowed to get away with basing a sweeping claim about reality on an unacknowledged and unsupported assumption, and the materialist view of mind as a sort of secretion of the brain is exactly that. This needs to be said again and again: it is an assumption made in keeping with its proponents' philosophical views, and there is no evidence to support it. Sure, there is plenty of evidence of a mind-brain connection. But there is none to support the assumption that the brain creates the mind.
As part of his critique, Gelernter has some rough words for the arguably insane ideas of Ray Kurzweil, who has a lot of followers among technologists. I had a few harsh things of my own to say about Kurzweil a few years ago, in a piece which, if I may say so, also seems worth reading. It contains a more detailed explanation of the fundamental flaw in the assertion that computers do or could "think" in the sense that we do.
But if you leap over those facts and assume that when all these ones and zeroes and switches reach a certain level of complexity they will become conscious, you are free to invent anything and claim the authority of science for it.
It's a three-year-old piece called "Singularly Mistaken."