Skynet Is Never Going To Become Self-Aware
For the benefit of those who have managed not to see the Terminator movies, or even to pick up the pop culture lore that originated with them: Skynet is the computer system that initiated nuclear war of its own volition, and began to rule the world in its own interests--which were not those of its inventors. Never mind the rest of the plot, as you probably either know it or don't care.
Today is the day (or rather the 22nd anniversary of the day) on which, in the movie, the catastrophe occurs:
The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. (from IMDB)
Hardly a week goes by that I don't see some mention, whether a "science" article in the news or a discussion of a movie or TV show, that deals with some variation of the idea that artificial intelligence will at some point develop human consciousness, "self-awareness," an independent will, emotions, interests, and so forth: in short, become a person. Sometimes this is presented as a bad thing, sometimes a good thing.
If this worries you, or if you are eagerly anticipating it, I'm here to tell you: forget it. Don't worry about it. It is never going to happen. If you like stories that play with the idea, fine, have fun, though they're not usually to my taste. But don't concern yourself with the possibility that it might actually happen.
How can I be so sure? Well, of course I can't say it with 100% certainty. But I'll commit to 99.999%. The reason is that I know how computers work. I am not a computer scientist, just a journeyman programmer, but I do know how the things work at the most fundamental level. And I also can see, as anyone who bothers to think about it can, that the idea that they can develop consciousness is based on a naturalistic assumption about the nature of consciousness--that it is, in us, an epiphenomenon of the brain, and therefore a probably inevitable development in computing machinery that mimics certain functions of the brain. This is a pure act of materialist faith. There is no evidence for it. No one can actually describe in detail how it can happen; it's simply postulated.
We speak of computers "knowing" things, and in a sense they do. But in the same sense it can be said that a light bulb "knows" when it should glow. It "knows" because you flipped a switch that supplied electricity to it. If you set up an array of 256,000,000,000 lights (very roughly the number of on-off switches in a 32 gigabyte computer memory), and rigged up an elaborate mechanism in which symbols representing letters and numbers were encoded as sets of lights that are either on or off, and could be manipulated so that the information represented by the symbols was constantly shifting in accordance with your instructions, do you think the array would somehow have the capacity to "know" what the symbols mean?
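The encoding the thought experiment describes is easy to make concrete. Here is a minimal sketch in Python (the function names are mine, purely illustrative): text is stored as nothing but a sequence of on/off states, and any "meaning" in those states exists only in the conventions we impose from outside.

```python
def to_bits(text):
    """Encode each character as 8 on/off 'lights' (its ASCII bits)."""
    return [int(b) for ch in text for b in format(ord(ch), "08b")]

def from_bits(bits):
    """Read the lights back into characters, 8 at a time."""
    chars = []
    for i in range(0, len(bits), 8):
        byte = "".join(str(b) for b in bits[i:i + 8])
        chars.append(chr(int(byte, 2)))
    return "".join(chars)

lights = to_bits("HELLO")
print(lights)             # 40 lights, each simply a 1 or a 0
print(from_bits(lights))  # the pattern spells "HELLO" only to us
```

The array itself does nothing but hold ones and zeros; the mapping from patterns to letters lives entirely in the encoding scheme, which is to say, in us.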
The fact--I feel justified in calling it a fact--is that there is no reason to believe that consciousness is a by-product of any physical process. For dogmatic materialists, it's a necessary belief: consciousness exists, therefore physical processes produced it. To the rest of us it only sounds plausible and reasonable because we're so used to looking at the world through materialist assumptions, or at least prejudices.
If you want to worry about the risks of artificial intelligence, worry about the extent to which it is able to simulate human mental processes by means of a combination of calculation and symbol manipulation, and thus do things which once required human intelligence. In combination with very advanced robotics, these machines can do an awful lot of jobs that are now, or were once, done by people. That's been going on for some time now, and it has serious social implications. But the machines aren't going to start conspiring against us.
The speech recognition and synthesis involved in, for instance, Apple's Siri, and the fact that the computer that does it can fit in your pocket, do seem almost miraculous compared to the technology of forty years ago when I first got involved with computers. But we all know Siri is not in fact a person. That's part of the reason why it can be amusing to ask "her" silly questions, give "her" impossible commands, and so forth. I think if you dug around you could find predictions made thirty or forty or fifty years ago that computers with the speed and capacity of your phone could and probably would develop true conscious intelligence. But that's no closer than it ever was. Or ever will be.