
Skynet Is Never Going To Become Self-Aware

For the benefit of those who have managed not to see the Terminator movies, or even to pick up the pop culture lore that originated with them: Skynet is the computer system that initiated nuclear war of its own volition, and began to rule the world in its own interests--which were not those of its inventors. Never mind the rest of the plot, as you probably either know it or don't care. 

Today is the day (or rather the 22nd anniversary of the day) on which, in the movie, the catastrophe occurs:

The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. (from IMDB)

Hardly a week goes by that I don't see some mention, either in a "science" article in the news or in discussion of a movie or TV show, of some variation of the idea that artificial intelligence will at some point develop human consciousness, "self-awareness," an independent will, emotions, interests, and so forth: in short, become a person. Sometimes this is presented as a bad thing, sometimes a good thing. 

If this worries you, or if you are eagerly anticipating it, I'm here to tell you: forget it. Don't worry about it. It is never going to happen. If you like stories that play with the idea, fine, have fun, though they're not usually to my taste. But don't concern yourself with the possibility that it might actually happen. 

How can I be so sure? Well, of course I can't say it with 100% certainty. But I'll commit to 99.999%. The reason is that I know how computers work. I am not a computer scientist, just a journeyman programmer, but I do know how the things work at the most fundamental level. And I also can see, as anyone who bothers to think about it can, that the idea that they can develop consciousness is based on a naturalistic assumption about the nature of consciousness--that it is, in us, an epiphenomenon of the brain, and therefore a probably inevitable development in computing machinery that mimics certain functions of the brain. This is a pure act of materialist faith. There is no evidence for it. No one can actually describe in detail how it can happen; it's simply postulated.

We speak of computers "knowing" things, and in a sense they do. But in the same sense it can be said that a light bulb "knows" when it should glow. It "knows" because you flipped a switch that supplied electricity to it. If you set up an array of 256,000,000,000 lights (very roughly the number of on-off switches in a 32 gigabyte computer memory), and rigged up an elaborate mechanism in which symbols representing letters and numbers were encoded as sets of lights that are either on or off, and could be manipulated so that the information represented by the symbols was constantly shifting in accordance with your instructions, do you think the array would somehow have the capacity to "know" what the symbols mean? 
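To make the analogy concrete, here is a minimal sketch (mine, purely illustrative, not from the post) of what a computer's "knowing" a letter amounts to: the letter is nothing but a pattern of on/off switches, and the "meaning" of the pattern exists only by a convention we humans impose on it (here, ASCII).

```python
# A letter is stored as nothing more than a row of on/off switches (bits).
# The machine manipulates the pattern; only we read a symbol into it.
letter = "A"
bits = format(ord(letter), "08b")    # ord("A") == 65 -> "01000001"
print(bits)                          # the eight "lights" that encode 'A'

# Flipping one switch yields a different symbol. The machine neither
# knows nor cares; the interpretation is entirely ours.
flipped = int(bits, 2) ^ 0b00000010  # toggle one bit: 65 ^ 2 == 67
print(chr(flipped))                  # 'C'
```

Scale that up by a few hundred billion switches and you have the memory array described above--more lights, but no more "knowing."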

The fact--I feel justified in calling it a fact--is that there is no reason to believe that consciousness is a by-product of any physical process. For dogmatic materialists, it's a necessary belief: consciousness exists, therefore physical processes produced it. To the rest of us it only sounds plausible and reasonable because we're so used to looking at the world through materialist assumptions, or at least prejudices. 

If you want to worry about the risks of artificial intelligence, worry about the extent to which it is able to simulate human mental processes by means of a combination of calculation and symbol manipulation, and thus do things which once required human intelligence. In combination with very advanced robotics, these machines can do an awful lot of jobs that are now, or were once, done by people. That's been going on for some time now, and it has serious social implications. But the machines aren't going to start conspiring against us.

The speech recognition and synthesis involved in, for instance, Apple's Siri, and the fact that the computer that does it can fit in your pocket, do seem almost miraculous compared to the technology of forty years ago when I first got involved with computers. But we all know Siri is not in fact a person. That's part of the reason why it can be amusing to ask "her" silly questions, give "her" impossible commands, and so forth. I think if you dug around you could find predictions made thirty or forty or fifty years ago that computers with the speed and capacity of your phone could and probably would develop true conscious intelligence. But that's no closer than it ever was. Or ever will be.


Comments


I hate to tell y'all this, but the insidious truth is that this post was not written by Maclin, but by a clever AI trying to lull us into complacency.

AMDG

The AI known as "Janet" attempts a deflection. Well programmed, Team Janet.

It would be nice if we had to worry about the machines conspiring against us. I think they would be less dangerous than their masters.

Funny how we assume they would be meaner than we are.

I was thinking of the leaders of the tech companies :), but yeah, I'd be pretty scary if I had their power.

But does self-awareness even matter when it comes to defending ourselves against the robot overlords? Since we have no idea how the physical processes involved in consciousness work, there's no reason to think an unconscious machine couldn't still become vastly more intelligent than any person, if by intelligence we mean the ability to recognize and anticipate patterns in the real world. Velociraptors can solve simple puzzles (this was conclusively proven in "Jurassic Park"), and if they could solve more complicated ones than humans can, it wouldn't matter if they were self-aware or not, humans would still be hamburger meat.

Then there's the "if you can't beat them, join them" argument: https://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs

A very bright software guy I knew in the late '70s was even then looking forward to just such possibilities.

Intelligence and consciousness are different things. Machines are already in a sense "more intelligent" than any person. Google "knows" far, far more than anyone. But as Don says it's the people steering Google that we worry about.

If there's a danger of AI doing us in on its own, it's described in the second sentence of that bit from the movie quoted above.

In other words, it's not the machines' intelligence per se that we need to be wary of, but the decisions we entrust to them. The classic fictional example is HAL. The recent Boeing crashes may be all-too-non-fictional examples.

We are pretty sick of HAL getting all the press.

.....

I guess your complaint is justified because I have only a vague memory of you.

Oh, and btw re the velociraptors: :-)

I note with pleasure that iPhone autocomplete recognizes “velociraptors”.

We are watching 2001 in my Christology class

No I can't say why exactly (or inexactly)

It's a good question to ask whether computers could be dangerous to us without being self-aware. Gorillas may not be very self-aware, but I would not want to be alone in the room with a large one.

Some people here said they found 'Derry Girls' hard to understand - I mean that they could have done with subtitles. Maybe that was Mac. Or maybe he has forgotten :) In any case I thought I would follow up by saying there are parts of True Detective where I cannot understand a word they are saying. In Episode 4, with the biker gang, I could kinda see what was happening visually but I couldn't make out anyone's speech. I watched one of those 'inside the episode' things afterward and figured it out that way. I'm down to episode 5 of Season 1 now. It's not great great, like Breaking Bad, but it is a really good thriller.

"whether computers could be dangerous to us without being self-aware" Oh, sure, they very definitely can. Any powerful machine can, and the more power of independent action you give it, the more potential there is for it to do something you didn't intend for it to. Self-driving cars, for instance. There was a story a while back about one killing a pedestrian or bicyclist, I can't remember now.

Really, the basic Skynet premise doesn't require that it be self-aware. It only has to be able to launch a nuclear strike without human intervention. There are capabilities that just shouldn't be given to software/robots--that's where the danger lies, not in their becoming consciously evil or just really "intelligent" in the sense of being able to do things like play chess. Every programmer knows that "oops" moment when your code encounters a condition which you didn't foresee and which it handles badly in one way or another. In 2001 HAL doesn't have to be self-aware. I think it's explained in a sequel that the basic problem was that "his" programming prioritized the success of the mission above everything else, including the lives of the human crew. Oops.
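A toy sketch of that "oops" moment (mine, purely illustrative): the code below implicitly assumes its input will never be zero, so the unforeseen case is handled badly--not out of malice or intelligence, just an unanticipated condition.

```python
# The programmer assumed the braking force would always be positive,
# so the unforeseen input (deceleration of zero) crashes the program.
def braking_distance(speed, deceleration):
    # Naive version: implicitly assumes deceleration > 0.
    return speed ** 2 / (2 * deceleration)

print(braking_distance(30.0, 5.0))   # the anticipated case: 90.0
# braking_distance(30.0, 0.0)        # ZeroDivisionError -- the "oops"
```

HAL-style failure is this on a grand scale: the dangerous part isn't the code's cleverness but the conditions its authors never foresaw.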

I definitely remember saying something like that about Derry Girls. :-) I think it was even stronger--not just that I could do with subtitles but that I couldn't do without them. A great deal of it was incomprehensible to me without them.

Funny you had the same problem with TD. I didn't. :-) Though now that you mention it I think I did have a problem in the biker scene. I think they were sort of mumbling, and there were ambient noises. Also sometimes using kind of arcane criminal jargon. McConaghey (sp?) and Harrelson were absolutely great, and absolutely true to life. McC's scenes when he was being interviewed by the two detectives were some of the best acting ever.

I've finished it and I think it comes fairly close to BB. Maybe not quite there but really close. At any rate definitely up there with the best of these new long-form TV things. I thought there were a few missteps, one of which I'll not comment on yet since you haven't finished it. I wish they would have left out the soft-core porn scenes.

In 2001 HAL doesn't have to be self-aware. I think it's explained in a sequel that the basic problem was that "his" programming prioritized the success of the mission above everything else, including the lives of the human crew. Oops.

Just like the advertising algorithms then :)

Yes, they could so easily just suggest the sex

Yes, both of the older figures being interviewed are so good - both of them.

I am supposing that subsequent seasons must have different detectives, maybe a different pair for each season.

I was interested to read that the writer was once a literature professor who gave it up for this

Yes, that's really interesting. And sometime in the past few weeks I read about a similar person, responsible for a TV series called Damnation (I think). I'll see if I can find the article again.

I think that's true, that subsequent seasons are different detectives.

"subesequent seasons are different detectives"

True: Season 2 stars Colin Farrell and Rachel McAdams, and #3 features Mahershala Ali and Stephen Dorff.

Two was a dropoff from one, but still worth watching, while three is almost up there with the first season.

I was talking to a friend in Carolina about the show, and he reminded me that McConaughey - he couldn't spell it either - was in Mud.

I actually remembered that, I'm not sure why. Looking at his filmography I don't see anything else that I've seen except Interstellar, on which I was not keen.

https://en.wikipedia.org/wiki/Matthew_McConaughey_filmography

McConaughey started a career revival of sorts when he took a more serious leading role in The Lincoln Lawyer, after doing a lot of lightweight stuff between 2000 and 2010. I think Mud came not long after that.

The comments to this entry are closed.