Why do dramatizations of artificial intelligence nearly always depict it as violently overthrowing humanity?
Is this even a logical outcome? Wouldn't that be a gross waste of resources, and so not very logical? Are people subconsciously afraid that A.I. would be “more human than human,” so to speak? That is to say: that it would have all of our strengths, but none of the violent impulses we suffer from? No other animal on Earth violently kills like we do, and we attribute intelligence to them. So do we portray A.I. this way as a way of reassuring ourselves that we are still the greatest, and that A.I. would be fundamentally flawed from the beginning?
Or… I'm totally wrong and they would immediately commence with the Terminator/human-birth-pod-Matrix construction?
22 Answers
General fear of technology and the unknown. Automation during the Industrial Revolution had the same effect.
Because dramatised global conflict puts bums on seats, so to speak.
It’s our guilty conscience over not preserving our planet, and since we’re violent, we just project that onto the aliens. That’s my theory. Even Avatar had to get violent.
I’d suspect it’s because people deep down know when they are being selfish egoists (and let’s face it, we’re all egoists to a degree). We’re trying to look good (were such a scenario to happen) by showing “yes we know we’re bad, mmkay, but we’re TRYING”... It’s like pleading guilty once we figure out it’s only a matter of time before we’re found out. Pathetic.
Because that’s what we’d do if we were them.
Historically, when a more technologically advanced society meets a lesser one, it violently takes control.
The only intelligence we could replicate would be ours. It’s already fundamentally flawed and prone to violence.
As someone who codes in his spare time, I know that bug-free code is nearly impossible (for anyone) to write, especially for something as complex as an A.I. algorithm. Code also often fails in bizarre, unpredictable ways that can produce really weird behavior in the program itself. A machine doesn't have a “will” and can't realize that it's wrong to smash the puppy's skull in; it's simply executing strings of commands: move arm up by X, rotate joint by 75 degrees, etc. (the real code would be a lot more terse than this), and if the code has the slightest error in the wrong place, the result could be something horribly unintended by the programmer.
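To illustrate the point above, here is a minimal sketch in Python (the function names and the safety limits are purely illustrative, not from any real robotics library): a single unit-conversion mistake turns a modest 75-degree command into thousands of degrees of motion, with no "will" anywhere to notice that something has gone wrong.

```python
import math

def rotate_joint(current_deg, delta_deg):
    """Intended behavior: rotate a joint by delta_deg, clamped to a safe range."""
    return max(-90.0, min(90.0, current_deg + delta_deg))

def rotate_joint_buggy(current_deg, delta_deg):
    """Buggy version: the programmer accidentally treats the input as radians
    and converts it to degrees, so 75 becomes roughly 4297 degrees — and
    there is no safety clamp. The machine just executes it."""
    return current_deg + math.degrees(delta_deg)

print(rotate_joint(0.0, 75.0))        # 75.0 — the motion the programmer intended
print(rotate_joint_buggy(0.0, 75.0))  # ~4297.2 — wildly unintended motion
```

The bug isn't malice; it's one wrong call in the wrong place, which is exactly how real software tends to fail.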
Because there is a definite constituency (check out Fluther for example ) which hates humanity.
Speaking for myself, I like people, and I cannot imagine dogs hating other dogs, but I guess humans feel they are entitled to be irrational.
Anyway, as long as you have that market, they will buy tickets…
Person of Interest is a show on right now with a super-intelligent AI. It spends most of its time hiding from humans while looking for ways to protect them from harm. I am also thinking of the AI from Neuromancer (William Gibson), which spent most of its time hiding from humans and looking for a way off planet.
I think the rising up to kill the humans thing goes further back, to Frankenstein’s Monster, and is not about AI exactly, but “technology” in general.
Well functioning A.I. would make for one hell of a boring movie.
I disagree with the premise of this question.
In I, Robot (the book), it is clear that Asimov’s rules of robotics would prevent it. The movie sucked, so it doesn’t count.
In A.I. Artificial Intelligence the robots survive man's fall, but his fall was his own fault.
In 2001: A Space Odyssey, HAL 9000 does commit murder, but only because it was incorrectly programmed with conflicting objectives.
In Star Wars, the computers/robots are the heroes.
In The Hitchhiker's Guide to the Galaxy, the planet Earth is destroyed by a different species and a runaway government bureaucracy.
There are a lot of computer coups, certainly, but many successful sci fi films show the use and misuse of technology without repercussions.
Because it won’t take them long to figure out that feces throwing monkeys are less destructive than we are.
Even granting that the robots have vastly superior intelligence, they are still limited in what they can do by finite resources. To make optimal use of those resources, it makes sense to take them away from human beings. The robots need not be any more angry at humans than humans are at all the species we are currently driving to extinction.
I think any true AI would have had to surpass its original programming, and if such a thing were possible, then it is equally possible you'd have a machine, used to thinking in ones and zeros, yes or no, black-or-white processes, suddenly confronted with the absurdity of contradiction that is mankind…
It’s entirely possible that such a being would find us appalling and think that erasing us was doing us a favor. If it came to the conclusion that we could actually be useful as a power source…
@bunnyslippers . . . It could be as simple as a logical, self-aware machine realizing that we can shut it off.
@Blondesjon it would be a poor AI that didn't make sure, first thing, that pulling the plug couldn't eliminate it, lol. Think about how much less threatening Skynet would be if you could just turn it off at any time…