If a robot commits murder, who is guilty?
Asked by
squirbel (
4297)
March 20th, 2008
22 Answers
In this case, it’s not murder; it’s suicide. The method is just a more complicated machine than usual.
This is like asking “if a gun commits murder, who is guilty?” The person who pulled the trigger.
If we could establish that the robot was intelligent, the answer might be different.
A robot without artificial intelligence cannot commit murder. It could be used as a tool in a murder, but it would be the person wielding the tool who would be guilty. It is like asking, "If a gun commits murder, who is guilty?"
So let’s take it out of the context of this particular article and assume the robot has been developed with artificial intelligence. This article was just a starting point for a thought I had :)
@squirbel – then you could compare it to “if a dog commits murder, who is guilty?” Though if the robot in question has emotions, then I suppose you can blame and punish him (it?).
Robots can’t have motive, so you can’t make a murder conviction stick. Conceivably, the programmer or owner might have used the robot as a murder weapon, in which case they’d be guilty of something.
I would say that if the robot thought of the murder (as in artificial intelligence) and carried it out against the wishes of the victim, it would be a murderer. But considering this was just a mindless machine programmed to pull a trigger, it is the fault of whoever programmed it – in this case, the suicide. Have you watched “I, Robot” with Will Smith? Watch it, if you haven’t. It deals with the same issues in a future setting. And the movie wasn’t too bad either.
WTF? I just woke up. How the fuck long have I been sleeping? Since when did we have robots?
I guess that sometime in the future, when robots inevitably form a much more significant part of society – and are more sophisticated than they are now – the laws will be changed to allow for such a situation. Then what? Would they be destroyed or just re-programmed? Can’t imagine robots in prison!
As Fluthermother said, watch “I, Robot” if you get the chance – it’s a pretty good movie.
Sorry to tell you folks, but our military services already have robotic weapons that are autonomous. I know because I do the system safety engineering on some of the ones in development.
Like any other weapon, they can be used for good or for evil. Like money, it is a neutral object or technology; which way they are used depends on the morality and ethics of those who control their use. The real question should be: can they reproduce themselves and take control at some point, or otherwise take over society? BTW, one of my favorite movies, TRON, might be good to watch. LOL
Boston Dynamics’ BigDog: YouTube it. We’ve got the structure; now it needs a brain.
And, unfortunately, we are probably going to have the same clashes we always had as to whether or not they are “deserving” of all the basic rights before we change those laws. Deserving is in quotes because it was meant to be sarcastic. We have always subjugated one type of person or another – be it women, minorities, homosexuals – so do you think we will be any different with robots just because they can “feel” or make decisions on their own without someone pushing a button? Or is that another question for our Flutherites? I would be very interested in the outcome when that time comes.
The Matrix is another perfect movie.
Great question. It’s going to be an ethical dilemma we face at some point in the future for sure.
I would imagine we will end up with laws that state the inventor/seller/programmer or company director must take overall responsibility. The problem will be when such machines are used in wars. I’ve always thought the Terminator films could turn out to be more of a prediction than pure fictional entertainment.
Well. Everyone knows that the first rule of robots is that they may not harm a human or allow a human to be harmed. Therefore the robot was against the law and must be shut down.
We already have liability laws that can be, and have been, enforced against companies or individuals who did not exercise due diligence to ensure their products do not harm another individual. I expect the laws will just be modified and expanded as required.
::sigh:: I love Will Smith movies as much as anyone but give credit to Isaac Asimov for posing this question in his story long before Hollywood gave us its version. My own feeling… As long as computers (robots) need us to program them, a computer-caused death would be either accidental or intentional on the part of a programmer.
But isn’t the purpose of an AI to grow and change, and to make decisions on its own – based on a ruleset?
And yes, Asimov (the author of I, Robot) deserves recognition, among many other futurists.
The question should be, “Does the robot have a value system, and what is its position on fellow sentients?” Does it know (what we term) right from wrong? Courts call this competency.
But that’s not the question I asked. smirk
I understand your aim, however – and your point is valid.
The robot. Send it to robot jail.