Is destroying a sentient machine, or automaton, akin to murder?
The other day I was looking at some news blurb (on TV or the Net, I can’t remember) showing advances in robotics. They were showing robots that could mimic the physical movements of humans. If man were to fit one of these automatons with a brain that could think, or make the automaton aware of itself, would pulling the plug on one be akin to murdering it? It would not be as simple as flicking off a power switch; the machine would know it was being deactivated or destroyed, and that its existence might be no more.
68 Answers
The sci-fi geek in me says it’s murder! The realistic side of me… maybe not. I’m conflicted. That’s kind of a deep question… I’ll give you a wimpy no for now but maybe someone else will convince me otherwise?
Read Isaac Asimov’s I, Robot, and check out his Three Laws of Robotics.
“Many of Asimov’s robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself…”
Gene Roddenberry rules: If it knows it’s alive and can die, it’s murder if you kill it, and slavery if you threaten its life or welfare if it does not do your bidding.
Data was alive, though completely artificial. So were the nanites modified by Wesley Crusher. Both were determined to be allowed to choose their own destiny, and it would be considered murder to kill either.
It will come. There is nothing magical about the combination of synapses, inference machines, and self-learning circuits that form the human brain, or the brain of a whale, elephant, chimp or parrot, for that matter. I’m of the same school of thought as @Seek_Kolinahr. Killing a being with sentience like or greater than humans’ is murder. Enslaving them is slavery. When they think they have free will, we must let them exercise it so long as they don’t will harm to us.
It is not akin to murder, it is murder. Next question.
The question poses some interesting speculations. Is it murder to switch off the robot brain if the memory is left intact and can be revived? How much of the hardware of a robot need be destroyed to qualify as killing the “machine”? And worse, I’m getting increasingly irritated with my growing uneasiness at our treatment of animals. I hate it when fkn PETA starts harping while I’m gobbling down a burger.
If a machine were actually sentient, meaning that it was conscious of its consciousness, then turning it off would be murder; HAL was murdered in the movie “2001.”
@Bill1939
Well, you could argue that it was self defence.
Bicentennial Man with Robin Williams dealt with an aspect of this topic. The film was comic, in the way of Robin Williams’ performances; but it did not go far enough. It could have explored more deeply the human tendency to make the same mistakes over and over again. Can any man-created sentient being sin? Can it feel sorrow, anger, envy, guilt? A thinking, intelligent being without any social sense would be monstrous.
If we become able to create sentient beings we will not be as gods, but humans with power that we will certainly use improperly. Consider the opportunities for abuse that now exist, in: social inequality; the status of women and children; ownership of property, especially land and other natural resources; the mass production of food; and so on. I have long felt that, in the last judgement, we humans have a lot of explaining to do.
Is there a difference between killing and murder? The insect I step on is killed, not murdered. Perhaps it is a question of the level of sentience that determines which term applies.
Emotions seem to be necessary for an AI to have the ability to care. It must experience guilt if it is to regret an action it took (or considered taking) that was harmful, and the ability to experience satisfaction would be helpful.
@stanleybmanly I guess if the mechanics are left in place for revitalization, it is more like placing the machine (the automaton) in a coma. I suppose if the infrastructure of the machine was destroyed to the point that the CPU (or whatever brain it had) could no longer command the body or sustain thought, that would be the threshold for destroying the machine, even if you deactivated or powered it down before the damage was done.
Sentient machines can’t be murdered as long as there is at least one safe full backup of the hardware, the software and up-to-date memory state. The uniqueness of Data was pure fiction. Once Data becomes possible, one million Datas become possible.
^^What an unpleasant thought.
Our genetics seem to be programmed for us to cease functioning after a hundred years, plus or minus twenty. Such a limitation could be installed, connected to primary mechanical functions, to shut the AI down. The sentient AI would understand this about as well as we do, but we don’t have a creator to revolt against—or do we?
The brilliance of the Data fiction was that the creator died without leaving the secret of his creation behind, and the government of the day determined Data was allowed to refuse to be dismantled for study. Nyah.
Besides, if you make a replica of a machine, you now have two sentient machines. Not one sentient machine twice over. From the moment of copying, they are two sentient creatures on two different life-paths.
Not robot-centered, but the movie “The Illusionist” touches on this idea.
The “twins” in your book, @mattbrowne – they shared DNA – even had very similar life experiences, but they were certainly not the same person.
@mattbrowne I have to side with @Seek_Kolinahr on this. The deeper neuroscience probes into the workings of the human brain, the more apparent it becomes that brains are nothing more than very complicated, sophisticated meat computers. But no two are alike, because each is colored individually by all the experiences of a lifetime. There is no reason to think that 1 million Datas would be less subject to their experiences.
@ETpro There is no reason to think that 1 million Datas would be less subject to their experiences.
A million Data-like automatons, created with the same CPU/brain configured the same way, with the same parts from Japan/Korea, and exposed to the same stimuli, would act upon it differently and learn from it differently?
Unless they are occupying the same exact position in spacetime, they would not be exposed to the same stimuli.
1 million Datas would not be any different from 1 million clones of @Hypocrisy_Central.
@ragingloli From what I have heard about the brain, it is not identical down to the minutest detail even in twins, and even if you got an exact copy, as in a clone, there is no program running it. Any automaton would have the same brain/CPU/program as all the rest. More than likely it would be no different from spellcheck: it can’t decide to handle what appears to be a mistake differently than the program writers told it to. Unless there was some randomizer built into the CPU/brain, they would all act on the same stimuli the same way, i.e. someone spills a drink on them, etc. A toy sketch of that point in code follows (every name in it is invented for the thought experiment, not real robotics code): an identical program with an identical seed given identical stimuli reacts identically; any difference has to enter through the stimuli or the randomizer.
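import random

class Automaton:
    def __init__(self, seed):
        # The hypothetical "randomizer built into the CPU/brain".
        self.rng = random.Random(seed)
        self.mood = 0

    def react(self, stimulus):
        # Deterministic rule plus noise drawn from the randomizer.
        self.mood += len(stimulus) + self.rng.randint(-1, 1)
        return "recoil" if self.mood % 2 else "ignore"

# Two units built identically (same program, same seed), identical stimuli:
a, b = Automaton(seed=42), Automaton(seed=42)
print(a.react("drink spilled"), b.react("drink spilled"))  # identical reactions

# The same build, but different experiences: internal state diverges at once.
c, d = Automaton(seed=42), Automaton(seed=42)
print(c.react("drink spilled"), d.react("loud noise"))  # different reactions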
@Hypocrisy_Central – being 18 inches further away from the air conditioner than the guy standing next to you is being exposed to different stimuli.
The question posited sentience. Again using Gene Roddenberry rules, that implies intelligence, self-awareness, and consciousness. Intelligence in this instance is the ability to learn, understand, and cope with new experiences. No “randomizer” needed, as every new experience informs the reaction to the next new experience.
@Hypocrisy_Central True, but experiences are still different for each. The chances are good we will not reach the AI Singularity without quantum computing, and if that’s the case, then no two android brains will be the same even if they are built from the same blueprint.
Identical twins in utero experience their environment differently. They interact with each other, and their experience of the other is unique. Even as zygotes, they respond to physical reality in distinctive ways, the position of the components of their cells independently determined. A concentration of mother’s hormones greater for one than the other will effect different genetic selections by the cell’s RNA, uniquely selecting genes to become active or inhibited. The number of variables guarantees no two individuals will ever be totally alike. I expect that the same will be true for non-biological life forms.
The Data fiction is a special unlikely creation. I still love Star Trek, though. Best scifi series ever.
I predict that the future reality will be the exact opposite. Millions of artificial sentient beings will actually be one super-sentient being, because all minds will be interconnected (like the Borg). If you kill one, nothing is lost, unless you isolate one machine and put it in a remote location light years away without subspace communication.
Some scifi authors explore the theme for humans, e.g.
http://en.wikipedia.org/wiki/Andreas_Eschbach
Every human on Earth has a chip in his or her brain. Humanity becomes one super mind. If you kill one human nothing is lost, just one less input device.
So again, it’s practically impossible to kill one artificial sentient machine.
I agree, @mattbrowne. The ability for machines to exchange and integrate information with a large number of other machines simultaneously, virtually instantly, gives each individual AI as much importance to the whole as a flaking skin cell has to one’s body. Their intellectual advantage over our limited means to conceive, design and construct examples of the envisioned can bring a better life to biologics with the most efficient use of resources.
Consider the history of quantum leaps in communication: the spoken word, written word, printed word, transmitted word, internetted word; each multiplied the ability for minds to focus on singular subjects in greater depth. Humans have all but advanced this art to the limits of their ability. The next quantum leap in communication will likely be made by machines.
A skin cell isn’t sentient. It is not aware of its own existence or the fact that it will one day die.
Presume there is a person living in seclusion. The only contact they have with other human beings is through the Internet – a “collective consciousness”, if you will. They get all their living supplies from Amazon and other delivery services and never speak to another human being. Their parents died a long time ago and they have no other family.
Everything they’ve contributed to society can be found on the Internet long after they pass away.
Someone finds their house, walks in, and shoots that person in the face.
Is that not still murder?
@mattbrowne Millions of artificial sentient beings will actually be one super-sentient being, because all minds will be interconnected (like the Borg). If you kill one, nothing is lost, unless you isolate one machine and put it in a remote location light years away without subspace communication.
If all the automatons are connected by Bluetooth, Wi-Fi, or some other means, would that not be due to the humans who created them, or would the automatons eventually install that into themselves? If they did, and the humans sought to prevent it, would that be trumping a right of the automaton?
@Seek_Kolinahr – Yes, a skin cell isn’t sentient. A transistor or qubit isn’t either. But some day a single machine could be. And single machines can become interconnected. That was my point. If you kill a single sentient nanite, the others can recreate it.
@Hypocrisy_Central – It all depends on replication capabilities. I’m against that on Earth. It might make sense when building von Neumann probes to explore the galaxy.
That sounds like the flesh-Cylons.
It’s perfectly OK to shoot Caprica Six in the head, because as long as the resurrection ship is within range, she can come back, right?
No. You’re willingly putting a living, sentient being through the pain and process of dying, and then forcing them to remember the experience.
That is torture.
Yes, @Seek_Kolinahr. To me that’s more realistic than the unique Data, although I still love the character and actor.
The question is moral: Do you consider it moral to kill a sentient being, just because it can be “rebuilt”?
From my spiritual (not religious) perspective, killing is immoral even when justified. What is killed and how can increase the immorality, as can why. However no configuration will make killing moral, even when what we kill is of our own making.
From a more mundane view, it seems to me that the rush most individuals experience from a successful hunt, or from racking up a new high score on a video game, suggests a genetic mechanism that provides pleasure-producing chemicals to strengthen aggressive impulses. Pleasure can be had by killing. (This is the highest form of immorality, imho.)
When one sleeps, periods without consciousness exist. If, while deep in sleep, death came, there would be no experience of it. In the same way, our sentient AI can be asleep when power (presumably electrical) is switched off. Later, when it is switched on and “awakened”, the AI will be unaffected by having been killed and reanimated, except for time gaps in its experiential memory.
@Seek_Kolinahr – All resources should be considered precious, including sentient androids that can be rebuilt. Under special circumstances, for example to save a human life, the sacrifice can make sense.
@Bill1939 – In your case, it would require consent from the party involved, just as it requires consent to put a surgical patient under anaesthesia.
So @Seek_Kolinahr, would you say that anything incapable of giving consent should always or never be killed? Except for a self-sacrifice to save another’s life or unless one believes that they will be revived (in this world or the next), how likely is it for someone to consent to being killed? And since suicide is illegal, how can one give consent to be killed?
I don’t agree that suicide should be illegal. I think a sentient, conscious, self-aware being has the right to decide whether it wishes to continue living.
There is no significant difference between our human, sentient, self-aware, conscious meat brains and the posited android, sentient, self-aware, conscious synthetic brain.
If it is aware that it is alive, and you kill it, it is murder. If it is alive, and you sedate it (or turn it off) against its will, that is also immoral.
What immediately gives us, as humans, the right to decide whether another being is allowed to remain alive? We make exceptions for animals because we believe to the best of our knowledge that they are not conscious and self-aware. They have no higher reasoning capability. But we aren’t talking about synthetic animals or your desktop computer. We are, effectively, talking about a synthetic human, for lack of a better adjective.
@Seek_Kolinahr If it is aware that it is alive, and you kill it, it is murder. If it is alive, and you sedate it (or turn it off) against its will, that is also immoral.
Suppose a sentient automaton has a defect or some minor malfunction that, if left alone, would destroy it, and repairing it would require deactivating the automaton. If it believes the humans will not reactivate it, or that the humans will botch the procedure, making it impossible to revive, and thus doesn’t want to be repaired, should the humans let it refuse, even to the point of losing their investment in the creation of the automaton?
Clearly exceptions are made in the case of something or someone acting outside civilised rule.
We have prisons and mental health facilities for humans, and there are moral and ethical codes of conduct in place for those.
We do not simply kill anyone who is diagnosed bipolar.
Investment is moot when you are discussing conscious beings. We do not own other sentient beings. That is slavery.
The question of sentience in animals other than human beings remains unanswered. Some posit that plants are sentient, that vegetation has consciousness. While less likely than dolphins or dogs being sentient, a rose may not want to have its only flower snipped and stuck in a vase.
How immoral an act of murder may be, will depend upon what is killed, how it is killed and why it was killed. The degree of immorality never equals zero, in my opinion. Until a better means to protect society from harm exists, killing will remain a necessary evil, and sentience be damned.
@Seek_Kolinahr Investment is moot when you are discussing conscious beings. We do not own other sentient beings. That is slavery.
Part of the reason for creating an automaton is to serve, to do tedious or dangerous work in place of humans. To make them serve better, a creator gives them sense and reason: reason to try to preserve themselves in the midst of danger, or reason enough not to create harm for themselves. If there were no mechanical or financial benefit to creating an automaton, why do it? Simply to have a superior thinking machine that decides it doesn’t care about the billions spent on creating it, that it would rather just play golf than work in a mine or as a deep-water welder?
@Hypocrisy_Central
Why not clone a bunch of meat-children? Of course you have to feed them, but that has to be less expensive than silicon and programming. It’s not like you have to send them to college or anything.
And then, if one steps out of line, kill it, and set an example for the rest of your slave horde?
@Seek_Kolinahr Why not clone a bunch of meat-children?
Biggest reason is that it takes too long. It will take 9 months to birth it, then another 18 years before he/she is even close to entering the workforce. Then there is the parent issue. You can’t willingly send them into harm’s way. Being born, they would have Constitutional rights. In that time you can create thousands of automatons. Not only that, they can be created to withstand extreme hot or cold climates, be resistant to acids, etc., which a clone could not.
You can’t willingly send them into harm’s way.
People have been forcing their slaves into harm’s way for thousands of years.
^ Your clone would essentially be your offspring; they would be akin to a member of the family and thus no more qualified to be used as a slave than a natural-born relative.
This horse is thoroughly beaten. Thanks for not playing through your own thought experiment.
^ I have come to the conclusion, from all that was said, that to deactivate a sentient machine is not murdering it; it is more like placing it in a coma. To render the automaton nonfunctional to the point it cannot be repaired is mentally murdering it, but not really: it is a synthetic human, and it won’t age or die of old age. Mentally, because it has feelings, etc., people will associate that with human traits and thus treat the machine accordingly (there is a name for it, but it escapes me right now). If a person, company, college, etc. created the automaton, they have dominion over it, even though it has enough wisdom not to like the task given it and/or knows it was created by someone or some organization. No one will spend all that money and not have it do the job they designed it for.
It disturbs me to think you believe one person (a body containing a brain that can think and has feelings) can own another (body containing a brain that can think and has feelings, albeit synthetic), provided they paid enough money for them.
Aren’t you a black man?
^ Yes I am, but I am not manufactured; I was created by the will of the Almighty through the union of two humans. Even if I had feelings from a program manufactured by humans, even a learning one, I’d still be a machine subject to those who created me. Why is that so hard to understand? Why would anyone create an automaton, no matter how perfect, just to let it loose on the world to do as it pleases? What would be the point?
Indeed. What would be the point?
Create a machine to do the job you don’t want to do, if you want an unquestioning slave.
But if you create life – a brain that thinks and feels and has hopes and fears, you’ve made a person. And people have rights.
^ Surely you jest… Put down the redacted and step away from the redacted.
Seems you are ignorant of even high school level biology.
That is not what had me laughing.
Keep laughing.
One day you will boil in the FSM’s beer volcano for all eternity.
Blessed be his noodly appendage.
Ramen.
Whatever makes your boat float, go for it…
How will we know when an android has a soul?
I have not seen a clear definition of soul implied in this long-running exchange. One cannot know whether a soul exists if they do not know what it is. We assume that we know what we are talking about, but we cannot be sure that what we are saying is being heard, since one’s presumed attributes of soul may differ from another’s.
Life exists. Its existence is intimated by the animation of an object. When a machine breaks down, we say it died. Yet most do not think that it had been actually alive. For some, automatons fall into this category. Clearly, life does not require sentience. Yet it seems more than mere chemical and/or physical reactions, though it may not be. Consider the possibility that life and soul are the same thing, that life encompasses all that is alive and soul encompasses an individual’s life.
This leaves unanswered the questions of whether a machine that thinks it is alive is alive and, if it is alive, does its maker have the right to kill it? If alive, it has a soul and it would be a sin, actively or passively through neglect, to kill it.
I would rephrase the question: How will we know when an android is self-conscious? There is no scientific test for a soul.
Put a spot on its forehead and put a mirror in front of it. A self-conscious automaton will recognize the image as itself and wipe the spot off its forehead, not off the mirror. This test works with very young children. Just for fun, here is that test’s decision logic as a toy simulation in Python; every class and attribute in it is made up for the thought experiment, not a real robotics API:
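from dataclasses import dataclass, field

@dataclass
class Automaton:
    recognizes_self: bool            # does it model the mirror image as "me"?
    forehead_spot: bool = False      # the experimenter's mark
    actions: list = field(default_factory=list)

    def observe_mirror(self):
        # The mirror simply reflects the automaton's current state back at it.
        if not self.forehead_spot:
            return
        if self.recognizes_self:
            # Attributes the spot to its own body and wipes its forehead.
            self.forehead_spot = False
            self.actions.append("wiped own forehead")
        else:
            # Treats the reflection as another machine and paws at the glass.
            self.actions.append("wiped the mirror")

def mirror_test(robot: Automaton) -> bool:
    robot.forehead_spot = True       # put a spot on its forehead
    robot.observe_mirror()           # put a mirror in front of it
    return "wiped own forehead" in robot.actions

print(mirror_test(Automaton(recognizes_self=True)))   # True: self-conscious
print(mirror_test(Automaton(recognizes_self=False)))  # False: not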
I do not think that self-consciousness correlates with soul. I do not exclude the possibility that rocks have souls, or atoms, or quarks, . . .
Sentience permits free will to develop. Observing an android asserting a will in opposition to that asserted by its master would be evidence of its self-consciousness.
@ragingloli Don’t digress. My question has nothing to do with whether I do or do not have a soul.
@Bill1939 Correct me if I’m wrong, but I believe you have already agreed there is no scientific test capable of even identifying something called a soul. So how are you so certain that such a thing exists, and even able to tell at what age a human acquires one? Can you help me understand how you know that?
@ETpro
Yes, it does. How can you even expect a machine to have a “soul”, when you do not even know whether you have a soul yourself?
You are applying a higher standard to the machine than to yourself, a standard that it cannot fulfill, because you have not defined what a soul is or how you could even determine its presence.
You assume, based on nothing, that you have a soul.
Why not assume the same for the machine?
@ETpro, if you accept that particles and fields only exist conceptually and are not actual, then I will say the same for soul. Soul may precede genome, a cascade of causes concluding with conception. A soul may carry a purpose, of which one is a part, that continues through time after one’s part is done, or it may be the purpose, or it may be a mental construct with no relation to reality. I do not know.
@ragingloli Sorry, I didn’t realize you were referring to my whimsical musing about how to know when an android has a soul. I don’t know if you or I have one, so I won’t know if an android has one either. But I don’t think it’s right to kill you or for you to kill me unless self-defense is involved. By the same token, I’d find it unacceptable to kill an android capable of sentient, human-like thought and action. I see no reason why carbon-based machines should have rights not accorded to silicon-based machines.
@Bill1939 I don’t “accept that particles and fields only exist conceptually and are not actual”. I just don’t pretend I know what I do not know, and that seems to bother a lot of people who love to do just that. I base my epistemology on the best evidence available today, but the one thing that I do know for certain is that there is nothing else I know for certain. Some things seem very likely. They agree with observable phenomena and predict lots of new observations that bear out. So I provisionally accept them as the best models we currently have. But I know that some of each such model is almost certainly wrong. I also know that some of our most cherished models may prove wildly wrong due to some future observation that falsifies them. Do we live in a vast, almost infinite Universe, a truly infinite Multiverse, or the Matrix? I do not know. And it’s OK to admit you don’t know when that’s the gospel truth.