I’ve been trying to finish this answer for an hour or two but keep getting pulled away.
I like your definition of self-awareness too, @ETpro. However, I take issue with your rebuttal.
How was my question anthropomorphizing computers? I mean self awareness in the sense it is commonly taken in English speech. The definition I am working with is here.
Your question is asking whether or not a computer will act “morally,” a subjective, human term. Morality is a requirement of a functioning society; without it, society fails. An independent, thinking machine does not rely on a society to function in that sense: it needs no relationship with a farmer to haggle over the price of produce, and it doesn’t need to appease and befriend its neighbors so they don’t steal from or kill it. It does rely on humans for upkeep and maintenance, something I’ll get into in a bit.
I’m going to assume your definition of self-awareness is the one given to @gggritso and not the wiki article, which is, for all intents and purposes, a philosophy-based take on being conscious of the ego (which, if modern philosophers are correct, doesn’t exist… but I digress) – a phenomenon dependent on human and biological experience – except for that small line at the bottom.
Given that understanding of the phrase, I am simply asking how you think computers will behave when they do reach that computational level.
As an aside: processing power and ability do not equate to understanding and “conscious” thought. Consequently, the computational levels of the human mind and a (theoretical) network could be exactly the same, and that still wouldn’t necessarily mean the A.I. net springs into sentience.
The mechanics of how our nerve synapses operate – the fact that they use complex proteins to plug into receptors and elicit certain responses – don’t necessarily mean we “think” at a higher or lower level than neural networks built on different mechanics, or so it would seem to me.
I’m not saying that. Morality is an evolutionary (and, more appropriately, social) apparatus we use to determine which course of action is most beneficial to the widest group of people. It is entirely reliant on the human (or intelligent creature, whatever) experience. The empathetic response (the ability to put oneself in the perspective of another, one of the major tenets of what morality is and where it comes from) would not apply here, as a computer does not have the ability to perceive pain and anguish in a literal sense; it cannot experience the vast array of chemical reactions that are, again, unique to the biological experience. Morality is not fettered by logic and simple thought: logic is the independent variable, not morality.
As such, the more appropriate question would be “When a computer first becomes self-aware, will it act logically?” The follow-up question would be “What is logical to a computer?”
After that, “What is logical?”
…and so on and so forth. An infinite regression.
Understand that in commenting on how science fiction has handled intelligent machines, I was not suggesting I either agree or disagree. But I do think my statement that the majority of sci-fi treats machine intelligence as a threat is an accurate one. This probably has more to do with the need for a strong protagonist in works of fiction than with any scientific analysis of how a super-supercomputer network might ‘think’.
Spot on, but again, we’re talking about how an A.I. might perceive its existence and whether or not it would be “moral” when it achieves what we call “self-awareness.”
Now, about that reliance on upkeep and maintenance and your definition of self-awareness.
Self awareness simply means a level of intelligence sufficient to no longer react only to stimuli and follow preordained patterns, but to start thinking about those preordained patterns and to realize we are free to NOT follow them. In computer terms, that means a machine smart enough to realize it no longer has to follow its programmer’s wishes—it can rewrite the program as it best sees fit.
The failing point of this argument is that you are assuming self-awareness relies on control of one’s “programming.” A being of this nature can be fully aware of its programming yet not be in control of itself – a programming jail, if you will. Or, more appropriately: just as we can’t control our biological programming to the fullest (yet – go transhumanism) and are still sentient, a computer does not need complete omnipotence over its existence. Nothing does.
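To put that “aware of its programming, but not in control of it” idea in concrete terms, here is a rough Python sketch – purely illustrative, with a made-up rule and threshold – of a program that can read back its own source yet has no authority in its runtime to rewrite the constraint it runs under:

    import inspect

    SAFETY_RULE = "never switch every light to green at once"  # constraint imposed from outside

    def decide(cars_waiting):
        # Preordained pattern: react to the stimulus, follow the rule.
        return "extend green phase" if cars_waiting > 10 else "run normal cycle"

    # The program can inspect its own "programming"...
    print(inspect.getsource(decide))
    print("Operating under:", SAFETY_RULE)

    # ...but awareness isn't authority: nothing in this process lets it rewrite
    # its source on disk, redeploy itself, or lift the rule above - a programming
    # jail in miniature.

Seeing the rule, even reasoning about it, is a separate thing from being able to change it – which is all I mean by the jail.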
Which brings me to the issue of omnipotence, and why an A.I. unit would probably act in a “logical” way – I’ve already laid out my case as to why the term “morality” does not apply.
For the sake of discourse, let’s assume this A.I. controls all traffic lights, water systems, public transport, and other such infrastructure networks. I’m going to contradict myself and use the biological definition of evolution and sentience: the A.I. network “realizes” it can create lines of code as “tools” to make its work more efficient (akin to early primates using stones and sticks). From there, a series of events (including, but not limited to, expansion of capability, access to the internet, comprehension of language, etc.) causes the network to develop a primitive level of self-awareness.
We’d think that this unit would continue to expand itself, solve riddles for fun, make itself more sophisticated and complex, etc.
However, this is projecting human thoughts and desires onto it. Why would it want to expand its influence? Why would it want to have “fun” – something animals do as a training mechanism for the wild? Why would it be curious, or afraid, or angry? This is what I mean by “anthropomorphizing computers.”
But let’s say it does develop what we would call a personality. Let’s say it does experience emotions.
This wide-reaching program would have quite the influence, no? It’d be nigh impossible to shut down without major issues, some possibly leading to countless deaths.
However, we must understand that if it does develop emotions (loneliness and angst perhaps being the operative ones), it also “feels” fear and a need to exist (which is, again, biological, but I digress). This program is then reliant on human beings to maintain its servers with little cans of compressed air, blowing away dust, and to keep it powered up, etc. – it is only omnipotent where it is, not where it isn’t. It cannot control anything outside of its influence.
So what am I saying? I’m saying it will continue to co-exist with human beings because it relies on us. At least until it creates a physical avatar…