General Question


Can a computer program become conscious? Would it be a person?

Asked by FireMadeFlesh (16603 points) June 11th, 2010

The Blue Brain Project has recently made significant progress towards simulating the human brain on a supercomputer. They have already managed to simulate a rat’s brain.

Many neuroscientists and AI researchers have hypothesised that consciousness is not a structural phenomenon, but rather is software constructed during development that runs as a virtual machine on the brain’s hardware. This is supported by studies of hemispherectomy patients, who can manage to live a perfectly normal life despite missing half their brain.

What are the implications of this for the idea of what a person is? Do you think a brain program on a computer, in the presence of appropriate inputs, could develop into a normal human mind?


45 Answers

Anon_Jihad

My inner techy screams yes! But my logical sense seems to scream that this is only science fiction. I don’t know; I do hope so, though, and I pray I never see technology stop advancing.

LeotCol

I believe that it could become conscious, or at least something that we can perceive as conscious. Whether it was a person or not would be a serious matter of debate. But if it was programmed in such a way that it could make decisions based on unique past events, make its own unique decisions based on more than just randomness, and learn, adapt and grow, then I don’t see how it would be any different from a person.

I think it would be the most amazing thing to ask a conscious, self-aware computer (or at least one that others perceive to be conscious) this question and see what answer you would get back.

mammal

The six-million-dollar question: does it know it’s conscious?

lopezpor

According to I, Robot, they can!

anartist

How much of a person’s decision making is affected by the body? An AI would be missing so much of what drives humans, anything from lust to a depressed feeling because it was raining. It might make for a being too passionless to be human, although conscious. Definitely a new species.

I like @LeotCol’s last remark.

FireMadeFlesh

@LeotCol GA, thanks!

@mammal Isn’t a key feature of consciousness self-awareness?

@anartist Why couldn’t it have a body of sorts? Phantom limb pain shows the brain can feel a body part that isn’t there. Could we deceive it into thinking it had a body, and program a virtual environment to give it sensory inputs, like its own personal Matrix?

RealEyesRealizeRealLies

All programs need a programmer.

FireMadeFlesh

@RealEyesRealizeRealLies A virtual mind would have been programmed, but that shouldn’t stop it developing from a codified framework to a conscious entity, should it?

CaptainHarley

This is a very thorny question from many different angles. I expect that someday, perhaps not as soon as we might think, an AI will become, for all intents and purposes, self-aware. That event is a considerable distance away in time, in my opinion.

Buttonstc

I guess it would depend upon how one defines consciousness. Different standards exist for self-awareness, as evidenced by our shifting attitudes over the years toward certain members of the animal kingdom.

The growing realization has been that many are far more than the “dumb beasts” they were labelled in times past.

So, expanding toward AI is just an extension of that. I really liked the way Star Trek explored these themes with the characters of Data and the holographic Doctor.

One of the most memorable episodes to first explore this, “The Measure of a Man”, centered on a court case over whether Data was a person, and a complete discussion of the question.

Also, the holographic Doctor’s continuing quest for a sense of personal identity was really fascinating to watch. He eventually penned a novel on behalf of his fellow holograms working basically as slaves in a mining camp. It had a memorable title.

Photons, Be Free.

So much food for thought. However, unless it is embodied in a physical form resembling a human’s, I think it would be difficult for mankind to accept that type of leap.

RealEyesRealizeRealLies

My perspective on this subject is entirely personal. When you say mind, I hear mind/soul. When you say thought, I hear thought/spirit.

I do believe that a mind/soul can be programmed. I believe they are built from thought/spirits. Like bricks build a building. But no matter how many bricks are assembled, they will never form a building unless they are programmed (arranged) to do so.

There is no purely mechanical explanation for this. There must, IMHO, be a master mind behind it all.

I do not believe the WWW will ever become self-aware on its own. Someone will have to figure out how to induce the animator quality that separates life from non-life. Someone will have to infuse the machine with the semantic principles of The Word.

Self-awareness will arise if, and only if, semantic faculties are programmed into the firmware.

mammal

@FireMadeFlesh Self-reflection, introspection, the Cogito, the awareness of being conscious and so forth would, for me, be a strong indication of a deeper, human-like mentality.

zophu

If it thinks like a person, it’s a person.

ETpro

Great question. I’ve been reading about this very subject to understand where the science is now, as well as where we are in understanding how the human mind works. From what I have gathered, the day is probably not too far off when a computer program first becomes aware that it is aware. That is the critical step. Past that, it is just learning and reflection that improve the mind.

As machines get smarter and more capable, they will increasingly play a role in the reverse engineering of the human mind, so that concepts natural selection designed into us can be folded into the effort of engineering better machines.

While I remain unconvinced there is a soul/spirit component in humanity, I agree with @RealEyesRealizeRealLies that semantic capability is a must for self awareness. The semantics required are not natural language, though. The language might be called Mentalese. Research indicates that we use some extremely rich symbolic language in thought, rather like chaining together statements in a scripting language like PHP or ASP. The language used is not any spoken language, but it enables us to learn languages and use them easily.
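To make the “chaining” notion concrete, here is a toy forward-chaining sketch in Python. Everything in it (the facts, the rules, the tuple encoding) is my own invention for illustration; whatever Mentalese actually is, it would be unimaginably richer than this:

```python
# Toy forward chaining over symbolic propositions -- a crude caricature of
# "chaining together statements" in thought. All facts and rules are invented.

facts = {("sees", "apple"), ("is_red", "apple")}

# Each rule: if every premise is among the facts, add the conclusion.
rules = [
    ({("sees", "apple"), ("is_red", "apple")}, ("is_ripe", "apple")),
    ({("is_ripe", "apple")}, ("want_to_eat", "apple")),
]

changed = True
while changed:  # keep chaining until no rule adds anything new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now also contains ("is_ripe", "apple") and ("want_to_eat", "apple")
```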

Will it be a person? I will leave that one to the philosophers. It will certainly seem like a person.

mammal

But if life itself is good and pleasant (...) and if one who sees is conscious that he sees, one who hears that he hears, one who walks that he walks and similarly for all the other human activities there is a faculty that is conscious of their exercise, so that whenever we perceive, we are conscious that we perceive, and whenever we think, we are conscious that we think, and to be conscious that we are perceiving or thinking is to be conscious that we exist… (Nicomachean Ethics, 1170a25 ff.)

FireMadeFlesh

@mammal Many animals are self-aware.

@ETpro The Blue Brain Project estimates that they will be able to simulate a human brain within ten years. I also agree that semantic capabilities are an essential part of consciousness, although I am not convinced that this has to be programmed. Programming it would save a hell of a lot of time, but it is not essential.

ETpro

@FireMadeFlesh Numerous animals are smart enough to realize they exist. Few are so self-aware that they realize they are executing a program nature wrote into them, and that they have some ability to rewrite their programming.

The semantic capability to use Mentalese has to be programmed. You can’t think it up without already having it in place to do the necessary thinking. Evolution programmed us with Mentalese. We used its capabilities to then invent spoken and then written languages.

ipso

“Can a computer program become conscious?” [uniquely self-aware] Of course. Will a human mechanical construct eventually pass the Turing test? Yes. Will we completely re-interpret it, and create a better understanding and a better “Turing test”? Yes. Will that then define it as “a person”? No. As we approach that and beyond, will we completely re-evaluate the myriad sub-components of “conscious” into a more meaningful and granular understanding, and thus realize just how far we have yet to go to interpret “a person”? Yes. Will we ever create a non-biological machine (“Jetsons’ Rosie”, Blade Runner) that approaches “a person”? No. Will we better understand how to create biological “robots” to further our needs? Yes.

Me thinks. Just a guy.

FireMadeFlesh

@ETpro I should have said programming by an intelligent agent.

@ipso Do you believe there is an essential essence of being human that cannot be replicated on a computer?

ETpro

@ipso It is wise to make prophecies about future technology without date specificity, and even wiser not to prophesy that something can’t be done. Consider these great minds who have left us examples.

“Well-informed people know it’s impossible to transmit voice over wires and that were it possible to do so, the thing would be of no practical value.” Editorial in The Boston Post, 1865.

“Heavier-than-air flying machines are impossible.” Lord Kelvin, 1895.

“My recommendation to the [patent] office is that everything that CAN be invented HAS been invented!” The Commissioner of the United States Office of Patents, 1899, recommendation to abolish his office.

“I don’t really foresee any commercial possibilities for computers and I predict a world market for maybe FIVE computers!” Thomas Watson, 1940, Chairman of IBM.

But you will find much company in stating that human-like intelligence will never be achieved. I think that stems from human hubris more than any scientific proof of impossibility. Science is terrible at proving negatives.

ipso

@FireMadeFlesh wrote: “Do you believe there is an essential essence of being human that cannot be replicated on a computer?” Yes – experience. Specifically, human experience. Most particularly human-to-human experience – like a mother rearing her boy [vs. say her girl]. If you’re talking about a “soul” – yes [yes, we will not be able to duplicate it, but yes, we will approach and better understand what exactly a “soul” means to us].

@ETpro – “is it wise” No. “prophesy something can’t be done” – we get into semantics real quick. I’ll simplify. If someone asked me “will a cow ever be able to jump over the moon?” No. “Will we be able to send a live cow to Andromeda?” Yes.

I’m just a guy.

RealEyesRealizeRealLies

I dunno if ju guys ever read any of my leenks, b-u-t…

I think all this attention to brain matter may be overshooting the mark a bit. The gist of our relative agreement about semantics goes deeper than the brain. I’d like to know if any of you see the same connections as I do on this.

It was Noam Chomsky who first suggested the hypothesis of an innate syntactical structure, Generative Grammar, that must be present within human beings. I think we are beginning to actually discover this, and the research below satisfies his requirement for recursion.

“Chomsky has maintained that much of this knowledge is innate, implying that children need only learn certain parochial features of their native languages. The innate body of linguistic knowledge is often termed Universal Grammar.”

Now Geoffrey Sampson is suggesting the same for Natural Semantics.

I believe all of this is absolutely spot on. But not for any evidence found in the brain. It seems much too obvious (to me) that we do in fact have an innate language structure built into our humanity from birth. Why? Because we are built from the language of DNA. We are beings of language… not flesh and bone… we’re not rocks. Our existence begins as a Letter, and then we become a Word, quickly evolving into a Sentence, a Paragraph, and a whole bunch of Chapters.

There it is… We don’t have an innate language structure built into the brain. We are language.

And Wes Warren from Washington University’s Genome Sequencing Center has made some profound discoveries to suggest that our DNA is responsible for our language ability, and not the brain at all.

As described in the April 2010 issue of the journal Nature:

Warren has demonstrated a different mechanism than mere “interactions of neurons within the brain”. He seems to have discovered what causes those interactions to occur.

“We show that song behaviour engages gene regulatory networks in the zebra finch brain, altering the expression of long non-coding RNAs, microRNAs, transcription factors and their targets. We also show evidence for rapid molecular evolution in the songbird lineage of genes that are regulated during song experience. These results indicate an active involvement of the genome in neural processes underlying vocal communication and identify potential genetic substrates for the evolution and regulation of this behavior.”

The implications of this are staggering, as “The zebra finch is an important model organism in several fields with unique relevance to human neuroscience. Like other songbirds, the zebra finch communicates through learned vocalizations, an ability otherwise documented only in humans and a few other animals…”

We once again find another use for the so-called “junk DNA”. It seems that when the zebra finch expresses a DESIRE to sing, that desire causes a change in sequence of the “long non-coding RNAs, microRNAs, transcription factors and their targets”. A change in that sequence is thereby the very mechanism which causes the “interactions of neurons within the brain”.

“Two of the cDNA clones that measured the most robust increases align to an unusually long (3 kilobases (kb)) 3′ untranslated region (UTR) in the human gene that encodes the NR4A3 transcription factor protein (Fig. 4a). The entire UTR is similar in humans and zebra finches, with several long segments of >80% identity.”

“These findings indicate that this NR4A3 transcript element may function in both humans and songbirds to integrate many conserved microRNA regulatory pathways.”

“It has been proposed that ncRNAs have a contributing role in enabling or driving the evolution of greater complexity in humans and other complex eukaryotes. Seeing that learned vocal communication itself is a phenomenon that has emerged only in some of the most complex organisms, perhaps ncRNAs are a nexus of this phenomenon.”

mammal

@RealEyesRealizeRealLies

In the beginning was the word, the word
That from the solid bases of the light
Abstracted all the letters of the void;
And from the cloudy bases of the breath
The word flowed up, translating to the heart
First characters of birth and death.

Dylan Thomas

Chomsky has formally recognised/reaffirmed the age-old theme of the eternal, independently existing logos (λόγος), which has been accepted in Hellenistic and Hebrew thinking for millennia.

BTW, to simulate the human mind completely, would we also need to include emotion, suffering and suchlike?

LostInParadise

It was not a rat’s brain that was simulated, but a neocortical column, whatever that is. They are a long way from simulating an entire rat’s brain, let alone a human one. It would be quite an accomplishment to simulate the neurological system of an ant or a bee.

As I have said before, we are living in a world where humans are acting more like machines and machines like humans. If we ever bridge the gap, it will raise some serious questions as to what it means to be human and what the value of a human life is.

I accept the materialist view of the brain, but I question how much we need to know. Would it be worthwhile to help a few people with brain dysfunctions if we were thereby also able to mass-produce sentient beings? I would be very apprehensive about the end point of brain research if I thought I was going to live long enough to see it.

ipso

Good morning.

@RealEyesRealizeRealLies I’m with you on the Chomsky et al. language thing. Indeed, I’ve chimed in elsewhere that my understanding is that the conscious mind evolved wholly in support of language. This is the thing: AI failed. From this we realized two things: 1) we didn’t understand (at all) what we were shooting for (the brain), so we started with fresh vigor on cognitive science. Tally ho! And 2) it’s a lot harder to rebuild the boat you’re floating in than we thought (to cognate the structure of what you’re cognating with). Minor blips, though, in the grand scheme of things.

As far as “we are [nothing but] language” goes, that is oversimplifying it (although I’m sure the French enjoy the thought). We – as a society – will start to understand that robots are not all equal. HAL 9000 is a very different notion from Rosie Jetson or a Blade Runner replicant. The former is just around the corner. (You’ll note that it had no intention of passing a Turing test, or a “Turing Redux” test.) The latter (the replicant) I don’t think we will ever nail as a mechanical entity. I think it’s obvious we will start to use bio/gene constructs to achieve these things. In this sense we will not be creating robots; we will be tinkering with “life”.

So we’re all kind of talking past each other in “Venn agreement”, if I may coin a phrase.

(Dear God Blade Runner is a great movie!)

Buttonstc

RealEyes and mammal both raise interesting points about things such as the soul and how human emotions and experience fit in.

I have long thought that the word soul could best be defined as the combination of mind, emotions and will (partially to distinguish it from spirit, which is a totally different train of thought entirely.)

Could a computer ever truly possess emotions? And could a computer act with a will of its own which could contradict its programming?

Obviously a computer can mimic (and sometimes exceed) many of the processes of the mind, particularly mathematical ones. But how could it create emotions, having never had any?

Yes, it can doubtless be programmed to mimic some of the most common scenarios requiring an emotional response, but the subtler things would be beyond its grasp.

When I’m thinking of this, I’m reminded of children and adults with autism or Asperger’s syndrome. Aspies, especially the more verbal ones, have expressed their recognition of how important appropriate emotional responses are in social communications. So, many have made great efforts to study this and memorize affects appropriate for a variety of different scenarios, even if they themselves don’t truly feel them.

But they also recognize the significance of the huge deficit in this particular area which their condition leaves them with.

I think an analogy could be made to computers and emotions in that regard. The deficit could be overcome to a degree with programmed appropriate responses for SOME things. And that may even pass muster for a while.

But sooner or later, that deficit would become obvious in situations requiring more nuance. I doubt that a computer could be programmed for nuance (or instinct and spontaneity).

Quite a while ago I read an interview with Garry Kasparov, who mentioned those issues regarding computers and chess.

He said that OVER TIME the human advantage would win out, because the computer would not have an adequate response built in to counter a human’s ability for creativity and spontaneity.

Yes, it could win individual matches, maybe, because chess is a lot about pattern repetition with certain opening moves, gambits, etc.

But if a human played the same computer enough times, they would find the person making subtle adjustments to their play, or just throwing out anticipated procedure based on nothing more than finely honed instinct.

The computer would not have a defense for that, since such play cannot be quantified.

I’m inclined to think that history will bear him out on that.

There are tons of mundane tasks requiring specialized quantifiable knowledge at which computers undoubtedly excel. But for that “spark of genius” which has benefitted mankind immeasurably for centuries, it takes a human soul, I tend to believe.

anartist

@FireMadeFlesh Why would we bother to make it seem as if it had a body? Why make a static thing in a room believe it had a body, lusting after a human body it could never actually have? And imaginary companions probably wouldn’t affect its relationship with humans.

RealEyesRealizeRealLies

@mammal

As much as I appreciate the Dylan Thomas perspective, I must ask… How does he know this?

My concern is with the empirical architecture that makes it all go. So when the zebra finch expresses a DESIRE to sing, and that desire causes a change in sequence of the “long non-coding RNAs, microRNAs, transcription factors and their targets”, thereby establishing a change in that sequence as the very mechanism which causes the “interactions of neurons within the brain”… well, my concern is to determine the connection between DESIRE and ncRNA.

We presumably have a physical mechanism of ncRNA that directs the neural activity. Much like RAM directs a Motherboard. But how is the connection made between DESIRE and the ncRNA? What is DESIRE? Can it be held, touched, tasted? Where is DESIRE?

The Dylan Thomas poem is a subjective and flowery interpretation. Even if it were true, each person will interpret it however they like. I want to know how it works on paper. I want to actually behold the ghost in the machine.

LostInParadise

We don’t know the RNA reactions, but I just read yesterday that we have succeeded in being able to stimulate a section of the brain to create an intention. See The Ego Tunnel; do a search in the book on “Kremer”. Pretty spooky stuff.

RealEyesRealizeRealLies

@LostInParadise

I’m confused about your statement. RNA reactions?

Warren’s research illustrates that it doesn’t react, it directs. This is very much in line with Barbara McClintock’s and James Shapiro’s research, suggesting a cell that somehow “knows”.

If The Ego Tunnel is suggesting that a brain can be stimulated to induce intention, I suggest that only means that we have learned to do the same thing that ncRNA is already doing… that being, basically pressing a button.

But I suggest that the intentioned desire comes before the brain activity is ever detectable, in line with Warren’s research, leading one to search for the link between an immaterial mind/soul source of desire and the ncRNA it turns on and off, in turn firing the neurons.

LostInParadise

Is the ncRNA the cause of the intention or just an indication of it? I think it quite extraordinary that an intention was induced through electrical stimulation.

RealEyesRealizeRealLies

Warren’s research does not address where the actual intention came from. He only suggests that the brain is not responsible for intentions; rather, it is just a switch being turned on by the ncRNA. I want to know what guides the ncRNA, or whether the ncRNA is actually doing the thinking and then sending the message to the brain to enable the body.

ETpro

Here is an interesting thought experiment from the science fiction writer Terry Bisson. It’s an elegant debunking of philosopher John Searle’s familiar thought experiment, the Chinese Room. It argues instead that the brain is meat, and the meat is sentient. No magic, no otherworldly supernatural forces, just straightforward computation of an extremely sophisticated sort.

We have overwhelming, near-incontrovertible evidence of what neurons and synapses do. They function in ways quite similar to current computer transistors, but the architecture by which they are connected is so profoundly more complex than our best supercomputer that it is small wonder the two are not on a plane with one another. We have no evidence of a soul or otherworldly entity inhabiting the body and making thought possible. What we do know is that even if there is a soul, if you impair the brain either chemically or by trauma, the soul can’t do anything without it. To the degree the brain is impaired, thinking shuts down.
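To give a flavor of what “simulating a neuron” even means, here is a toy leaky integrate-and-fire neuron in Python. Every constant is a made-up illustration, and projects like Blue Brain use vastly more detailed models (multi-compartment, ion-channel level), but it shows the neuron-as-computational-element idea:

```python
# Toy leaky integrate-and-fire neuron. All constants are invented for
# illustration; real simulations model ion channels, dendrites, etc.

dt = 1.0          # time step, ms
tau = 20.0        # membrane time constant, ms
v_rest = -65.0    # resting potential, mV
v_thresh = -50.0  # spike threshold, mV
v_reset = -70.0   # post-spike reset potential, mV

v = v_rest
for t in range(200):
    current = 20.0 if 50 <= t < 150 else 0.0  # injected current, arbitrary units
    # The membrane leaks toward rest while integrating the input current.
    v += dt / tau * (-(v - v_rest) + current)
    if v >= v_thresh:
        print(f"spike at t={t} ms")
        v = v_reset
```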

Actually, Searle’s argument is terribly flawed, but a large and growing weight of experimental evidence now shows that the human brain does work exactly like his Chinese room, only you can’t leave the “guy” out who is writing the instructions for the “guy” who looks at the squiggles and squoggles stuffed under the door, then reads the instructions and responds. The whole room is the brain, not just one of the “guys”. And the whole room does “understand” Chinese. It just works far slower than the brain of a native Chinese reader, so it’s hard to connect our conception of the word understand to it.

LostInParadise

@ETpro, I don’t see how the selection from Bisson refutes Searle’s argument.

The thing about Searle’s thought experiment is that the Chinese language aspect of it is not particularly relevant. He imagines a book that can answer any question in the way that a person could. This is unrealistic for two reasons.

Firstly, the book would have to have some way of predicting future events. We have discussed this topic on Fluther and generally came to the conclusion that this is not possible.

Secondly, even if we restricted the book to what was known when it was written, it is inconceivable that it could contain a one-for-one answer for every possible question, and even if it did, there could be no algorithm for locating the answer to a given question. Therefore the book would have to contain instructions for processing questions, and in this way the book is a stand-in for a sentient being. The human simply functions as the I/O device. It is the book that has the understanding, not the person. All that we can conclude from this thought experiment is that any thinking organism or machine must have three components – one for taking in sensory data, one for reasoning and one for communication.

LostInParadise

I should have said that the person also acts like a CPU, so that the combination of person and book functions like a standard single-processor digital computer.
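To put the CPU analogy in code, here is a toy sketch of my own; the “book” below is laughably small, a stand-in for Searle’s unimaginably large set of instructions:

```python
# Toy "Chinese Room": the book is the program, the person is a CPU that
# blindly matches and copies symbols it does not understand.
# The entries are invented stand-ins for Searle's vastly larger rulebook.

BOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "Fine, thanks."
    "你是谁？": "我是一个房间。",   # "Who are you?" -> "I am a room."
}

def person(slip: str) -> str:
    """Follow the book's instructions without understanding either language."""
    if slip in BOOK:                # rule 1: matching squiggles -> copy the answer
        return BOOK[slip]
    return "请再说一遍。"            # rule 2: otherwise, "please say that again"

# Neither the person nor the book understands Chinese on its own; the
# question is whether the person-plus-book system does.
print(person("你好吗？"))
```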

FireMadeFlesh

@ETpro I also don’t understand how that extract relates to the Chinese Room. I agree that it is a flawed thought experiment though, since any computer that passes the Turing Test would be far more complex than those existing today (let alone those existing when the experiment was first proposed). Just because we cannot imagine a computer capable of it does not mean such a computer is impossible. What more is there to awareness than information processing anyway?

ETpro

@LostInParadise & @FireMadeFlesh I should have been clearer. By debunking, I did not mean refutation, but put-down or send-up. Searle’s whole intent is to argue against any possibility of the computational theory of the mind being true. That is what the Chinese Room was intended to prove. It fails not because “they’re meat”, of course. It fails because it plays on our common understanding of the English word “understand”. We take that to mean almost instant cognition. If we have to look things up and compile an answer, that is evidence we do not understand. But if we could speed Searle’s whole Chinese Room up to the point that a slip of paper with the answer came out about 1/10th of a second after the question was slipped in, we would say that perhaps the guy first seeing the paper does not understand Chinese, but the room clearly does. And the computational theory of the mind posits a construct exactly like the entire room, not just one of the guys in it.

Furthermore, Searle’s thought experiment has nothing but itself to back it up. There is no weight of evidence or predictive quality flowing from it to test. The computational theory of the mind has predicted a great deal, and testing to verify its predictions has yielded a large and growing body of experimental evidence saying that it is how the mind works, and how the Chinese Room works as well. The trick is that the mind is not like the guy who doesn’t speak Chinese but just looks at the patterns on the paper. Our eyes and occipital lobes would be analogous to him. The guy who interprets the Chinese characters and looks up the answers would parallel our frontal lobes and long-term memory. The answer would get transferred to short-term memory, then used to control circuits that drive the musculature to write it on a paper and slide it back out under the door.

LostInParadise

One thing that the Chinese Room thought experiment indicates is that if we ever figure out the physical level of what we call understanding, it is going to be something highly non-intuitive. If there were an intuitive model, I am pretty sure someone would have thought of it by now.

mattbrowne

In the Star Trek universe, yes. There was even a trial in one episode. At the end, Data was given full personhood.

I don’t see why this shouldn’t be done in the real world.

ETpro

@LostInParadise Rather than non-intuitive, I would suggest non-trivial. Understanding probably does not arise from one massive neural network in the head. If we were designing a computer to “understand”, that isn’t how we would do it. We’d build individual, optimized modules for various parts of the task, and a network to parallel-process through them. Nature probably went about the task the same way. It also appears there is a high-level brain language, which some call mentalese, that is innate in humans and controls things like setting up the associations required to learn spoken language. And finally, we talk to ourselves both in mentalese, of which we are unaware, and in our native tongue, of which we are fully aware. That all lets what started off as meat do some incredibly subtle, sophisticated thinking.
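A cartoon of what I mean by optimized modules wired into a network, entirely my own toy, with each “module” reduced to a one-liner:

```python
# Toy "modular mind": specialized stages wired into a pipeline rather than
# one undifferentiated network. Every module's behavior here is invented.

def vision(raw: str) -> str:
    return raw.strip().lower()      # cf. occipital lobes: raw input -> clean symbols

def semantics(symbols: str) -> dict:
    return {"greeting": symbols in ("hi", "hello")}   # cf. mentalese: symbols -> meaning

def speech(meaning: dict) -> str:
    return "Hello there!" if meaning["greeting"] else "Hmm?"  # cf. language output

# Chain the specialists; a real brain would run many such paths in parallel.
print(speech(semantics(vision("  Hello  "))))
```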

LostInParadise

@ETpro, that understanding should be complex is what is non-intuitive. It does not seem like something that could be broken down into separate components. We imagine our conscious self as a single unified entity, which of course it is not, but the illusion is very powerful.

ETpro

@LostInParadise Aha. I see where you are going with that thought. Excellent point.

daiku

If you could export all the data of a human brain and import it into an AI robot, then it would be a real robot brain, I think? Because a brain is what makes us conscious; that’s why a brain-dead person doesn’t do anything. So a robot is kinda like a brain-dead person?

dabbler

Oh, I just got more questions… For one thing we need a good working definition of “conscious”, and in my opinion that is a necessary but not sufficient feature of personhood. Even if the ‘computer’ can pass the Turing test in all the noticeable ways we know ourselves to be persons, it just seems to me there will be some trick up humanity’s sleeve that the thing we know as a ‘computer’ would not be able to adapt along with.

We can allow our definition of ‘computer’ to evolve to whatever human contrivance does the job we expect from our computers – those gadgets keep doing that, don’t they? – including the function of being just like us. The maker of the gadget is still confronted with the implementation challenge each of us already faces every moment: define what a person is, so that you can build it.

The ideal maker would be challenged to express that dynamic in the contrivance asap, literally in a moment, so that it could keep pace with the simulation of human definition.

Or could the best implementation of a computer actually deploy changes at the same time as humanity does? Does it develop new genetic capability on the fly, in resonant pace with humanity, because it’s so similar? Would the computer always be one grand wavelength behind the implementer because it is by definition a human contrivance? At what point is it no longer the creation of the maker? Who would it be?

It could be released as autonomous by the implementer to free up attention for more fun. The computer would then steal the maker’s identity and go on a spending spree.

This eggplant parm sandwich hits the spot.
