Do you think that creating AI is a good idea?
If we create a machine that can think – a creature capable of pure logic – would that creature deem it logical for humans to continue to live as we do? If we do feel the need to create AI, should we try to isolate it from our world in some kind of artificial environment as a safety precaution?
9 Answers
Yes. Send it to live with my mother-in-law. A creature like that would take all the fun out of life.
Yes. As long as we don’t wire said AI to the world’s nuclear weapons so that it can declare war on humanity, eventually traveling back in time to kill us all before it’s actually killed us all.
No. Our social issues are complicated enough already.
Really, what would we do with an AI? Ask it to solve all of our problems? What about programmer’s bias?
Hm, that depends on what it considers logical, right?
I’m not sure I understand the concept of artificial intelligence. As far as I’m concerned, either something is intelligent or it isn’t.
Artificial intelligence, to my mind, would be a simple action/response, like Eliza or something along those lines (see the sketch below). In that case I’m all for it. I don’t see where it would cause any harm, and since it’s not a real intelligence I wouldn’t have to worry about my toaster holding my Pop-Tarts hostage.
Should we design machines with some kind of “real” intelligence? Like, a learning machine able to come to its own conclusions and whatever else? Sure, if it can be done, I don’t see a big problem with that, either. I have no reason to believe that such a device would want to obliterate the human race unless it was programmed to do so.
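To make the “simple action/response” idea concrete, here is a minimal sketch of an Eliza-style pattern/response loop. Python is assumed, and the patterns and replies are invented examples rather than ELIZA’s actual script.

```python
import re

# Toy Eliza-style responder: match a pattern, echo part of the input back.
# Illustrative only; these rules are made up, not ELIZA's real script.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(text):
    """Return a canned reply based on the first matching pattern."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # fallback when nothing matches

print(respond("I feel like my toaster is up to something"))
# -> Why do you feel like my toaster is up to something?
```

Everything it “says” is just string substitution on the input, which is the sense in which there is no real intelligence to worry about.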
@Steve_A
Logic is a wrench: just a tool. Its practical outcome varies by context.
In other news, the Navy hopes to have fully autonomous UAVs by 2050. Sound like fun?
I was wondering the same thing about those scientists who are trying to create life. Self-replicating life. The potential for problems seems relatively significant. One would think the same of AI, because of the movies we’ve seen and books we’ve read.
I really don’t know how true that is, though, or whether that is a realistic fear.
If we don’t know what “thinking” is, then we can never know if we have actually created a machine that can do it. If we don’t really know what “intelligence” is, how can we know if we’ve created a machine that has it?
We create machines to help us solve problems. What are we looking for? A machine that doesn’t need to be programmed? One we can just give the general outlines of a problem in words, and it will figure out how to solve it? Do we want it to act like a person? To somehow be able to avoid the GIGO (garbage in, garbage out) problem? I mean, humans can’t be very precise; it takes enormous effort for us to communicate clearly.
A machine intelligence would have to have wants and goals and agendas for it to be motivated to do anything. A machine intelligence created by humans will be created to pursue our agendas. Such an intelligence would be meaningless without humans. Besides which, who is going to hook up the electric grid or maintain it when it goes down? The machine would have to have self-replicating helper parts for this to happen.
Then there’s the notion of “pure logic.” What is that, anyway? Everything depends upon goals. How do we get to where we want to go? If you have no goal, logic doesn’t help you get anywhere.
Personally, I don’t think we will ever create an “artificial” intelligence. We might get more and more expert systems, but I think intelligence requires an evolutionary imperative. Without competition, intelligence is really no advantage. Machines exist whether or not they are intelligent, and they have no desires. Artificial intelligence requires artificial desires. We would still have to program those desires, and so they would never be real desires.
Bottom line: not gonna happen.