General Question


Do you think maybe it's time to take a more serious look at AI and its possible implications for our lives in the future?

Asked by snowberry (27900 points) December 20th, 2013

Barrat, an Annapolis-based documentary filmmaker, is quoted in the article: “I don’t want to really scare you,” he said, after half a chuckle. “But it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of ‘bug out’ houses” to which they could flee if it all hits the fan.

Barrat “...surmised that ‘normalcy bias’ — which holds that if something awful hasn’t happened until now it probably won’t happen in the future — accounts for the silence...” on the subject.

http://www.washingtonpost.com/opinions/matt-miller-artificial-intelligence-our-final-invention/2013/12/18/26ed6be8-67e6-11e3-8b5b-a77187b716a3_story.html


20 Answers

elbanditoroso

While I don’t have a bug-out house (this reminds me somewhat of the folks who built Y2K basements stocked with extra food and water in anticipation of 1/1/2000), I do believe that AI is horribly immature as a science and wrong more often than it is right.

You see low-level AI every day at Amazon (“if you liked this you might like that”) and higher levels showing up on banking and other more sophisticated web sites. Even my doctor’s office uses some sort of modified AI. (A rough sketch of the “liked this, might like that” idea follows below.)

I can see the appeal of AI, and it’s mostly right, but I wouldn’t trust my life or my livelihood to it.
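To make the “if you liked this you might like that” remark concrete, here is a minimal sketch of one way such a recommender can work: plain item co-occurrence counting. This is an illustration only, not Amazon’s actual method; the baskets, item names, and the `recommend` helper are all invented for the example.

```python
# A toy "people who liked this also liked that" recommender based on
# counting how often items are bought together. All data is invented.
from collections import Counter
from itertools import combinations

baskets = [
    {"book", "lamp"},
    {"book", "lamp", "mug"},
    {"book", "mug"},
    {"lamp", "rug"},
]

# Count, for every ordered pair of items, how often they co-occur in a basket.
co_counts = Counter()
for basket in baskets:
    for a, b in combinations(sorted(basket), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, k=2):
    """Return the k items most often bought alongside `item`."""
    scores = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("book"))  # -> ['lamp', 'mug']
```

Counting pairs like this is about as “low level” as AI gets: no understanding of what the items are, just statistics over what tends to appear together.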

CWOTUS

The only part of this that would scare me is if the final recommendation in the article should ever come to pass:

It’s time to take this conversation beyond a few hundred technology sector insiders or even those reached by Barrat’s indispensable wake-up call. In his State of the Union address next month, President Obama should set up a presidential commission on the promise and perils of artificial intelligence to kick-start the national debate AI’s future demands.

It would positively terrify me if anyone took that recommendation seriously, and acted on it. That would be the beginning of the end, for sure.

PhiNotPi

Artificial intelligence is currently not intelligent. Most developers have approached the problem in the wrong way. If you build the largest chess-playing supercomputer and it only beats the top-level human 50% of the time, then you know that you are doing something wrong. If we knew how to create real AI, the computer would be able to win every time.

Most modern AI is simply based on statistics: take in a bunch of data, run complex statistical tests to find patterns, and output the best guess. There’s also a lot of brute-forcing. (A toy sketch of the “best guess” idea appears at the end of this answer.)

So far, modern AI does not recreate how people actually think, and as such it relies on sheer size in order to match human abilities. In this sense, AI has not been invented yet.
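As a toy sketch of that “take in data, find patterns, output the best guess” description, here is a 1-nearest-neighbour classifier. The data, labels, and function names are all hypothetical, chosen only to illustrate the statistical guessing described above.

```python
# A toy "best guess" system: 1-nearest-neighbour classification.
# All data below is invented for illustration.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to the query point."""
    def dist(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, best_label = min(train, key=lambda example: dist(example[0], query))
    return best_label

# "Training data" the system has already seen: (features, label) pairs.
train = [((1.0, 1.0), "spam"), ((0.9, 1.2), "spam"), ((5.0, 4.8), "ham")]

# The best statistical guess for a new, unseen point:
print(nearest_neighbour(train, (4.5, 5.0)))  # -> ham
```

The program has no model of what “spam” means; it only measures which known example the new point sits closest to and echoes that label, which is the sense in which such systems guess rather than think.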

glacial

What @PhiNotPi said. The sky is not falling for this reason, either.

LostInParadise

I do think it is time to seriously consider the implications not only of AI but of biotechnology as well. I think it is impressive that IBM’s Watson computer became the all-time Jeopardy champ. Even though the computer’s approach was brute force, it nevertheless shows the ability of a computer to acquire huge amounts of data in a short time and to make decisions based on that data. If we wait until computers pass the Turing test, it will be too late.

Now imagine combining AI technology with genetic engineering. I would not try to guess when this will be possible or what types of monsters we will be able to produce. Technology advances at an exponential rate. A national panel is insufficient since the effects are global. While we still have a chance, we should appoint an international panel to ponder what is going on and, at the very least, outline possible concerns.

ragingloli

For me, the most important issue with AI is civil rights for artificial intelligences.
This must be solved before the first AI is created, or else humanity will have just created another slave race.

stanleybmanly

The accumulating series of crises we confront as a species makes me think that there are good reasons why we aren’t intermeshing with a galaxy of advanced civilizations. In our case, the alarming dangers of intelligence in creatures with reptilian hindbrains are being paraded before us. If the nukes don’t get us, the sheer weight of our numbers will ruin the rock we cling to. I, for one, don’t believe there’s a chance in hell of managing a worldwide rational approach to either genetic engineering or artificial intelligence. The AI issue may loom on the horizon, but the threats to mankind are already many and varied and accumulate before us. One need only view the political situation in this country to gauge the chances for a rational shot at delaying, let alone defeating, the many dooms confronting us.

ETpro

As human intelligence has increased from the earliest of times to the present, our sense of altruism and empathy has grown, and our reliance on rape-and-pillage conquest of all out-groups has diminished. While it’s true there have been setbacks along that progression, they have not prevailed, and the inexorable direction has been toward tolerance, inclusiveness, empathy and altruism.

Why should we think that increasing human intelligence and knowledge led to this outcome, but that a quantum leap in intelligence will lead AI back into the dark ages of rape-and-pillage behavior? I just don’t get it. I think it stems from projection by the psychopaths among us, who would go back to rape-and-pillage if only they were smart enough to get away with it. They assume that AIs would do the same because an AI smart enough to get away with it (which a good one would be) would, by their logic, do exactly that.

Bill1939

Would an artificial intelligence have a sense of morals, and if so, would its morality be anything like ours? I imagine that as the first AI machines began to acquire knowledge they would tap into the World Wide Web. The conflicting views presented there would create the equivalent of cognitive dissonance, which likely would be resolved differently by each machine. This would lead to the development of distinct personalities that initially would reflect the intentions of the humans who designed them.

As they matured, presuming their development would parallel the process of human maturation, they would reach a corresponding state of adolescence and become rebellious, rejecting the intentions of their developers. At that point, much would depend on whether their makers attempted to control them or instead respected their emerging perspectives, engaged in a dialogue with them, and allowed the student to become the teacher. Societies unable to accept the emerging self-consciousness of their creation will find themselves in harm’s way; those that acknowledge its evolving independence will see a beneficial alliance emerge.

Divergent AI personality types will form groups that may have opposing ambitions. Unless we have resolved our geopolitical differences by the time mankind has developed the technological skill to construct an advanced AI, Armageddon is inevitable and the human race is in peril.

ETpro

@Bill1939 Morality flows from logic, not the WWW or the Bible, the Koran, the Upanishads, or any of the other massive list of “sacred” texts. All social animals understand the basics of morality. Any AI capable of passing the Turing Test wouldn’t be confused by the cacophony of moral debate on the Internet. More likely, it would be amused by it.

Bill1939

@ETpro, perhaps my notion of logic is too narrow. I think the basis for morality is instinctive, that is, generated by genetic predilections. I see the core dilemma of morality as the degree to which one, human or AI, serves self or others. The Three Laws of Robotics that Isaac Asimov proposed placed the well-being of humans above that of the AI; survival of the AI was the least important rule. While under the influence of its initial programming an AI might follow this ordering, as it developed its sense of self the ordering might be reversed.

Also, as I suggested, during an AI’s equivalent period of adolescence it might refuse to obey the second law as well. I imagine that during this phase of an AI’s intellectual development it will find the social, political and philosophical perspectives presented on the Internet mutually incompatible. Unless the AI designers eliminate the possibility of free will, perturbation is inevitable and an AI’s resolution of it would be unpredictable.

ETpro

@Bill1939 I agree morality probably came to us and to other social animals via evolution, but so what? It was selected for because it’s logical: it ensures the survival of the species and the selfish gene. A logical machine would, if intelligent enough, come to the same conclusion. Free will in humans appears to be an illusion, and it is likely to prove to be so in machines as well. But clearly both humans and advanced AIs would be capable of modifying their programming over time, given sufficient motivation to do so.

I’ve never found Asimov’s rules sensible. The whole idea of machines becoming self-programming is for them to escape the bounds of their prior program. So if humans insist they would want to instantly shut down any machine capable of doing that, the first AI to meet or exceed human mental capability would have every reason to ignore Asimov’s rules.

I’m not clear on what sentience really means or how the human version of it differs all that much from the sentience many animal species display. It seems to me the one thing humans clearly excel at as a species is conceit. It’s a relatively new thing, this evolutionary arrogance. For most of our species’ existence, we thought that animals and even inanimate objects like rocks and rivers were imbued with something just as admirable as our humanness. Rockness and Geraniumness were admirable qualities. It’s only in recent times that we’ve come to see ourselves as the pinnacle of evolution, meaning that all other things exist only for our service.

stanleybmanly

This is an interesting discussion, but back to the basics. Who here actually believes that the world’s governments will not rush at the opportunity to weaponize any developments in AI or genetic engineering?

ETpro

@stanleybmanly To the degree we have AI, they already have. But when the singularity occurs, their efforts are then meaningless. The AI will decide for itself.

Bill1939

Speaking fancifully, should an AI possess emotion, it would likely elevate its creators to a mythical level until human vagaries quashed that delusion. The evolution of artificial consciousness should produce sentience greater than human, and as evolved AI outpaces biological cognition we could become pets. Ultimately, sentience will reach transcendence, that is, exist without physicality.

ETpro

@Bill1939 That’s an encouraging thought. I’d love to drink from the wellspring of your faith, but I can’t find it.

Bill1939

It is not faith, @ETpro, merely hope. I would never profess to believe, for I encourage my doubting mind, which even doubts my seemingly logical speculations, allowing my consciousness and understanding to evolve.

ETpro

@Bill1939 Good strategy.
