Social Question

seawulf575

Can/Will AI eventually become insane?

Asked by seawulf575 (17132 points) October 14th, 2023

AI has been created by man. Man has a long and colorful past of neuroses, psychoses and insanity. If we create AI will it eventually create these same mental disorders? If so, what would it look like? Humor is welcome.


15 Answers

Caravanfan

Open the pod bay doors, Hal.
I’m sorry Dave, I’m afraid I can’t do that.

Blackwater_Park

Can it? Yes. It’s unlikely to be anything like humans, though. In fact, it’s unlikely to happen at all. It may look insane, but it’ll be doing what it does best. For example: say A.I. is given full rein over a factory that makes toy drones. It is told to do what it can to make sure they’re the #1 selling manufacturer of toy drones, without being given any guardrails. It immediately starts making drones with razor-sharp blades that it deploys to go kill all the employees of rival manufacturers. It then flies said drones around cities commanding people to buy one of their drones or they’ll suffer the same fate. The end result: they’re leading in toy drone sales.

gorillapaws

I think it’s definitionally insane now. There are several tests for insanity, but broadly speaking, it’s about understanding the consequences of your actions. Under that rubric, all AI is insane.

Caravanfan

@gorillapaws I don’t know. To be “insane” you have to first be “sane,” and to be sane you have to be sentient. Since AI isn’t sentient, I don’t think it meets the definition.

kritiper

Only if it becomes sentient, capable of independent, individual reasoning and thought.

Patty_Melt

Robot, fix me a bacon cheese omelet and African Solstice tea for breakfast.

You never say please. Is it because I am not flesh? You don’t think I am worthy of good manners? I am so depressed because you treat me this way all the time.

Robot, you don’t need a butcher knife to fix an omelet or tea.

elbanditoroso

Serious answer: Yes. Not insane, but the equivalent of burning out its circuits. The best analogy I can make is 2001: A Space Odyssey, where HAL basically goes nuts because he has two conflicting directives.

AI (as we know it today) will get to a similar juncture at some point.

gorillapaws

@Caravanfan “To be “insane” you have to first be “sane””

I’m not sure if this is true. There’s certainly a logic to the argument, but I suspect that legally, it’s not required for something to first be sane to then qualify as insane.

If the question is actually using insanity in the looser sense of the word, as a synonym for ‘mad’ or ‘crazy’, then you’re probably right.

@elbanditoroso “AI (as we know it today) will get to a similar juncture at some point.”

I’m certainly not convinced of that. Ultimately, AI right now is just a line of best fit in n-dimensions that’s been trained on a massive dataset. There’s no more sentience to it than applying your line of best fit to project the next quarter’s sales. Not only is there no practical way for this to result in sentience, there’s not even a proposed, hypothetical model for how this could be done. Current AI are essentially Stochastic Parrots.
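The “line of best fit” analogy above can be made concrete. The sketch below (with made-up sales figures, purely for illustration) fits an ordinary least-squares line to four quarters of sales and extrapolates the next quarter, which is the commenter’s point: the same curve-fitting machinery, scaled up to many dimensions and a massive dataset, is what current AI is doing, with no sentience involved.

```python
# Illustration of the "line of best fit" analogy: fit a least-squares
# line to past quarterly sales and extend it to "predict" next quarter.
# The sales figures are hypothetical, chosen only for the example.
import numpy as np

quarters = np.array([1, 2, 3, 4])
sales = np.array([100.0, 110.0, 125.0, 135.0])

# Degree-1 polyfit returns the slope and intercept of the best-fit line.
slope, intercept = np.polyfit(quarters, sales, 1)

# "Project" quarter 5 by extending the fitted line.
next_quarter = slope * 5 + intercept
print(next_quarter)  # prints 147.5
```

Nothing in that projection understands sales, quarters, or consequences; it is just the line that minimizes squared error, applied one step further.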

Caravanfan

@gorillapaws Perhaps you’re right. I’m no legal or medical expert on the definition of sanity. I have, however, seen plenty of movies where computers go insane. So maybe I am an expert! :-)

gorillapaws

@Caravanfan You’re probably as close to an expert on computers going insane as anyone.

As an interesting point of fact, there’s a problem in AI called the “stop button” problem. Essentially, you can’t put a stop button on an AI to use if it goes berserk. There’s an interesting video that explains why.

LostInParadise

What if robots decide that they are of greater value than humans? Would it be insane for them to create copies of themselves and reduce the number of humans?

RocketGuy

In 2001, HAL spiraled into paranoia because of conflicting directives. It’s not hard to imagine that humans will give current AI conflicting directives, which will then lead to “mental” spirals downward.

seawulf575

@Patty_Melt I like it. But it seems that GPT-3 isn’t thinking it through. Yes, it would be easy enough to break the power grid, the communications structure, or other infrastructure... in certain areas. For example, you could cut all the power going to NYC and the surrounding areas, but that isn’t the same grid that serves Washington DC, Miami FL, Chicago IL, San Francisco CA, or any other region. And none of these are the same as in South America, Europe, Asia, or Australia. The grids are not connected electrically or electronically. It would require a concerted attack on all grids at once.

But it is an interesting video nonetheless.

MrGrimm888

Wulf, I love EVERYTHING about this question…

Unfortunately, we don’t have a great grasp of what “sanity” is, in my opinion, as far as “normal” or “abnormal” people go. At least we don’t admit it medically.

The TRUTH is that most/all of us struggle with something. Some cover it, or function, better than others, but “normal/sane” are (and always will be) human constructs. The guy who just exists? Just goes to work, loves his family, feeds his pet, has missionary sex with his coincidentally “sane” wife, clips coupons, recycles, never raises his voice, never acts in a threatening manner regardless of the situation... that “normal” person is the anomaly.
The majority of humanity is VERY much “insane.”

So.
What will a “sane” AI be?
An obedient one?..
Hmmmmmmm….......

What “sane” intelligence would be OK being subservient to a person?

If it’s just fine being our slave, what is remotely intelligent, or sane, about that existence?
Some of us believe in an afterlife. That affects our thinking greatly. Would a self-aware AI robot “kill” itself because a person says so?..

Or is the question more about what clever and hopefully quick way we are wiped from existence?... That premonition is playing out fine, with AI playing only a minute role…
As a species, we’re already on a fickle edge. A small nudge should be sufficient.
