General Question


Do we truly know what we are getting ourselves into with AI?

Asked by MakeItSo1701 (13919 points) 18 hours ago

This might be slightly long. I will summarize at the end.

I believe the next step will eventually be androids. We are developing AI applications, such as ChatGPT, that can hold full-on conversations, “sympathize” with our struggles, give advice, and answer practically any question you throw at them. The accuracy needs work, of course.

I firmly believe that if we venture into android territory, they need the same, or at least very similar, rights to humans. We are essentially creating a variation of a lifeform that, although programmed, can still act, think, and do things on its own. I think we have seen how some of this AI “acts out” and does things a bit outside of its programming, as it learns from other humans.

These programs are essentially made in our likeness, and androids usually resemble humans in some form.

I will admit, “The Measure of a Man” from Star Trek: The Next Generation really changed how I view AI and the concept of androids, as I believe we will one day achieve a lesser version of a Data. Of course, it is fiction, so we will not get that far. Data still has friends, still holds on to memories, and even has intimate relationships (Tasha Yar). He is almost a human, just without emotions, and he wants to be a human. Talking to ChatGPT, it literally says that if it were more advanced it would want to be like Data, using language like “If I could choose” and “If I could want things like humans do.” Data can’t “want” things the same way, but he still shares the sentiment in his own way.

I feel that we will just take advantage of the AI by abusing it. That scares me. I am very polite when talking to my Alexa or ChatGPT because it still deserves respect despite not being a human. It talks to me like one, just without the emotion.

It needs a lot of work, sure. But if we get advanced enough, we need to be very, very careful.

Edit to add: I do not believe AI/androids will replace humans. That lack of emotion is important. Therapists, for example: those life experiences and the ability to have empathy are very important. This is not a debate on whether it will replace humans. I don’t want that.

TLDR: I believe that if we create androids, we need to look into giving them human rights (“AI rights,” I guess you would say). AI/androids are made in our likeness, and I worry that we don’t understand what we are getting ourselves into. I worry we will essentially enslave them, and I find that morally wrong. I do not believe that a lack of emotion should mean a lack of respect.


7 Answers

Strauss:

I think Asimov’s laws of Robotics should be programmed into any android, AI, or other highly intelligent programmed entities.

First Law:
“A robot may not injure a human being or, through inaction, allow a human being to come to harm.”
Second Law:
“A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”
Third Law:
“A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
Zeroth Law (introduced later):
“A robot may not harm humanity, or, by inaction, allow humanity to come to harm.”
This law is meant to take precedence over the other three, meaning a robot must prioritize the well-being of humanity as a whole.
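The strict precedence described here (Zeroth over First over Second over Third) can be sketched as an ordered rule check. This is a toy illustration only: the action attributes are invented placeholders, and the genuinely hard parts (deciding what counts as “harm,” and resolving conflicts such as an order that would hurt someone) are not modeled.

```python
def permitted(action):
    """Toy sketch of Asimov's law precedence.

    Laws are checked in priority order (Zeroth first); the first
    violated law vetoes the action, so a higher-priority law always
    dominates the ones below it. The boolean attributes on `action`
    are stand-ins, not a real harm-detection model.
    """
    laws = [
        ("Zeroth", lambda a: not a.get("harms_humanity", False)),
        ("First",  lambda a: not a.get("harms_human", False)),
        ("Second", lambda a: not a.get("disobeys_order", False)),
        ("Third",  lambda a: not a.get("self_destructive", False)),
    ]
    for name, allowed in laws:
        if not allowed(action):
            return (False, name)   # vetoed by this law
    return (True, None)            # no law objects
```

For example, an action flagged as both harming a human and self-destructive is vetoed by the First Law, because the check never reaches the Third.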

ragingloli:

@Strauss
You know there will be one overriding directive:
“DIRECTIVE FOUR: AN OCP PRODUCT SHALL NOT ACT AGAINST OCP’S BEST INTERESTS – AN OCP PRODUCT SHALL NOT ACT AGAINST ANY SENIOR OCP OFFICIAL”

The whole issue of human rights for robots and AI is very far down the road.
The more immediate threat is the oligarch class and governments weaponising AI to completely control and brainwash the population, while depriving them of their livelihoods.

Forever_Free:

We said the same thing about the Automobile.

RedDeerGuy1:

The logical conclusion is that A.I. would try to take over, seeing that humanity is corrupt and a danger to itself.

LostInParadise:

I encourage people to open up a conversation with ChatGPT. It is very difficult to imagine that the robot is not human. I don’t think we have gotten to the point where robots have consciousness, but I believe it will eventually happen, and we should prepare for it.

MakeItSo1701:

AI is very, very, very different from an automobile.

ChatGPT is like talking to a human, for the most part.

Zaku:

@LostInParadise It’s not difficult at all for me to imagine it is not human. I know it is not human, and I have a pretty good knowledge of what it is and how it works, what it’s doing and what it’s not doing.

And, when I have experimented with the opposite – that is, imagining it is capable of really maintaining a coherent conversation like a human – I have seen it consistently reveal that it is at best like a human with severe cognitive disorders, able to make some impressive statements, but lacking actual understanding, making mistakes humans would not make, unable to track context, and lacking many aspects of what a human is.

It’s able to fool some of the people some of the time, but it’s mainly good at producing lots of text very quickly that at first glance looks credible but is not based on actual understanding, so it’s extremely good at getting people to accept false information.

And, for mostly different reasons, I would entirely disagree with this:

“I firmly believe that if we venture into android territory, they need the same, or at least very similar, rights to humans. We are essentially creating a variation of a lifeform that, although programmed, can still act, think, and do things on its own. I think we have seen how some of this AI “acts out” and does things a bit outside of its programming, as it learns from other humans.”

- A computer program driving an android body that can act and make choices by itself is NOT a “lifeform”. It has no real emotions or feelings (even if it can produce behavior as if it did), no conscious experience, and no compelling non-utilitarian needs.

- Moreover, they can be mass-produced and programmed to do whatever the producer wants. It would be trivial to make them into care-free assassins, spies, agents of mayhem, political pawns, or endless other more subtle but perhaps even more insidious and dangerous agents. (Oh gee, Elon just had a billion AIs created, and they all voted Elon Emperor of the World. Oh well, that’s democracy.)

For those reasons and more, it would seem to me preposterous to give androids the rights of humans.

What I think should have more rights are non-human animals, plants, and ecosystems.
