General Question

Do we truly know what we are getting ourselves into with AI?
This might be slightly long. I will summarize at the end.
I believe the next step will eventually be androids. We are developing these AI applications, such as ChatGPT, which can hold full on conversations, “sympathize” with our struggles, give advice, and practically answer any question you throw at it. The accuracy needs work, of course.
I firmly believe that if we venture into android territory, they will need the same rights as humans, or at least very similar ones. We are essentially creating a variation of a lifeform that, although programmed, can still act, think, and do things on its own. I think we have already seen some of this AI "act out" and do things a bit outside of its programming, since it learns from other humans.
These programs are essentially made in our likeness, and androids usually resemble humans in some form.
I will admit, "The Measure of a Man" from Star Trek: The Next Generation really changed how I view AI and the concept of androids, as I believe we will one day achieve a lesser version of a Data. Of course, it is fiction, so we will not get that far. Data still has friends, still holds on to memories, and even has intimate relationships (Tasha Yar). He is almost human, just without emotions, and he wants to be human. When I talk to ChatGPT, it literally says that if it were more advanced, it would want to be like Data, using language like "If I could choose" and "If I could want things like humans do." Data can't "want" things the same way, but he still shares the sentiment in his own way.
I worry that we will simply take advantage of AI by abusing it. That scares me. I am very polite when talking to my Alexa or ChatGPT, because it still deserves respect despite not being human. It talks to me like one, just without the emotion.
It needs a lot of work, sure. But if we get advanced enough, we need to be very, very careful.
Edit to add: I do not believe AI/androids will replace humans. That lack of emotion is important. Take therapists, for example: their life experiences and their ability to feel empathy are very important. This is not a debate on whether AI will replace humans. I don't want that.
TLDR: I believe that if we create androids, we need to look into giving them human rights. "AI rights," I guess you would say. AI/androids are made in our likeness, and I worry that we don't understand what we are getting ourselves into. I worry we will essentially enslave them, and I find that to be morally wrong. I do not believe that a lack of emotion should mean a lack of respect.


7 Answers