Social Question


Have you any examples of AI causing actual harm to a human being?

Asked by flutherother (34928 points) November 17th, 2022

My question was inspired by this article.


6 Answers

Zaku:

The most recent example I know of is this fatal Tesla rampage (still under investigation): https://www.reddit.com/r/ThatsInsane/comments/ytvo2g/tesla_lost_control_when_parking_and_took_off_to/

According to this article, there have been “18 fatal crashes reported since July 2021 that had to do with driver assistance systems”, and “38 special investigations of crashes involving Tesla vehicles where advanced driver assistance systems such as Autopilot were suspected of being used” since 2016.

There is also the AI Incident Database, and here is a link to its incidents involving harm to physical health/safety.

Just looking at some of the first results, we’ve got:

* “24-year-old worker was reportedly adjusting a metal sheet being welded by the machine when he was stabbed by one of its arms.”

* “The technician was in the process of installing the robot with a colleague when he was struck in the chest by it and pressed against a metal plate, according to the Financial Times. The technician later died from the injuries.”

* “an unmanned Delhi Metro train on the yet-to-be-inaugurated Magenta Line derailed, crashing into a wall.”

* “In 2016, a KnightScope K5 robot ran over and injured a toddler at a mall in California.”

Not to mention intentional military uses of AI to hit targets.

gorillapaws:

Google’s AI was tagging black people as “gorillas.” (source) Even I understand how fucked up that is.

The other example that comes to mind is racial bias in the training data. You can unwittingly create racist AI models if the training data isn’t carefully considered, which can lead to systemic oppression by race. Creditworthiness scoring, insurance risk assessments, healthcare, and police patrol areas can all potentially be subject to this.
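
Not from the thread, but here’s a minimal sketch of the mechanism @gorillapaws describes, assuming scikit-learn and entirely synthetic data (every name and number below is illustrative): even when the protected attribute is withheld from the model, a correlated proxy feature lets the model reproduce the historical bias.

```python
# Hypothetical sketch: a model trained on historically biased approvals
# reproduces the bias through a proxy feature, even though the protected
# attribute itself is never given to it. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)          # protected attribute (withheld from model)
proxy = group + rng.normal(0, 0.3, n)  # correlated proxy, e.g. neighborhood
income = rng.normal(0, 1, n)           # a legitimately predictive feature

# Historical labels: past approvals were biased against group 1.
historical_approval = (income + 1.0 - 1.5 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([proxy, income])   # note: 'group' is NOT a feature
model = LogisticRegression().fit(X, historical_approval)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

Run it and group 1’s predicted approval rate should come out far below group 0’s, even though the model never sees the group label; that’s the “unwittingly” part.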

Lightlyseared:

The CIA’s Skynet program (I fuck you not). This used machine learning on mobile phone metadata in Afghanistan to decide on drone targets. There was not necessarily any human oversight to review the intel before they fired.

Dutchess_III:

@gorillapaws, well, we ARE primates!

kritiper:

Self-driving cars crashing.

