General Question

RedDeerGuy1's avatar

How should auto-pilot in cars and other systems be programmed to answer ethics problems?

Asked by RedDeerGuy1 (24945points) July 8th, 2016

For example, the trolley problem for driverless cars?


44 Answers

JeSuisRickSpringfield's avatar

For now, at least, I think the car should switch back to manual control so that the human operator has to make the decision (thereby avoiding all the complicated questions regarding automation until there has been enough time to work out the moral and legal ramifications of the other various options).

johnpowell's avatar

@JeSuisRickSpringfield :: That is sort of the problem. For me, there shouldn’t be levels of automation. It should be all or nothing. If I am having a wank while staring at my iPad, the car should not assume it will be able to grab my attention and that I will be able to react quickly enough to avoid a truck that swerves into my lane.

I totally believe that you could have it all automated and be much safer. I just think that when you toss in stupid people the thing goes to shit.

ibstubro's avatar

I totally agree, @johnpowell. Either drive or don’t, but I certainly don’t want the car suddenly demanding critical decisions of me.

Perhaps the cars could be programmed to recognize other driverless cars and react in opposing methods.
As to the trolley problem, I don’t think you could legally program a car to kill one person, regardless of the circumstance. Nature (or a capricious and fickle God, take your pick) would have to take its course. You can add game changers to the scenario endlessly, to no good end. What if it’s 5 Muslims and the single person is Donald Trump?
(Okay, the car can wake me up for that one!)

Lightlyseared's avatar

If all the cars (and other road vehicles) are automated, it wouldn’t be a problem. Most traffic accidents happen because a driver behaved in a way that others couldn’t predict. If everything is automated, every vehicle can communicate with every other vehicle, so everything knows what everything else is doing and, more to the point, will be doing a few moments from now, and can plan accordingly.

zenvelo's avatar

A properly automated car avoids the trolley problem by stopping when it senses something in its path.

It might not avoid being hit, but it will not be the vehicle that collides with something else.

JeSuisRickSpringfield's avatar

@johnpowell Then for now, there should be no automation. But if there is going to be automation, then it needs to be well known that until we have a legally defined solution for these problems, the vehicle can return to manual control at any time (and therefore one probably shouldn’t be having a wank or staring at one’s iPad while traveling in the driver’s seat).

@Lightlyseared Other drivers aren’t the problem. Human beings running into the road are. That’s what the trolley problem is: there are only two possible paths to take, and both have pedestrians on them. There is currently an ongoing debate about how automated cars should be programmed to deal with situations like these (or with situations where the car has to either kill the pedestrian or kill the driver).

@zenvelo How does the car stop when the thing in its path runs out in front of it with inches to spare? I get that most people’s instinct when confronted with thought experiments like this is to come up with some way to pretend the problem isn’t real. But that’s not how we reach solutions.

zenvelo's avatar

@JeSuisRickSpringfield It slams on the brakes just like a driver does now. Or, it swerves sufficiently to avoid collision. But that isn’t an ethics problem like the OP describes. That is a problem for developing sensing devices sufficient to anticipate a child or a dog running out into the street.

negative's avatar

You missed a point: cars are natural objects, so a car simply obeys natural law, disregarding favoritism, and nature decides who survives and who does not.

Zaku's avatar

I think that ethically, they should all refuse to drive where there are life forms anywhere near their possible path.

LostInParadise's avatar

This is a good question. How do we program computers to answer questions that people disagree on?

Two things we can all agree on. Robotic devices should use their sensors to try to avoid causing injuries to people. If faced with a situation where injuries are possible then they should use an alternative approach that avoids injuries if such an alternative is available.

If injuries are unavoidable, we might also agree that the robot should have a human decide what to do if there is a human around and if there is sufficient time. Failing that, there are two general strategies, and I am not sure which is the better choice. One strategy would be to ignore all alternatives and just execute the original plan. The other would be to choose the alternative which the robot calculates to cause the least amount of harm.
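The second strategy described above, picking the alternative the robot calculates to cause the least harm, could be sketched roughly like this. All the names and the harm scores below are hypothetical illustrations, not a real autopilot API; estimating harm is the genuinely hard (and contested) part:

```python
# Hypothetical sketch: choose the evasive maneuver with the lowest
# estimated harm, falling back to the original plan on a tie.
# Names and scores are illustrative, not a real autopilot API.

def least_harm_action(original_plan, alternatives, estimate_harm):
    """Return the action whose estimated harm score is lowest.

    original_plan -- the action the vehicle is already executing
    alternatives  -- other feasible actions (brake hard, swerve, ...)
    estimate_harm -- function mapping an action to a numeric harm score
    """
    best = original_plan
    best_harm = estimate_harm(original_plan)
    for action in alternatives:
        harm = estimate_harm(action)
        if harm < best_harm:  # strict comparison: ties keep the original plan
            best, best_harm = action, harm
    return best

# Toy example with made-up harm scores:
scores = {"continue": 5, "brake_hard": 1, "swerve_left": 3}
chosen = least_harm_action("continue", ["brake_hard", "swerve_left"],
                           scores.__getitem__)
print(chosen)  # brake_hard
```

Note that the tie-breaking rule encodes the first strategy as a special case: when no alternative is strictly better, the car simply executes its original plan.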

Espiritus_Corvus's avatar

First of all, what is being touted as autopilot in cars today is nothing more than glorified cruise control. There have been two notable accidents with “autopiloted” Teslas. A recent Model X crashed while supposedly on autopilot on the PA turnpike. While the operator was engaged in animated conversation with his son, the car smacked a guardrail and was thrown into oncoming lanes, then collided with two other vehicles. There were serious injuries in all three vehicles, but everyone survived.

There is reason to believe that the autopilot function was not actually engaged at the time; we only have the driver’s word on this at the moment. But no matter what the investigation determines, the police have stated that they will charge the operator for the accident, because the driver is ultimately responsible for the performance of their vehicle.

Earlier, in Florida, a Tesla S crashed while the operator was watching a Harry Potter DVD, causing two deaths. It has been determined that the autopilot was engaged at the time of the accident.

We have had real autopilot for years in aircraft and in ships. I have autopilot on my sailboat. It is guided by various systems, including medium range radar. When the autopilot on an aircraft is engaged, the pilots do not leave the cockpit, or occupy themselves in anything other than monitoring the performance of the aircraft. I don’t leave the wheel when I engage my autopilot. If I were to go below when the autopilot was engaged and hit another vessel I would be charged.

People need to know before this technology becomes commonplace that driverless cars still need to be monitored by a human. Even when a vehicle is placed on a computer-controlled track such as some subways and bullet trains, there is still a human operator on station at all times to over-ride the autopilot system when necessary.

We all know enough about how technology can go wrong, whether through faulty or degraded electronics or through deteriorating, buggy software, to know that it is not perfect and never will be.

LostInParadise's avatar

I agree that we have not yet succeeded in building driverless cars, but I am not convinced that such cars are impossible. AI has made some significant advances lately and it is entirely appropriate to ask questions like this one now about how we want the AI to operate.

Dutchess_III's avatar

It would take me a loooooonnnnnggggg time to feel confident about switching off all of my sensors and leaving my life in the hands of a decision-making machine.

gorillapaws's avatar

@Dutchess_III So you don’t ride in elevators?

SecondHandStoke's avatar

An overwhelming majority of accidents are due to driver error.

If others wish to invest in, and drive driverless cars, great!

Smoother and more predictable traffic around me. No more sudden lurches into the left lane for no reason. Yay! No more harder-than-required panic braking. Yay! Safe following distances kept by more cars near me. Super yay!

I will fight to stay with hands on driving as long as the market, legislation and my ability allows.

Autonomous cars should prioritize the well-being of whoever paid for them.

Dutchess_III's avatar

It’s not comparable, @gorillapaws. What I am saying is that I am so used to being 98% alert when I’m driving, constantly making decisions, it would be hard for me to shut that off. I’m not in the habit of being alert in an elevator, because I have no reason to be, unless I’m waiting for that exact second, at the top of its ascent, to JUMP!

Seek's avatar

Autopiloted cars can’t come fast enough for me.

Dutchess_III's avatar

I think it will be a huge improvement. No more drunk driving for one.

LostInParadise's avatar

For those of you who are reluctant to turn the driving over to a computer, how much extra in car insurance would you be willing to pay for the privilege? If driverless cars can eventually be perfected, they will be adopted.

SmartAZ's avatar

The question is self-contradictory. The nature of an ethics problem is that there is no answer.

The answer is to abolish human-controlled machines. Machines don’t have ethics problems. For example, there was no way to prevent train wrecks until time zones were invented and all conductors were required to carry watches. Presto: no more wrecks.

kritiper's avatar

Thank the stars above that automated transportation systems will not have egos to deal with. Everything works smoothly, intricately, autonomously, like a finely crafted watch without traffic lights, stop signs, speed limits, and blame games.

RealEyesRealizeRealLies's avatar

Just another hacker target.

SecondHandStoke's avatar

@LostInParadise

No amount, zip, nada.

I would also vote against misuse of insurance companies as a means for the state to bribe drivers into choosing fully autonomous vehicles.

The wonderful activity that is driving, I’ll pay to keep it in my life.

@kritiper

The problem is hardly ego when compared to incompetence.

LostInParadise's avatar

@SecondHandStoke , You have an odd view of how insurance companies work. Do you think that lowering rates for non-smokers is a way of bribing people not to smoke? Insurance companies are out to maximize their profits, same as any other company. If letting a computer do the driving cuts down on their costs, they have every right to charge less for people who drive that way.

JeSuisRickSpringfield's avatar

@zenvelo You’re still avoiding the actual question. Not everything in life can be anticipated. What should the car be programmed to do when there isn’t enough time or space to slam on the brakes or swerve sufficiently?

(Arguably, I have avoided the question, too. So here’s what I think. When fully automated cars become possible, I think they should be programmed to save the driver if the car must choose between the driver and a pedestrian. And if the car winds up in a trolley scenario where it must kill either one or five, I think it should kill the one.)

Dutchess_III's avatar

The car would be programmed to make the “logical” move….but humans are more intuitive than that so it may not be the move we might make. But….I think, overall, accident rates would go down. (I say this thinking back on the stupid shit I witnessed just today…and everyday that I get on the road.)

Seek's avatar

Rick, what would you ask if you couldn’t swerve or slam fast enough?

I mean, it’s a robot, not Superman.

zenvelo's avatar

@JeSuisRickSpringfield I did not avoid the question. I did not say there would never be an accident. But the trolley question does not arise in a properly programmed auto.

And yes, a child might step in front of a car faster than the braking capability of the car. But that is not the car’s fault, and there would not be a human driver with a better reaction time and better handling to avoid a child in that situation.

gorillapaws's avatar

I recently read an article about this dilemma. Apparently they did some polling, and the vast majority of people wanted vehicles to be programmed based on utilitarian principles (save as many lives as possible, even if that means putting the occupants of the vehicle in greater harm). The kicker is that those same people only wanted to buy automated cars that protected the occupants at all costs.

kritiper's avatar

@SecondHandStoke Can a computer be incompetent?

JeSuisRickSpringfield's avatar

@Seek I’m not sure I understand your question. That the car isn’t Superman is my point (contrary to @zenvelo‘s previous answers, which seemed to imply that the cars can be programmed to never face difficult situations like @RedDeerGuy1 is asking about). And I’ve already said how I think the cars should be programmed.

@zenvelo You are avoiding the question because the question is what the car should be programmed to do in certain situations (like the trolley problem scenario) and you are claiming that the situation would never arise. But you can’t consistently hold to that position and admit that a child might step in front of a car faster than its braking capability because it’s a short step from there to creating a trolley problem based on the car’s limitations. No one is saying it’s the car’s fault. The question is what the car should be programmed to do when situations arise that it didn’t create. Refusing to answer is avoiding the question.

zenvelo's avatar

@JeSuisRickSpringfield But I did answer that: brake as quickly as possible, or swerve. Which of those two would be decided by the computer evaluating all inputs and making the decision in microseconds.

There will be unavoidable accidents. Using a computer driver will allow the car to take the same evasive action currently available to a human driver, but with faster reaction time and a better evaluation of all choices.

Dutchess_III's avatar

Rick and I watched a marathon of “Why Planes Crash.” Some of the crashes occurred because the pilots were concentrating on one thing, assuming the computer had control of the other thing… and that assumption was fatal. The computers were malfunctioning.
But still, flying is safer than it used to be.

Espiritus_Corvus's avatar

A car that could answer ethical problems. That would be an interesting feature. There you are, in your car, barreling down the road kicking back in your ultra-comfortable plush leather cockpit. I like the female voices that come with the GPS. The Italian one is extremely sexy. Love the accent.

“Marina?”

“Pronto?”

“Marina, I have a problem. I’m in love with a woman at work, but she is married.”

“Ahhh, you are been a bad boy again, Meester Smeeth?”

“No, Marina, I am really in love this time. What should I do?”

“Si, si, si… The ethics of natural law state tha’ you should follow the laws of nature. In this case, that would be the lust you have for this woman. Then there is Utilitarianism, a way of evaluating moral decisions based on the amount of pleasure that it provides. Utilitarianism defines good as what brings pleasure to people. Aristotle argued that happiness is the ultimate good. Therefore, if this woman makes you happy, you are doing the ultimate good. Nietzsche says you have no will other than the will to power, and that those who pursue power are the supermen—the only ones who have the capacity for the greater good. You aspire to the power that comes with love for this woman, which is the same as aspiring to the superman, so you are pursuing the greater good.”

“Soooo…. it’s OK, then?”

“Forse… But Gianni Molinari would probably say that you should be castrated.”

“What? Who the hell is Gianni Molinari?”

“He is a follower of virtuism—and the husban’ of the woman you are in love with. Have a good day, Meester Smeeeeeeth.”

ibstubro's avatar

I now think it’s obvious that the response to the Trolley Problem would be random selection.
Pose a scenario, then ask a random sampling of people how they would respond.
Program the top 50 responses into all cars, then have the computer randomly pick one of those 50 when it must react.

If the car cannot find a matching scenario? Then the self-driving mechanism, in tandem with the other vehicle, shuts off and allows the collision, and the scenario is presented to the court for investigation.
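The crowd-sourced scheme proposed above (poll people on each scenario, program the top responses into all cars, pick one at random at decision time, and disengage when no scenario matches) could be sketched like so. The scenario names and responses are hypothetical placeholders; a real system would need vastly richer scenario matching:

```python
import random

# Hypothetical sketch of the crowd-sourced scheme: store the top
# polled responses per scenario, then pick one uniformly at random
# when that scenario actually arises. All names are placeholders.

SCENARIO_RESPONSES = {
    # scenario id -> top polled responses (illustrative, not real data)
    "pedestrians_both_paths": ["brake_hard", "swerve_left", "swerve_right"],
}

def react(scenario, rng=random):
    responses = SCENARIO_RESPONSES.get(scenario)
    if responses is None:
        # No polled scenario matches: disengage and record the event
        # for later review, as the post suggests.
        return "disengage_and_log"
    return rng.choice(responses)

print(react("pedestrians_both_paths"))  # one of the three polled responses
print(react("unknown_situation"))       # disengage_and_log
```

One consequence of the random draw is that identical cars in identical situations can act differently, which is arguably the point of the proposal: no single party chose who gets hit.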

Dutchess_III's avatar

I am watching “Why Planes Collide.” Because of this episode I’m watching, I had to track down this question.

Not sure of the year, 90’s to mid 2000’s, but there was a Russian passenger plane, carrying a bunch of kids on a field trip, and a German cargo plane in the same airspace. This airspace was controlled by a Swiss company named Skyguide. There were supposed to be two air traffic controllers manning that airspace, but one had left to take a break, which left only one controller to man two computer screens “several feet apart.” This was against aviation rules, but one that was tacitly permitted by the Swiss company’s management.

The passenger plane and the cargo plane were at the same altitude, heading for a collision. An onboard collision-avoidance system, then recently introduced, called TCAS, automatically kicked in, told the pilot of the Russian plane to ascend, and told the pilot of the German plane to descend.

Then the one lone human air traffic controller finally realized the planes were in deep trouble and, not knowing of the computer’s “commands,” gave the Russian plane the opposite order: to descend.
The Russians taught their pilots that human commands superseded computer commands, so the Russian pilots hesitated for a second before following their training and obeying the air traffic controller’s command.

The planes collided and everybody aboard died. Except, that is, the air traffic controller.

The End.

PS. Wait. Breaking news. I paused the program before it was finished. Apparently the air traffic controller was treated for PTSD.
Then the husband and father of three of the victims, his wife and two daughters, tracked the air traffic controller down a year later and stabbed him to death.

zenvelo's avatar

The fallacy of this post is that there needs to be an answer to “the Trolley Problem”. But the Trolley Problem won’t exist in a properly programmed car, because the car won’t be a runaway.

And @Dutchess_III, your example wouldn’t have happened with properly programmed autos, because they would not have been in jeopardy to begin with, because they would not have invaded each other’s space. And if the situation did arise, the computerized avoidance would not be overridden by faulty human logic.

ibstubro's avatar

The people are runaway, @zenvelo.

How does the trolley deal with irrational and unpredictable people?

zenvelo's avatar

@ibstubro You are changing the problem.

If there are runaway people, the car stops. No ethics problem. The car is not at fault or in a dilemma if people run in front of it, just as a driver on the freeway today is not at fault for hitting someone who runs across.

ibstubro's avatar

Okay.
Sorry, @zenvelo.

LostInParadise's avatar

@zenvelo, I took the question to be one where the car had to decide between two different cases where it would run over people. It may not be likely, but suppose several people run in front of it and the car has the option of turning onto another street and hitting a single person crossing the street.

It is not the specific problem that is important so much as the general type of decision that would have to be made.

zenvelo's avatar

@LostInParadise The decision is to immediately STOP and not move again until the path is clear.

If the car has the situational ability to decide to turn into another street which endangers one, it has the situational ability to stop. The car is not a runaway.

LostInParadise's avatar

I don’t want to drag this out. A car is not able to stop on a dime. What I was thinking of was a case where some people jump in front of the car in such a way that there is not sufficient distance to stop, but that the car would be able to swerve into a cross street where someone is crossing the street.

This may not be the best example, but the point of the exercise was to ask about how we would program ethical decision making into a computer. Should computers be able to do calculations to determine whether an alternative action is more appropriate than the direct action that it is currently performing?

Seek's avatar

The car is able to stop faster than a car operated by a human is able to stop.

If faced with an obstruction, the car attempts to stop. If the human is incidentally hit while the car is stopping, the human has made poor life choices.
