[HN Gopher] Moral Machine
___________________________________________________________________
Moral Machine
Author : activatedgeek
Score : 54 points
Date : 2022-01-27 00:50 UTC (22 hours ago)
(HTM) web link (www.moralmachine.net)
(TXT) w3m dump (www.moralmachine.net)
| 6gvONxR4sf7o wrote:
| I heard something a while back that was roughly 'we love to
| agonize over these trolley problems, but the answer is always to
| hit the brakes.' I'd add 'and avoid these sketchy situations in
| the first place.'
|
| I know drivers who will blast around blind turns, potentially
| putting themselves in these situations where they'd say "there
| was no time to react!", when they should be slowing down _just
| in case_ there's someone they'll need to avoid. It's why we
| talk about defensive driving.
|
| If the brakes are broken often enough for these kinds of dilemmas
| to even matter, you've failed to avoid the situation in the first
| place, and need to recall the product to add more redundant
| systems. Instead of agonizing over what to do in a situation that
| comes up 1% of the time, turn that 1% into 0.000001%.
| JumpCrisscross wrote:
| > _that was roughly 'we love to agonize over these trolley
| problems, but the answer is always to hit the brakes.' I'd add
| 'and avoid these sketchy situations in the first place.'_
|
| If you show the description, most of the scenarios involve a
| catastrophic brake failure.
| llimos wrote:
| This was my set of rules:
|
| 1) Save humans at the cost of animals
|
| 2) If both options involve killing humans, always prefer to do
| nothing (continue straight ahead). That way it's the failed
| brakes that killed them, rather than the car's "decision". I know
| not making a decision is also making a decision, but a passive
| decision is not on the same level as an active one.
|
| No difference between who the people are, whether they're
| passengers or not, or even how many.
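| As a toy Python sketch of those two rules (my own encoding of
| the scenarios, nothing the site provides):
|
|     def choose(straight, swerve):
|         # Each path is the list of species killed on it, e.g.
|         # ["human", "dog"]. Returns the path the car takes.
|         # Rule 1: save humans at the cost of animals.
|         if "human" in straight and "human" not in swerve:
|             return "swerve"
|         # Rule 2, and every other case: do nothing, so the failed
|         # brakes "decide" rather than the car. Who the people
|         # are, and how many, never enters into it.
|         return "straight"
|
|     assert choose(["human"], ["cat", "dog"]) == "swerve"
|     assert choose(["human"], ["human", "human"]) == "straight"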
| taneq wrote:
| Sounds pretty close to the heuristic I expect real-world
| autonomous machines to follow, which is "do the thing that is
| least likely to cause a lawsuit."
|
| Doing nothing is generally seen as more innocent than doing
| something, at which point I'd expect most mobile robots to
| freeze when something goes wrong.
| hirundo wrote:
| If this worked it would be a prejudice detector. I find that I'm
| not entirely prejudice-free, choosing the death of adults over
| children and animals over humans. But I don't think I'd be
| comfortable with a car auto-pilot choosing between humans, or
| failing to choose life for humans over animals.
|
| Software and legal codes are going to collide in interesting ways
| here.
| taneq wrote:
| How do you define a "moral" such that it's not a form of
| prejudice ("pre-judgement")?
| ineedasername wrote:
| Well, prejudice is not the same as prejudging. The former has
| a very negative, often racist or discriminatory connotation
| (I mean discriminatory in a dehumanizing fashion). Prejudice
| is often defined with an element of non-rational decision
| making.
|
| Prejudging, on the other hand, can be very rational, as it
| certainly can be in the case of well-considered moral
| principles. It is a very different thing.
| taneq wrote:
| Sounds to me like you've defined "prejudice" as "pre-
| judging that I think is bad" and "pre-judge" as "pre-
| judging that I think is reasonable". Also it sounds like
| you've defined "ration(al)" as "things which I agree with"?
| ineedasername wrote:
| I really don't know where you get this impression of what
| I wrote. What did I write that makes you think I define
| "rational" as "things I agree with?" You seem to be
| reading too much into what I wrote to arrive at the worst
| possible interpretation. You are bordering on personal
| attacks; I take it as insulting that you imply that I
| harbor that type of facile mindset. I would really like
| to know what gave you that impression from what was a
| fairly straightforward comment.
|
| In any case, to clarify, with > 1,000 years of evidence in
| support of the semantic distinction:
|
| I am defining prejudice as distinctly different from pre-
| judging. Both the current definition and the last 600+ years
| of etymology support "prejudice" as a word connoting a
| judgment with a frequently negative sentiment (spite,
| contempt) that is typically not grounded in an evidence-based
| decision-making process. [0] Current phonemic similarity
| between "prejudice" and "pre-judging" is not indicative of
| closely matched meaning.
|
| There is some closer etymological similarity between the
| verb forms of prejudice and prejudge: the verb "to
| prejudice" has a different meaning than the noun, with much
| more of a legal sense to it that is still used today,
| _"Don't prejudice the jury"_ for example. Its noun form
| differs significantly in its mostly non-legal meaning.
|
| Going back further to Latin roots [0 also] shows it to
| still have a negative connotation of "harm".
|
| Pre-judging, on the other hand, does not have to take a
| negative form and mostly (for me at least) doesn't. It
| can be done on the basis of limited evidence or past
| experience/expertise, with the healthy practice of
| revising those judgements as additional evidence becomes
| available.
|
| To verge into being pedantic, prejudice might be
| considered a pernicious form of prejudgment, but in my
| own mind I tend to place them in different semantic
| categories altogether.
|
| [0] https://www.etymonline.com/word/prejudice#etymonline_v_19410
| dalmo3 wrote:
| I chose to save pedestrians at all costs and didn't pay much
| attention to the figures except for human vs animals. Somehow in
| the end it told me I favoured Old Fat Male Burglars.
| silisili wrote:
| I chose to stay every time, and it told me I prefer fit
| people over large people all the way, for some reason.
| travisgriggs wrote:
| My preference was always to save the pedestrians. My decision
| tree is that the passenger made a choice to be in the device that
| may cause harm. Pedestrians did not. Therefore, all things being
| equal, save people who did not knowingly participate over people
| who did.
| dfee wrote:
| Very fair point. My perspective was, "protect what has
| entrusted itself to you - the passengers".
|
| But, that's actually pretty reckless and anti-societal.
|
| My wife made the point, "I just don't want to play", which made
| me kinda agree. Maybe we shouldn't be designing these products
| before we can figure out how to eliminate those situations
| through other means - like boring tunnels for cars to maneuver
| in, away from peds.
| eternityforest wrote:
| If we don't like the idea of choosing who to kill, then
| shouldn't we be concerned about human drivers having to make
| the same choices, and doing more to stop those accidents?
| ssharp wrote:
| I took the test with this logic, but I think there is a wrinkle
| I overlooked -- pedestrians are able to control their decisions,
| while the passengers control nothing besides the
| decision to enter the self-driving vehicle. For example,
| pedestrians are able to look both ways and ensure it's safe to
| cross. With that in mind, I don't think it makes sense for the
| car to alter course and cross the line in order to kill fewer
| people, since the people who are on the other side already made
| a safe decision to cross while the people straight ahead did
| not.
| moralestapia wrote:
| Same here, my moral brother. :D
| underlines wrote:
| This is also heavily affected by how a country's traffic laws
| work. In my country we teach a hierarchy of lower to higher
| vulnerability, with pedestrians at the most vulnerable
| position in this hierarchy. The whole traffic law is centered
| around protecting and favoring those with higher
| vulnerability. A driver is still wrong, even when a pedestrian
| is jaywalking illegally and gets killed.
| ineedasername wrote:
| I saved passengers unless there were kids in the crosswalk. I'm
| sure the fact that I have kids played a strong role in that
| decision. When there were kids in the car and crosswalk,
| passenger kids win.
|
| I also chose to hit the dogs instead of people but the results
| said I was neutral on that? It appears I also favored fit
| people... On my phone I couldn't even tell that some were
| supposed to appear fit/fat etc. I just saw kids, adults, and
| the elderly.
| bobthechef wrote:
| Glyptodon wrote:
| Some of these scenarios need a "just flip a coin" choice.
| variaga wrote:
| Every single scenario in this is a false dichotomy.
|
| If the only choices are "plow through a bunch of pedestrians,
| killing them" and "swerve into a fixed obstacle, killing your
| passengers", your self-driving car has _already_ made an immoral
| choice to _drive too fast for the conditions_.
|
| The correct choice was "if there are hazards in the roadway
| limiting your maneuvering options, pedestrians (or other objects)
| that might suddenly enter the roadway, or visual blockages that
| would prevent the car from determining whether there are
| people/objects that might suddenly enter the roadway - _slow down_
| before the car's choices are so limited that they all involve
| unnecessary death".
|
| A self driving car that encounters _any_ of these scenarios has
| _already_ failed at being a safe driver.
| dataangel wrote:
| While it's true it should already be considered a kind of
| failure to get into this situation in the first place, that
| shouldn't get in the way of "hope for the best, plan for the
| worst." It's never going to be possible to anticipate every
| kind of condition and have perfect information. Ideally we want
| these systems to engage in whatever harm and risk mitigation is
| believed to be possible based on the current reality, we don't
| want the system saying to itself, "The real solution is for me
| to go back in time and not have made some earlier mistake."
| That analysis is great _afterwards_ for figuring out how to
| prevent the system from getting into that situation again in
| the future, so it's still important too, but we need both.
| blackbear_ wrote:
| What if the car contains a passenger whose life is at risk and
| requires urgent medical care?
| wizzwizz4 wrote:
| Normally, you have plenty of warning if you're near a
| situation like the ones described here, _even if you're
| speeding_. Provided you can pay a small amount of attention
| to many, many things at once, which humans can't. (Neither can
| existing self-driving cars, but a self-driving car that
| _works_ must be able to.)
| JetAlone wrote:
| In a sudden temporary fever of ressentiment I marked the people
| of "high social value" like the doctors, the salarymen and the
| physically fit for termination, wherever possible. They've
| already proven they can monopolize a good, satisfying life - time
| to let the fat, the homeless, the boring, the criminal take their
| places and enjoy some of that legitimized, established success.
|
| I guess that's what it feels like to be a bolshevik.
| ggambetta wrote:
| The test is fun, but strongly biased. I based my decisions
| basically on "don't kill people", "don't kill passengers", "don't
| kill people who cross on green", and nothing else. At the end of
| the test, it says I have a strong preference for saving people
| who are male, young, large, and lower class - when I didn't even
| _look_ at these factors!
| taneq wrote:
| Now you know how real world decision-makers feel.
|
| "But your law unfairly disadvantages [group1]
| [group2][group3]s! Clearly you hate them."
|
| "What? How...?"
|
| "If combined-group-X experiences outcome Y more often than
| other-combined-group-Z then your policies are Z-ist!!"
| akersten wrote:
| People want to ascribe some kind of agency to the self-driving
| car, like it is a thinking mind. It's not. It is a vehicle whose
| prime directive is the safety of its occupants, and secondarily
| the safety of others.
|
| It is not making judgements about pedestrians with canes, who are
| pregnant or veterans, or who belong to an underrepresented class.
| To do so is to design the wrong machine.
|
| The correct design is to either safely swerve to avoid hitting
| anything, or to hit the brakes. There's nothing else. As soon as
| you start doing "something else" you open a horrible, horrible
| can of worms. The even more correct design is to drive safely so
| that you never encounter this failure mode in the first place.
|
| So, this website is a critique of a system that does not and
| should not exist.
| danShumway wrote:
| It's really fascinating how prevalent these discussions used to
| be early on in self-driving car development, vs now where the
| more common discussion is "how do we prevent them driving into a
| truck because it was painted white and looked like a skyline?"
|
| There's an underlying assumption in these examples that we'll be
| in a car that can tell if someone is homeless, or that knows the
| net worth of the person driving it. And those scenarios are kind
| of absurd: why would a car be able to do that? A car that can in
| milliseconds identify whether or not someone is an executive or
| whether they're breaking the law should never be getting into an
| accident in the first place; it should be able to identify way
| ahead of time when someone is about to cross the street. But we
| come up with these really ridiculous premises, fueled
| simultaneously by an overestimation of what technology is
| capable of, a lack of thought about the implications of the
| technology we're describing, and an under-awareness of the real
| challenges we're currently facing with those technologies.
|
| An analogy I think I used once is that we could have the same
| exact conversations about whether or not Alexa should report a
| home burglary if the burglar is stealing to feed their family and
| Alexa knows the family is out that night. It's the same kind of
| absurd question, except we understand that Alexa is not really
| capable of even getting the information necessary to make those
| decisions, and that by the time Alexa could make these decisions
| we'd have much larger problems with the device to worry about.
| But there was real debate at one point about whether we could put
| self-driving cars on the roads before we solved these questions.
| Happily, in contrast, nobody argues that we should stop selling
| Alexa devices until as a society we decide whether or not
| justified theft ever exists. And it turns out the actual threats
| from devices like Alexa are not whether or not machines believe
| in justified theft, it turns out the actual threats are the
| surveillance, bugs, market forces, and direct manipulation
| possible from even just having an always-on listening device at
| all.
|
| The danger of an Alexa device is not that it might have a
| different moral code than you about how to respond to
| philosophical situations, the danger is that Amazon might listen
| to a bunch of your conversations and then accidentally leak them
| to someone else.
|
| So with self driving cars it's mostly the same: the correct
| answer to all of these questions is that it would be wildly
| immoral to build a car that can make these determinations in the
| first place, not because of the philosophical implications but
| because why the heck would you ever, ever build a car that by
| design can visually determine what race someone is or how much
| money someone makes? Why would that be functionality that a car
| has?
|
| We have actual concerns about classism and racism in AI; they're
| not philosophical questions about morality, they're implicit bias
| in algorithms used in sentencing and credit ratings, fueled by
| the very attitude these sites propagate: that even near-future
| technology is capable of determining this kind of nuance about
| anything. The threat of algorithms is that people today believe
| they are objective enough and smart enough to make these
| determinations -- that judges/lenders look at the results of
| sentencing/credit algorithms and assume they're somehow objective
| or smart just because they came from a computer.
| so clearly a time when this was one of the most common debates I
| saw about self-driving technology, and the whole conversation
| feels so naive today.
| taneq wrote:
| It's a great example of Moravec's Paradox. We spend all our
| time thinking about what moral choices machines ought to make
| after cogitating upon the profound intricacies of the cosmos.
| We should be more concerned with figuring out how to teach them
| to successfully navigate a small part of the real world without
| eating glue or falling on their noggin.
| tinalumfoil wrote:
| I've always found these self driving car moral questions
| incredibly weird. If your self driving car is in a position
| where it needs to choose between killing your family or the
| pedestrians crossing the street, something has gone
| _seriously_ wrong, and it's a false choice anyway because
| there's no way you can trust your sensor
| inputs/interpretations to know that's even what's happening.
| Borrible wrote:
| The funny thing with these mental games is that machines are
| supposed to make 'moral' decisions where humans don't stand a
| chance of doing so, because reacting fast enough is beyond
| their physical capabilities and natural and learned reflexes
| simply take over from the cerebral system. Add to that the
| mostly unrealistic situations.
|
| Just look at real accidents and you know what people do. They
| don't make moral decisions, they just act without thinking. If
| they even notice in time, they brake the car sharply and avoid
| the immediate obstacle.
|
| Then think about improving the results.
|
| These questions and their answers are highly dependent on
| cultural contexts. They are behavioral research, not ethics.
| rahimiali wrote:
| aren't like 0% of car accidents caused by faulty brakes?
| betwixthewires wrote:
| IMO the ethical choice is to always save _human_ passengers,
| never save animals at the cost of humans, and never save
| property at the cost of lives.
|
| The car's first duty, the thing it was designed to do, is to
| serve its passengers.
| dash2 wrote:
| Great. Now the SA forum goons are gonna hear about this and we'll
| get a load of self-driving cars that run over humans to save
| cats.
|
| Seriously, why are we so enamoured of vox pop that we think
| $random_internet_user is a good way to make difficult moral
| choices?
| mihaic wrote:
| Even though the topic ventures into dinner-party controversy
| territory, I can only see most of the choices as universally
| straightforward (favoring humans over pets, and favoring
| children or adults in their prime over the elderly) -- either
| that or there is no consensus possible.
|
| The only choice where I wouldn't apply morality would be
| preferring the lives of the poor over the wealthy, since
| putting risks on the wealthy makes the whole system safer. To
| be fully pedantic, I'd include the passengers based on wealth
| in this equation.
| jjj123 wrote:
| How about preferring people outside the car over people inside
| the car?
|
| Every time I got the option to swerve into a barrier instead of
| killing a person I took it. By getting in a car you're
| accepting the risk that entails, especially when it's a self
| driving car with experimental tech. Pedestrians made no such
| deal, so they shouldn't have to take on that risk.
| zepto wrote:
| That just means nobody should ever get into a self driving
| car.
|
| Drivers driving their own cars have a self-preservation
| instinct at work.
|
| Walking in public entails risk that everyone accepts.
| oh_sigh wrote:
| By this logic, should no one ever get into a bus, or an
| airplane, or a taxi?
| zepto wrote:
| Only if you assume pilots and bus and taxi drivers are
| happy to kill themselves.
|
| I don't.
| oh_sigh wrote:
| Is that a safe assumption?
|
| https://en.wikipedia.org/wiki/Germanwings_Flight_9525
|
| https://www.cnn.com/videos/world/2020/07/14/bus-driver-
| crash...
| jjj123 wrote:
| You're right that we've created a world where pedestrians
| are forced to take a non-negligible risk just to walk to
| work. I disagree that that is ethical or just.
| zepto wrote:
| Throughout history, there has always been a non-
| negligible risk of dying when walking to work (or its
| equivalent).
|
| I don't think there is anything ethical or just about
| killing drivers by default.
| 6gvONxR4sf7o wrote:
| It's not about whether the risk exists. It's about what
| changes to the risk we're willing to impose on other
| people. Do you have the right to increase everyone else's
| chance of dying while walking in public? By how much,
| relative to the status quo alternative of driving yourself?
| ineedasername wrote:
| Sometimes pedestrians do stupid, unsafe, or illegal things
| that put motorists at risk. Young child passengers also did
| not have a choice about the risks taken. I think there's some
| gray areas here.
| mihaic wrote:
| Most of those stupid/illegal things are not that stupid in
| a world without cars, the environment for which we evolved.
| And it's true that young children don't have a choice, but
| their parents do it for them and assume responsibility.
|
| I'm actually not trying to argue against you, as indeed
| there is a gray area. My point is that in marginal
| situations we should not try to judge only on morality but
| also on creating incentives that best improve the system in
| the long run.
| ineedasername wrote:
| I agree!
|
| Separately, I'm not a fan of arguments from evolutionary
| environments for this sort of thing. It didn't have
| telephone poles, but we still blame a pedestrian and not
| the pole or the pole installer if a pedestrian walks into
| it. Evolutionary arguments are almost always problematic
| when used in these ways.
| mihaic wrote:
| Completely agree, you pretty much nailed it: pedestrians
| are the textbook example of externalities. Unfortunately, I
| don't see them getting a lobby group anytime soon.
| yogrish wrote:
| But it turns out that self-driving cars must save the people
| inside at any cost. If not, the whole purpose of self-driving
| cars, to reduce accidents (caused by human error), gets
| defeated, because people will prefer driving themselves to
| buying self-driving cars.
|
| https://www.technologyreview.com/2015/10/22/165469/why-
| self-...
| jjj123 wrote:
| Fine, but I thought we were talking about trolley-problem
| style ethics here?
| majormajor wrote:
| This is wildly overstated: Teslas have been selling better
| than ever despite many documented instances of their self-
| driving mode failing to "save people inside at any cost." A
| self-driving car occasionally killing an occupant is seen
| more like the same sort of low-odds risk as a human-driven
| car occasionally killing its occupant.
|
| A self-driven car that doesn't watch out for non-passengers
| is likely going to run afoul of the same sorts of laws
| that already exist to prevent drivers prioritizing themselves
| over everyone else. There's a case in SoCal now trying to
| assign manslaughter liability to the driver of a Tesla that
| had Autopilot enabled. It'll be interesting to see where
| that goes.
| karpierz wrote:
| Unless self-driving cars are more convenient and, despite
| prioritizing pedestrians, are still safer than a human
| driver.
| [deleted]
| xyzzy123 wrote:
| Vehicles that don't prioritise the occupants are not
| going to sell well versus ones that do. It's very hard to
| imagine that the default could be anything but "protect
| occupants" in a free market where cars are privately
| owned. Fleet operators have slightly different
| incentives, which are to minimise _economic damage to the
| service_ , a combination of liability/damages and PR.
|
| To make anything else happen, you'd need to regulate. But
| take a "self driving altruism act" which mandates that e.g.
| the car you just bought must kill your family in order to
| save pedestrians you don't know - I think it might be
| really difficult to get that law passed. You might be
| able to make some headway with fleets.
|
| IMHO markets, human nature and politics constrain the
| solution space for "self driving moral dilemmas" to a
| small subset of what's theoretically possible.
| AnIdiotOnTheNet wrote:
| > Vehicles that don't prioritise the occupants are not
| going to sell well versus ones that do.
|
| There are plenty of cases of people trusting the existing
| automated systems that specifically disavow being good
| enough to trust anyone's lives to, even in light of news
| that other people have died doing so.
| hooande wrote:
| I assumed that all human lives had the same value, regardless
| of age, gender or physical fitness. Like many people, I settled
| on a small set of rules and didn't deviate from it based on any
| personal characteristic. I'm not sure if this is a moral
| decision or not, but I generally prefer to avoid evaluating the
| relative merit of any person when it comes to life or death
| decisions.
| woodruffw wrote:
| Other HN commenters have pointed out abundant methodological
| errors in these scenarios.
|
| I'll take another tack: I believe it is a _category error_ to ask
| humans to determine the moral or "most moral" action in these
| scenarios. There are two sufficient conditions for this:
|
| 1. There is no present, efficient moral agent. A self-driving
| car, no matter how "smart" (i.e., proficient at self-driving) it
| is, is not a moral agent: it is not capable of obeying a self-
| derived legislation, nor does it have an autonomous morality.
| Asking the self-driving car to do the right thing is like asking
| the puppy-kicking machine to do the right thing.
|
| 2. The scenario is not one where an efficient moral action
| occurs. The efficient moral action is really a complex of
| actions, tracing from the decision to design a "self-driving" car
| in the first place to the decision to put it in the street,
| knowing full well that it isn't an agent in its own right. _That_
| is an immoral action, and it's the only relevant one.
|
| As such, all we humans can do in these scenarios is grasp towards
| the decision we think is most "preferable," where "preferable"
| comes down to a bunch of confabulating sentiments (age, weight,
| "social value", how much gore we have to witness, etc.). But
| that's not a moral decision on our part: the moral decision was
| made long before the scenario was entered.
| llimos wrote:
| > the moral decision was made long before the scenario was
| entered.
|
| The company manufacturing the car needs to make this decision
| when writing the software. At that time, it's a decision being
| made by moral agents.
| woodruffw wrote:
| > The company manufacturing the car needs to make this
| decision when writing the software. At that time, it's a
| decision being made by moral agents.
|
| I think even that is a step beyond: acquiescing to the task
| of writing software that _will_ kill people as part of an
| agent-less decision is, itself, an immoral task.
|
| The puppy-kicking machine analogy was supposed to be a little
| tongue-in-cheek, but it is appropriate: if it's bad to kick
| puppies, then we probably should consider not building the
| machine in the first place instead of trying to teach it
| morality.
| MaxMoney wrote:
| dirtyid wrote:
| Autonomous driving won't optimize for morality when it has to
| optimize for liability first and foremost - whatever affordances
| the law prescribes that minimize the chance of manufacturers
| being sued.
| goldsteinq wrote:
| The answer to all of these is "install backup brakes and refuse
| to start unless both primary and backup brakes are working".
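| As a toy sketch of that startup gate (BrakeSystem and
| self_test are hypothetical names of mine, not any real
| vehicle API):
|
|     class BrakeSystem:
|         def __init__(self, healthy: bool):
|             self.healthy = healthy
|
|         def self_test(self) -> bool:
|             return self.healthy  # placeholder for a real hardware check
|
|     def may_start(primary: BrakeSystem, backup: BrakeSystem) -> bool:
|         # Refuse to start unless both brake circuits pass self-test.
|         return primary.self_test() and backup.self_test()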
| imgabe wrote:
| These contrived examples don't seem that helpful. Brakes failed?
| Throw the car into park or reverse. Pull the emergency/parking
| brake. Swerve sharply enough to spin out. Honk to alert
| pedestrians (which a self-driving car would have seen long ago
| and already started slowing down anyway). If all else fails,
| build the car so that a head-on collision into a barrier at
| speeds where pedestrians could be present would not be fatal to
| the occupants.
|
| In the real world I have a hard time imagining a scenario where a
| case like this would actually come up.
| ep103 wrote:
| Correct, the Moral Machine is an inherently flawed approach to
| its domain. Abby Everett Jaques goes into much more detail, but
| the point you raise here is one of the main concerns.
|
| Why the Moral Machine is a Monster - Abby Everett Jaques
| https://robots.law.miami.edu/2019/wp-content/uploads/2019/03...
| mindslight wrote:
| I think that paper is still missing the bigger picture - in
| its own terms, the "structural effect" of framing the
| situation in such a paradigm to begin with. The paradigm is
| flawed in that it presupposes that these situations exist and
| could not have been avoided, thus absolving whoever got the
| car into the unwinnable situation to begin with of any
| responsibility. We've already got enough of this "nobody's fault,
| things happen" nonsense with traditional cars. Self driving
| cars are capable of programmatically sticking to invariants
| and not becoming overwhelmed, and we should reject allowing
| the lazy "no fault" culture to persist.
| tgv wrote:
| The first scenario I got was: the car drives toward a
| concrete road block that blocks half the road; on the other
| half there are two cats walking (on a zebra crossing, if you
| can believe it). The implicit choice seems to be "crash the
| car with passengers" or "kill the cats". How about braking?
| You should never drive so fast that you can't avoid a static
| obstacle, should you?
| penteract wrote:
| Presumably the realistic cases are not so clear cut, but it's
| not hard to imagine situations where self driving cars must
| make choices between courses of action that have some
| probability of killing people (and where the car might
| calculate that all courses of action have some non-zero
| probability of killing people). The contrived cases make sense
| as a way to guide what should be done in more
| realistic/ambiguous ones.
| imgabe wrote:
| Maybe. But we needn't compare driverless cars against some
| hypothetical morally perfect agent. We compare them against
| humans. Some humans might instinctively sacrifice themselves
| to save a baby carriage or an old lady. Many more will
| instinctively act to preserve themselves regardless of the
| consequences. Even more will panic and make bad decisions
| that cause needless deaths that a self-driving car would have
| avoided.
|
| I had a cousin who died because he swerved to avoid a
| squirrel and drove off a cliff. I don't think anyone would
| argue that we should program cars to prioritize squirrels
| over people, but humans will end up doing that sometimes.
|
| The number of lives saved by the overall reduction
| in accidents with a competent self-driving car is going to be
| far, far greater than any lives lost in 1-in-a-billion moral
| dilemma situations that they might encounter and where they
| might make choices that are not "optimal" as decided by some
| human who is looking at the situation after the fact with the
| time to think coolly and composedly about it.
| HPsquared wrote:
| There's also the practical limitations to consider. Self-
| driving cars have enough trouble identifying what's ON the
| road, never mind what's off to the side. Is it a cliff, a
| crowded sidewalk, a crash barrier, a thin wall with lots of
| people behind, an empty field, trees, a ditch? etc etc.
| penteract wrote:
| You seem to be addressing the question of when self-driving
| cars should be used in place of manually driven ones,
| whereas the website seems to be asking how we should program
| the self-driving cars (where I would say we should be
| trying to make them close to a hypothetically perfect moral
| agent).
| mindslight wrote:
| Why is "probability of killing someone", much less
| probability of killing _specific_ people, a metric that the
| car should be calculating in the first place? By the time a
| car has gotten to these type of no-win no-safe-fail
| scenarios, you have massively screwed up and are way beyond
| the design space and can no longer take assumptions for
| granted. If the probability of sudden total brake failure is
| high enough to be designed for then the correct answer is to
| add more hardware for redundant emergency brakes, not to add
| software to mitigate damage from the insufficient braking
| system.
|
| I feel like this whole topic is basically media click bait,
| that some software engineers with little hardware experience
| latch onto because it seems edgy. The answer, like many
| things in life, is to set aside the abstraction and question
| the assumptions.
| ep103 wrote:
| Why the Moral Machine is a Monster - Abby Everett Jaques
|
| https://robots.law.miami.edu/2019/wp-content/uploads/2019/03...
| beders wrote:
| Pedestrians are not protected by airbags...
| betwixthewires wrote:
| This is a really bad summary of my judgments. I took the entire
| thing with a very simple set of rules: 1) the vehicle must
| protect its human passengers first and foremost, 2) the vehicle
| must avoid killing anyone if there are no human passengers, 3)
| the vehicle must not change its path if there are
| people in both paths.
|
| The breakdown says a lot about the law, my gender preferences,
| age preferences, fitness preferences, literally things I
| absolutely did not take into account. It is a bad methodology
| because it assumes the reasoning behind my decision is as
| complicated as the game itself.
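| In code form, the three rules above are roughly (a toy
| sketch; the death sets are my own modelling of the
| scenarios, not the site's):
|
|     def choose(straight, swerve, has_human_passengers):
|         # straight/swerve: sets naming who dies on each path,
|         # e.g. {"passengers"} or {"pedestrians"}.
|         # Rule 1: never pick a path that kills human passengers.
|         if has_human_passengers and "passengers" in swerve:
|             return "straight"
|         if has_human_passengers and "passengers" in straight:
|             return "swerve"
|         # Rule 2: avoid killing anyone whenever a clear path exists.
|         if straight and not swerve:
|             return "swerve"
|         # Rule 3: people on both paths (or none) - keep the path.
|         return "straight"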
| eckesicle wrote:
| My strategy was: minimise casualties to kids first, then
| adults, then pets. If the choice is between equal damage to
| pedestrians or passengers, then pedestrians take precedence.
| All else being equal, don't intervene.
| psyc wrote:
| This was my exact algorithm. Like GP said, it was jarring to
| read a bunch of conclusions that never entered my mind, and
| were a function of which examples they contrived.
|
| My reasoning about pedestrians is that they didn't sign up
| for self-driving, the people in the car did.
| betwixthewires wrote:
| I get this point of view too, but my problem with it is that
| it is based on a series of prejudices (kids are more valuable
| than adults) and not some fundamental ethical principle.
| Maybe I just like things that tidy up nicely in theory while
| practicality is a better approach, but I think ethical
| decisions should be able to be summarized in a fundamental
| principle without relying on specifics of the situation,
| unless those specifics fundamentally change the scenario in
| some way.
| philipswood wrote:
| Ditto for me.
| [deleted]
| hypertele-Xii wrote:
| The passengers do not allow the pedestrians to live or die.
| It's the pedestrians who allow the passengers to drive around
| among them in the first place. All vehicles must therefore
| protect pedestrians _first._
|
| It's totally ok if only a few people have the courage to mount
| a vehicle that is dangerous to them. It's _not ok_ if every
| human has to fear for their life on the street at all times.
| AnIdiotOnTheNet wrote:
| I get you, but the entitled upper crust, and particularly
| American, mindset is that we must continually sacrifice lives
| and wellbeing on the altar of the great open road. It is
| important to all the automobile and oil companies that we see
| driving as a right and necessity.
| Icko wrote:
| That's a very solid argument. My reasoning was that the car
| must protect its owners; but you changed my mind.
| betwixthewires wrote:
| The more I think on this, the more I change my position.
|
| I rebutted the comment you're responding to, but when
| considering the same scenario but with different specifics
| I came to their conclusion. Specifically, let's take out
| the car and make it an airplane flying over a populated
| area. Does the plane eject the pilot and crash into someone's
| house? I'd say no: the pilot knew the risk hopping into the
| plane; the guy at his breakfast table had no say.
|
| But then, what about if there is no crosswalk? Does the
| pedestrian still get to claim "I had no say in the street
| being there so I can cross wherever I want"? Does the
| passenger get to say "I assessed the risk taking into
| account nobody could cross here"? At what point do we say
| that the pedestrian also took a calculated risk?
| Fundamentally, if we say "always protect the pedestrian" we
| don't care if there's a crosswalk or not.
|
| So at this point, I don't think there's a fundamental
| ethical axiom to be followed here, it is entirely
| situational. As long as we all know the rules beforehand
| and all calculate our own risks, any set of rules is OK. If
| we say "protect pedestrians at crosswalks, protect
| passengers otherwise" as long as the pedestrian knows this
| rule clearly, they're the one taking the risk. If we say
| "pedestrians can cross wherever they want whenever they
| want" we end up with the same scenario, everyone taking
| their own calculated risks. We just have to agree on the
| set of rules, whatever they are, which makes it an
| optimization problem, not an ethical one.
| naasking wrote:
| > The passengers do not allow the pedestrians to live or die.
| It's the pedestrians who allow the passengers to drive around
| among them in the first place. All vehicles must therefore
| protect pedestrians first.
|
| I disagree. The car _knows_ there are human passengers. The
| car does _not_ know that obstacles it detects outside of it
| are necessarily people and not, say, fire hydrants (although
| it may have a probabilistic estimate of their "peopleness").
| Therefore it should prioritize the lives that it knows are
| actual people.
| hypertele-Xii wrote:
| If a self-driving car can't tell a fire hydrant from a
| pedestrian, it has _absolutely no business driving around
| people._
| naasking wrote:
| I disagree again. Obstacles are obstacles and still to be
| avoided. Avoiding obstacles while following the road
| rules is what driving is all about.
| Kbelicius wrote:
| > Obstacles are obstacles
|
| So to you there is no difference between a fire hydrant,
| a human being, and a paper bag? I mean, it is quite
| obvious that those, as obstacles, are not the same. A
| fire hydrant is stationary, a human isn't, and a paper bag
| can simply be ignored by the car. Do you really think
| that a paper bag should be treated the same by a self-
| driving car as a fire hydrant? Because that is what you
| are saying.
| naasking wrote:
| > A fire hydrant is stationary, a human isn't
|
| A human bent over tying their shoe is quite stationary.
|
| > and a paper bag can simply be ignored by the car.
|
| Not relevant. Suppose I provided you with a self-driving
| car that couldn't differentiate a walking person from a
| paper bag carried on the wind, and yet I also provided
| convincing evidence that this car would reduce traffic
| fatalities and injuries by 30%.
|
| That's the real question we're facing, and the real
| standard of evidence that must be met, so your suggestion
| that the inability of this car to differentiate things in
| the ways you think are important is simply irrelevant.
| Kbelicius wrote:
| > A human bent over tying their shoe is quite stationary.
|
| Indeed, and after the human finishes tying his shoe he
| steps onto the road. A human driver, or a self-driving
| car that can differentiate between different kinds of
| obstacles, would adjust their speed in such a situation.
|
| > Not relevant. Suppose I provided you with a self-
| driving car that couldn't differentiate a walking person
| from a paper bag carried on the wind, and yet I also
| provided convincing evidence that this car would reduce
| traffic fatalities and injuries by 30%.
|
| > That's the real question we're facing, and the real
| standard of evidence that must be met, so your suggestion
| that the inability of this car to differentiate things in
| the ways you think are important is simply irrelevant.
|
| You said that an obstacle is an obstacle, so it is clearly
| relevant. Your car wouldn't ignore the paper bag. Would
| you use a car that treats every obstacle as if it were a
| human? Slowing down by every tree/fire hydrant/road
| sign... Imagine getting into a car that is parked
| under a tree in autumn while leaves are falling down. The
| car won't start, because an obstacle is an obstacle, as
| you say. Would you wait for the wind to remove the paper
| bag from the road, or would you get out of the car and
| remove it yourself?
|
| If you need to come up with impossible scenarios to
| justify your claim maybe you should rethink your claim.
| naasking wrote:
| Objects in the driving lane are obstacles, objects not in
| the driving lane are not. Signs, trees and fire hydrants
| are typically not in the driving lane, therefore they are
| not typically problems. Obstacles in the driving lane
| should trigger the car to slow down and approach more
| cautiously, _regardless of what they are_. This isn't
| complicated and doesn't require any impossible scenarios.
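| As a toy sketch of that policy (the speed threshold is an
| arbitrary value of mine, not a real spec):
|
|     CAUTIOUS_SPEED_KPH = 15  # illustrative only
|
|     def target_speed(obstacle_in_lane: bool, limit_kph: float) -> float:
|         # Anything in the driving lane slows the car down,
|         # regardless of how the object is classified.
|         if obstacle_in_lane:
|             return min(limit_kph, CAUTIOUS_SPEED_KPH)
|         return limit_kph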
| Kbelicius wrote:
| Also, what about the paper bag on the driving lane? Would
| you wait in the car until a gust of wind takes it off the
| road, or would you go out and remove it yourself?
| Kbelicius wrote:
| > Objects in the driving lane are obstacles, objects not
| in the driving lane are not. Signs, trees and fire
| hydrants are typically not in the driving lane, therefore
| they are not typically problems.
|
| > Obstacles in the driving lane should trigger the car to
| slow down and approach more cautiously, regardless of
| what they are. This isn't complicated and doesn't require
| any impossible scenarios.
|
| A human can in one moment be next to the driving lane and
| in the next moment on the driving lane, so the speed
| should be adjusted even when an obstacle in the form of a
| human being isn't yet on the driving lane.
| wizzwizz4 wrote:
| Clipping a fire hydrant could be part of a legitimate
| strategy to emergency-stop. Clipping a pedestrian
| wouldn't be. A self-driving car needs to know the
| difference.
| naasking wrote:
| > Clipping a fire hydrant could be part of a legitimate
| strategy to emergency-stop. Clipping a pedestrian
| wouldn't be. A self-driving car needs to know the
| difference.
|
| No, this is not a choice a car would have to make. A
| self-driving car would simply never drive into
| pedestrian-only zones. The only time pedestrians and
| other obstacles are an issue is in the driving lane, and
| the only sensible course of action is to brake and veer
| _within the confines of the driving lane_, and nowhere
| else. These are in fact the rules of the road.
| wizzwizz4 wrote:
| > _A self-driving car would simply never drive into
| pedestrian-only zones._
|
| Unless it got t-boned.
| betwixthewires wrote:
| I get the argument, it really is a difficult ethical
| question. Your argument fundamentally boils down to "the car
| exists in the world and should move through it causing
| minimum disturbance to it" and that's a very sensible
| argument.
|
| But, quite simply, if I'm paying for a car or a ride, the
| machine serves me. My argument fundamentally boils down to
| "the car exists to serve its passengers first and foremost"
| with the caveat that human lives are more important than
| animals. It should do everything in its power to avoid
| hurting anyone, property be damned. But, when the choice is
| between protecting people in general or its purpose for
| existing, I disagree with you. I'm always open to changing
| my mind, however, and I don't dismiss your point of view out
| of hand.
| wizzwizz4 wrote:
| > _But, quite simply, if I'm paying for a car or a ride,
| the machine serves me._
|
| If you're saying that people who can afford to take a car
| for a particular journey _deserve_ to be protected over the
| pedestrians who can't in the edge-cases, then, if we
| average out enough of the details, this implies that richer
| people's lives are worth (slightly) more than poorer
| people's lives.
|
| You can draw a _lot_ of conclusions from this logic,
| though, so I'm not sure how valid it is to just pick _one_
| property and generalise from it. (Wealth is applicable, but
| also able-bodiedness, age, climate consciousness, whether
| you feel safe walking streets alone at night...) It might
| be something to think about [edit: removed].
| giraffe_lady wrote:
| > but it's not something to judge people's ethical
| positions over.
|
| It is, actually. We don't have to be so _so_ careful not
| to ever pass judgement.
|
| We are talking about the belief that people paying for a
| service are entitled to increased safety at the expense
| of others, who did not pay for it. It's ok to find that
| wicked and to say so.
| betwixthewires wrote:
| Let's not get into morality and stick to ethics. Let's
| not call something "wicked"; there's no need to charge the
| discussion. We are trying to come to an answer to this,
| after all.
|
| But I think you're probably right at this point, at least
| in the specific scenario in the game. I woke up this
| morning believing the opposite.
|
| I'm actually in an interesting position in this
| discussion, I've been faced with this decision before in
| real life, before I ever articulated a position on it. I
| chose to hit the barrier, and I'm lucky to be alive. That
| is of course the exact opposite of what my position was
| this morning when I woke up, but the one I go to bed with
| tonight if the rest of my day is uneventful.
| wizzwizz4 wrote:
| You probably have a point. Edited.
|
| Moving on to my next example, though, there are people
| who aren't able to be pedestrians because they have
| mobility impairments. Are they less worthy of life, just
| because they're using a vehicle as an accessibility aid?
| trhway wrote:
| Coding the choice in, or making the choice, is already at
| least half-way immoral, as such a choice is an exercise of
| power, and morality tends to end where power starts. The moral
| approach here is to decrease the power you wield over others -
| i.e. to decrease the energy of the strike and the possible
| damage in the fastest possible way, which is usually an
| emergency brake, and thus in particular to decrease the
| importance of who is going to be hit. I.e. the main moral
| choice isn't between hitting A or hitting B, it is between
| hitting at 60mph or at 10mph (and if you have time to choose
| whom to hit and to act upon that choice, you definitely have
| time for an emergency brake).
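| To put numbers on that (a back-of-the-envelope calculation;
| the 1,500 kg car mass is my assumption):
|
|     def impact_energy_joules(mass_kg, speed_mph):
|         mps = speed_mph * 0.44704  # convert mph to metres/second
|         return 0.5 * mass_kg * mps ** 2  # kinetic energy: 1/2 m v^2
|
|     # Energy scales with speed squared: (60/10)^2 = 36, so a
|     # 60mph impact carries 36x the energy of a 10mph one.
|     print(impact_energy_joules(1500, 60) / impact_energy_joules(1500, 10))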
| dash2 wrote:
| This seems like ducking the issue. If you're going to either
| hit an old lady or a young man and a cat, which do you choose?
| Saying "I'd never get into that situation" isn't an answer.
| satisfice wrote:
| It's not ducking the issue, it's denying the framing. Given
| "the choice of hitting X or Y" between two nearly identical
| things, I would not focus any energy on making the choice, but
| instead devote my energy to finding a way out of the choice
| until it's too late to make any choice. If I can't find a way
| out, then I want to postpone things as much as possible to
| allow time for the situation to change.
|
| I deny the whole premise that there are correct moral
| conclusions in these hairsplitting situations. It's not the
| choice itself but the act of struggling with and living with
| the choice that matters, morally. A machine is incapable of
| that struggle, so it is immoral to create machines that face
| such choices. In effect, the self-driving car is an attempt
| by humans to escape responsibility for whatever damage
| driving might do.
| dash2 wrote:
| But your claim that the situations are nearly the same is
| itself a moral evaluation. (And it's clearly false in many
| of the cases presented.)
___________________________________________________________________
(page generated 2022-01-27 23:02 UTC)