[HN Gopher] Hypothetical Judgment versus Real-Life Behavior in T...
___________________________________________________________________
Hypothetical Judgment versus Real-Life Behavior in Trolley-Style
Moral Dilemmas
Author : nabla9
Score : 58 points
Date : 2021-02-25 13:50 UTC (9 hours ago)
(HTM) web link (journals.sagepub.com)
(TXT) w3m dump (journals.sagepub.com)
| andrewclunn wrote:
 | An abstract from a study showing that hypothetical moral /
 | philosophical quandaries are impractical. Okay, logicians and
 | empiricists, get ready to have a meta-fight about which approach
 | is more out of touch, in the comments section of a news
 | aggregator site run by a tech startup.
| whiddershins wrote:
| I think the Trolley Problem is deeply flawed because of the
| presumption of personal responsibility. I would bet when people
| put themselves in that position, they don't think of themselves
| as in charge of the trolley track switching.
|
| I would bet if it were reframed as 'you are in charge of the
| trolleys. All day long, you control the switching of the tracks.
| Now you see a situation ...' you would get a totally different
| answer.
|
| I mean, whenever I imagine this problem, I think ... I would be
| nervous to pull some lever. I think subconsciously I factor in
| the unknown of whether I really understand the situation and the
| result of my action.
| ravi-delia wrote:
| > I mean, whenever I imagine this problem, I think ... I would
| be nervous to pull some lever. I think subconsciously I factor
| in the unknown of whether I really understand the situation and
| the result of my action.
|
| That puts into words exactly my feeling on the matter. In the
| abstract I can say that I would sacrifice 1 to save 5, but I
| know for a fact that in real life I would never pull that
| lever. Perhaps it's irrational, but I'd be terrified that I
| missed something! Would the lever actually move? Are the tracks
| really lined up that way? Isn't there someone that's supposed
| to be moving the tracks right, and would my throwing of the
| switch interfere with their trying to do the same thing?
|
 | None of that detracts from the philosophical question at the
 | core, of course, but I'd bet that a lot of people who say they
 | wouldn't pull it say so because they're thinking about that
 | kind of thing.
| ramoz wrote:
 | Agreed... yeah, I think it's a bit of a future-tech fallacy.
| Autonomous systems should have the infrastructure in place that
| allows for accurate prediction well ahead of the time needed
| for risk mitigation.
| wongarsu wrote:
 | The YouTube channel Vsauce tried putting people in this
 | situation by placing them in a fake railroad switching outpost
 | and having the operator step out for a phone call. They took
 | care to educate the subjects on exactly how the system works
 | without them suspecting anything, trying to eliminate many of
 | the factors you mention.
|
| The video is well worth a watch
| https://www.youtube.com/watch?v=1sl5KJ69qiA
| cbozeman wrote:
| So if your life is in the hands of most people, you're as
| good as dead.
|
| That's incredibly depressing.
| sneak wrote:
| I think the "in the hands" is the part up for debate. The
| question is a matter of responsibility, ultimately, in my
| view.
|
| A better rephrase would be "if you are default-dead, you
| are probably default-dead", which is less depressing.
| jjcc wrote:
 | There are variations of the original version, though. But some
 | important variations are missing: ones that detach further from
 | reality, so people face a less controversial or provocative
 | dilemma. For example:
 |
 | 1. What if there are 50k people instead of 5 on the other
 | track?
 |
 | 2. What if the decision maker is not a bystander but an
 | operator whose duty is the safety of the system?
 |
 | 3. What if both tracks are invisible (except to the switch
 | operator) before the switch is thrown, and only one track is
 | visible afterward? Either 50k die or 1 dies, but there's only
 | one track left.
| hntrader wrote:
| The Trolley Problem is a good example of the status quo bias and
| the opt-in/opt-out framing bias, and it shows how our moral
| reasoning can be driven by cheap heuristics.
|
| Someone else (in this case, the experimenter) has chosen that A
| (5 people dying) is the default, and so most just go along with
| that default choice.
|
| If the experimenter instead chose NOT(A) (1 person dying) as the
| default, everyone would go along with that instead.
|
| The usual excuse for going along with the default choice (in this
| case, to kill 5 people instead of 1), is that opting out of the
| default is "an active choice that leads to death", and that doing
| "nothing" isn't anything like it morally speaking. I believe this
| to be a figment of people's imagination used to justify their
| decision after the fact. They can either choose A or NOT(A).
| Either way they've made a choice and acted upon that choice, and
| there's a causal reality that follows from either choice that
| they made.
| anaerobicover wrote:
| That should be straightforward to check up on, by randomly
| choosing which one to present as the default to a large number
| of people.
| ghodith wrote:
| Why? Do you really think we need to check if people would
| switch the track from hitting one person to hitting five?
| wombatmobile wrote:
| You are incredulous because to you it seems obvious that 1
| death is a relatively superior outcome to 5 deaths. But
| have you considered the possibility that doing nothing
| means the actor (or the non-actor in such a case) is then
| not actively responsible for any death? She is then
| plausibly just an innocent bystander.
|
 | What you would be testing, if you changed the default, is the
 | friction involved in taking active moral responsibility, i.e.
 | getting involved in a moral choice versus not getting
 | involved.
|
| You are correct that the total number of deaths is going to
| be what it is, but by not getting involved, the test
| subject's personal culpability is reduced to zero rather
| than [1,5].
|
| In practice, the friction is even higher than in theory,
| because the rules of the game are not so clear and obvious
| in the field. There is a cognitive overhead to parsing the
| circumstances and understanding what choices are available.
| It's easier to not parse the emergency and just turn away.
| [deleted]
| NickM wrote:
| The bias doesn't seem _entirely_ wrong, though, if you look at
| it outside of these kinds of contrived situations.
|
| If you truly believe that not saving a life is equivalent to
| killing someone, then you have to start accepting some pretty
| controversial things. For example, if you can statistically
| save one life by donating a few thousand dollars to a charity
| that buys mosquito nets for people, does that mean everyone who
| can spare that money and chooses not to donate is morally
| equivalent to a contract killer? In both cases, you have one
| person who ends up richer in exchange for someone else dying,
| no?
|
| Or to take it a step further, what about people who can't
| afford to donate that money, but could've had enough money to
| donate more if they'd chosen a different career path? Does that
| make them less guilty, just because the "opt-out" decision is
| farther removed from the result? I think most people would say
| yes, but if you really believe that action is equivalent to
| inaction then it doesn't seem any different.
| rfw300 wrote:
| The rough answer is that we have a duty to help to a degree
| that is reasonable enough that we'll keep doing it. So
| organizations like Giving What We Can, for example, ask
| members to pledge to donate 10% of their annual income to
| effective charities, which is a fairly reasonable amount for
| middle-class people in a developed nation that won't
| sacrifice much quality of life.
|
| Yes, there is always some subtext that a life could've been
| saved had you given that much extra. But I think it's a
| mistake to view moral failure as a binary rather than a
| scalar; every life saved is that much better and we ought to
| keep that in mind rather than embracing a kind of nihilism
| that we're not doing everything we can and therefore we
| should do nothing.
| hntrader wrote:
| Many very good points.
|
| One thing to note is that in these real-life examples you've
| laid out, there's a non-trivial difference in effort and
| action between choosing to save people (e.g changing your
| entire career path to earn more money) versus not doing so.
|
| In the Trolley Problem, the action can be made arbitrarily
| insignificant, trivial and ambiguous. For example, instead of
| pulling a lever, we could instead define the action as
 | nodding one's head, or blinking twice. At a certain point of
 | triviality, the action/inaction and do-something/do-nothing
 | dichotomies lose coherence, and we realize that ultimately
 | this is a choice between two mutually exclusive options:
 | subjects are intentionally choosing to kill 5 to save 1.
|
| Perhaps a more fitting real-life analogy to the Trolley
| Problem would be as follows. You currently donate $100/month
 | to charity A, which saves 1 life per month. A more effective
 | charity comes along, charity B, which for $100/month is
 | capable of saving 5 lives per month. The process of switching
 | from A to B is easy and trivial. Despite knowing all this,
 | you purposely choose _not_ to switch, thereby causing four
 | extra deaths per month.
|
 | Does that make the person equivalent to a contract killer
 | who's killing 4 people a month? Society has deemed not,
 | probably for reasons of pragmatism (we want to disincentivize
 | real murderers) and evolutionary psychology (we didn't evolve
 | to have disdain for opportunity-cost deaths, however important
 | they are morally speaking).
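The opportunity-cost arithmetic in the charity A/B comparison above can be sketched in a few lines. The figures are the comment's hypotheticals (not real charity data), and the variable names are illustrative:

```python
# Hypothetical numbers from the comment: both charities cost $100/month;
# charity A saves 1 life per month, charity B saves 5 lives per month.
donation_per_month = 100  # dollars, identical for both charities

lives_per_month_a = 1  # lives saved monthly via charity A
lives_per_month_b = 5  # lives saved monthly via charity B

# Opportunity cost of staying with A instead of switching to B:
extra_deaths_per_month = lives_per_month_b - lives_per_month_a

print(extra_deaths_per_month)  # 4
```

The point of the framing is that the dollar amount is held constant, so the entire difference in outcome rests on a trivially easy choice between two options.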
| jerf wrote:
| "Perhaps a more fitting real-life analogy to the Trolley
| Problem would be as follows."
|
| A non-trivial part of why the trolley problem bothers so
| many people is that it can't be salvaged because it is
| intrinsically not anything like real life. In real life you
| are never presented with such a binary problem. Everything
| is a mix of arbitrarily unbounded priorities, predictive
| ambiguity, and even ambiguity in the outcome.
|
| My personal "resolution" to the problem is, yes, obviously,
| you save as many lives as possible even if that means
| actively changing the outcome... but that's essentially
| irrelevant, because you will never in your life be faced
| with such a stark, mathematically-rigid choice. There's
| always other options... not always relevant ones or
| desirable ones, but always something.
|
| Even when you construct the problem as close to a real life
| one as possible, you end up with something more like "You
| just realized you're driving on ice and going way too fast.
| In front of you are two pedestrians crossing in front of
| you, one on each side of the road. You have about .3
| seconds to consider this problem. Your System 1 instincts
| think you might be able to weave between them but there's a
 | chance you'll start spinning and hit both. You could also
 | try to yank to the right and go up on the curb, but you
 | might crash yourself into that tree pretty hard and still
 | hit one of the pedestrians. Regardless of what you choose,
| System 2's gonna be second guessing you pretty hard for the
| rest of your life." You don't get "Choose A and this will
| happen, choose B and this will happen", it's a big mess.
| It's always a big mess.
|
| Even if you literally encountered the trolley problem in
| real life, one must consider whether your understanding is
| accurate, or if the person who has explained it to you is
| lying or has ulterior motives, and so on. (And that's
| ignoring "this is clearly the trolley problem and I must be
| on TV or in a psychology experiment"; let's pretend it's in
| a world that isn't obsessed with this particular problem.)
|
 | Now, that is arguably the _point_ of the trolley problem,
 | to get down to that level and ask ethics problems. But
 | it's dozens of layers of abstraction away from anything
 | relevant to the real world. It's a mistake to think
| otherwise and stress about it as much as the gestalt seems
| to.
| Wowfunhappy wrote:
| > In real life you are never presented with such a binary
| problem. Everything is a mix of arbitrarily unbounded
| priorities, predictive ambiguity, and even ambiguity in
| the outcome.
|
| One real-life example that comes to my mind is whether we
| should have done COVID vaccine challenge trials. I always
| thought they were a good idea. Yes, it would have put
| people at risk, but it could have potentially saved many
| more lives!
|
| My father is a doctor, and in fact is an infectious
| disease specialist. He's been very busy this past year,
| working primarily on COVID research. When I talked to him
| about the idea of a challenge trial, he was _completely_
| against the idea--so much so that he got a bit upset, and
 | we had to stop talking about it. That doesn't happen
 | often.
|
| The idea of purposefully giving participants a deadly
| disease--even with their consent--seemed fundamentally
| wrong to my father, regardless of how many other lives
| could have been saved in the process.
| davidgay wrote:
| https://en.wikipedia.org/wiki/Primum_non_nocere would
 | seem to be the ethical principle at play here.
| Wowfunhappy wrote:
 | Right, but that's the essence of the trolley problem,
| isn't it? Whether you can take one life to save multiple
| lives.
| jerf wrote:
| "One real-life example that comes to my mind is whether
| we should have done COVID vaccine challenge trials."
|
 | You don't know what the result of the trial will be,
 | which already takes us out of the domain of the trolley
 | problem. If you _knew_ it was going to kill everyone who
 | had it, but you'd get this particular bit of info that
 | was guaranteed to have this result for the general
 | population, then we'd be much closer; but there are
 | _multiple_ false assumptions to get there... and we still
 | haven't discussed all the different ways we could run
 | the trial, because it's not a matter of "Either we do the
 | trial this exact way, or we don't". Do we do a double-
 | blind? Base it on multi-armed bandit math? Do we focus on
 | certain demographics? And so on and so on.
|
| Not a binary choice between these two guaranteed results;
| a huge range of possibilities, unknown outcomes, even
| things that are hard to measure outcomes.
| Wowfunhappy wrote:
| Sure, real life adds complexity! I still think the
| simplified question is a useful framework, similar to how
| the concept of an infinite two-dimensional plane is
| useful in mathematics. We can use it to understand the
| more-complicated real world.
|
| And, note that my father's stated position was that
| regardless of anything else, the risk of giving COVID to
| participants wasn't worth the potential of a faster
| vaccine, period. He wasn't thinking of those other issues
| you raised, because he hadn't gotten there yet.
|
 | He did propose scenarios in which he would support a
 | challenge trial--for instance, if we found a drug which
 | rendered COVID non-deadly but could only be manufactured
 | in low quantities (he had a specific candidate in mind).
 | But that's basically opting out of the problem--he would
 | pull the lever if the trolley weren't deadly to the other
 | guy.
| vlovich123 wrote:
| I think you're begging the question. The more steps between
| an action & an outcome, the more other people need to be
| involved, etc the less guilty you are. If someone is beaten
| as a child & then turns into a child predator, we don't
| convict the parents as guilty for the child predation act.
| One person killing another is a crime. One country choosing
| to wage war isn't generally considered to be such.
|
| With a charity, the inaction is offset by those that _are_
| taking action. The inaction is also mitigated by an awareness
| of the impact of diminishing returns (there could be other
| things more worthy of investment in the long term).
|
 | The trolley example artificially makes the outcome solely
 | dependent on your action, which is extremely rarely (if
 | ever) how things work in reality.
| blamestross wrote:
 | The fundamental problem with this philosophy is that it
 | permits "responsibility" to diffuse into the population such
 | that nobody feels responsible for a reality we all must act
 | to address.
| vlovich123 wrote:
| Congratulations. I think you've just articulated why
| government & regulations exist.
| dnautics wrote:
 | To take it one step further: if everyone donated $1k to
 | mosquito nets, that would be more money than the problem can
 | absorb, and again you'd be a serial killer for not having
 | donated to, say, drug abuse clinics instead.
| hinkley wrote:
| That Coast Guard movie with Kevin Costner has a line people
| used to quote a lot but I haven't seen lately.
|
 | "How do you choose who lives?"
|
| "[...] I take the first one I come to or the weakest one in
| the group and then I swim as fast and as hard as I can for as
| long as I can. And the sea takes the rest."
| roenxi wrote:
| I broadly agree with a lot of the principles here, but there
| are distinctions that should be drawn.
|
| Firstly, there is the question of uncertainty. I am not
| certain that donating money will result in lives saved. The
| charity might be corrupt.
|
 | Secondly, there is donation versus useful work. Someone like
 | Bill Gates is a fantastic example: the more money he donated
 | before becoming a billionaire, the less helpful he'd have
 | been. It was better that he focused on making more money so
 | he'd have more to give away.
|
| Thirdly, there are complicated but powerful arguments that
| people have to privilege their own life beyond what the
| trolley problem answer suggests.
|
| - Practically, people are going to do that.
|
| - An individual can be much more certain that they exist and
| benefit from living than others (maybe they are surrounded by
| philosophical zombies?).
|
| - If moral people make choices to weed themselves out of the
| gene/social pool then society will become less moral over
| time.
|
| Cynical, self serving, but true.
| conception wrote:
 | Aaron Swartz had a post about this, which came from a talk
 | he went to:
|
| http://www.aaronsw.com/weblog/handwritingwall
|
| Definitely worth a read, even if you don't agree in the end.
| skizm wrote:
| > so most just go along with that default choice.
|
| Wait, but in the trolley problem the vast majority (over 90%)
| of people opt to pull the lever and kill the one person. Almost
| no one goes with the default option unless you tinker with the
| parameters (the one person is a loved one or something).
| atq2119 wrote:
| The thing is, the trolley problem presents a very theoretical
| dichotomy. In real life, such dichotomies are _usually_ false.
| At the very least, the underlying facts are usually less clear.
| Therefore, "do nothing" is a very good heuristic in practice:
| it gives the subject more time to try to find a third solution.
|
 | So yes, not acting is in a sense a choice, but the usual
 | presentation of the trolley problem is unsuitable for judging
 | whether that choice is moral, or even whether it is optimal
 | in a utilitarian sense.
| trhway wrote:
 | People like to think that the lever in their hands affects
 | the outcome, and they like to play God, deciding who lives
 | and who dies.
| gjm11 wrote:
| In the sidebar of the SagePub page about the article is a link to
| another article, commenting on this one:
| https://journals.sagepub.com/doi/10.1177/0956797619827880
|
| I've read only its first page (the rest one has to pay for),
| which mentions another study that did the same sort of thing as
| this one but got the opposite result.
|
| In _this_ study, they asked people about what they would do in a
| trolleyesque problem involving hurting mice, and then they put
| the people in a situation where (so they believed) they had
 | exactly that choice to make. The experimental subjects were
 | much more likely than they had predicted to choose to shock
 | one mouse rather than let five be shocked.
|
| In _the study referenced by the other paper_ , they did a similar
| thing except that instead of "giving shocks to mice" it was
 | "taking meals away from human orphans". In this one, the
 | result was the other way around: the subjects were _less_
 | likely than they had predicted to take a meal away from one
 | orphan to save five others from losing a meal.
|
| (No mice and no orphans were actually hurt or harmed in either
| study.)
|
| I suspect (with no evidence, and without knowing whether either
| set of researchers have already proposed this explanation) that
| what's going on is that what people _say_ they will do is more
| governed by wanting to look good than what they actually do is;
| and that, perhaps because the stakes are lower, "shock a mouse"
| looks worse relative to "allow five mice to be shocked" than
| "deprive one orphan of a meal" does relative to "allow five
| orphans to be deprived of a meal".
| nerdponx wrote:
| I doubt it's entirely about looks.
|
| I would sooner believe that it has to do with the limits of
| human squeamishness not extending far past what we can
| personally see, touch, and hear.
|
| This is the classic problem about war. War is always a
| hypothetical until you personally experience combat. It's
| harder to be shocked by drone bombings than infantry raids.
| Torture that isn't called "torture" is just a tool in the War
| on Terror. Etc.
|
| Same goes for environmental destruction. It's easy to sleep at
| night after dumping a bunch of chemicals in the river as long
| as you personally don't depend on the river for your survival.
| alasdair_ wrote:
| In the simple "1 person versus 5 people" scenario, I'd almost
| certainly NOT pull the lever to save five and kill one.
|
| The reason is simple: if I pull the lever, the family of the dead
| person could likely sue me, which could be devastating for my own
| family even if I win the case. I'd probably be harmed in all
| kinds of other ways for taking the action.
|
| In addition, my taking an action like this would help deflect
| blame from the true guilty party - the one who set the experiment
| up in the first place. Allowing the guilty to avoid punishment
| doesn't seem especially moral either.
|
| As for the more interesting case where some of my loved ones were
| on both sides of the track, I'd simply have to choose randomly,
| and make that clear to everyone involved.
| wombatmobile wrote:
| "In theory there is no difference between theory and practice. In
| practice there is."
|
| -- Yogi Berra
| Ambolia wrote:
 | Even without moving to the practical realm: if you ask most
 | people whether they would slaughter 1 random healthy person
 | whose organs could be used to save 5 people, they will say no.
|
| The trolley with the pulley is too sanitized.
| 13415 wrote:
| Many if not most discussions of Trolley problems are about
| eliciting "intuitive" moral judgments and possibly identifying
| conflicting principles people might use when coming up with those
| judgments, not to muse about how people would actually react.
|
| I'm sure the authors of this very interesting study are aware of
| the differences (normative vs descriptive), just wanted to
| mention this for completeness.
| crazygringo wrote:
| I came here to comment the same thing. Trolley problems are
| thought experiments to probe where different moral models (e.g.
| fair procedure vs. fair outcome) conflict.
|
| I've never even heard of them being used to represent what
| people would actually do in the moment. Especially since such
| moments are often extreme circumstances that most people would
| (thankfully) go their entire lives without ever confronting.
| ravi-delia wrote:
| I suppose the interesting thing about looking into how people
| would react in the real world is that some actual ethical
| systems are attempts to formalize exactly that intuition.
| Virtue ethics, for instance, fit well into the mold of 'the
| world is very hard, the best way to avoid screwing up is to
| stick to the script'. Given that the world is, in fact, very
| hard, it's pretty tempting.
| ChrisArchitect wrote:
| (2018)
| dash2 wrote:
| Pussies. Experimental economists would have used a real trolley
| and human victims. Anything less isn't incentive-compatible.
| SPBesui wrote:
| I have nothing special to add here, just wanted to bring
| attention to the excellent episode of The Good Place (S02E06)
| that handles the Trolley Problem in a humorous (and very literal)
| way.
| cowpig wrote:
| > The IP you are accessing the site with (101.032.239.020) has
| been blocked because it has triggered one of our security
| measures. Please see the reason below:
|
| > Block reason: This IP was identified as infiltrated and is
| being used by sci-hub as a proxy.
|
| > To restore access, please contact onlinesupport@sagepub.com
| citing this message in full.
|
| :(
|
| ... does anyone have a mirror?
| SpikedCola wrote:
| https://biblio.ugent.be/publication/8589719/file/8589729.doc...
| carols10cents wrote:
| This is my favorite variation of the trolley problem, which is
| absolutely tested in real life all the time:
| https://ruinmyweek.com/wp-content/uploads/2020/07/live-laugh...
|
| I especially love this variation because I _always_ take the time
| to put stray carts back in the cart return and get them all in a
| neat line. And I warm myself with my moral superiority over all
| you jagoffs.
| Mountain_Skies wrote:
| It's interesting how much the stray cart problem varies from
| place to place. Where I live now it's not an issue at all.
| Everyone returns their carts so there are no stray carts for
| anyone to round up. Other places I've lived, it was probably
| less than half the carts that got placed into the cart return.
| Judging by the way carts ended up clustered, once a few people
| didn't return their carts, others saw it as permission to not
| return their carts either, though they often pushed them to be
| next to other unreturned carts.
| carols10cents wrote:
| And then there are places that have a coin deposit you can
| get back if you return your cart properly, which provides an
| interesting comparison of intrinsic vs extrinsic motivation.
| mgkimsal wrote:
| "even though you gain nothing".
|
| Gaining a sense of moral superiority is a gain. Quelling some
| sense of unease around chaos (stray carts) is a gain for some
| people.
| carols10cents wrote:
| Ahh, I see you've read this philosophy classic:
| https://www.amazon.com/Elephant-Piggie-Reading-Nothing-
| Butto...
| playdead wrote:
| As philosophers keep saying, over and over, the trolley "problem"
| is just a thought experiment in normative ethics about different
| types of moral judgment -- it's not an actual problem demanding
| solutions or recommended behaviors, and it's not designed to make
| claims in cognitive psychology, behavioral economics, etc.
|
| The way people actually behave is of course interesting and
| important, but it's not the issue at stake.
| renewiltord wrote:
| The Trolley Dilemma is not a dilemma. We solve it everyday.
|
| I solved it yesterday in order to buy a GoPro Max and an Apple
 | Watch. Sorry, African kid, malaria it is. I gotta record myself
 | getting my steps in walking the Sausalito waterfront. Maybe
 | I'll do that like two or three times and then let it gather
 | dust on a shelf instead of saving that kid's life.
| js8 wrote:
| Judging it hypothetically, I would set the lever half-way in an
| attempt to derail the trolley.
| smnrchrds wrote:
| Then the train starts to slide on its side and hits people on
| both rails.
| kgwgk wrote:
| https://i.kym-
| cdn.com/entries/icons/original/000/000/727/Den...
___________________________________________________________________
(page generated 2021-02-25 23:02 UTC)