[HN Gopher] An Anatomy of Algorithm Aversion
___________________________________________________________________
An Anatomy of Algorithm Aversion
Author : bookofjoe
Score : 21 points
Date : 2024-06-22 18:14 UTC (4 hours ago)
(HTM) web link (papers.ssrn.com)
(TXT) w3m dump (papers.ssrn.com)
| jbandela1 wrote:
| I think part of the reason is that people understand that in
| games such as chess, the entire state of the "universe" of the
| problem is provided to the algorithm, whereas in the real world
| they don't have that confidence.
|
| There are all sorts of confounders to algorithms in the real
| world, and an expert human is better at dealing with unexpected
| confounders than an algorithm. Given the number of possible
| confounders in real-world use, it is likely that there will be
| at least one.
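|
| A minimal sketch of what an unexpected confounder can do, in
| plain Python with made-up numbers: within each group the trend
| between x and y is positive, but pooling the groups, i.e.
| ignoring the confounding group variable, flips the sign
| (Simpson's paradox).
|
|     def slope(xs, ys):
|         # ordinary least-squares slope of y on x
|         n = len(xs)
|         mx, my = sum(xs) / n, sum(ys) / n
|         cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
|         var = sum((x - mx) ** 2 for x in xs)
|         return cov / var
|
|     # illustrative made-up data, two groups
|     group_a = ([1, 2, 3], [10, 11, 12])
|     group_b = ([8, 9, 10], [2, 3, 4])
|     print(slope(*group_a))  # 1.0: positive within group A
|     print(slope(*group_b))  # 1.0: positive within group B
|
|     # Pool the data and the "algorithm" sees a negative trend,
|     # because group membership confounds x and y.
|     print(slope(group_a[0] + group_b[0],
|                 group_a[1] + group_b[1]))  # about -1.03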
| gurjeet wrote:
| OP > Algorithm aversion is a product of diverse mechanisms,
| including ... (5) asymmetrical forgiveness, or a larger negative
| reaction to algorithmic error than to human error.
|
| Related:
|
| The legal rule that computers are presumed to be operating
| correctly https://news.ycombinator.com/item?id=40052611
|
| > In England and Wales, courts consider computers, as a matter of
| law, to have been working correctly unless there is evidence to
| the contrary. Therefore, evidence produced by computers is
| treated as reliable unless other evidence suggests otherwise.
| scotty79 wrote:
| Your belief that this rule is wrong is an example of algorithm
| aversion. You feel that computer systems should be judged
| harshly despite their making mistakes far more rarely than other
| things assessed in courts, like police witness accounts or
| possibly even DNA evidence.
| smogcutter wrote:
| https://www.bbc.com/news/business-56718036.amp
| scotty79 wrote:
| Of course there are mistakes. But still fewer than with
| other sources of evidence.
|
| DNA evidence is also trusted by default and yet:
|
| https://www.science.org/content/article/forensics-gone-
| wrong...
|
| https://www.nbcnews.com/news/us-news/investigation-finds-
| col...
| mjburgess wrote:
| Or: whenever you automate a decision process, you take all the
| resilience out of it. Human social institutions are built to
| survive all kinds of dramatic environmental change; the kinds of
| machine decision-making available are not.
|
| In particular, algorithms do not offer advice. Advice is a case
| where your own goals, ambitions, preferences, and desires have
| been understood -- and more so, the ones you aren't aware of,
| the needs you might have that aren't met... and these are lined
| up with plausible things you can do that are in your interest.
|
| There is no algorithmic 'advice'.
| makmanalp wrote:
| I mostly agree with the first bit.
|
| Re: advice, well, there could be, but the people who put these
| things in place aren't necessarily thinking in those terms.
| They're thinking about a statistical edge and acceptable
| negative outcomes on their end and no one else's. They're not
| maximizing what's good and helpful for you unless it helps them
| too; they're probably maximizing short- to medium-term profit.
| Computers are an amplifier of human bad behavior.
|
| See also, "computer says no":
| https://en.wikipedia.org/wiki/Computer_says_no
| kemitchell wrote:
| Reading just the syllabus, I was surprised to see no mention of
| accountability. Quick Ctrl+F searches for "accountability",
| "appeal", and "review" gave no results. "Reputation" appears, but
| in a section rather harshly titled "Herding and Conformity",
| about the reputations of the people not trusting algorithms, not
| the people making or deploying them.
|
| In my own experience, human forecasters and decision-makers tend
| to be much easier to hold accountable for bad forecasts and
| decisions. At a minimum, they stake their reputations, just by
| putting their names to their actions. With algorithms, by
| contrast, there's often no visible sign of who created them or
| decided to use them. There's often no effective process for
| review, correction, or redress at all.
|
| The fact that high-volume, low-risk decisions tend to get
| automated more often may partly explain this. But, as a
| consequence, it may also partly shape general attitudes toward
| algorithms.
| qsort wrote:
| "A computer can never be held accountable, therefore a computer
| must never make a Management Decision." (1979)
| NeoTar wrote:
| My only problem with your comment is that human forecasters and
| decision makers are also often not held accountable for their
| work.
| mcint wrote:
| "Humans approximating human taste preferences perform worse on
| the validation set".
|
| It's a sort of lazy argument: one can imagine a homo economicus
| who might make better decisions on a proxy variable, and then,
| less lazily, bemoan that people don't optimize the authors'
| preferred measurables.
|
| It shows self-awareness at times:
|
| > It is worth noting, however, that the algorithm in the study
| was designed to optimize system-wide utilization rather than
| individual driver income. The algorithm's design weakens any
| conclusion about algorithm aversion, for individual drivers may
| have been better off optimizing for themselves rather than the
| system.
|
| It has the air of a future cudgel. The title works as a
| punchline, and as for the strength of the argument, well, it's
| published (posted at all) online, isn't it?
| oldgradstudent wrote:
| > (4) ignorance about why algorithms perform well;
|
| Au contraire. It is the correct understanding, born out of deep
| expertise, that algorithms, outside very structured artificial
| environments, often do not work well at all.
|
| Even provably correct algorithms fail if there is even the
| slightest mismatch between the assumptions and reality,
| imperfect data, noisy sensors, or a myriad of other problems.
| Not to mention that the implementations of these provably
| correct algorithms are often buggy.
|
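| To make the first point concrete, a minimal sketch in Python:
| binary search is provably correct under the assumption that the
| input is sorted. Violate that assumption slightly and it fails
| silently.
|
|     def binary_search(xs, target):
|         # returns an index of target in sorted xs, or -1
|         lo, hi = 0, len(xs) - 1
|         while lo <= hi:
|             mid = (lo + hi) // 2
|             if xs[mid] == target:
|                 return mid
|             if xs[mid] < target:
|                 lo = mid + 1
|             else:
|                 hi = mid - 1
|         return -1
|
|     print(binary_search([1, 3, 5, 7, 9], 5))  # 2: found
|     # One swapped pair and the search silently misses the
|     # target, which actually sits at index 3:
|     print(binary_search([1, 3, 7, 5, 9], 5))  # -1
|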
| When algorithms are based on user input, users learn very
| quickly how to manipulate them to produce the results they
| actually want.
| advael wrote:
| As someone who spends almost all of my productive time on earth
| trying to solve problems via algorithms, I think this paper is
| the kind of take that should get someone fired. God, I forget
| how much stupid shit academics can get away with writing. Right
| from the abstract, this is hot garbage.
|
| > algorithms even though (2) algorithms generally outperform
| people (in forecasting accuracy and/or optimal decision-making in
| furtherance of a specified goal).
|
| Bullshit. "Algorithm" means any mechanical method, and while
| there are some of those that outperform humans, we are nowhere
| near the point where this is true generally, even if we steelman
| the claim by restricting it to the class of algorithms that
| _institutions have deployed to replace human decision-makers_.
|
| If you want an explanation for "algorithm aversion", I have a
| really simple one: Most proposed and implemented algorithms are
| bad. I get it. The few good ones are basically the fucking holy
| grail of statistics and computer science, and have changed the
| world. Institutions are really eager to deploy algorithms
| because they make decisions easier, even if those decisions are
| being made poorly. Also, as other commenters point out, putting
| a decision in the hands of an algorithm usually makes it so that
| no one can question, change, be held accountable for, or
| sometimes even understand the decision. Most forms of
| algorithmic decision-making that have been deployed in places
| visible to the average person have been designed explicitly to
| do bigoted shit.
|
| > Algorithm aversion also has "softer" forms, as when people
| prefer human forecasters or decision-makers to algorithms in the
| abstract, without having clear evidence about comparative
| performance.
|
| Every performance metric is an oversimplification made for the
| convenience of researchers. Worse, it's not a matter of law or
| policy that's publicly accountable, even when the algorithm it
| results in is deployed in that context (and certainly not when
| deployed by a corporate institution). At best, to the person
| downstream of the decision, it's an esoteric detail in a
| whitepaper written by someone who is thinking of them as a
| spherical cow in their fancy equations. Performance metrics are
| even more gameable and unaccountable than the algorithms they
| produce.
|
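| As a toy illustration of that gameability (made-up numbers, not
| from the paper): on a 95/5 class split, a model that never flags
| anything scores 95% accuracy while catching zero real cases.
|
|     labels = [0] * 95 + [1] * 5   # 5% of cases actually matter
|     predictions = [0] * 100       # "model" that always says no
|
|     pairs = list(zip(predictions, labels))
|     accuracy = sum(p == y for p, y in pairs) / len(pairs)
|     recall = sum(p and y for p, y in pairs) / sum(labels)
|
|     print(accuracy)  # 0.95: looks great in the whitepaper
|     print(recall)    # 0.0: useless to the person downstream
|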
| > Algorithm aversion is a product of diverse mechanisms,
| including (1) a desire for agency; (2) a negative moral or
| emotional reaction to judgment by algorithms;
|
| In other words, because they are rational adults.
|
| >(3) a belief that certain human experts have unique knowledge,
| unlikely to be held or used by algorithms;
|
| You _have_ to believe this to believe the algorithms should work
| in the first place. Algorithms are tools built and used by human
| experts. Automation is just hiding that expert behind at least
| two layers of abstraction (usually a machine and an institution)
|
| > (4) ignorance about why algorithms perform well; and
|
| Again, this ignorance is a feature, not a bug, of automated
| decision-making in practice, with essentially no exceptions.
|
| > (5) asymmetrical forgiveness, or a larger negative reaction to
| algorithmic error than to human error.
|
| You should never "forgive" an algorithm for making an error.
| Forgiveness is a mechanism that is part of negotiation, which
| only works on things you can negotiate with. If a human makes a
| mistake and I can talk to them about it, I can at least try to
| fix the problem. If you want me to forgive an algorithm, give me
| the ability to reprogram it, or fuck off with this
| anthropomorphizing nonsense.
|
| > An understanding of the various mechanisms provides some clues
| about how to overcome algorithm aversion, and also of its
| boundary conditions.
|
| I don't want to solve this problem. Laypeople should be, on
| balance, more skeptical of the outputs of computer algorithms
| than they currently are. "Algorithm aversion" is a sane behavior
| in any context where you can't audit the algorithm. Like, the
| institutions deploying these tools are the ones we should hold
| accountable for their results, and zero institutions doing so
| have earned the trust in their methodology that this paper seems
| to want.
| foundart wrote:
| The note on the primary author's name says 'We intend this essay
| as a preliminary "discussion draft" and expect to revise it
| significantly over time' so if you have cogent revisions to
| suggest, you should strongly consider sending them.
| oldgradstudent wrote:
| Weird, I have never encountered a single case of aversion to
| Booth's multiplication algorithm, quicksort, binary search, DFS,
| BFS, the Miller-Rabin primality test, or Tarjan's strongly
| connected components algorithm.
|
| Is there something special about the algorithms people are averse
| to? Maybe not actually working?
| 12_throw_away wrote:
| Wow, this paper is ... mystifyingly awful. It reads like some
| crank's blog, but it's actually written by two Harvard lawyers,
| including a pretty famous one [1].
|
| [1] https://en.wikipedia.org/wiki/Cass_Sunstein
___________________________________________________________________
(page generated 2024-06-22 23:01 UTC)