[HN Gopher] We don't have a hundred biases, we have the wrong model
___________________________________________________________________
We don't have a hundred biases, we have the wrong model
Author : salonium_
Score : 86 points
Date : 2022-07-21 17:14 UTC (5 hours ago)
(HTM) web link (www.worksinprogress.co)
(TXT) w3m dump (www.worksinprogress.co)
| mstipetic wrote:
| 40% of the US believes the Earth was created in the last 10,000
| years. Any model relying on rationality has no chance here.
| twawaaay wrote:
| We have some wrong models, but that does not mean we don't
| have biases.
|
| The simplest way to disprove it:
|
| If we had wrong models rather than biases, fixing the models
| would make us unbiased.
|
| But that works very rarely in real life.
| leobg wrote:
| My reading of the article is as an application of Chesterton's
| Fence to so-called cognitive biases: not to see them as mere
| defects, or as proof of our fallibility, but to look for the
| objective for which they perhaps truly are the most reasonable
| solution.
|
| Example from the article:
|
| > Many costly signals are inherently wasteful. Money, time, or
| other resources are burnt. And wasteful acts are the types of
| things that we often call irrational. A fancy car may be a
| logical choice if you are seeking to signal wealth, despite the
| harm it does to your retirement savings. Do you need help to
| overcome your error in not saving for retirement, or an
| alternative way to signal your wealth to your intended audience?
| You can only understand this if you understand the objective.
| hirundo wrote:
| For instance we have a neural-cognitive "bias" toward
| recognizing moving versus stationary objects. Our attention is
| prejudiced in favor of things-that-move. This is useful when it
| comes to detecting potential predators, prey, mates, etc. So the
| lack of a bias can itself be a defect for the economic actor.
| nine_k wrote:
| Conspicuous consumption can be rational, or at least beneficial
| in the evolutionary sense.
|
| If you are a lawyer with a good practice, you are expected to
| drive a nice large car. If you drove a battered old economy-
| class car, your clients might see it as a sign that something
| is wrong with you (there are several plausible ideas) and shun
| dealing with you. There go fat fees and investment savings.
| csours wrote:
| I'm not sure I understand. It seems like the author is looking
| for "learned behavior" as the model for human decision making.
| [deleted]
| throw93232 wrote:
| >> epicycles were still not enough to describe what could be
| observed.
|
| Epicycle-based models were far superior in practice, for
| example at predicting planetary conjunctions. Heliocentric
| models did not really catch up until Newton invented gravity
| and calculus.
|
| And the centre of mass of the solar system (the barycenter, in
| Newtonian physics) is outside the Sun, so heliocentric models
| technically never gave solid predictions! Stellar parallax (the
| main prediction of Copernican theory) was not confirmed until
| the 19th century! Heliocentrism is mainly a philosophical
| concept!
|
| I will stick with my primitive old thinking and biases, thank
| you! If I get mugged a few times in a neighbourhood, I will
| assume it is not safe. There is no need to overthink it!
| croes wrote:
| Invented gravity?
| dataflow wrote:
| They probably mean invented gravity as a formal concept, not
| as a physical phenomenon.
|
| Like, say, the invention of the number 0.
| https://en.wikipedia.org/wiki/0
| deckeraa wrote:
| Agreed.
|
| Prior to Newton's conception of gravity as objects
| attracting one another, the primary model used was the
| Aristotelian one, in which things tended to go to the
| "zone" where they belonged. Things composed of earth (like a
| rock) tended to sink towards the center of the earth, while
| things composed of fire or air tended to rise towards the
| sky.
| throw93232 wrote:
| English is not my native language obviously.
| andrewflnr wrote:
| Not at all obvious, you did fine.
| smegsicle wrote:
| gravity is a notation for describing and predicting an
| arbitrary subset of natural processes
|
| you might as well contest that he invented calculus
| zwkrt wrote:
| I would normally be skeptical of an article that starts with a
| description of epicycles because it probably means that
| whatever is going to be described next is totally bullshit.
|
| In this case I'm not so sure. As a plebeian normie, it seems
| like the "rational actor" model of economics has a lot of
| problems.
|
| Now, I do believe that all people are, all of the time, trying
| to achieve their goals and meet their needs as best they can in
| the given situation, in the way that they best know how.
|
| But this includes a junkie digging through trash for things to
| sell, a housewife poisoning her abusive husband, and a
| schizophrenic blowing up mailboxes to stop an international
| plot against her. It includes a recent widower staying in bed
| for two weeks. It certainly includes your exclusion of an
| entire neighborhood and its thousands of inhabitants from your
| care due to some harrowing experiences.
|
| As I understand it, most economists, and certainly the ones
| that influence policy, are not really thinking of these things
| as "rational". To them rational means "increasing your own
| wealth or exchanging your money in the most efficient and
| expedient way possible". And that's fine, because it is how
| corporations, and rich people who hire others to manage their
| money, effectively operate. But it doesn't really
| work for normal people in normal situations. Our lack of
| information about our surroundings and our incredibly wide
| array of emotional states doesn't leave a lot of room for
| rationality.
|
| I won't really expound on it because this is already so long,
| but having a single definition of rationality also excludes any
| possibility of having an informed multicultural viewpoint.
| throw93232 wrote:
| People are rational; the model works.
|
| But you cannot approximate a complex system like the human
| brain with a couple of variables. There are not hundreds but
| millions of biases.
|
| Advanced epicycle models had dozens of moving parts. The JPL
| planetary ephemerides (the modern equivalent, in polynomials)
| have several million parameters and terabytes of equations.
| JoshCole wrote:
| If you apply the cognitive biases model to algorithms which have
| superhuman performance in various games - like AlphaZero,
| DeepBlue, Pluribus, and so on - the natural result is to conclude
| that these models are predictably irrational. The reason you get
| this conclusion is because it turns out to be necessary to trade
| off theoretical optimal answers for the sake of speed. The
| behavioral economic view of human irrationality ought to be
| considered kind of dumb in view of that result. But it is
| actually so much worse than that for the field, because the math
| shows that sacrificing optimality for speed would be something
| that even an infinitely fast computational intelligence would be
| _forced_ to do. It isn't irrational; it is a fundamentally
| necessary tradeoff. In imperfect information games your strategy
| space is continuous, EV is a function of policy, and many games
| even have continuous action spaces. If you thought Go had a
| high branching factor, you thought wrong; Go's is freakishly
| low. It is infinitely smaller than the branching factor of
| relatively trivial decision problems.
|
| If you've never looked at cognitive biases through the lens of
| performance optimization you should try it. What seems like an
| arbitrary list from the bias perspective becomes clever
| approximative techniques in the performance optimization
| perspective.
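|
| To make that concrete, here is a toy sketch in Python (the
| game, the depth cap, and the heuristic are all invented for
| illustration, not taken from any of those engines): a depth-
| limited negamax that substitutes a cheap guess for the true
| game value. The cutoff is exactly the kind of thing the bias
| framing calls an error and the optimization framing calls a
| necessary tradeoff.
|
|     # Toy game: a pile of n stones, players alternately take
|     # 1 or 2; taking the last stone wins. Exhaustive search is
|     # exact but exponential; the depth cap trades exactness
|     # for speed.
|     def children(n):
|         return [n - take for take in (1, 2) if take <= n]
|
|     def minimax(n, depth):
|         if n == 0:
|             return -1  # the player to move has already lost
|         if depth == 0:
|             return 0   # the "bias": a cheap guess, not the truth
|         # Negamax: my value is the negation of the opponent's
|         # best reply.
|         return max(-minimax(c, depth - 1) for c in children(n))
|
|     print(minimax(20, 6))   # bounded and fast, sometimes wrong
|     print(minimax(20, 99))  # effectively exhaustive: exact, slow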
|
| I often think about why this isn't more commonly known among
| people who call themselves rationalists and tend to spend a lot
| of time discussing cognitive bias. They seem to be trending
| toward a belief that general superintelligence is of infinite
| power, doubling down on their fallacious and hubristic
| appreciation for the power of intelligence.
|
| I say this, because when you apply the algorithms that don't have
| these biases - the behavioral economist view wouldn't find them
| to be irrational since they stick to the math, following
| things like the coherence principles for how we ought to work
| with probabilities, as seen in the work of Jaynes, de Finetti,
| and so on - they
| either don't terminate, or, if you force them to do so... well...
| they lose to humans; even humans who aren't very good at the
| task.
| bobthechef wrote:
| It sounds like you're talking about, or at least brushing up
| against, prudential judgement[0]. Sometimes, the optimal move
| is not to seek the optimum.
|
| An obvious class of problems is where determining the optimum
| takes more time than the lifetime of the problem. Say you need
| to write an algorithm at work that does X, and you need X by
| tomorrow. If it would take you a week to find the theoretical
| optimum, then the optimum in a "global" sense is to deliver the
| best you can within the constraints, not the abstract
| theoretical optimum. The time to produce the solution is part
| of the total cost. An imprudent person would either say it's
| not possible, or never deliver the solution in time.
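|
| A minimal sketch of that kind of prudence (the objective and
| the budget here are made up for illustration): an "anytime"
| search that always ships its best-so-far answer at the
| deadline rather than chasing the theoretical optimum.
|
|     import random
|     import time
|
|     def anytime_minimize(f, sample, budget_s):
|         # Keep improving a best-so-far answer, but stop at the
|         # deadline and deliver whatever we have.
|         deadline = time.monotonic() + budget_s
|         best_x, best_v = None, float("inf")
|         while time.monotonic() < deadline:
|             x = sample()
|             v = f(x)
|             if v < best_v:
|                 best_x, best_v = x, v
|         return best_x, best_v
|
|     # A decent x delivered on time beats a perfect x delivered
|     # a week late: the time to produce the solution is part of
|     # the total cost.
|     f = lambda x: (x - 3.7) ** 2 + 0.1 * abs(x)
|     print(anytime_minimize(f, lambda: random.uniform(-10, 10), 0.05))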
|
| [0] https://www.newadvent.org/cathen/12517b.htm
| Barrin92 wrote:
| >why this isn't more commonly known among people who call
| themselves rationalists
|
| Because most of these people do nothing but write blogs about
| rationalism. It's the same reason university tests are
| sometimes so removed from practicality compared to evaluation
| criteria in business: the people who make them do nothing but
| write tests.
|
| I suspect if you put some rationalists into the trenches in the
| Donbass for a week they'd quickly have a more balanced view of
| what's needed to solve a problem besides rational
| contemplation.
| snovv_crash wrote:
| The thing about continuous space solutions is that they are
| typically differentiable, which means you can use gradient
| descent or Levenberg-Marquardt (LM) optimization rather than
| needing to fully explore
| the solution space. Typically there are large regions which are
| heuristically excludable, which is what you are getting at I
| think, but even an unbiased sampling plus gradient descent
| often makes problems much more tractable than discrete
| problems.
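|
| A tiny sketch of that point (the function and step size are
| invented; real solvers use analytic Jacobians or LM): a local
| slope estimate replaces exhaustive exploration of an
| uncountable space.
|
|     def grad_descent(f, x, lr=0.1, steps=200, h=1e-6):
|         for _ in range(steps):
|             # finite-difference gradient: the local slope
|             g = (f(x + h) - f(x - h)) / (2 * h)
|             x -= lr * g
|         return x
|
|     # Smooth, unimodal 1-D example: descends to the minimum at
|     # x = 2 while evaluating only a few hundred points.
|     f = lambda x: (x - 2.0) ** 2
|     print(grad_descent(f, x=-7.5))  # ~2.0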
| whimsicalism wrote:
| Only if the local optima are good.
| JoshCole wrote:
| The type of learning problem where I agree with your point is
| in something like learning how to classify handwritten
| digits. My point about the continuous nature being
| unsearchable in practice is about recursive forms - if I
| choose this policy, my opponent will choose to _react_ to the
| fact that I had that policy.
|
| In your learning problem where things were made tractable by
| differentiation you have something like an elevation map that
| you are following, but in the multi-stage decision problem
| you have something more like a fractal elevation map. When
| you want to know the value of a particular point on the
| elevation map you have to look for the highest point or the
| lowest point on the elevation map you get by zooming in on
| the area that results from your having chosen a particular
| policy.
|
| The problem is that since this is a multi-agent environment
| they can react to your policy choice. So they can for example
| choose to have you get a high value only if you have the
| correct password entered on a form. That elevation map is
| designed to be a flat plain everywhere, with another fractal
| zoom, corresponding to high utility or a low error term, only
| at the point where you enter the right password.
|
| Choose a random point and you aren't going to have any
| information about what the password was. The optimization
| process won't help you. So you have to search. One way to do
| that is to do a random search; if you do that you eventually
| find a differing elevation - assuming one exists. But what if
| there were two passwords - one takes you to a low elevation
| fractal world that corresponds with a low reward because it
| is a honeypot. The other takes you to the fractal zoom where
| the elevation map is conditioned on you having root access to
| the system.
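|
| A minimal sketch of that landscape (the secret and the search
| are invented for illustration): flat everywhere, so neither
| gradients nor random sampling carry any information about
| where the high ground is.
|
|     import random
|
|     SECRET = 0.4375  # stands in for the password
|
|     def reward(x):
|         # Designed to leak nothing: a flat plain everywhere,
|         # high ground only at the exact secret.
|         return 1.0 if x == SECRET else 0.0
|
|     # The slope is zero wherever you start, and random search
|     # over the continuum almost surely never hits the secret.
|     hits = sum(reward(random.uniform(0, 1)) for _ in range(10**6))
|     print(hits)  # 0.0, with probability 1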
|
| This argument shows us that we actually would need to search
| over every point to get the best answer possible. Yet if we
| do that we have to search over the entire continuous
| distribution for our policy. Since by definition there are an
| infinite number of states, a computer with infinite search
| speed can't enumerate them; there is another infinite fractal
| under every policy choice that also needs full enumeration.
| We have non-termination by a diagonalization argument for a
| computer that has infinite speed.
|
| Now observe that in our reality passwords exist. Less extreme
| - notice that reacting to policy choice in general, for
| example, moving out of the way of a car that drives toward
| you but not changing the way you would walk if it doesn't,
| isn't actually an unusual property in decision problems. It
| is normal.
| brrrrrm wrote:
| > apply the cognitive biases model to algorithms which have
| superhuman performance in various games
|
| Could you give an example of this?
| JoshCole wrote:
| I think approaching it in this direction is horrible because
| it directs attention to the wrong things; when you look at
| specific examples you're always in a _more specific
| situation_, and if you're in a _more specific situation_ it
| means that your situation is _more computationally tractable_
| than the _general situation_ which was being handled by the
| algorithm. So trying to focus on examples is actually going
| to give you weird inversions where the rules that applied in
| general don't apply to the specific situation.
|
| You need to come about it from the opposite direction - from
| the problem descriptions to the necessary constraints on your
| solution.
|
| That said, there are so many examples that I feel kind of
| overwhelmed. Starting with biases that start with A:
|
| - Anthropic bias
|
| The algorithms have this tendency. They use counterfactual
| reasoning under the assumption that their opponent is a Nash
| player like themselves. Sometimes they don't have a Nash
| opponent, but they persist in this assumption anyway. In the
| cognitive bias framing this tendency is an error. In the
| game-theoretic framing it
| corresponds with minimizing the degree to which you would be
| exploited. You can find times where the algorithm plays
| against something that isn't Nash, and so it was operating
| according to a flawed model. You can call it biased for
| assuming that others operated according to that flawed model.
| From a complexity perspective this assumption lets you drop
| an infinite number of continuous strategy distributions from
| consideration - with strong theoretical backing for why it
| won't hurt you to do so - since Nash is optimal according to
| some important metrics.
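|
| A toy version of that tradeoff in rock-paper-scissors (chosen
| for illustration, not taken from any of those systems): the
| Nash mixture caps how badly any opponent can exploit you, at
| the price of never exploiting them back.
|
|     BEATS = {"R": "S", "P": "R", "S": "P"}
|
|     def payoff(a, b):
|         # +1 if a beats b, -1 if b beats a, 0 on a tie.
|         return 0 if a == b else (1 if BEATS[a] == b else -1)
|
|     def ev(my_mix, opp_mix):
|         return sum(p * q * payoff(a, b)
|                    for a, p in my_mix.items()
|                    for b, q in opp_mix.items())
|
|     nash = {"R": 1/3, "P": 1/3, "S": 1/3}
|     biased_opp = {"R": 0.6, "P": 0.2, "S": 0.2}  # not Nash
|
|     print(ev(nash, biased_opp))        # ~0.0: safe, leaves value behind
|     print(ev({"P": 1.0}, biased_opp))  # ~0.4: exploits, but exploitable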
|
| - Attentional bias
|
| The tendency to pay attention to some things and not other
| things. One example is alpha-beta pruning: you can find moves
| involving sacrifice that show the existence of this bias. The
| conceit in the cognitive bias framing is that this is stupid,
| because some of the neglected things might be important. The
| justification is that some things are more promising than
| others and we have a limited computational budget: better to
| stop exploring the unpromising things and direct effort to
| where it pays off. In the cognitive bias model, something like
| an upper confidence bound tree search - which balances the
| explore-exploit dynamic as part of approximating the Nash
| equilibrium - would count as erroneous reasoning because it
| doesn't choose to explore everything: it weights the action
| values of promising rollouts more highly, a lesser form of
| anchoring as it relates to attentional bias.
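|
| For a concrete picture, a bare UCB1 bandit sketch (the win
| rates are invented; a real tree search layers this rule over
| every node):
|
|     import math
|     import random
|
|     def ucb1_pick(counts, values, t):
|         # Attend where the upper confidence bound is highest:
|         # promising arms get pulled more, but the bonus term
|         # keeps neglected arms from being written off forever.
|         def score(i):
|             if counts[i] == 0:
|                 return float("inf")  # try everything once
|             return (values[i] / counts[i]
|                     + math.sqrt(2 * math.log(t) / counts[i]))
|         return max(range(len(counts)), key=score)
|
|     true_p = [0.2, 0.5, 0.8]  # three arms with hidden win rates
|     counts, values = [0, 0, 0], [0.0, 0.0, 0.0]
|     for t in range(1, 5001):
|         arm = ucb1_pick(counts, values, t)
|         counts[arm] += 1
|         values[arm] += 1.0 if random.random() < true_p[arm] else 0.0
|     print(counts)  # attention concentrates on the best arm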
|
| - Apophenia
|
| Hashing techniques are used to reduce dimensionality. There
| is an error term here, but you gain reasoning speed. This is
| seen in blueprint abstraction: hashing by similarity, so that
| similar things land in the same bucket, gives rise to things
| like selective attention (another bias, and kind of related
| to this general category of bias).
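|
| A crude stand-in for that kind of abstraction (the grid hash
| and the toy situations are invented; real blueprint
| abstraction clusters far richer features): snap feature
| vectors to a grid so similar situations collide into one
| bucket.
|
|     from collections import defaultdict
|
|     def bucket(vec, cell=0.25):
|         # Collisions are the error term; fewer distinct keys
|         # are the speedup.
|         return tuple(round(x / cell) for x in vec)
|
|     table = defaultdict(list)
|     for name, v in [("bet-small",    (0.10, 0.90)),
|                     ("bet-smallish", (0.12, 0.88)),
|                     ("fold",         (0.90, 0.10))]:
|         table[bucket(v)].append(name)
|
|     print(dict(table))  # the two similar situations share a bucket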
|
| Jumping ahead to something like confirmation bias: the
| heuristics all these algorithms use are flawed in various
| ways. They see that the heuristics are flawed after a node
| expansion and update their beliefs, but they don't update the
| heuristic itself. In fact, if a flawed heuristic worked well
| enough to win, we would have greater confidence rather than
| lesser confidence in the bias.
| toomim wrote:
| I wrote a PhD dissertation that made this point in 2013, _and_
| proposed a new "heliocentric" economic model.
|
| The key shift is to move the utility function from evaluating a
| future state of the world to evaluating the utility of an
| opportunity for attention in _the present moment_.
|
| All the "cognitive errors" that we humans make are with respect
| to predicting the future. But we all know what we find appealing
| in the present moment.
|
| And when we look at economics from this new perspective of the
| present, we get an economics of _attention_. We can measure, and
| model, for the first time, how we choose how to allocate the
| scarce resource of the internet age: human attention.
|
| I dropped out of academia as soon as I finished this work, and
| never publicized it broadly within academia, but I still believe
| it has great potential impact for economics, and it would be
| great to get the word out.
|
| https://invisible.college/attention/dissertation.html
| snapcaster wrote:
| Thanks for sharing this. Very interesting idea
| toomim wrote:
| Thank you for saying so!
| phkahler wrote:
| >> All the "cognitive errors" that we humans make are with
| respect to predicting the future. But we all know what we find
| appealing in the present moment.
|
| I like to say that most human problems are a result of the
| conflict between short and long term goals. This is true at all
| levels from individuals to small groups, companies, and states.
| Many, many "failures" can be framed this way. I would say it's
| not even a problem of predicting the future (though that is an
| issue) but of failure to prioritize the future over the
| present.
| just_boost_it wrote:
| I'm not so sure about this. I'm not an expert at all, but I can
| see in the world around me that biases are real. Sure, heuristics
| are important in the trade off between accuracy and speed, so I
| see that they are necessary. However, isn't the problem that we
| use the same heuristics to bet on a coin flip as we would use to
| bet on whether we make it past a lion to safety? It seems like
| the "right" is model is only correct in a small number of cases,
| but we can't change our unconscious biases to fit the situation.
| It seems that the bias model explains why we make bad decisions
| in many areas of our lives.
| jjk166 wrote:
| The rational actor model assumes that a person will behave
| optimally - using all information available to make and carry out
| the best decision possible for their goals.
|
| I strongly suspect that a better model is that people, instead
| of optimizing their outcomes, optimize the ease of decision
| making while still getting an acceptable course of action. Most
| of our biases serve either to let us make decisions quicker or
| to minimize the odds of catastrophically bad outcomes for our
| decisions, which fits nicely with this model. The fact is that
| indecision is often worse than a bad decision, and the
| evolutionary forces that shaped our brains are stochastic in
| nature and thus don't dock points for missed opportunities.
| canjobear wrote:
| This is "bounded rationality" [1], where people make the best
| decisions possible given computational constraints on how they
| make decisions. A lot of interesting work tries to derive human
| cognitive biases from this idea.
|
| [1] https://en.wikipedia.org/wiki/Bounded_rationality
| spacebanana7 wrote:
| The idea you're describing sounds similar to Satisficing Theory
| [1]. I agree this approach does a much better job of describing
| real life decision making than the traditional rational actor
| model. Unfortunately, Satisficing rarely gets discussed (at
| least in my experience) in mainstream economics/psychology,
| despite having been around since the 1950s.
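|
| A minimal sketch of the satisficing rule (the options and
| scores are invented for illustration): take the first option
| that clears an aspiration level instead of scoring them all.
|
|     def satisfice(options, utility, aspiration):
|         for opt in options:
|             if utility(opt) >= aspiration:
|                 return opt  # good enough: stop deciding
|         return None  # aspiration unmet: lower it or keep looking
|
|     spots = ["cart", "diner", "bistro", "tasting menu"]
|     tastiness = {"cart": 5, "diner": 7,
|                  "bistro": 8, "tasting menu": 10}.get
|     print(satisfice(spots, tastiness, 7))  # "diner", decided fast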
|
| [1] https://en.wikipedia.org/wiki/Satisficing
| beefman wrote:
| No mention of ergodicity economics, which resolves a lot of this.
| Reinforcement learning was mentioned, which resolves all or
| nearly all of it.
___________________________________________________________________
(page generated 2022-07-21 23:00 UTC)