[HN Gopher] Trapped priors as a basic problem of rationality
       ___________________________________________________________________
        
       Trapped priors as a basic problem of rationality
        
       Author : yamrzou
       Score  : 86 points
       Date   : 2021-03-13 17:06 UTC (5 hours ago)
        
 (HTM) web link (astralcodexten.substack.com)
 (TXT) w3m dump (astralcodexten.substack.com)
        
       | catlifeonmars wrote:
        | Is it just me or does anyone else find the "just-so" style of
        | explaining phenomena in this article irritating?
       | 
       | Edit:
       | 
       | Less snarky response here. The "just-so" style that the author
       | uses to explain psychological phenomena appeals to the readers'
       | cognitive biases and avoids confronting the reader with the more
       | rigorous, mechanistic (perhaps evidence-based?) explanations that
       | the same author proposes as a way to combat said cognitive
       | biases. This strikes me as mildly ironic.
        
         | BaseS4 wrote:
         | It's a neuromyth being applied to political domains so that the
         | believers of political narratives don't have to feel
         | accountable for anything.
        
         | heavyset_go wrote:
         | > _The "just-so" style that the author uses to explain
         | psychological phenomena appeals to the readers' cognitive
         | biases and avoids confronting the reader with the more
         | rigorous, mechanistic (perhaps evidence-based?) explanations
         | that the same author proposes as a way to combat said cognitive
         | biases. This strikes me as mildly ironic._
         | 
         | This is endemic to the rationalist community. It's actually
         | quite humorous once you begin to notice it.
        
       | tshaddox wrote:
       | So, can you stick the cynophobe in the room with the well-behaved
       | Rottweiler and also give them some drug that reduces their
       | anxiety just while they're in the room?
        
         | adav wrote:
         | Is the Rottweiler particularly anxious to be in the room?
        
       | philipkglass wrote:
       | Rationality in service to what end, though? You can't just build
       | some complicated instruments to record measurements of the ideal
       | polity, the way you can refine measurements of the electron's
       | mass. Most political disagreements are about values more than
       | wonkish policymaking.
       | 
        |  _But in fact many political zealots never accept reality. It's
       | not just that they're inherently skeptical of what the other
       | party says. It's that even when something is proven beyond a
       | shadow of a doubt, they still won't believe it._
       | 
       | This could describe cognitive error [1] or it could describe
       | someone who's using deontological reasoning more than
       | consequentialist reasoning. (And perhaps lacking the vocabulary
       | to say "consequentialist arguments won't sway my values.")
       | 
       | It is probably a good strategy to refuse to be talked into
       | positions against your fundamental values by mere evidence. I'm
       | joking but also serious. I have a vague sense that torturing
       | suspects doesn't work to prevent terrorist attacks, but that's
       | not the root of my opposition to torture. Maybe clever torturers
       | could make it work more often than not. That wouldn't sway me,
       | because torturing suspects is against my values. Documenting how
       | well it can "work" won't make me reconsider.
       | 
       | [1] Further food for thought: "cognitive error" and "revealed
       | preference" may also be different names for the same thing. Are
       | they experimentally distinguishable?
        
         | renewiltord wrote:
          | Rationality as a tool to serve yourself. A lot of SSC and AC10
          | revolves around this idea that rational, intelligent thinking
          | with self-awareness of bias allows you to model the world
          | accurately.
         | 
          | As for the rest, yes, I agree with you. Often, after having
          | considered a great deal of evidence, I condense the findings
          | into some strength of belief about something. Now, absent the
          | original evidence, I have a position. This is usually fine
          | since I actually lack the ability to summon all evidence on
          | demand.
         | 
          | It's okay, though, since I (and I suspect most people,
          | intuitively) have both a likelihood notion and a
          | certainty-in-likelihood notion, analogous to a confidence
          | interval and p-value, if you will.
          | 
          | So I might hold the belief that GME is going to be between 120
          | and infinity on Jan 13 2022, and have a certainty in that
          | belief that is like 10%, but lose the evidence that made me
          | think that.
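          | 
          | (A rough sketch of that two-level representation, with
          | hypothetical field names and made-up numbers, reading the "like
          | 10%" as the probability assigned to the claim:)
          | 
          |     # A belief keeps both the estimate and how much evidence
          |     # stands behind it, even after the evidence itself is
          |     # forgotten.
          |     from dataclasses import dataclass
          | 
          |     @dataclass
          |     class Belief:
          |         claim: str
          |         probability: float  # the likelihood notion
          |         certainty: float    # confidence in that likelihood
          | 
          |     gme = Belief("GME in [120, inf) on 2022-01-13",
          |                  probability=0.10,  # "like 10%"
          |                  certainty=0.3)     # weakly backed by evidence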
         | 
         | This is also sort of a failure mode for fast Aumann Agreement
         | even in co-incentivized individuals - the fact that memory is
         | short.
         | 
          | So the problem is that my handling of that certainty is not
          | well done.
         | 
         | Thanks for the thought prompt.
        
           | astrange wrote:
           | I've always found it odd that the people called
           | "rationalists" are not rationalist. Talking about evidence
           | and realistic priors all the time is good (they don't do
           | enough evidence gathering though) but it is not rationalism,
           | it is empiricism.
           | 
           | Surely a rationalist would be into logic and actively refuse
           | to read studies, like the Austrian economists did.
        
             | renewiltord wrote:
             | It's just a terminology thing. Sort of like how atheist
             | actuaries still account for Acts of God.
        
         | [deleted]
        
         | drdeca wrote:
          | Even if you shouldn't be convinced of a terminal-value thing
          | based on evidence, that doesn't mean you shouldn't be convinced
          | of an empirical thing that happens to be associated with or
          | championed by those who are promoting different values.
         | 
          | So, for example, in the climate change example he gives: while
          | it is of course appropriate to take into account the possible
          | biases of people promoting one position or another, and the
          | ways those biases may result from the different values those
          | people hold, one should nonetheless endeavor to believe what is
          | actually true about the factual matter, even if the people with
          | the wrong values were biased as an (indirect) result of those
          | values in a way that led them to be more likely to reach the
          | correct conclusion about the empirical facts than those with
          | the good values.
         | 
         | (Not to say that I think this is the case in this example.
         | Just, hypothetically. Or, what do you call something that is
         | like a hypothetical but without expressing any position about
         | whether the thing is actually true?)
         | 
         | In addition, while they may get tangled up and which is which
         | may change for a given person over time (due to human thought
         | being fuzzy and wobbly), I think it still makes sense to make a
         | distinction between terminal values and instrumental values.
         | Further, if one values something instrumentally, because one
         | believes that it is instrumentally beneficial for one's
         | terminal values, and it is demonstrably true that it actually
         | does not benefit one's terminal values, then one should come to
         | believe this truth, and so no longer instrumentally value the
         | thing.
         | 
         | I also don't think cognitive error can be entirely reduced in
         | all cases to revealed preferences, because, uh, -- well, ok,
         | one can, of course, model all behavior as arising from a
         | preference to take exactly the sequence of actions that one
         | takes. But I think you will agree that this is not a reasonable
         | model.
         | 
         | People will sometimes understand their past behavior as arising
         | from a cognitive error, and endeavor to address this. I suppose
         | you could regard this as simply conflicting preferences, or
         | preferences about preferences, but I don't think this is the
         | best way to understand things.
        
           | philipkglass wrote:
            |  _Even if you shouldn't be convinced of a terminal value
           | thing based on evidence, doesn't mean you shouldn't be
           | convinced of an empirical thing that happens to be associated
           | with or championed by those who are promoting different
           | values._
           | 
           | That's true, and I have been convinced of some of those
           | things. For example, I am empirically convinced that mass
           | shootings by spree killers are shocking but not a major issue
           | of public health or safety in the US. Even though I believe
           | that Democratic politics gives disproportionate attention to
           | mass shootings, weirdly mirroring how Republican politics
           | gives disproportionate attention to Islamist terrorism, I'm
           | not going to say so in the company of many Democrats. Because
           | _another thing_ I have been empirically convinced of is that
           | people build effective political coalitions by emphasizing
            | common ground and judiciously overlooking their allies'
           | faults. I want action on climate issues, so I'm not going to
           | alienate my natural allies by lecturing them about their
           | irrational fear of semi-automatic rifles. Effective politics
           | is almost the thematic opposite of a personal quest to
           | aggressively probe all things in search of Truth.
           | 
            |  _I also don't think cognitive error can be entirely reduced
           | in all cases to revealed preferences, because, uh, -- well,
           | ok, one can, of course, model all behavior as arising from a
           | preference to take exactly the sequence of actions that one
           | takes. But I think you will agree that this is not a
           | reasonable model._
           | 
           | It's a model that doesn't lend itself to mathematically
           | grounded predictions. But is it actually _not reasonable_?
            | Without access to another person's qualia it's really easy
            | to misattribute whether their more surprising decisions are
            | revealing a preference, demonstrating cognitive error, or
            | serving a fundamentally different value orientation. Like
            | Hume's Problem of Induction, I can't effectively grapple with
            | it, but I can't exactly dismiss it either.
        
         | throwawaysea wrote:
         | I agree that a lot of political discussions boil down
         | ultimately to values. But along the way people make irrational
         | evaluations that could be improved or corrected, and that may
         | lead to different politics even if the values remain stable.
         | 
         | A good concrete example of political zealots being detached
         | from reality is the George Floyd/BLM situation. Floyd had high
         | levels of drugs in his system, COVID-19, and was complaining
         | about breathing in the car before there was a knee on him.
          | Chokeholds are also a very standard and typically non-lethal
          | way of getting a suspect under control. Because of these
          | complicating factors, there's no evidence of race playing a
          | part in this incident (no evidence of motive) or of it even
          | being an instance of police brutality (standard and typically
          | safe procedures were used). The news media focus, social media
         | virality, sustained legal protests, and sustained violent
         | rioting have corrupted people's ability to judge this situation
         | rationally, to the point that people overestimate the number of
         | unarmed black people killed by police by over 50x
         | (https://www.lawenforcementtoday.com/poll-44-of-liberals-
         | say-...). Better rational thinking from the general population
         | might have changed the last 10 months, and also avoided the
         | embarrassing and dangerous cycle of Minneapolis first defunding
         | and then refunding their police department.
         | 
         | Then there's the possibility that people's values are variable
          | as well, but I've not put much thought into that. I suspect,
          | however, that sustained rational thinking over a period of time
          | can affect values and so would change politics.
        
           | smolder wrote:
           | Your post claims to describe a good example of political
           | zealotry and then goes on to be one. To frame Floyd's killing
           | by police as some routine accident is ignorant of the
           | evidence made available to you, probably because you
           | discounted such evidence as untrustworthy and biased. Guesses
            | about how many people are killed by police per year are
            | irrelevant to whether the force used was excessive.
        
             | elefanten wrote:
             | GP actually laid out a full argument while you dismissed it
             | out of hand as zealotry. It really feels like you're
              | proving GP's point.
        
             | ggreer wrote:
             | I think his argument is that it's a freak accident, not a
             | routine one. If it were routine, the stats for unarmed
             | people dying during arrest would be much higher.
        
               | smolder wrote:
               | My phrasing was off. His argument was that the accident
               | was nothing but a freak occurrence, and that the manner
               | in which Floyd was restrained was routine and
               | appropriate, which, as performed, it was not.
        
         | catlifeonmars wrote:
         | I believe that the author makes the basic assumption that
         | rationality _is_ the end.
        
       | jbay808 wrote:
       | A lot of the time, this might not be a property of the update
       | process so much as the prior itself.
       | 
       | A conspiracy theory is often something that can explain any lack
       | of observational evidence for itself (because it's covered up!).
       | If the prior on the conspiracy being true is too high, the update
       | on contrary evidence will be very weak just because the
       | conspiracy theory makes weak predictions about upcoming
       | observations, so it loses probability mass _very slowly_ as a
       | whole, with probability mass mostly just being shifted within the
       | context of the theory.
       | 
       | ("They're covering up aliens!" --> no evidence of this after
       | fifty years? --> "It's worse than I thought; they must be using
       | mind control!")
       | 
        | The only cure for this might be preventative -- just maintaining
        | a strong doubt about deep conspiracy theories. But that also
        | means resigning yourself to never believing in one before it's
        | revealed. If it turns out you live in a reality where the
        | government _is_ using mind control to conceal aliens, you'll not
        | be the one who realizes it before everyone else.
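        | 
        | (A minimal numeric sketch of that dynamic, with made-up numbers:)
        | 
        |     # Posterior odds = prior odds * likelihood ratio. A theory
        |     # that "explains" almost any observation has a likelihood
        |     # ratio near 1, so its posterior barely moves.
        |     def update(prior, p_obs_if_theory, p_obs_if_not_theory):
        |         odds = prior / (1 - prior)
        |         odds *= p_obs_if_theory / p_obs_if_not_theory
        |         return odds / (1 + odds)
        | 
        |     # "No evidence after fifty years" is nearly as likely under
        |     # the cover-up theory as under its negation, so a high prior
        |     # stays high:
        |     print(update(0.90, 0.95, 1.00))  # ~0.895
        | 
        |     # A sharp prediction that fails would move it much further:
        |     print(update(0.90, 0.10, 1.00))  # ~0.47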
        
         | yorwba wrote:
         | > the conspiracy theory makes weak predictions about upcoming
         | observations, so it loses probability mass very slowly as a
         | whole
         | 
         | That may be the case for conspiracy theories, but I'd expect
         | the theory that dogs are dangerous to make much stronger
         | predictions. On the other hand, maybe people who're afraid of
          | dogs _don't_ expect them to behave differently; they just
          | think that everything dogs do is scary (which is true, since
          | they're scared by everything dogs do).
        
           | Siira wrote:
            | It's about power; dogs being violent isn't super rare, and if
           | a dog decides to get violent, I have no control over it. The
           | probability of the danger is low, but it is sufficiently bad
           | even in mild cases that I do not want to take any chances.
        
           | jbay808 wrote:
           | I think you're right; it could be either.
           | 
           | In this example, if their fear of dogs causes them to
           | misperceive a dog's excitement as anger, then it's closer to
           | the scenario in the article. But if their paranoia is such
            | that they perceive the dog as acting friendly but interpret
            | that as the dog lulling them into a false sense of safety so
            | it can bite them later, then it's more like the conspiracy
            | theory case.
        
       | getpost wrote:
       | > I wonder if it's the equivalent of making trauma victims
       | describe the traumatic event in detail; an attempt to give higher
       | weight to the raw experience pathway compared to the prior
       | pathway.
       | 
       | > The other promising source of hope is psychedelics.
       | 
        | For me, what this part of the essay points to is that everyone
       | needs to do their own work, in whatever context, and there's no
       | getting around that.
       | 
       | Experts, authorities, and managers need to find the line between
       | facilitating a process and doing the process themselves. You
       | can't make anyone believe anything or do anything. You can only
       | hope to facilitate the conditions where helpful beliefs and
       | accomplishments arise.
       | 
       | I hope this is one significant cultural shift during my lifetime.
       | The 20th century was about creating expertise and giving that
       | expertise authority, which is just another expression of the
       | dominator paradigm.
        
       | analog31 wrote:
       | Has Bayesian verbiage become a form of virtue signaling?
       | 
       | Don't get me wrong. I took statistics in college, and we covered
       | Bayesian methods. They can't be controversial because Bayes'
       | Theorem is a theorem, meaning it's proven. But the attempt to
       | apply its terminology to lengthy but "soft" arguments seems like
       | pure clutter.
        
         | fractionalhare wrote:
         | I don't know if virtue signaling is the right term. The better
         | term might be "shibboleth" for so-called rationalists. It's as
         | if you took the kids who loved lists of logical fallacies and
         | told them they could resolve all disputes if they just reduced
         | them to first principles and used words like "prior."
         | 
         | Nevermind that the first principles lose all context of the
         | original issue - we're in a world of pure logic, where we can
         | use _theorems_! To the rationalist community, everything looks
         | like a good opportunity to write a lengthy essay reinventing
          | the wheel with Bayes' theorem. It's a cargo cult of basic
         | statistics dressing up otherwise uninteresting arguments.
         | 
         | It's very tedious for people like myself who actually studied
          | statistics and apply it professionally. Bayes' theorem is barely
         | even a theorem - it's literally a basic rearrangement of the
         | axioms of probability. It's not some supreme revelation that
         | imbues arguments with more credibility. Most of the time I see
         | it invoked in discussion, it's not even rigorously quantified.
         | You can choose arbitrary priors, so at the end of the day you
         | never arrive at an objective truth. It's profundity for its own
         | sake, and nothing is really achieved.
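          | 
          | (For reference, the rearrangement in question: from the
          | definition of conditional probability,
          | 
          |     P(A|B) P(B) = P(A and B) = P(B|A) P(A)
          |     =>  P(A|B) = P(B|A) P(A) / P(B)
          | 
          | which is Bayes' theorem.)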
        
         | sega_sai wrote:
          | No, I don't think it is. Separately from the rationalist
          | community, there is a substantial amount of work showing that
          | Bayesian updating and reasoning is in some sense the optimal
          | way of operating under uncertainty.
         | 
          | It is also certainly true that it's garbage in, garbage out,
          | i.e. if you feed it garbage inputs, you'll get garbage out
          | (Bayesian or not), but having a framework for making decisions
          | under uncertainty is, in my opinion, extremely useful. It is
          | often hard to really apply this to ordinary life situations
          | (and maybe often not worth it), but even then it's useful to
          | think about priors/evidence, even if you don't particularly
          | care about the final number.
        
           | analog31 wrote:
           | That's fair. A lot of what I read about "Bayesian reasoning"
           | boils down to: Check your assumptions, and update your
           | beliefs when faced with new evidence. But those habits have
           | been with us for millennia, and I'm not arguing against them.
           | 
           | Being able to think about probabilities is useful, if they're
           | quantifiable. And conditional probability plays a useful role
           | in that process.
        
           | astrange wrote:
           | I see two main problems with the way the lesswrong-type
           | people try to do Bayesianism as a life philosophy. The first
           | is that claiming to be rational* and having "priors" and
           | "updating" them instead of "beliefs" and "changing" them is
           | humblebragging and doesn't admit that you have beliefs that
           | might be wrong. Even if the original idea was to make you
           | more humble, it's dangerous when it spreads to people like
           | young male VCs and Elon Musk who already have the world's
           | biggest egos.
           | 
            | The second is that there's no way to set priors for silly
            | ideas and they won't admit some things just ain't gonna
            | happen, which means they've accidentally started a religion
            | by refusing to not believe in various silly things about
            | Roko's basilisk, evil AI gods enslaving them, and so on. It's
            | like someone presenting you with Pascal's Wager and you
            | taking it seriously.
           | 
            | * for some reason they call themselves rationalists, which
           | implies being "rational", aka "always right", even though
           | they are explicitly not rationalist, but empiricist!
        
         | andrewflnr wrote:
         | I think it's been common in the rationalist community for a
         | while, in the 5-10 years that I've been aware of them.
        
       | BaseS4 wrote:
       | Every time I see one of these neuromyths about how the brain
       | works being applied to politics, it always bends over backwards
       | to make sure groups of people, when given huge media and
       | financial resources and the motive to influence the decision-
       | making behavior of hundreds of millions of people, are not
       | liable, accountable, or involved in how effective their influence
       | campaigns are. Example:
       | 
       | > Segments of Group: Cut off your children's genitals.
       | 
       | > Person: Well, uh... okay... for the good of the cause, I guess.
       | 
       | > Other Segments of Group: We didn't tell you to cut off your
       | children's genitals.
       | 
       | > Apologists of Group: Anyone who would recommend such a thing
       | would NEVER be part of our group!
       | 
       | Those groups are absolutely liable for their influence campaigns.
       | It's just that individuals have absolutely no recourse to hold
       | these groups accountable.
       | 
        | For proof of that, notice how the frequency of active words like
        | "revenge" and "wrath" decreases over time relative to neutral,
        | passive, and nonthreatening words like "balance" and "equality".
        | People are so demoralized, they just get fed bogus neuromyths
        | about political behavior and accept them because they know there
        | is fuck all they can do about it anyway other than acquiescence
        | and servitude.
       | 
       | https://books.google.com/ngrams/graph?content=Revenge%2Cveng...
        
       | sideshowb wrote:
       | Hey this voice sounds familiar... Yay!
        
       | jhardy54 wrote:
       | Interesting perspective, thanks for sharing.
        
       | tshaddox wrote:
       | The problem that I always have with this author's material and
       | most material in the "rationalist movement" is that after making
       | very valid points about biases, irrational behavior, etc., he
       | seems to leave things at some sort of implicit epistemological
       | nihilism. Since people on the left and the right in American
       | politics both have biases, and both believe their beliefs just as
       | strongly as the other side, apparently we can't discern anything
       | about their political views other than how their views make them
       | feel? That's the vibe I'm getting from stuff like this:
       | 
       | > I've had arguments with people who believe that no pro-life
       | conservative really cares about fetuses, they just want to punish
       | women for being sluts by denying them control over their bodies.
       | And I've had arguments with people who believe that no pro-
       | lockdown liberal really cares about COVID deaths, they just like
       | the government being able to force people to wear masks as a sign
       | of submission. Once you're at the point where all these things
       | sound plausible, you are doomed.
       | 
       | Apparently you're doomed if you believe one of those things
       | describes reality more than the other? They're both extreme
       | generalizations, obviously, and are defined imprecisely ("no true
       | Scotsman"), but couldn't it be that one of those two groups of
        | people has beliefs that correspond much more with reality? Or is
        | the conclusion that, since this is politics, and political views
        | are intensely personal and prone to bias, that both sides are
        | equally correct?
       | 
       | Isn't it possible that there actually are some powerful political
       | groups who have honest and good intentions and others which do
       | not? Isn't that possible even if most people have a tendency to
       | believe that their own groups are the good ones, and that belief
       | is just as strong for each group? Is it impossible for me to
       | believe my group's intentions are honest and good, and you
       | believe just as strongly that your group's intentions are honest
       | and good, but that only one of us is correct? Is knowledge
       | nothing more than a feeling of certainty? Or is knowledge simply
       | impossible?
        
         | andrewflnr wrote:
         | Agnosticism about particular partisan issues is very
         | appropriate when discussing general patterns of thinking and
         | failure modes. And I think he would agree with your last
         | paragraph to the extent of thinking it too obvious to say out
         | loud. His own politics are clear enough from his other writing.
        
         | drdeca wrote:
         | > Isn't it possible that there actually are some powerful
         | political groups who have honest and good intentions and others
         | which do not?
         | 
         | There are people who have bad values, and want things that are
         | bad, yes. (Though generally they don't want them because they
         | think they are bad.)
         | 
         | > Isn't that possible even if most people have a tendency to
         | believe that their own groups are the good ones, and that
         | belief is just as strong for each group?
         | 
         | yes.
         | 
         | > Is it impossible for me to believe my group's intentions are
         | honest and good, and you believe just as strongly that your
         | group's intentions are honest and good, but that only one of us
         | is correct?
         | 
         | I'm not sure what precisely you mean by intentions being
         | "honest and good". If I believe that my intent is honest --
         | well, I suppose I could be fooling myself, thinking that some
         | behavior of mine is "honest" by twisting the meaning of the
         | word "honest" in my mind, while still intending to cause others
         | to have a belief which I believe to be false. Also, by
         | intentions being "good", do you mean that the things one
         | intends are actually good things, or that the one intending
         | them thinks them good for those they impact, and intends them
         | for that reason?
         | 
         | > Is knowledge nothing more than a feeling of certainty? Or is
         | knowledge simply impossible?
         | 
         | Knowledge is more than a feeling of certainty, yes. And, either
         | knowledge, or something very similar to knowledge, is possible.
         | 
          | For example, both of the beliefs quoted are false. There may
          | be related claims which are true, but neither of the claims, as
          | stated, is. (I don't see the no true Scotsman aspect of either,
          | though?)
         | 
         | The point isn't that different sides are equally correct. The
         | point is demonstrating a kind of error that can be made, and
         | giving an example on multiple sides makes it easier to
         | understand. If one only gave examples of a type of error being
         | made by the side the reader agrees with, there would be a
         | danger of this being interpreted as simply an attack on the
         | side the reader agrees with, and so the reader would be less
         | likely to appreciate the point. On the other hand, if one only
         | gave examples of the type of error being made by the side the
         | reader disagrees with, while the reader may take a more
         | positive view of what is being written, they are less likely to
         | internalize the point as a kind of error that they, or others
         | on their side, are prone to.
         | 
         | For something to be a general type of error, the type of error
         | should be one that can be made in multiple
         | contexts/directions/ways.
         | 
         | Then, to demonstrate that something is a general type of error,
         | why not demonstrate that, by showing that it can be made in
         | multiple contexts/directions/ways ?
         | 
         | The point isn't nihilism or relativism. The point is vigilance.
        
           | tshaddox wrote:
           | > I'm not sure what precisely you mean by intentions being
           | "honest and good".
           | 
           | It could mean something objectively moral, or if you're not
           | into that sort of thing, it could just mean intentions that
           | both groups in question would agree are good.
        
         | jbay808 wrote:
         | I think the author only means you're "doomed" in the sense that
         | "your fate is sealed": you'll find it very hard to change your
         | mind in the face of contrary evidence, because your
         | observations will start to feel like they always reinforce your
         | conclusion.
        
           | tshaddox wrote:
           | Right, but the implication is that your fate is sealed _and
           | that your belief doesn't reflect reality or isn't
           | "rational."_ I'm assuming the author wouldn't use the same
           | terminology to describe one's unwillingness to change their
           | views from something he considers "rational" to something he
           | considers "irrational," e.g. he probably wouldn't say "if you
           | believe the Earth is not flat, and you won't change your
           | beliefs unless presented with compelling evidence and
           | explanations that the Earth is flat, then you are doomed."
        
             | andrewflnr wrote:
             | Reflecting reality is different from being rational. It's
             | related to the old idea that true knowledge is believing
             | the right thing for the right reason. The problem is that
             | if your reasoning process is incorrect, what you believe is
             | independent of reality, so whether you believe the truth is
             | mostly random. Your opponents have equally strong (i.e.
             | not) reasons to believe what they do. You should aim to do
             | better.
        
             | jbay808 wrote:
             | I think you're reading that implication in, when it wasn't
             | there. Nobody can know with certainty what's really true or
             | false, and the author isn't claiming to know either or
             | judge people for disagreeing. We can only work on improving
             | our processes for deciding what's true, and noticing that
             | sometimes we can end up in a situation where our beliefs
             | will become immune to contrary evidence is an important
             | part of that. When that happens it doesn't mean those
             | beliefs are certainly mistaken, but it does mean your fate
             | is, more or less, sealed.
             | 
             | If your faith in a flat earth is strong enough that you'll
             | come up with new theories of optics to justify continuing
              | to believe in it _even after you're launched into orbit to
              | see it firsthand_, your belief is unshakable. But that
              | doesn't guarantee that you're wrong; maybe you're a genius
             | who just revolutionized cosmology and optics. But you're
             | probably wrong.
        
               | tshaddox wrote:
               | > Nobody can know with certainty what's really true or
               | false, and the author isn't claiming to know either or
               | judge people for disagreeing.
               | 
               | Yeah, that's the kind of epistemological meat we really
               | need to slice into. My view is that certainty is
               | impossible but that knowledge is possible, and that we
               | get ourselves into a lot of messes when we confuse
               | knowledge with certainty. The quest for knowledge should
               | not be a quest for certainty (either in the emotional
               | sense or the Bayesian sense) or a quest for a perfect
               | method of obtaining or justifying knowledge.
        
             | emmett wrote:
             | That's not perfectly parallel as a belief. The parallel
             | statement would be "If you believe that flat-Earth
             | advocates don't actually care about whether the Earth is
             | flat or not, they just want to prevent space exploration to
             | keep us trapped on Earth"...that would be equivalent. And I
             | can't speak for Scott, but I think he'd say if you believed
             | that, your ability to reason about the Earth being flat is
             | basically dead. It happens to be that you have the right
             | answer (the Earth is round), but only by chance...if you
             | were wrong, you wouldn't be able to be convinced to change
             | your mind.
        
               | tshaddox wrote:
               | > The parallel statement would be "If you believe that
               | flat-Earth advocates don't actually care about whether
               | the Earth is flat or not, they just want to prevent space
               | exploration to keep us trapped on Earth"
               | 
               | But...isn't that statement either true or false, just
               | like the statement "the Earth is flat" is either true or
               | false? What's the difference between the two statements?
               | Is knowledge about one statement possible, but not the
               | other?
        
               | drdeca wrote:
               | It's the connection between the two statements.
               | 
               | If you believe "flat-Earth advocates don't actually care
               | about whether the Earth is flat or not, they just want to
               | prevent space exploration to keep us trapped on Earth" ,
               | this would pose difficulty for changing the belief "the
               | earth isn't flat".
               | 
               | If someone believed "people who claim that 'flat-Earth
               | advocates don't actually care about whether the Earth is
               | flat or not, they just want to prevent space exploration
               | to keep us trapped on Earth' don't actually care about
               | whether flat-Earth advocates really [...], they just
               | [idk, some absurd motivation for making the claim] ",
               | that would pose difficulty for changing the belief "flat-
               | Earth advocates really do believe that the Earth is
               | flat".
        
               | tshaddox wrote:
               | > If you believe "flat-Earth advocates don't actually
               | care about whether the Earth is flat or not, they just
               | want to prevent space exploration to keep us trapped on
               | Earth" , this would pose difficulty for changing the
               | belief "the earth isn't flat".
               | 
               | I'm not sure why. The normal simple tests to distinguish
               | between a flat Earth and a ball-shaped Earth ought to
               | still work independent of what flat-Earth advocates
               | believe.
        
         | emmett wrote:
         | That quote is NOT Scott saying that pro-life advocates are
         | right about abortion or wrong about abortion. That quote is
         | Scott saying that once you start to believe that "the enemy" is
         | a cackling evil villain, who is lying about their true
         | motivations and secretly is only advocating for their preferred
         | policies because they want to do evil things...you are doomed,
         | at least in terms of seeking truth. Because once you believe
         | that, there is nothing that "the enemy" can ever say or do that
         | will change your mind. It's always just a tactic to advance
         | their evil plot.
         | 
            | Of course, Scott could be _wrong_ about this thesis. It's
            | possible that believing that pro-life conservatives don't
            | care at all about fetuses doesn't actually indicate that you
         | have a trapped prior that will prevent you from ever hearing
         | new arguments on the abortion issue. But that is the argument
         | that Scott is advancing, not that you can't form opinions about
         | the truth or falsehood of someone's political views at all.
        
           | tshaddox wrote:
           | I guess my question is whether the author believes that it's
           | possible to have knowledge about these sorts of things or
           | not. Everything he says in this article sounds like he's
           | saying that obtaining such knowledge is impossible, or at
           | least that neither side _has_ obtained knowledge. He seems to
           | only judge the validity of the opposing views by how strongly
           | their proponents believe the views, and since the two views
           | seem to be held equally strongly, he seems to conclude that
           | there is no discernible difference in the validity of the
           | views. But if it were possible to obtain such knowledge, then
           | it would be possible to correctly say "both sides believe
           | their views equally strongly, but side A is more correct and
           | side B is less correct."
        
         | oconnor663 wrote:
         | > Isn't it possible that there actually are some powerful
         | political groups who have honest and good intentions and others
         | which do not?
         | 
         | I think it's important to really carefully spell out what we
         | mean by "not having good intentions". Because there are two
         | different reasonable things that that can mean, and they're
         | _really_ different, but the incentive to flip-flop between them
         | can be strong.
         | 
         | The first meaning sounds like "having bad ideas". For example,
         | let's say I'm in favor of some new law to...promote the arts.
         | (Trying to pick a bland example.) You might think my law is
         | actually going to hurt the arts, and argue against it for that
         | reason. Or maybe you agree that it'll help the arts, but you
         | think its other downsides outweigh its upsides. At maximum
         | generality, maybe you don't object to my law per se, but you'd
         | just rather spend the same money on something else.
         | 
         | The second meaning sounds like "having bad values". For
         | example, you could claim that I'm not really trying to promote
         | art at all, and just trying to divert money to my friends. Or
         | maybe I could paint you as some sort of uncultured caveman, who
         | can't appreciate art in the first place.
         | 
         | When we put it that way, I think it's clear that we usually say
         | "good intentions" to talk about the second meaning. Like, the
         | whole point of saying it is usually to _contrast_ someone's
         | good values with a bad outcome they're responsible for, right?
         | 
         | On the other hand, it's much, much easier to take an
         | argumentative position using the first meaning. The world is
         | complicated, everything has downsides, etc. Plus object level
         | arguments tend to be more sophisticated, since expertise and
         | fancy math get involved. So the most tempting strategy is often
         | to equivocate between "having bad ideas" and "having bad
         | values", by making an argument about ideas and yet drawing or
         | implying a conclusion about values. This is the fallacy that
         | internet rationalists complain about constantly, calling it a
         | "motte and bailey" or something like that. And I'm really
         | sympathetic to that complaint: I think that it's very easy to
         | make this sort of bad argument, that it's very hard _not_ to
         | make it without some sort of shared community or commitment,
         | and that the overall effect of this sort of argument on
         | discourse is toxic and (when taken to an extreme) dehumanizing.
         | 
         | So all that said, do I think that some groups objectively have
         | bad values? I...oof...I feel kind of forced to admit that it's
         | true sometimes. I can't sustain a really extreme position here
         | like "everyone's values are special and lovely in their own
         | way", at least not without diluting the concept of values so
         | much that it's not very interesting anymore. But at the same
         | time, I just feel...super skeptical of most "bad values"
         | arguments in practice.
        
       ___________________________________________________________________
       (page generated 2021-03-13 23:01 UTC)