[HN Gopher] The Bayesian Cringe (2021)
       ___________________________________________________________________
        
       The Bayesian Cringe (2021)
        
       Author : EndXA
       Score  : 34 points
       Date   : 2024-03-26 19:49 UTC (3 hours ago)
        
 (HTM) web link (statmodeling.stat.columbia.edu)
 (TXT) w3m dump (statmodeling.stat.columbia.edu)
        
       | parpfish wrote:
        | I noticed a shift in my attitude toward strong priors when I
        | switched from academia to industry, and have only recently
        | realized why.
       | 
        | When doing an analysis in an academic setting, the goal is to
        | get a paper past reviewers to be published. And the reviewers
        | were adversaries trying to disprove your work (at best they
        | offered helpful critique; at worst they were bad-faith
        | nit-pickers looking for any excuse to reject). If you did a
        | Bayesian analysis in this setting, the mean reviewers would
        | just point to the priors and say "you can't justify that
        | choice, REJECT".
       | 
        | But in industry, there are no reviewers serving as adversarial
        | gatekeepers. You may present analyses to a skeptical audience,
        | but if they disagree with your model priors, you work _with_
        | them to come up with a mutually agreeable model because you're
        | all on the same team.
        
         | BugsJustFindMe wrote:
          | > _But in industry, there are no reviewers serving as
          | adversarial gatekeepers. You may present analyses to a
          | skeptical audience, but if they disagree with your model
          | priors, you work with them to come up with a mutually
          | agreeable model because you're all on the same team._
         | 
         | This experience may not be representative. The web is
         | absolutely filled with anecdotes of gatekeeping and
         | obstructionism within engineering orgs. The phrase "internal
         | politics" comes immediately to mind.
        
           | whatshisface wrote:
            | Internal politics have nothing to do with the content, so I
            | think their point stands. There's no reason not to do a
            | Bayesian analysis; if your proposal fails, it will certainly
            | be for another reason.
        
         | vundercind wrote:
         | Most folks' experience of primary and secondary education, and
         | maybe also undergrad, is similar, with the instructor as the
          | adversary. After so many years of school, it was a real
          | surprise how much more collaborative and _forgiving_ the
          | "scary" "real world" turned out to be.
        
           | whatshisface wrote:
            | It can easily go too far though, especially when somebody
           | doesn't want to accept criticism. Then you're not a team
           | player if you check the arithmetic.
        
         | williamdclt wrote:
         | It's definitely a learned soft skill to direct interactions to
         | be collaborative rather than adversarial.
         | 
         | It's too easy to fall into an adversarial discussion because of
         | differing opinions (eg about code architecture) when really
          | you're on the same team. I try to keep in mind (and convey)
          | the image of "you and me side by side against the problem on
          | the whiteboard" rather than "you and me against each other".
        
       | sk11001 wrote:
       | I still have no clue what Gelman is saying about anything ever,
        | and this post is no exception. He seems like a great guy in
       | interviews and presentations but anything he writes or talks
       | about is highly non-specific.
        
       | bruturis wrote:
        | Priors are now seen not as subjective but as useful; the OP is
        | about the problem of choosing good priors. The best options are
        | informative priors (1) and regularizers (2). For example,
        | choosing a Laplace distribution as the prior on the unknown
        | parameters makes the MAP estimate equivalent to the LASSO, a
        | well-known way of obtaining sparse models with few coefficients.
        | In (2) there is an example in which a prior suggests a useful
        | regularization method for regression. In (3) the author
        | discusses prior modeling.
       | 
       | (1)
       | https://en.wikipedia.org/wiki/Prior_probability#Informative_...
       | 
       | (2) https://skeptric.com/prior-regularise/index.html
       | 
       | (3)
       | https://betanalpha.github.io/assets/case_studies/prior_model...
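        | 
        | As a minimal sketch of that equivalence (synthetic data and
        | made-up scales; sklearn's Lasso minimizes ||y - Xw||^2/(2n) +
        | alpha*||w||_1, so alpha = sigma^2/(n*b) for a Laplace prior of
        | scale b), MAP estimation recovers the LASSO coefficients:
        | 
        |   # MAP under a Laplace prior vs. LASSO, on synthetic data.
        |   import numpy as np
        |   from scipy.optimize import minimize
        |   from sklearn.linear_model import Lasso
        | 
        |   rng = np.random.default_rng(0)
        |   n, p, sigma, b = 200, 5, 1.0, 0.5   # b: Laplace prior scale
        |   X = rng.normal(size=(n, p))
        |   w_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
        |   y = X @ w_true + sigma * rng.normal(size=n)
        | 
        |   # Negative log posterior: Gaussian likelihood + Laplace prior.
        |   def neg_log_post(w):
        |       return (((y - X @ w) ** 2).sum() / (2 * sigma ** 2)
        |               + np.abs(w).sum() / b)
        | 
        |   w_map = minimize(neg_log_post, np.zeros(p), method="Powell").x
        |   w_lasso = Lasso(alpha=sigma ** 2 / (n * b),
        |                   fit_intercept=False).fit(X, y).coef_
        | 
        |   print(np.round(w_map, 3))    # the two agree, up to
        |   print(np.round(w_lasso, 3))  # optimizer tolerance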
        
       | kqr wrote:
       | There's also the fact that a prior is really hard to explain to
       | someone else. By definition, it's the unexplainable starting
       | point!
       | 
       | Yet when I lay out fairly tight Bayesian reasoning, there's
        | always that one person sucking the life out of the entire
        | conversation with "Wait, can you go back to that first number?
        | How did you arrive at that?" and it's an unanswerable question
       | because any attempt would have to start from another, more
       | fundamental prior!
       | 
       | Sometimes this person is reasonable and I can go, "Ah, we can try
       | a different starting point. What's your prior?" but often enough
       | the person gets stuck on the idea of subjective probability and
       | everything derails.
       | 
       | When it comes to important decisions, I've started hiding the
       | prior with smoke and mirrors to redirect attention away from it.
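        | 
        | A less smoke-and-mirrors option is a quick sensitivity check.
        | As a sketch with made-up Beta-Binomial numbers: compute the
        | posterior under both parties' priors and see whether the
        | conclusion actually changes.
        | 
        |   # Prior sensitivity check for a Beta-Binomial model.
        |   # Made-up data: 12 successes in 20 trials.
        |   from scipy.stats import beta
        | 
        |   successes, trials = 12, 20
        |   priors = {"uniform Beta(1,1)": (1, 1),
        |             "skeptical Beta(2,8)": (2, 8)}
        | 
        |   for name, (a, b) in priors.items():
        |       post = beta(a + successes, b + trials - successes)
        |       lo, hi = post.ppf([0.05, 0.95])
        |       print(f"{name}: P(rate > 0.5) = {post.sf(0.5):.2f}, "
        |             f"90% interval [{lo:.2f}, {hi:.2f}]")
        | 
        | If the decision comes out the same under both priors, the prior
        | argument is moot.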
        
       | oldgradstudent wrote:
       | What's the difference between a prior and a bias?
       | 
       | How does one distinguish between the two?
        
         | layer8 wrote:
          | A prior is whatever you start with. There are literally no
          | requirements. Bayes tells you how to update your priors,
          | whatever they are, in the face of new data. Nothing more,
          | nothing less. In principle it doesn't matter what priors you
          | start with (how biased they are), in the sense that given
          | enough data, your posterior will converge to what is really
          | the case.
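          | 
          | A quick illustration of that convergence (synthetic coin
          | flips; the true heads-rate of 0.7 is made up):
          | 
          |   # Two very different Beta priors converge under data.
          |   import numpy as np
          |   from scipy.stats import beta
          | 
          |   rng = np.random.default_rng(1)
          |   flips = rng.random(5000) < 0.7
          |   heads, n = int(flips.sum()), flips.size
          | 
          |   for name, (a, b) in {"optimist Beta(9,1)": (9, 1),
          |                        "pessimist Beta(1,9)": (1, 9)}.items():
          |       post = beta(a + heads, b + n - heads)
          |       print(f"{name}: posterior mean {post.mean():.3f}")
          | 
          | Both posterior means land near 0.7: the "bias" in the prior
          | washes out, which is what separates a prior from a fixed
          | bias.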
        
       ___________________________________________________________________
       (page generated 2024-03-26 23:01 UTC)