[HN Gopher] Why Most Published Research Findings Are False
       ___________________________________________________________________
        
       Why Most Published Research Findings Are False
        
       Author : Michelangelo11
       Score  : 65 points
       Date   : 2024-09-24 21:45 UTC (1 hour ago)
        
 (HTM) web link (journals.plos.org)
 (TXT) w3m dump (journals.plos.org)
        
       | vouaobrasil wrote:
       | > In this framework, a research finding is less likely to be true
       | [...] where there is greater flexibility in designs, definitions,
       | outcomes, and analytical modes
       | 
        | It's worth noting, though, that in many research fields, teasing
        | out the correct hypotheses and all the relevant factors is
        | difficult. Sometimes it takes quite a few studies before the
        | right definitions are even found; definitions that are a
        | prerequisite for a useful hypothesis. Thus, one cannot ignore
        | the usefulness of approximation in scientific experiments:
        | approximation not only of the truth, but of the right questions
        | to ask.
       | 
        | I'm not saying that no biases are inherent in the study of the
        | sciences, but the paper cited seems to overlook the fact that a
        | lot of science is still groping around in the dark, and that
        | expecting well-defined studies every time is simply
        | unreasonable.
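        | 
        | For concreteness, the paper's core quantity is the positive
        | predictive value (PPV) of a claimed finding, written in terms
        | of the prior odds R that a probed relationship is true, the
        | study's power, the significance level alpha, and a bias term u
        | for that design flexibility. A minimal Python sketch (variable
        | names and example numbers are mine, not the paper's):
        | 
        |     def ppv(R, power=0.8, alpha=0.05, bias=0.0):
        |         """PPV per the paper's framework; 'bias' is the
        |         fraction u of analyses reported as positive despite
        |         the data (flexibility raises it)."""
        |         beta = 1 - power
        |         true_pos = power * R + bias * beta * R
        |         false_pos = alpha + bias * (1 - alpha)
        |         return true_pos / (true_pos + false_pos)
        | 
        |     # Low prior odds plus modest bias already push PPV
        |     # below a coin flip:
        |     print(round(ppv(R=0.2, power=0.6, bias=0.1), 2))  # ~0.47
        | 
        | Raising the bias term (more flexibility in designs,
        | definitions, outcomes, and analytical modes) drags PPV down
        | even when power is decent, which is exactly the quoted point.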
        
         | 3np wrote:
          | This is only meaningful if "the replication crisis" is
         | systematically addressed.
        
       | marcosdumay wrote:
       | Yeah, when you try new things, you often get them wrong.
       | 
       | Why do we expect most published results to be true?
        
         | ekianjo wrote:
          | Because people believe that peer review improves things, but
          | in fact it doesn't really. It's more of a rubber-stamping
          | process.
        
           | elashri wrote:
            | Yes, it's a common misconception that peer review involves
            | some sort of verification or replication. It doesn't.
            | 
            | I would partly blame the mainstream media for this, for the
            | way they report on research without emphasizing this
            | limitation. Mainstream media is also not interested in
            | reporting on incremental progress; it prefers catchy
            | headlines/findings.
        
           | njbooher wrote:
           | Peer review is more of an outlet for anonymous reviewers to
           | be petty jerks than a stamping process.
        
         | bluefirebrand wrote:
         | Because people use published results to justify all sorts of
         | government policy, business activity, social programs, and
         | such.
         | 
         | If we cannot trust that results of research are true, then how
         | can we justify using them to make any kind of decisions in
         | society?
         | 
         | "Believe the science", "Trust the experts" etc sort of falls
         | flat if this stuff is all based on shaky research
        
           | marcosdumay wrote:
           | > If we cannot trust that results of research are true, then
           | how can we justify using them to make any kind of decisions
           | in society?
           | 
           | Well, don't.
           | 
           | Make your decisions based on replicated results. Stop hyping
           | single studies.
        
             | XorNot wrote:
             | > Stop hyping single studies.
             | 
              | This right here, really. The reason people say "oh well,
              | science changes every week" is that the media writes the
              | headline "<Thing> shown to do <effect> in _brand new
              | study!_", includes a bunch of text implying it works
              | great... and then one or two sentences, out of context,
              | from the lead researcher behind it saying "yes, I think
              | this is a very interesting result".
              | 
              | They omit all the actually important details, like sample
              | sizes, demographics, and where the result sits in the
              | history of the field.
        
       | titanomachy wrote:
       | 2022
        
       | elashri wrote:
        | There is at least one thing wrong with this. It is an essay
        | built on simulation-based scenarios in medical research, which
        | then tries to generalize to "research" while glossing over how
        | narrow the support for that claim is. I think there is truth
        | here, and it should make us more cautious about deciding based
        | on single studies. But things are different in other fields.
       | 
        | Also, this is called research. You don't know the answer
        | beforehand. You have limitations in the technology and tools you
        | use. You might miss something, or lack access to information
        | that could change the outcome. That is why research is a
        | process. Unfortunately, popular science books talk only about
        | discoveries and results that are considered fact, and usually
        | say little about the history of how we got there. I would
        | suggest a great book called "How Experiments End" [1], which
        | goes into detail on how scientific consensus is built for many
        | experiments in different fields (mostly physics).
       | 
       | [1]
       | https://press.uchicago.edu/ucp/books/book/chicago/H/bo596942...
        
         | thatguysaguy wrote:
          | I think it's clear that this paper has stood the test of time
         | over the last 20 years. Our estimates of how much published
         | work fails to replicate or is outright fraudulent have only
         | increased since then.
        
           | tptacek wrote:
           | Outright research fraud is probably very rare; the cases
           | we've heard about stick out, but people outside of academia
           | usually don't have a good intuition for just how vast the
            | annual output of the sciences is. Remember the famous PhD
            | comic showing how your thesis is an infinitesimal fraction
            | of the work of your field.
        
       | skybrian wrote:
       | (2005). I wonder what's changed?
        
         | youainti wrote:
          | Over on PubPeer there has been some discussion of studies on
         | the topic.
         | 
         | https://pubpeer.com/publications/14B6D332F814462D2673B6E9EF9...
        
       | motohagiography wrote:
        | I wonder if science could benefit from publishing under
        | pseudonyms the way software has. If it's any good, people will
        | use it; reputations will be made by the quality of contributions
        | alone; it makes fraud expensive and mostly not worth it; etc.
        
         | wwweston wrote:
         | People have uses for conclusions that sometimes don't have
         | anything to do with their validity.
         | 
         | So while "if it's any good, people will use it" is true and
         | quality contributions will be useful, the converse is _not_
         | true: the use or reach of published work may be only tenuously
          | connected to whether it's good.
         | 
         | Reputation signals like credentials and authority have their
         | limits/noise, but bring some extra signal to the situation.
        
       | debacle wrote:
       | Most? Really?
        
       | ape4 wrote:
       | So is this paper false too? .. infinite recursion...
        
         | wccrawford wrote:
         | Most probably.
        
       | youainti wrote:
        | Please note the PubPeer comments discussing follow-up research,
        | which appears to show that about 15% of findings are wrong, not
        | the 5% anticipated.
       | 
       | https://pubpeer.com/publications/14B6D332F814462D2673B6E9EF9...
        
       | carabiner wrote:
        | This only applies to the life sciences and social sciences,
        | right? Or are most papers in computer science or mechanical
        | engineering also false?
        
         | thatguysaguy wrote:
         | It's very bad in CS as well. See e.g.:
         | https://arxiv.org/abs/1807.03341
         | 
          | IIRC there was also a paper analyzing how often results at
          | some NLP conference held up when different random seeds or
          | hyperparameters were used. It was quite depressing.
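          | 
          | The underlying check is easy to sketch. A toy version in
          | Python, where evaluate() is a hypothetical stand-in for a
          | full train-and-score run:
          | 
          |     import random
          |     import statistics
          | 
          |     def evaluate(seed: int) -> float:
          |         # Stand-in for "train and score a model with this
          |         # seed"; here it just simulates run-to-run noise
          |         # around 0.80 accuracy.
          |         rng = random.Random(seed)
          |         return 0.80 + rng.gauss(0, 0.02)
          | 
          |     scores = [evaluate(s) for s in range(10)]
          |     print(f"mean={statistics.mean(scores):.3f} "
          |           f"stdev={statistics.stdev(scores):.3f}")
          | 
          | If a paper's claimed improvement over the baseline is smaller
          | than that run-to-run standard deviation, a single-seed result
          | is weak evidence.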
        
       | withinboredom wrote:
        | I've implemented several things from computer science papers in
        | my career now, mostly related to database stuff. They are mostly
        | terribly wrong, or show the exact OPPOSITE of what they claim in
        | the paper. It's so frustrating. Occasionally they even offer the
        | code used to write the paper, and it is missing entire features
        | they claim are integral to it functioning properly, to the point
        | that I wonder how they even came up with the results they came
        | up with.
       | 
       | My favorite example was a huge paper that was almost entirely
       | mathematics-based. It wasn't until you implemented everything
       | that you would realize it just didn't even make any sense. Then,
       | when you read between the lines, you even saw their
       | acknowledgement of that fact in the conclusion. Clever dude.
       | 
        | Anyway, I have very little faith in academic papers, at least
        | when it comes to computer science. Of all the fields out there,
        | this one is just code. It isn't hard to write and verify what
        | you purport (it usually takes less than a week to write the
        | code), so I have no idea what the peer reviewers actually do. As
        | a peer in the industry, I would reject so many papers by this
        | point.
       | 
        | And don't even get me started on the (now professor) authors I
        | email with questions, to see if I just implemented it wrong or
        | whatever, who just never fucking reply.
        
         | reasonableklout wrote:
         | Wow, sounds awful. Help the rest of us out - what was the huge
         | paper that didn't work or was actively misleading?
        
           | withinboredom wrote:
            | I'd rather not, for obvious reasons. The less obvious reason
            | is that I don't remember the title/author of the paper. It
            | was back in 2016/17, when I was working on a temporal
            | database project at work and searching the literature for
            | temporal query syntax.
        
         | Lerc wrote:
          | For papers with code, I have seen a tendency to consider the
          | code, not the paper, to be the ground truth. If the code
          | works, then it doesn't matter what the paper says; the
          | information is there.
         | 
         | If the code doesn't work, it seems like a red flag.
         | 
         | It's not an advantage that can be applied to biology or
         | physics, but at least computer science catches a break here.
        
       | blackeyeblitzar wrote:
        | It's a matter of incentives. Everyone who wants a PhD has to
        | publish, and before that they need to produce findings that
        | align with the values of their professors. These bad incentives,
        | combined with rampant statistical errors, lead to bad findings.
        | We need to stop putting "studies" on a pedestal.
        
       | breck wrote:
        | On a livestream the other day, Stephen Wolfram said he stopped
        | publishing through academic journals in the 1980s because he
        | found it far more efficient to just put stuff online. (And his
        | blog is incredible: https://writings.stephenwolfram.com/all-by-
        | date/)
        | 
        | A genius who figured out that academic publishing had gone to
        | shit decades ahead of everyone else.
       | 
       | P.S. We built the future of academic publishing, and it's an
       | order of magnitude better than anything else out there.
        
         | jordigh wrote:
          | Genius? The one who came up with _A New Kind of Science_?
        
       | ants_everywhere wrote:
       | This is a classic and important paper in the field of
       | metascience. There are other great papers predating this one, but
       | this one is widely known.
       | 
       | Unfortunately the author John Ioannidis turned out to be a Covid
       | conspiracy theorist, which has significantly affected his
       | reputation as an impartial seeker of truth in publication.
        
         | kelipso wrote:
          | Ha, how meta this comment is: the obvious implication of the
          | title is "Why Most Published Research Findings on Covid Are
          | False", and that goes against the politics of science. If only
          | he had avoided the topic of Covid entirely, he would still be
          | well regarded.
        
       | giantg2 wrote:
       | This must be a satire piece.
       | 
        | It talks about things like power, reproducibility, etc., which
        | is fine. What it fails to examine is what "false" means. A
        | study's results may be valid for what was studied; future
        | studies may have new and different findings. You may have
        | studies that seem to conflict with each other due to differences
        | in definitions (e.g., what constitutes a "child": 12 or 24 years
        | old?) or the nuance in the perspective applied to the policies
        | they investigate (e.g., the aggregate vs. adjusted gender wage
        | gap; see the sketch below).
       | 
        | It's about how you use them: "Research _suggests_..." or "We
        | recommend further studies", etc. It's a tautology that if you
        | misapply them, they will be false a majority of the time.
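        | 
        | A toy illustration of that last point (the numbers are made up
        | for the example): the same payroll data yields a large
        | aggregate gap and a zero adjusted gap at the same time.
        | 
        |     # Two job categories as (men, women, salary); within
        |     # each job, men and women earn the same.
        |     jobs = [
        |         (90, 10, 100_000),  # engineering
        |         (10, 90, 60_000),   # support
        |     ]
        |     men_avg = (sum(m * s for m, _, s in jobs)
        |                / sum(m for m, _, _ in jobs))
        |     women_avg = (sum(w * s for _, w, s in jobs)
        |                  / sum(w for _, w, _ in jobs))
        |     # prints "aggregate gap: 33%"
        |     print(f"aggregate gap: {1 - women_avg / men_avg:.0%}")
        |     print("adjusted gap within each job: 0%")
        | 
        | Neither number is "false"; they answer different questions, and
        | the studies reporting them only conflict if you ignore the
        | definitions.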
        
       ___________________________________________________________________
       (page generated 2024-09-24 23:00 UTC)