[HN Gopher] Reversals in Psychology (2020)
       ___________________________________________________________________
        
       Reversals in Psychology (2020)
        
       Author : apsec112
       Score  : 24 points
       Date   : 2021-07-02 08:53 UTC (14 hours ago)
        
 (HTM) web link (www.gleech.org)
 (TXT) w3m dump (www.gleech.org)
        
       | arsome wrote:
       | Pretty sure I heard about 90% of these on NPR podcasts.
       | 
       | I should probably stop listening to those.
        
         | c3600608-467b wrote:
         | It's much worse than that:
         | https://journals.plos.org/plosone/article?id=10.1371/journal...
         | 
         | Any science you hear about on the news is by selection bias
         | more likely to be wrong than the science you never hear about.
         | In short: news is destroying our trust in science by picking
         | the worst examples of it for clicks.
        
       | stkdump wrote:
       | Is there a similar list for results with particularly strong
       | evidence? Studies that have been replicated multiple times?
        
         | derbOac wrote:
         | It's an interesting question. There are many such studies, but
         | the focus has been on the negative.
        
       | ghostbrainalpha wrote:
        | That there is not good evidence that screen time negatively
        | affects well-being was a surprise to me.
       | 
       | https://www.ox.ac.uk/news/2019-01-15-technology-use-explains...
        
         | JadeNB wrote:
          | > That there is not good evidence that screen time negatively
          | affects well-being was a surprise to me.
         | 
         | That shouldn't be taken as evidence of the opposite, though
         | (not that you said that!).
        
       | vajrabum wrote:
        | I notice that the page says that Daryl Bem's experiments on
        | precognition have no good evidence. I poked around a bit and
        | immediately found this 2016 meta-analysis on the topic.
       | 
       | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4706048/
       | 
       | Any thoughts on this? This paper says:
       | 
       | To encourage replications, all materials needed to conduct them
       | were made available on request. We here report a meta-analysis of
       | 90 experiments from 33 laboratories in 14 countries which yielded
        | an overall effect greater than 6 sigma, z = 6.40, p = 1.2 x 10^-10,
        | with an effect size (Hedges' g) of 0.09. A Bayesian analysis
        | yielded a Bayes Factor of 5.1 x 10^9, greatly exceeding the
       | criterion value of 100 for "decisive evidence" in support of the
       | experimental hypothesis. When DJB's original experiments are
       | excluded, the combined effect size for replications by
        | independent investigators is 0.06, z = 4.16, p = 1.1 x 10^-5, and
       | the BF value is 3,853, again exceeding the criterion for
       | "decisive evidence." The number of potentially unretrieved
       | experiments required to reduce the overall effect size of the
       | complete database to a trivial value of 0.01 is 544, and seven of
       | eight additional statistical tests support the conclusion that
       | the database is not significantly compromised by either selection
        | bias or by intense "p-hacking"--the selective suppression of
       | findings or analyses that failed to yield statistical
       | significance. P-curve analysis, a recently introduced statistical
       | technique, estimates the true effect size of the experiments to
       | be 0.20 for the complete database and 0.24 for the independent
       | replications, virtually identical to the effect size of DJB's
       | original experiments (0.22) and the closely related
       | "presentiment" experiments (0.21). We discuss the controversial
       | status of precognition and other anomalous effects collectively
       | known as psi.
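
An aside on the quoted numbers: the abstract's z-scores can be sanity-checked against a standard normal tail. This is a minimal sketch, not from the thread; the paper's exact test is not specified here, so only the order of magnitude should match the quoted p-values.

```python
# Sanity check of the quoted statistics: convert the abstract's z-scores
# to one-tailed p-values under a standard normal distribution.
from statistics import NormalDist

def z_to_p(z: float) -> float:
    """One-tailed p-value for a standard-normal z-score."""
    return NormalDist().cdf(-z)

print(z_to_p(6.40))  # ~8e-11, same ballpark as the quoted p = 1.2 x 10^-10
print(z_to_p(4.16))  # ~1.6e-5, same ballpark as the quoted p = 1.1 x 10^-5
```

The quoted figures are within a small factor of the one-tailed normal tail, so the z-to-p conversions are at least internally consistent.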
        
       | newsbinator wrote:
       | In summary: every single thing I learned in Psych 101 was "no
       | good evidence"
        
       | tus89 wrote:
       | Lesson - just because people claim they are doing "science"
       | doesn't mean they are doing either good or correct science.
       | Subtle unchallenged assumptions, cause-and-effect confusion,
       | inadequate root-cause analysis, invalid inferential logic and
       | questionable proof-by-statistics.
       | 
       | Maybe a better question - should psychology even be considered
       | scientific or a science in the first place?
        
         | undreren wrote:
         | Sloppy science is done in every field, natural sciences
         | included. The article even _leads_ with the statement that
          | psychology experiences more reversals due to being exceptionally
         | open in terms of sharing code and data compared to other social
         | sciences.
         | 
          | Reversals through replication failure _are_ scientific progress.
         | Your "better" question is nothing of the sort, just lazy
         | contrarianism.
        
           | RcouF1uZ4gsC wrote:
            | > Reversals through replication failure _are_ scientific
            | progress.
           | 
            | Not really. If a flat earther comes and says that the earth
            | is flat, and you take him up in an airplane and show him the
            | curvature of the earth, he can't claim that flat-earth study
            | is science because "reversals through replication failure are
            | scientific progress."
           | 
           | There is a difference between making real scientific
           | contributions through careful research, trial design, and
           | statistical analysis, and spouting BS.
           | 
            | After a bunch of high-profile reversals, psychology is
            | looking more like the BS-spouting group, which then justifies
            | itself by saying that clearing up BS is scientific
            | advancement.
           | 
           | If we want society to be able to actually use science to make
           | decisions, then we have to be careful to differentiate
           | science from BS.
        
             | derbOac wrote:
             | https://www.physicsforums.com/threads/shinichi-mochizukis-
             | ab...
        
         | derbOac wrote:
         | Killing the messenger, really. This stuff has been documented
         | in all sorts of fields, including pharmacology, neuroscience,
          | oncology, you name it. I think there have been posts here on
          | HN lately about replicability problems with AI research.
         | 
         | Psychology is fuzzy because of the subject matter, but
         | (appropriately I think) it's also better at turning the
         | microscope on itself (meta-analysis really has its origins in
         | psychology).
         | 
         | Pretending this doesn't happen elsewhere is dangerous. Maybe
         | the rates vary from field to field, but psychology isn't alone.
         | If you applied that standard to every field there would be
         | almost nothing left except maybe physics and some other closely
         | related fields.
        
         | [deleted]
        
         | BurningFrog wrote:
          | Even if people have impressive titles at institutions and
          | landmark discoveries to their name, that doesn't mean they are
          | doing either good or correct science.
        
         | BurningFrog wrote:
         | > should psychology even be considered scientific or a science
         | in the first place?
         | 
          | You _can_ do good scientific research in that field. But it's
          | much harder than in other fields.
        
         | throwawaysea wrote:
         | Everything you say is true, probably for all fields. There is
         | undeserved faith in peer review processes
         | (https://www.vox.com/2015/12/7/9865086/peer-review-science-
         | pr...), inadequate understanding of statistics (or maybe
         | purposefully poor application of statistics), poor incentive
         | structures, and other issues that corrupt "science". This is
         | why I always cringe a little when someone says "trust the
         | science".
         | 
          | Yet at the same time, I think psychology is especially
          | susceptible to these problems, because it is a fuzzier
          | "social science" (more like sociology, less like physics). It
          | suffers immense bias both from who selects themselves into the
          | field and from how they study it. Psychology researchers
         | typically conduct non-generalizable experiments with immense
         | sampling bias (https://www.psychologytoday.com/us/blog/non-
          | weird-science/20...). Then there are replication problems
          | (https://www.theatlantic.com/science/archive/2018/11/psycholo...).
          | They're also not making any progress in addressing these
          | issues, because of meta-level cultural problems with
          | acknowledging problems in the field
          | (https://www.wired.com/2016/03/psychology-crisis-whether-cris...).
        
       ___________________________________________________________________
       (page generated 2021-07-02 23:00 UTC)