[HN Gopher] Defence against scientific fraud: a proposal for a new MSc course
___________________________________________________________________
Defence against scientific fraud: a proposal for a new MSc course
Author : vo2maxer
Score : 13 points
Date : 2023-11-19 18:36 UTC (4 hours ago)
(HTM) web link (deevybee.blogspot.com)
(TXT) w3m dump (deevybee.blogspot.com)
| userinanother wrote:
| Anyone dismissing fraud hasn't tried to replicate academic
| results to build products. Those people know exactly how bad the
| fraud truly is.
| yashap wrote:
| I have a somewhat inside view on this - before I became a
| software engineer, I got an MSc in a mostly unrelated field, and
| am a co-author on work published in a few (very middling)
| peer-reviewed journals.
|
| My gut feel from what I saw is that outright fraud is rare, but
| bias seeping into the study is common. Researchers invest a huge
| amount of effort into a study, they really want the hypothesis to
| be true. A lot of studies have steps that are pretty susceptible
| to bias, like maybe you're classifying results and there's a bit
| of a judgement call there. It's human nature to make biased
| decisions in these situations, and make enough of these biased
| decisions, and statistically insignificant results become
| statistically significant.
|
| Some people might not see a difference between bias and fraud,
| but I personally do. I think of fraud as a very intentional
| deception, straight up falsifying numbers in a conscious attempt
| to deceive. While I see bias as more, you've got a borderline
| case, and you view it in the light you want to see it in, even
| somewhat unconsciously. Like the difference between unconscious
| racial bias, and overt hateful racism.
|
| I think the best approach to combatting this is to spend a lot
| less of the overall $$ in science on novel research, and a lot
| more on attempting to independently reproduce results. Papers
| should be seen as meaningless until their results can be
| independently reproduced, and universities/colleges should reward
| reproduction studies as much as novel research. It's kind of
| crazy that the system almost completely lacks these checks and
| balances right now - peer review is more like an editor, it's
| just a very different thing than reproduction.
|
| Bishop here is suggesting a data sleuthing approach to root out
| fraudsters, but I dunno if that'd be effective, as I think the
| main issue is subtle but pervasive bias seeping into studies by
| most researchers, vs. a smaller number of heavy fraudsters.
| Independent reproduction of results, while expensive, is the only
| effective approach I can think of to combat this.
| yodsanklai wrote:
| > My gut feel from what I saw is that outright fraud is rare,
| but bias seeping into the study is common
|
| There's also a grey area between fraud and not checking your
| work as thoroughly as you should.
|
| For instance, I've witnessed a highly regarded researcher (a
| Turing award winner) telling his co-author not to bother with
| some proof because nobody would read it.
| yashap wrote:
| Yeah totally, good example of the somewhat subtle ways bias
| seeps in.
| AussieWog93 wrote:
| >My gut feel from what I saw is that outright fraud is rare,
| but bias seeping into the study is common.
|
| I have a similar experience and opinion (started a PhD, quit
| when I lost faith in the institution).
|
| In our field in particular, it seemed to be an open secret that
| our current paradigm was a dead end, and this had been apparent
| for almost a decade by the time I joined the university.
|
| Yet, in spite of the fact that we were going nowhere, there
| was extreme pressure to continue to toe the line. You couldn't
| just try something new based on a hunch; instead you had to
| perform fruitless experiments that were "guided by the
| literature" and therefore easier to justify to the people
| funding your endeavour.
|
| The whole institution of academia is rotten, and going on a
| witch hunt like the article suggests ignores several massive
| elephants in the room.
| tppiotrowski wrote:
| > Some people might not see a difference between bias and
| fraud, but I personally do. I think of fraud as a very
| intentional deception, straight up falsifying numbers in a
| conscious attempt to deceive.
|
| Let's say you do three runs of the same psychology experiment
| and only publish the results from the run where they are most
| significant. I think most researchers would not consider this
| fraud: you are publishing data that's absolutely true. But it's
| this selective cherry-picking of data in order to produce novel
| results that's rotting the field.
|
| This is how I think researchers can sleep soundly at night while
| faith in science continues to erode.
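|
| A back-of-the-envelope simulation (mine, not from the article)
| shows how much that selection buys: under the null hypothesis
| p-values are uniform on [0, 1], so publishing the best of three
| runs inflates the false-positive rate from 5% to
| 1 - 0.95^3 ≈ 14%:

```python
import random

def cherry_pick_rate(runs_per_study=3, n_studies=100_000,
                     alpha=0.05, seed=0):
    """Fraction of null-effect studies that look 'significant' when a
    lab runs the experiment runs_per_study times and publishes only
    the smallest p-value. Under the null hypothesis each run's
    p-value is uniform on [0, 1]."""
    rng = random.Random(seed)
    hits = sum(
        min(rng.random() for _ in range(runs_per_study)) < alpha
        for _ in range(n_studies)
    )
    return hits / n_studies

print(cherry_pick_rate(runs_per_study=1))  # honest single run: ~0.05
print(cherry_pick_rate(runs_per_study=3))  # best of three:    ~0.14
```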
| Ar-Curunir wrote:
| Erosion of faith in science (whatever that means) is not due
| to research fraud lol.
| yashap wrote:
| That I would consider fraud, personally. I'm referring to
| more subtle bias than that - like maybe you're putting
| someone through a scenario, then trying to classify their
| emotional state afterwards. You classify the clear states
| properly, but are biased in classifying the borderline ones -
| like it's not very clear if they're sad or anxious, but if
| anxious fits your hypothesis better, you tend to classify
| these unclear states as anxious.
|
| Maybe that's not a good example, I'm not a psychologist. But
| in my experience, scientific studies are full of a surprising
| number of judgement calls, in all sorts of areas
| (experimental design, experimental execution, result
| classification, statistical analysis, exclusion of outliers,
| etc.), and it's naive to think bias doesn't seep into all of
| these, even for people who are quite committed to the ideals
| of science. I can't think of a good way to combat this except
| very extensive independent reproduction of results.
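|
| That kind of judgement-call bias can be sketched in a few lines
| (all numbers hypothetical): there is no real effect, yet nudging
| only the borderline classifications toward the hypothesis shifts
| the measured rate:

```python
import random

def observed_anxiety_rate(n=10_000, borderline_frac=0.3,
                          lean=0.7, seed=1):
    """No true effect: every subject is 'anxious' with probability
    0.5. Clear-cut cases are scored honestly, but borderline cases
    are scored 'anxious' with probability `lean` instead of 0.5 -- a
    small judgement-call bias toward the hypothesis."""
    rng = random.Random(seed)
    anxious = 0
    for _ in range(n):
        if rng.random() < borderline_frac:
            anxious += rng.random() < lean   # biased judgement call
        else:
            anxious += rng.random() < 0.5    # clear case, honest
    return anxious / n

print(observed_anxiety_rate(lean=0.5))  # unbiased rater: ~0.50
print(observed_anxiety_rate(lean=0.7))  # biased rater:   ~0.56
```

| Each individual call is defensible, but in aggregate they
| manufacture a six-point effect out of nothing.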
| lucubratory wrote:
| The situations you're describing should be prevented by a
| double-blind trial - they're exactly what double blinding is
| meant to prevent.
| thegrim33 wrote:
| Interestingly, this correlates with fraud in journalism as
| well. One common technique: of the set of events that could be
| written about, an outlet chooses to cover only the events that
| positively promote its ideology, while the events that might
| negatively impact that ideology are ignored and never shared
| with the world. It's one of the ways a news organization can
| "lie" without ever actually telling a lie: by painting a
| picture of the world using selectively chosen data based on a
| bias/agenda, rather than a picture based on all available data.
| hyperthesis wrote:
| Feynman has a bit where empirical measurements of some
| universal constant (the electron charge, from Millikan's
| oil-drop experiment) slowly crept up to the now-accepted true
| value. He said scientists are a little bit ashamed of this,
| because it showed that higher estimates were discarded and not
| published - bias.
|
| BTW scientists get no upvotes for merely reproducing "known"
| results.
|
| Is there a way to find _natural_ merit/discovery in reproducing
| results (not an artificial incentive, like grants for repeating
| studies)? Perhaps analogous to how teaching something helps you
| understand it better?
| everybodyknows wrote:
| > those who have committed fraud can rise to positions of
| influence and eminence ...
|
| > ... sideline any honest young scientists who want to do things
| properly. I fear in some institutions this has already happened.
|
| She has some names in mind!
| greenyoda wrote:
| The recent resignation of Stanford's president comes to mind:
|
| https://www.npr.org/2023/07/19/1188828810/stanford-universit...
|
| > _The president of Stanford University has resigned after an
| investigation opened by the board of trustees found several
| academic reports he authored contained manipulated data._
|
| > _Marc Tessier-Lavigne, who has spent seven years as
| president, authored 12 reports that contained falsified
| information, including lab panels that had been stitched
| together, panel backgrounds that were digitally altered and
| blot results taken from other research papers._
| yawnxyz wrote:
| This article puts the onus on publishers / the scientific
| publication process to catch and prevent fraud.
|
| This is like asking merchants to catch and stop fraud and crime
| (e.g. selling alcohol to kids); it's in their best interest to
| not catch fraud and maximize their income.
|
| The reason they do clamp down on underage drinking is that
| they'll get fined or arrested, or their license will be
| revoked. The system cares enough to catch and punish the
| behavior.
|
| If the funding bodies like the NIH don't catch, punish, and
| stop this behavior, it creates a system where fraudsters win
| more. That pushes more groups, however reluctant, to
| participate in fraud, because that's the only way to compete.
| It's a race to the bottom.
|
| Money drives incentives, and clawing it back while blacklisting
| and publicly humiliating a lab will change behavior. That would
| make the lab a toxic collaborator, especially if collaborators
| ALSO get blacklisted from funding.
| atrettel wrote:
| What changes to the incentive structure do you think would help
| here? Blacklisting seems appropriate for repeat offenders but
| I'd be more interested in changing the incentives for funding
| to prevent fraud in the first place. The problem with
| blacklisting in my view is that many fraudsters who get away
| with it will just continue anyway since they do not "learn
| their lesson" until long after the damage is already done.
| ta988 wrote:
| NIH is using volunteers to review grants and some programs. So
| same problem applies. I've even seen reviewers stealing the
| work from grants they reviewed (and noted really badly so it
| wouldn't get funded) and that same reviewer did it multiple
| times. Nobody cared at NIH, the program officers said they
| would look into it and many years later still nothing happened.
| There is nothing really in place to solve blatant abuse.
| atrettel wrote:
| I can sympathize with the idea of a "scientific police force"
| looking for fraud, but the fact is that we already attempt the
| appearance of that through peer review.
|
| The problem is that peer review right now only provides the
| appearance of oversight. Referees are not given the time, money,
| or resources to really dig into papers and look for issues. I
| have peer reviewed 4 papers this year, averaging around 3 hours
| per review. The onus is largely on the
| referees/editors/publishers to immediately find something wrong
| in the paper to reject it rather than on the authors to really
| prove their case. And I mean "immediately". One journal that I
| review for requires peer reviews within 2 weeks of accepting the
| assignment, and they will nag you if you do not have it done
| within a week. If the onus must be on the referees, they need to
| be given much more time and resources - possibly including
| money, and possibly even waiting for reproduction of the
| results before publishing.
|
| "Publish or perish" also plays a big role here, because
| scientists need to get papers published to prove they are
| productive and worthy of funding and employment. Authors will
| just re-submit their manuscripts to different journals until
| they are published, rather than re-evaluating the research in
| any fundamental way (is my methodology flawed? etc.). So
| putting the onus on referees isn't going to work in the first
| place, since peer reviews are not shared between journals.
| vlovich123 wrote:
| Peer review's fundamental flaw is that science only works
| through replication and peer review tries to sidestep that as a
| cost saving measure. It doesn't mean there's no value - having
| peers review your work can be valuable. But as any professional
| coder can tell you, a code review is an extremely low quality
| signal that can only catch obvious bugs, typos, and
| stylistic/basic rule conformance issues. It can't explain why
| you made various design decisions and whether those were any
| good to begin with. I think its current role as gatekeeper is
| probably more harmful than helpful overall, as it lends papers
| a legitimacy it has no right to (i.e. "this paper is right"),
| but I don't have a better suggestion other than independent
| replication. Maybe more separation between who designs,
| conducts, and analyses experiments - with the analysis people
| rewarded if there's no result and the design/experiment people
| rewarded if there is - to set up a feedback loop?
| dang wrote:
| Related:
| https://statmodeling.stat.columbia.edu/2023/11/19/dorothy-bi...
|
| (via https://news.ycombinator.com/item?id=38336432, but we merged
| that thread hither)
| gunshai wrote:
| > To date, the response of the scientific establishment has been
| wholly inadequate. There is little attempt to proactively check
| for fraud: science is still regarded as a gentlemanly pursuit
|
| There is no incentive mechanism; that has been the problem all
| along. The peer review process is clearly inadequate, or at
| least not up to the task in its current form.
___________________________________________________________________
(page generated 2023-11-19 23:00 UTC)