[HN Gopher] There is a worrying amount of fraud in medical research
___________________________________________________________________
There is a worrying amount of fraud in medical research
Author : martincmartin
Score : 100 points
Date : 2023-02-23 14:57 UTC (8 hours ago)
(HTM) web link (www.economist.com)
(TXT) w3m dump (www.economist.com)
| pella wrote:
| https://archive.md/KsZJL
| i-use-nixos-btw wrote:
| I recall reading - somewhere - about a similar problem in
| psychology journals. The problem there is worse, in terms of
| correctness, because the journals don't publish negative results,
| _including_ when the negative results disprove a previously
| published paper.
|
| In medical journals, though, the problem is worse because it is
| more likely to kill someone.
| hn_version_0023 wrote:
| > In medical journals, though, the problem is worse because it
| is more likely to kill someone.
|
| Right, with psychology, you're more likely to _kill yourself_
| PaulKeeble wrote:
| A lot of the medical-paper fraud is in psychology papers. They
| are notorious for badly set up studies with intentionally
| leading questionnaires for data. No amount of peer review seems
| to improve them, and a journal somewhere will always publish
| them.
|
| The volume is quite staggering as well: ~40 psychologists are
| responsible for an enormous number of papers, all with the same
| methodology problems, claiming psychology can cure everything.
| Apparently every disease has psychological causes. In practice
| no one is getting better from their proposed and trialled
| treatments, and patients repeatedly complain about it, but they
| are everywhere in Europe and the vast majority of doctors
| believe in these specialists, even though the papers are
| universally reviewed as very low quality, even when they aren't
| faking data or manipulating the stats to produce an outcome.
| quickthrowman wrote:
| > A lot of the medical-paper fraud is in psychology papers.
|
| Psychology is not medicine, it's an academic discipline.
| Psychiatry is the branch of medicine that deals with mental
| health.
| nradov wrote:
| There is a reproducibility (replication) crisis in psychology.
| Much of what psychologists accepted as settled for years has
| turned out to be bunk. While there are some researchers doing
| excellent work, much of the field remains no more scientific
| than phrenology.
|
| https://www.theatlantic.com/science/archive/2018/11/psycholo...
| etidorhpa wrote:
| No surprise here. Just put together a bunch of data and make up
| the statistics. No one's the wiser. "Now hurry and let me sell
| Oxycontin to children" -FDA
| m00x wrote:
| This is highly misleading.
|
| The FDA requires a lot more than a few medical papers. They
| have their own guidelines for studies, and you have to follow
| them to the letter. They also have investigators present to
| make sure data isn't mislabeled or incorrect.
|
| The Oxycontin issue wasn't that the medicine wasn't safe; it's
| that it was deemed safe by the FDA for its intended use, and
| then pushed to doctors to go beyond that intended use.
|
| The FDA was too late to respond to this, but it was undetected
| by the majority of the medical field because people went to the
| streets for re-ups, and doctors didn't want to admit fault.
| This is mostly due to improper regulation in how drug companies
| are allowed to interact with doctors, and the lack of
| healthcare resources.
| apienx wrote:
| > "Going by these numbers, roughly one in 1,000 papers gets
| retracted [..] that something more like one in 50 papers has
| results which are unreliable because of fabrication, plagiarism
| or serious errors."
|
| I'd say these are underestimates. Let me add that 80%+ of papers
| are useless. The only "value" they provide is to the person
| getting academically promoted and/or building their publishing
| portfolio/cred.
| ska wrote:
| > The only "value" they provide is..
|
| As far as I can see this is mostly an incentives problem.
| Bureaucratic control of academic hiring has ended up
| emphasizing the short-term measurable (e.g. number of pages
| published, so-called impact factors, etc.) over the long term,
| with pretty predictable results.
|
| Medical research in particular is fraught with another set of
| problems: the default clinical pathway gives a weak-at-best
| grounding in science, and even the MD/PhD programs have been
| gamed to some degree. There are definite counterexamples
| (lots!) but there are also a lot of clinicians with incentive
| to produce research but little skill in it and even less time
| available...
| opportune wrote:
| I think zooming further out, the incentive for academia is to
| churn out degrees + get grants. Getting a PhD is supposed to
| require doing something that nobody has done before. Most
| people getting PhDs/in academia are not actually good enough
| to do this (it's really hard! There's not a ton of low
| hanging fruit, and you're "competing" with many others),
| which is why we end up with tons of garbage papers nobody
| will ever care about.
|
| Medical research may be performed by MDs but the incentives
| are still basically the same: papers are resume builders. It
| looks better when applying to a fellowship/job to have a nice
| publication history. Obviously the best case is to have
| worked on some really groundbreaking stuff - still really
| hard - but the next best case is to have a ton of meh
| publications, since that beats having a few publications, or
| no publications.
|
| In medical programs the grant thing is a lot bigger too
| because there is, rightfully, tons of money to throw to that
| area. You need grants as an academic to progress. You won't
| keep getting grants if you take grants and then don't publish
| anything, so even if you have nothing good come out of it,
| you need to publish something. That incentivizes fraud in the
| worst case and noise in the best. A better outcome would be if
| academia were more open to accepting null results.
| xpe wrote:
| I wish it were enough for a paper to be well written (honest,
| clear, appropriate for the audience). Sure, I want scientists
| to "have a nose" for finding interesting results, but I don't
| like the framing that a solid piece of work is any less
| valuable because it happened not to demonstrate a useful
| result.
|
| I'll add that I would like to see a LOT more work that
| synthesizes and analyzes other work; i.e. literature
| reviews and meta-analyses.
| opportune wrote:
| That's what most of those papers I'm calling garbage are:
| they're not necessarily wrong, but they're not interesting. A
| meta-analysis is cheap and easy for even an undergrad to do,
| and doesn't require any special insight or foresight.
|
| I think the bigger problem is we have too many people
| trying to chase after the highest tier of academic
| achievement relative to what that tier "should" be or was
| in the past. It's benchmarked on novelty, but most people
| doing research are never going to produce any worthwhile
| novel results - in some cases it's just bad luck but in
| most I think it is just lack of aptitude.
|
| Research is not supposed to just be a resume checkmark,
| and a PhD isn't just supposed to be some structured
| degree program where you can get "on rails" and churn out
| papers to get a degree proving you have above-average
| intelligence. But that's what it is, and it generates
| tons of noise, while cheapening the value of a PhD.
|
| If I were emperor of the world I'd split research into
| two tiers where one is more focused on basic science:
| investigative studies, verifying results, writing papers
| with solid structure, applying stats. This would be what
| most people get. And then a second tier of research for
| the wickedly skilled researchers who are producing novel
| results and really moving the field forward. Right now
| that second group, in most science disciplines, is who
| progresses in academia anyway.
| [deleted]
| nradov wrote:
| Some of that 80% of papers are also valuable to politicians and
| bureaucrats pushing biased narratives on the public. No matter
| what position they want to take they can cherry pick some low-
| quality research to justify it with a veneer of "science".
| parton wrote:
| If you were to ask experts in a given subfield which papers are
| reliable, I'm sure they would be able to tell you. The problem is
| that there's no process in science for expert consensus to
| make it out to doctors and laypeople.
|
| People assume that peer review means a paper is good, which
| couldn't be farther from the truth. Science journalists aren't
| any better, they care more about hype than consensus. Honestly,
| it's dangerous to give a random peer reviewed article to someone
| who doesn't have broad knowledge of the field.
|
| Maybe we need middle-ground journals that publish review articles
| at the level of a Scientific American reader?
| patientplatypus wrote:
| The major journals have absolutely no accountability. In any
| other market, if the product doesn't work or harms someone the
| company goes out of business or the maker is sued. Not so in
| journals. So, why do we accept it? Because there's no other way
| for the layman to determine what makes a good professor, because
| by definition, they are smarter than us (or at least they're
| supposed to be), and so we (the general public) are not able to
| tell if they are good at what they do or not.
|
| So - the answer we have is peer review, which is just the foxes
| guarding the hen house. There's no other solution that's been
| proposed that makes any sense in a self reinforcing market
| manner. Having some post-docs suddenly become concerned about
| this and hire a bunch of undergraduates to start using to comb
| excel with spreadsheets will be useful until everyone loses
| interest. The price of a can of Coca-Cola isn't useful until
| people lose interest - it's market priced by millions of
| customers at every minute of every day.
|
| Until there's a solution to this problem that makes sense this
| will keep happening over and over again.
| sjkoelle wrote:
| could start by paying journal reviewers
| jrumbut wrote:
| This is an underrated idea. Putting a very smart and
| motivated person on the other side of the problem is better
| than any static set of incentives that can be gamed.
| stocknoob wrote:
| Prediction markets may be an option:
| https://www.pnas.org/doi/10.1073/pnas.1516179112
|
| Similar to how charities (ostensibly) can be rated by Charity
| Navigator, and colleges (ostensibly) can be rated by US News,
| the credibility of various studies (and the journals that
| publish them) can be measured.
| ubj wrote:
| Retraction Watch [1] is a great source of additional examples of
| unethical behavior in scientific research.
|
| [1]: https://retractionwatch.com/
| corbulo wrote:
| P-hacking more generally has been an issue for some time.
|
| It's difficult to trust almost any study, even if you find
| parts of it to be reliable.
|
| Take one or a few stats courses to find out how easy it is to
| smudge data with no one being the wiser. It's a real problem.
| krona wrote:
| For the reasons you say, I regard papers that report p-values
| without effect sizes to be at most interesting, but probably
| irrelevant for making actual real-life decisions.
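The multiple-comparisons trap behind p-hacking is easy to demonstrate: run enough noise-only comparisons and some will look "significant" by chance. A minimal sketch (hypothetical, not from the thread; it uses a rough two-standard-error cut-off rather than a proper t-test):

```python
# Sketch: simulate "p-hacking" by running many comparisons on pure
# noise and counting how many look "significant" anyway.
import random
import statistics

random.seed(0)

def looks_significant(a, b):
    # Rough Welch-style check: flag the comparison if the difference
    # in means exceeds ~1.96 standard errors (roughly p < 0.05).
    se = (statistics.variance(a) / len(a)
          + statistics.variance(b) / len(b)) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) > 1.96 * se

trials = 200
hits = 0
for _ in range(trials):
    # Both samples come from the same distribution: no real effect.
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    if looks_significant(a, b):
        hits += 1

print(f"{hits}/{trials} noise-only comparisons looked 'significant'")
```

With no real effect anywhere, roughly 5% of the comparisons still cross the threshold; run enough of them and report only the "hits" and you have a publishable-looking result, which is why effect sizes and pre-registration matter.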
| PicassoCTs wrote:
| I'm advocating for "you keep what you kill" rules in science.
|
| If you disprove a paper or prove a study cannot be replicated,
| you get the funds of the scientist, subtracted from his/her
| current funding. Make bad science fund good science and make
| de-replication a for-profit endeavor. There can be all the
| funding in the world for quack science, but if it can be
| debunked and is debunked, it will finance real science.
| bmacho wrote:
| Who would you trust? People who have been doing it their
| whole lives but have an interest in lying to you, or a
| correct-sounding argument and the wisdom of billions of
| people over millennia?
| uoaei wrote:
| Wisdom like what? "Sky daddy solves everything"?
|
| For the record, there's lots we can learn from ancient and
| indigenous knowledge, but let's not pretend that everything
| would be better if we cast off the whole lot of modern society.
| pedalpete wrote:
| I feel this is an area where AI could be helpful in flagging
| suspected fraudulent, or simply poorly conducted, research.
| [deleted]
___________________________________________________________________
(page generated 2023-02-23 23:00 UTC)