[HN Gopher] How many medical studies are faked or flawed?
___________________________________________________________________
How many medical studies are faked or flawed?
Author : PaulHoule
Score : 190 points
Date : 2023-09-19 16:31 UTC (6 hours ago)
(HTM) web link (web.archive.org)
(TXT) w3m dump (web.archive.org)
| ponderings wrote:
| Isn't this an issue with the reviews rather than the publication
| attempts? Wouldn't the normal (if there is such a thing) approach
| between people be to credit a reviewer with the quality of their
| reviews or lack thereof? Basically, who signed off on it?
| tcmart14 wrote:
| That is my understanding. Papers can be wrong for different
| reasons, some nefarious, some honest mistakes. I wouldn't say the
| problem is with the paper itself. It's more that people cite it
| without real peer review or replication. And also
| peer review being like a stereotypical "LGTM" approval on a
| pull request.
| PeterStuer wrote:
| Tldr, most.
| bjourne wrote:
| We all know it is super-easy to cheat in science, but not much
| can be done about it, short of requiring replication for every
| result published, which isn't feasible if the study was costly to
| produce. And the problem isn't confined to medicine either. How
| many studies in hpc of the type "we setup this benchmark and our
| novel algorithm/implementation won!" aren't also faked or flawed?
| gmd63 wrote:
| Plenty can be done about it. We can start by increasing the
| standards for experimental proof past words on a piece of
| paper.
|
| Stream or record your experiment. If a 12 year old has the
| ability to stream his gaming life on twitch, scientists should
| be able to record what they are doing in more detail.
|
| You could start a new journal with a higher level of prestige
| that only publishes experiments that adhere to more modern
| methods of proof.
| gustavus wrote:
| This seems overly simplistic, I mean part of the reason so
| many studies are so difficult is that they can happen over
| the period of months or years. This isn't like your High
| School chemistry experiment. These studies can be massive
| longitudinal studies that take years to conclude and work on.
|
| If the answers were easy, I'm sure someone would've
| implemented them already; it turns out those kinds of things are
| hard.
|
| I'd recommend checking out the following article.
| https://slatestarcodex.com/2014/04/28/the-control-group-
| is-o...
| gmd63 wrote:
| I'm interested more in why it specifically would be hard to
| just show people what you are doing (or did) instead of
| telling them a story at the end of months or years.
|
| Critical details and oversights can be lost in translation
| between reality and LaTeX that could be easily pointed out
| by third party observers.
| MockObject wrote:
| Maybe experiments can take thousands of hours.
| sfink wrote:
| > We can start by increasing the standards for experimental
| proof past words on a piece of paper.
|
| That is overly cynical. We are already well past that
| standard.
|
| We require words on a piece of paper that were written by
| somebody from a recognizable institution, or by an AI
| operated by somebody from a recognizable institution.
| noslenwerdna wrote:
| Require experimental analysis to be pre-registered? That is how
| it's done in particle physics, and it works well.
| michaelrpeskin wrote:
| The "setup this benchmark" happens in medicine all of the time,
| and it's super insidious and no one sees it. It's hard to tease
| out.
|
| For example, say you have a new (patentable) drug that you're
| trying to get approved and replace an old (generic, cheap)
| drug. You need to prove that the new drug is at least as safe
| as the old one and/or more effective than the old one. Which
| sounds reasonable.
|
| Let's say that the old drug is super safe and that the new one
| may not be. What you can do is set up your RCT so that you
| surreptitiously underdose the control arm so the safe drug
| looks less effective. And then you can say that your new drug
| is more effective than the old one.
|
| Peer review doesn't notice this because you can easily hide it
| in the narrative of the methods section. I've seen it a couple
| of times.
|
| So you can easily have "gold standard RCTs" that get the result
| you want just by subtle changes in the study design.
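The underdosing trick described above can be illustrated with a toy dose-response curve. This is purely a sketch: the Emax-style function, the doses, and the `ed50`/`emax` parameters are all invented for illustration, not taken from any real trial.

```python
# Toy Emax-style dose-response curve; ed50/emax values are invented
# purely to illustrate how an underdosed control arm flatters a new drug.
def response_rate(dose_mg: float, ed50: float = 10.0, emax: float = 0.8) -> float:
    """Fraction of patients responding at a given dose (hypothetical model)."""
    return emax * dose_mg / (dose_mg + ed50)

proper_control = response_rate(20.0)     # old drug at its recommended dose
underdosed_control = response_rate(8.0)  # old drug, surreptitiously underdosed
new_drug = response_rate(15.0)           # hypothetical new drug

# Against the underdosed arm the new drug "wins"; against a properly
# dosed arm it would lose.
print(new_drug > underdosed_control)  # True
print(new_drug > proper_control)      # False
```

The comparison itself is honest arithmetic; the deception is entirely in the choice of the control dose, which is exactly why it can hide in a methods section.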
| soperj wrote:
| If the study is costly to produce, then it's even more
| important to replicate it; otherwise you'd be wasting a large
| amount of money on a study with no real sense of whether it is
| flawed or not.
| mcmoor wrote:
| I've read that the real reason research generated by war
| crimes is worthless is that we can't replicate it. Using a
| result whose validity can't be checked is a road to
| disaster. And this is research with the highest cost (human
| lives), and we still can't use it.
|
| Lots of "interesting" psychology experiments in the days of
| yore turn out to have lots of damaging confounding variables
| and we can't just redo the experiment because, well, we
| shouldn't.
| MockObject wrote:
| When I first grasped how many fake papers are floating around, I
| gnashed my teeth and called for heinous penalties upon the
| fraudsters.
|
| After a moment, I thought better, and decided that we should
| actually offer incentive for fraudulent papers! Liars will always
| have incentive to lie. When their lies are accepted by those who
| ought to be more skeptical, it's an indictment of the system,
| which should be more robust in detecting fraud.
| dukeofdoom wrote:
| The few times I visited a hospital I met people there with
| complications from previous drugs. One with liver failure. Thanks
| to that experience no covid shot for me, plus I actually read the
| Pfizer study the judge forced them to release. I now view drug
| companies on the level of Cartels in Mexico with politicians in
| their pockets.
| alwayslikethis wrote:
| > the Pfizer study the judge forced them to release
|
| Can you send a link to it?
| dukeofdoom wrote:
| Google made it impossible to find, found a link on yandex.
| https://phmpt.org/wp-
| content/uploads/2021/11/5.3.6-postmarke...
| febeling wrote:
| I heard somewhere 11% of cancer research can be replicated. Does
| anyone have numbers with sources for replicability?
| [deleted]
| TZubiri wrote:
| 37
| rendang wrote:
| I'm not nearly qualified to make this argument, but has anyone
| ever suggested that we collectively do away with the principle
| that one must publish original research in order to receive a
| PhD? Maybe in something like entomology, there are enough
| undescribed beetle species out there to supply myriads of
| dissertations, but in other fields it seems like you are just
| incentivizing trivial, useless, or fraudulent research.
| obviouslynotme wrote:
| That wouldn't fix anything at all. You still get that PhD to do
| research and publish. The only fix for this is funded and
| career-advancing randomized replication. We should lean harder
| into the scientific method, not withdraw from it. Absolutely
| nothing else will work.
| 2482345 wrote:
| As a completely unqualified layman, my initial question would be
| "If you don't reward novelty why would anyone focused on career
| building want to be novel?" Not that I doubt academia has
| people who want to push the cutting edge forward (if anything
| that seems to be the only reason to go into academia vs.
| private industry usually) but if I was a fresh-faced PhD
| aspirant I'd want to take the most reliable route to getting my
| degree and treading ground someone else has already walked
| seems like a much safer way to do that than novel research if
| the reward at the end is the same.
|
| But maybe that's a good thing? I can't actually say a reason I
| think it'd be that terrible, except that profs doing novel
| research would lose some of their student workforce.
| bee_rider wrote:
| IMO the problem might be more that in general we've got this
| perception that there's a sort of "academic ranking," or
| something like that, that puts:
|
| Bachelors < Masters < PhD
|
| Which, of course, is not really the case. A PhD in <field> is
| a specialization in creating novel research in <field>. In
| terms of actually applying <field>, a Masters ought to be as
| prestigious or whatever as a PhD. That it isn't thought of that
| way seems to indicate, I dunno, maybe we need a new type of
| super-masters degree (one that gives you a cool title I guess).
|
| Or, this will get me killed by some academics, but let's just
| align with what the general public seems to think anyway: make
| a super-masters degree, give it the Dr title, make it the thing
| that indicates total mastery of a specific field (which is what
| the general public's favorite Doctors, MDs, have, anyway) (to
| the extent to which a degree can even indicate that sort of
| thing, which is to say, not really, but it is as good as we've
| got). Then PhDs can have a new title, Philosopher of
| <field>, haha.
| 0cf8612b2e1e wrote:
| Those are two wildly different things. Most things are flawed.
| Deliberately fabricating results is anathema to advancing
| science.
| ftxbro wrote:
| They mean flawed to the point of being useless in a way that
| indicates incompetence or negligence or fraud. Not flawed like
| they had a typo or a negative result or didn't use MLA
| formatting in their citation.
| rossdavidh wrote:
| True that, but the article does distinguish between "flawed"
| (44%) and "problems that were so widespread that the trial was
| impossible to trust" (26%).
|
| Really, though, how/why would we expect otherwise? There's
| nothing in the system to prevent it, and plenty to incentivize
| it. There's really no good reason to expect (under the current
| system) that it would not happen (a lot). Without systemic
| change, it will not get better.
| Randomizer42 wrote:
| [flagged]
| acc_297 wrote:
| Yes, I work with these IPD spreadsheets every day, and they all
| have typos, often in the time/date column (issues with midnight
| and New Year's are very common). This can result in erroneous
| reporting of something like average drug exposure or clearance
| if 1 or 2 subjects have uncaught errors in the reported data.
|
| I would expect 26% or more of studies to have these flaws, but
| faked data is a different thing entirely, and 26% fake would be
| incredibly worrying.
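The midnight/new-year rollover typos described above are mechanically easy to flag. A minimal sketch, with column names and rows invented (not from any real IPD file): a sample drawn before the dose was given is physically impossible, which is exactly what a forgotten year or day rollover produces.

```python
# Flag rows where the sample timestamp precedes the dose timestamp --
# the signature of a midnight or new-year rollover typo. Data invented.
from datetime import datetime

rows = [
    {"subject": 1, "dose": "2022-12-31 23:50", "sample": "2022-01-01 00:20"},  # year not rolled over
    {"subject": 2, "dose": "2022-06-01 09:00", "sample": "2022-06-01 10:00"},  # plausible
]

FMT = "%Y-%m-%d %H:%M"
flagged = [
    r["subject"]
    for r in rows
    if datetime.strptime(r["sample"], FMT) <= datetime.strptime(r["dose"], FMT)
]
print(flagged)  # -> [1]
```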
| rez9x wrote:
| How many studies are faked and how many just have a
| 'selective outcome'? Almost every nutritional study I read is
| at some point sponsored by a group that would benefit from a
| positive (or negative) outcome. I imagine several studies are
| run and only the ones where the conditions led to a desired
| outcome are actually published. The researchers may know the
| results are inconsistent and there is an error somewhere, but
| it's not in their best interest to find the error and correct
| it.
| LudwigNagasena wrote:
| That's why preregistration is important.
| jovial_cavalier wrote:
| How do you discern between a study that is flawed and one that
| is faked? For instance, you could use some methodology that you
| know to be flawed, and allow that flawed methodology to bias
| your result in a pre-determined direction. If you're ever
| caught, you just claim it was an honest error, but it functions
| exactly the same as if you generated the data wholesale.
| qt31415926 wrote:
| He groups them together because ultimately the result is that
| the science can't be trusted. He doesn't go so far as to claim
| that one was intentionally faked vs gross incompetence.
| mcmoor wrote:
| My interpretation is that a "flawed" study usually has honest
| data but wrong interpretation/analysis or wrong data
| gathering design, making the conclusion essentially
| worthless. A "faked" study just fucking lies and may
| provide rigged data to support a flawless conclusion.
| passwordoops wrote:
| My PhD advisor's wife's job at a big pharma company was in a
| department that attempted to reproduce interesting papers. She
| claimed they only had a 25% _success_ rate (this was early-mid
| 2000s)
| pella wrote:
| 2 months ago: https://news.ycombinator.com/item?id=36770624 ( 333
| points )
| Solvency wrote:
| I often chuckle that the same people who vehemently defend vaccine
| safety studies also decry studies that show processed seed oils,
| glyphosate, aspartame, and hundreds of other molecules or
| compounds are unsafe over time. That's the beauty of medical
| studies! Whether they're good or bad, your own biases mean you
| can simply ignore the ones that don't align with your opinions.
|
| Oh, before I forget! Red meat and saturated fat is terrible for
| you! Wait, no, it's actually sugar that's evil. Vegetables are
| great for you! Oh wait, most vegetables contain oxalates and
| other defense mechanisms and compounds that are actually bad for
| you over time.
| rfrey wrote:
| It's pretty easy to ridicule (and therefore feel superior to,
| nice bonus!) an intellectual opponent if you get to invent
| their views out of thin air.
| johndhi wrote:
| Hmm -- in usual fashion I haven't read the original post here,
| but I'm guessing it didn't find that RCT trials that big pharma
| do are quite as useless as most 'medical studies' generally.
|
| Do you think well-funded RCTs (like those that support vaccine
| safety) are just as weak as any old observational study?
| johndhi wrote:
| I've now read it.
|
| But my question to the person saying it's problematic to
| defend vaccine studies and attack food results is: isn't it
| possible that you feel the research procedures used in one
| are superior to those used in another?
|
| For example: vaccine safety study looks at 200,000 people and
| randomly assigns them to use or not use the vaccine.
| Coffee/red wine study looks at 30 people and surveys them
| about how they felt last week after drinking coffee/red wine.
| Looking at these two, I think it's fair to put more trust in
| the vaccine study.
| misterdad wrote:
| [dead]
| paulddraper wrote:
| > Red meat and saturated fat is terrible for you!
|
| Red meat is a little bit bad for longevity. The majority of the
| reported effect is correlative.
|
| > it's actually sugar that's evil.
|
| Sugar is bad but mostly because it's easy to overeat, and
| obesity is all around terrible for health.
|
| > Vegetables are great for you! Oh wait, most vegetables
| contain oxalates and other defense mechanisms and compounds
| that are actually bad for you
|
| Cooking removes most oxalates (tho vitamins too, to be fair).
| But the overall effect of oxalates is relatively minor, except
| in extreme cases.
|
| ---
|
| Every food source has advantages and disadvantages.
|
| Not being obese is 75% of the health battle.
| ifyoubuildit wrote:
| This is all stated as if it is fact. It sounds believable to
| me, but how do you (and the rest of us) know that the sources
| you got this from aren't a part of the junk being called out
| by the article?
| paulddraper wrote:
| Preponderance of the evidence of multiple studies.
|
| Skepticism is reasonable.
| vampirical wrote:
| I think you're accidentally telling on yourself here. You're
| looking at somebody getting a result which is surprising to you
| but rather than being curious about how they might be on to
| something you're turning off your brain and assuming they're
| malfunctioning.
|
| Something being in a category, such as "a study", doesn't tell
| you much about a thing. If you read multiple studies on vaccine
| safety critically and reason about them and what experts are
| saying about them, IMO most functional human beings are going
| reach the same general conclusion about vaccine safety. If you
| do the same thing on studies about seed oils or aspartame
| you're also going to come to the conclusion that they're safe!
| If you're not reaching these same results it doesn't necessarily
| mean you're the one who is malfunctioning but you should
| seriously consider it and try again to learn what you might not
| know.
| ifyoubuildit wrote:
| I dislike "the same people who" comments, but I agree with the
| sentiment. A lot of us have very little ability to determine
| the validity of this study over that, but will confidently
| voice an opinion anyway.
|
| The only way (imo) to stay on firm ground is to acknowledge
| that someone published a thing saying xyz, and maybe that you
| are x% convinced by it. Can't get too far out over your skis
| going that route.
| the_af wrote:
| I've never met one of those people. In general people who decry
| vaccines (as a norm, not talking about the covid can of worms)
| tend to fall into the "alternative medicine" bucket and
| distrust all science studies, and those who trust vaccines tend
| to also trust other scientific studies...
| LordKeren wrote:
| The central thesis of this comment is "all medical studies are
| equal". They are not.
| gustavus wrote:
| I can't wait for it to get to the next level of meta.
|
| "How many medical studies of medical studies are faked or flawed?
| A meta report on meta studies."
|
| https://xkcd.com/2755/
| zackmorris wrote:
| I'm not as worried about faked/flawed studies as I am about
| pharmaceutical companies knowingly selling bad drugs to make
| billions of dollars with no recourse from the FDA. I'll never
| look at Big Pharma the same way again after watching Painkiller:
|
| https://www.netflix.com/title/81095069
| xhkkffbf wrote:
| I'm sorry to say that I've personally witnessed some people in
| the next lab commit fraud. It was investigated and someone was
| fired, but I can believe that it is all too common.
| jjslocum3 wrote:
| > ...it would help if journals routinely asked authors to share
| their IPD
|
| Why couldn't a bad actor just fake the raw data? Isn't that what
| Climategate was all about?
| ketanmaheshwari wrote:
| Pretty sure most studies about the effects of coffee and red wine
| are flawed.
| function_seven wrote:
| That cynicism you have about those studies is probably because
| you don't eat enough eggs. Or too many. I'm not sure...
| PaulHoule wrote:
| At least I get eggs from a neighbor and not "big egg"; I keep
| thinking about getting my own chickens but that's not the way
| I want to feed Mr. Fox who lives in my neighborhood too.
| hellotheretoday wrote:
| My neighbor had 7 chickens and a rooster and lost all of
| them to mr. Fox. It was a tragedy, the eggs were so good.
| the_af wrote:
| Well, your free range chickens aren't truly free range if
| they don't have to fend off the occasional (and free range)
| fox!
|
| Nothing is more natural and free range than the occasional
| murder between animals.
| johndhi wrote:
| I like where this thread is going.
| cratermoon wrote:
| My free range chickens are protected by free range fox
| predators that are communal with them. Sometimes they
| will eat a chicken or two, but mostly they let the
| chickens be because they keep the free range parasites in
| check.
| PaulHoule wrote:
| I definitely count on stinging insects that live in holes
| in my house (or that make holes) to keep other stinging
| insects away.
|
| What I've been told is that opossums are much worse than
| foxes in the sense that a fox will usually eat a chicken
| or two to survive but opossums seem to freak out and will
| kill all the hens in a henhouse in one go.
|
| I've often wished I could talk with my cats but more than
| ever I wish I could ask them what they knew about the
| fox. There is this lady
|
| https://www.youtube.com/@debs3289
|
| who meets them in the street, has them come to her door,
| and feeds them chicken (!) Secretly I imagine that the
| fox is really a
|
| https://en.wikipedia.org/wiki/Kitsune
|
| and my wife is always reminding me that it has just one
| tail, not nine.
| the_af wrote:
| I have to say that I find foxes beautiful. Then again,
| I'm not a farmer nor do I keep chickens. I'm a city boy.
| PaulHoule wrote:
| The animals we're mostly concerned about at our farm are
| horses and cats. I don't think there is anything left in
| North America that can trouble a horse, unless you count
| mosquito-transmitted infectious diseases. I hear foxes
| are not dangerous to cats but I believe we've lost some
| to coyotes.
|
| Mostly we've had people around, either tenants or
| neighbors, who keep chickens so we don't have to. I'll
| say the eggs from a small scale chicken operation taste a
| lot better than commercial eggs.
|
| I definitely thought about trying to draw in the fox but
| as much as that British lady makes it look easy on
| Youtube Shorts the legends are that foxes can cause a lot
| of trouble.
| the_af wrote:
| > _I believe we've lost some to coyotes_
|
| Am I out of place if I say I also like coyotes?
| (Remember: city boy, so nothing is at stake for me here.
| I also like wolves!)
|
| And yeah, Japanese folklore has taught me that it's best
| to avoid kitsune. Though they sometimes turn into magical
| women who help you?
| the_af wrote:
| What!? Everyone knows a glass of wine once a day is good for
| your heart, but also even a single glass a year is way too much
| and causes irreversible damage ;)
|
| The old adage applies: "everything nice is either illegal or
| bad for your health". Or both, I would add.
| willmeyers wrote:
| Everything in moderation, including moderation
| Clubber wrote:
| Or will produce a baby.
| zzo38computer wrote:
| I think that something can have both good and bad effects.
| the_af wrote:
| Yes, of course. But have you read journalists reporting on
| science findings? It's always an extreme "scientists now
| claim one glass of wine a day is good for you!" and
| "scientists discovered that even one glass of wine will
| ruin your life forever".
|
| Never the middle ground, it's always a shocking new finding
| "by science" (spoiler: scientists seldom say the things
| newspapers and pop-science/nutrition & health articles
| claim they say).
| jylam wrote:
| I'm nice so your point doesn't stand. Or does it? I'm not
| illegal at least.
| the_af wrote:
| Well, _you_ would say so even if you were illegal!
| sampo wrote:
| > "everything nice is either illegal or bad for your health"
|
| And is known to cause cancer in the State of California.
| tarxvf wrote:
| Well that's alright then, I'm in New Hampshire.
| waihtis wrote:
| why coffee and red wine specifically?
| blfr wrote:
| Popular subjects, soft disciplines, frequent contradicting
| results.
| Ekaros wrote:
| It might be that those are common enough in shared datasets
| or when datasets are collected. So it is easy to draw
| inferences about them and various other measured factors.
| PaulHoule wrote:
| Both of those are substances that _should_ be harmful based
| on the effects of the major ingredients: e.g. caffeine is
| addictive and when I am deprived of it my schizotypy flares
| up and I have paranoid episodes (I yell at my wife "why the
| hell are you always standing where I want to go next?"). I
| have had two doctors tell me three other reasons why I should
| quit.
|
| Look in the medical literature and it seems outright spammed
| by reports on the positive effects of caffeine and negative
| reports on any of the harmful effects one would expect.
|
| Similarly the main active ingredient of red wine (alcohol) is
| harmful, red wine in particular causes a lot of discomfort,
| dyspepsia, hangovers and other unpleasant effects if you get
| a bad vintage, but look at the literature and it is like it will
| transport you to a blue zone and you will live forever.
|
| And you find those kinds of papers spammed in "real" journals,
| not MDPI or "Frontiers" journals.
| waihtis wrote:
| Yes but that's not something you can automatically draw
| inferences from. Exercise is harmful to you in a short
| enough time interval but benefits you in the long run.
| denimnerd42 wrote:
| Furan content in coffee too. Very low amount but still.
|
| Coffee prevents headaches for me so I'll always drink it.
| And no it's not related to physical dependence although at
| this point the withdrawal will guarantee a headache.
| goosinmouse wrote:
| I can relate. I never drank anything with caffeine and
| would get headaches fairly often. Headaches were never
| too bad or too often to need medical attention but were
| just a normal part of life. I started drinking coffee on
| road trips and drives over 3 hours long and noticed that
| my headache coming on would go away right after. Now I
| drink coffee twice daily and I'll get a headache once a
| month at most.
| smazga wrote:
| Yerba mate is my caffeine of choice and I suggested it to
| my Dad who gets bad migraines. He claims that when he
| feels a migraine coming on, he can drink a can and it
| will result in a mild headache instead of forcing him to
| lie down in a dark room for hours.
| lurquer wrote:
| I drank five or six cups of coffee a day for decades. I'd
| even have a cup before going to bed -- that's how
| tolerant I had become.
|
| Got a mild flu/Covid/cold couple years ago. Better in a
| week. But, during the illness and since, the slightest
| bit of caffeine would make me incredibly wired to the
| point of panic attacks. Had to quit cold turkey. I've
| tried a cup now and again, and it's the same thing: 6
| hours of overwhelming anxiety.
|
| Weird. It's like I became hypersensitive to caffeine.
| Oddly, though, nicotine doesn't have that effect, and I
| always figured the two stimulants were similar.
| waihtis wrote:
| Try yerba mate, like seriously. It gives you a very
| smooth caffeine-induced motivation boost without any of
| the anxiety effects.
|
| I frequently switch between that and coffee (coffee has a
| much more pronounced effect and sometimes you have to
| grind)
| MavisBacon wrote:
| Interesting. I was at one point diagnosed with "tension
| headaches" and prescribed a medication called Fioricet
| which is a combination of caffeine, acetaminophen, and a
| barbiturate called butalbital. Incredibly effective. I'm
| sure the harm profile of coffee alone is lower so if it's
| totally eliminating headaches as a problem for you it's
| perhaps a better solution, but thought I'd throw that out
| there as it could be worth seeing a neurologist if the
| problem becomes unmanageable for you
| [deleted]
| 99_00 wrote:
| >caffeine is addictive and when I am deprived of it my
| schizotypy flares up and I have paranoid episodes
|
| Dose size matters.
| PaulHoule wrote:
| ... and I find it very hard to stay at a low dose. If I
| quit entirely for a few weeks I could probably manage one
| small coffee a day for a while but inevitably I'd have a
| rough night and then I need a small and then another
| small or maybe just a large and pretty soon I can be
| drinking two whole carafes a day.
|
| I have the same issue w/ cannabis. Right now I have a few
| plants (legal) in the garden and also a bag that is going
| to a friend and I don't care. If I had a little puff
| though the next day I would want another little puff and
| another and in a week or so I would be like the guy in
| the Bob Marley song
|
| https://genius.com/The-toyes-smoke-two-joints-lyrics
| swalsh wrote:
| I read a lot of studies about gout. There are a lot of studies
| like "we found participants who consumed x cups of coffee
| lowered their uric acid", and all of them follow the same
| pattern: they asked people to consume more liquids, and it
| lowered their uric acid.
|
| I think peeing more lowers your uric acid.
| neilv wrote:
| You know how (to use a familiar field) most of software
| development has become going through the motions, churning
| massive insecure bulk, doing nonsense rituals and making up myths
| about process and productivity, hopping jobs frequently, cutting
| corners and breaking rules, etc., and the purpose is usually not
| to produce trustworthy solutions, but only to further career and
| livelihood/wealth?
|
| With everything we've been seeing in recent years on HN, about
| science reproducibility and fraud, and the complaints about
| commonplace fudging and fraud that you might hear privately from
| talking with PhDs/students in various fields... I wonder whether
| science has developed a similar alignment problem.
|
| How many people in science careers are doing trustworthy science?
| And when they aren't, why not?
| diogenes4 wrote:
| > You know how (to use a familiar field) most of software
| development has become going through the motions, churning
| massive insecure bulk, doing nonsense rituals and making up
| myths about process and productivity, hopping jobs frequently,
| cutting corners and breaking rules, etc., and the purpose is
| usually not to produce trustworthy solutions, but only to
| further career and livelihood/wealth?
|
| Well, furthering wealth is the reason why tech companies exist.
| The incentives in academia are completely different. You might
| be right, but I see no reason to expect similar behavior across
| such drastically different situations.
| [deleted]
| [deleted]
| somenameforme wrote:
| "For more than 150 trials, Carlisle got access to anonymized
| individual participant data (IPD). By studying the IPD
| spreadsheets, he judged that 44% of these trials contained at
| least some flawed data: impossible statistics, incorrect
| calculations or duplicated numbers or figures, for instance. And
| in 26% of the papers the problems were so widespread that
| the trial was impossible to trust, he judged -- either because
| the authors were incompetent, or because they had faked the
| data."
|
| So, 70% fake/flawed. The finding falls in line with other large
| scale replication studies in medicine, which have had replication
| success rates ranging from 11 to 44%. [1] It's quite difficult to
| imagine why studies where a positive outcome is a gateway to
| billions of dollars in profits, while a negative outcome would
| result in substantial losses, might end up being somehow less
| than accurate.
|
| [1] -
| https://en.wikipedia.org/wiki/Replication_crisis#In_medicine
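One concrete example of the "impossible statistics" mentioned in the quoted passage is captured by the GRIM test: for integer-valued measures (Likert scores, counts), the reported mean times the sample size must land near a whole number. A minimal sketch; the tolerance handling is a simplification and the example values are invented.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Can `reported_mean` (rounded to `decimals`) arise from n integer scores?"""
    total = reported_mean * n
    # The true total is an integer; rounding the mean to `decimals`
    # perturbs the implied total by at most 0.5 * n / 10**decimals.
    slack = 0.5 * n / 10 ** decimals
    return abs(total - round(total)) <= slack + 1e-9

print(grim_consistent(3.48, 25))  # True: 87 / 25 = 3.48 exactly
print(grim_consistent(3.49, 25))  # False: no integer total rounds to 3.49
```

Note the check loses power once n exceeds 10**decimals, since every mean then becomes achievable within rounding slack.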
| SubiculumCode wrote:
| I'd like to point out that the study [1] reported 16 trials
| from the USA with just one "zombie/fatally flawed" trial, i.e.
| 6%.
|
| Most of the studies that were problematic came from China and
| Egypt.
|
| In other words, nothing new here.
|
| [1] https://associationofanaesthetists-
| publications.onlinelibrar...
| onlyrealcuzzo wrote:
| As someone who works in Data Science at FAANG - if you look
| hard enough - there is something questionably wrong in every
| step of the data funnel.
|
| And that's when I believe people do have a somewhat best effort
| to maximize profits. There are plenty of people that only care
| about career progression and think they can get away with lying
| and cheating their way to the top. They wouldn't believe that
| if it didn't work sometimes.
|
| These medical studies are also run mainly to maximize profits,
| also by some career climbers. They are not run virtuously for
| the betterment of society.
|
| So I would be astounded if they are as reliable as people might
| like to believe.
|
| Maybe I'm just being grossly skeptical. Actually, I'd feel
| better if someone could convince me I'm completely unfounded
| here.
| verisimi wrote:
| I don't think that's skeptical.
|
| I do think there is an even worse issue - which is funding.
| The money incentive means you can fund studies that support
| whatever you want.
| hotnfresh wrote:
| I've literally never seen data-driven business decisions that
| weren't using fatally flawed datasets or methods so bad that
| you'd be a fool to believe you were getting anything but
| gibberish out of them, except in trivial cases.
|
| You quickly learn not to be the guy pointing out the problem
| that means we'll need several people and months or years to
| gather and analyze data that _would_ allow them to (maybe)
| support or disprove their conclusion, though. Nobody wants to
| hear it... because they don't actually care, they just want
| to present themselves as doing data-driven decision making,
| for reasons of ego or for (personal, or company) marketing.
| It's all gut feelings and big personalities pushing companies
| this way and that, once you cut through the pretend-science
| shit.
|
| "Yeah, that graph looks great ( _soul dies a little_ ) let's
| do it"
| ancorevard wrote:
| "And that's when I believe people do have a somewhat best
| effort to maximize profits."
|
| Nope. I actually think that if you do scientific research as
| a company (profit) it may make you less bad/less likely to do
| fraud compared to academia (non-profit).
|
| The reason is that in a profit-seeking vehicle there are more
| ways to punish you: employees, board, investors, etc. And as a
| profit-seeking vehicle, being caught must be part of the
| profit-seeking calculation - in the end, the world of
| reality/physics will weigh your contribution.
|
| I believe there is evidence that there is more fraudulent
| scientific research happening in non-profit
| vehicles/academia. Take for example an area where there are
| fewer profit seeking companies participating - social
| sciences. It's dominated by academia. Now look at the
| replication rate of social sciences.
| screye wrote:
| I have a simpler reason for the same belief.
|
| You can fake everything except a well designed A/B test. At
| FAANG scale, a statistically significant A/B test
| requirement will stop the worst fraud before it hits the
| user.
| onlyrealcuzzo wrote:
| And also - you can somewhat take care of bugs by evenly
| distributing them to your test & control group.
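The statistical gate described here can be sketched as a standard two-proportion z-test; the conversion counts below are hypothetical, and the test itself is a textbook construction rather than anything FAANG-specific:

```python
# Minimal two-proportion z-test of the kind an A/B gate relies on.
# Counts are illustrative, not from any real experiment.
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both arms convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided normal tail probability via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(1200, 10000, 1100, 10000)
print(round(z, 2), round(p, 3))
```

With 12% vs. 11% conversion over 10,000 users per arm, the difference clears p < 0.05; halve the sample sizes and it no longer would, which is the sense in which scale makes the gate hard to fake.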
| spicymapotofu wrote:
| All three points in the last paragraph seem wholly unrelated to
| each other and to your larger point. I agreed with the first
| half.
| hackncheese wrote:
| Objectivity and honesty can be hard to find if all someone
| cares about is their reputation as a competent researcher or
| climbing the ladder. What do you think a potential solution
| would be for this? Even in my own experience, trying something
| out and having it not work feels like failure, when in fact
| proclaiming it a success or "fixing" it is what truly harms
| both the pursuit of truth and the people reliant on the
| outcomes of these studies.
| autoexec wrote:
| > Objectivity and honesty can be hard to find if all
| someone cares about is their reputation as a competent
| researcher or climbing the ladder.
|
| It seems like destroying the reputation and career of
| people who fake science would be a great start. If you're
| willing to fake data and lie to get results, there will
| always be an industry who'd love to hire you no matter how
| tarnished your reputation is. We need a better means to
| hold researchers accountable and we need to stop putting
| any amount of faith in any research that hasn't been
| independently verified through replication.
|
| Today the lobby for orange juice manufacturers can pay a
| scientist to fake research which shows that drinking orange
| juice makes you more attractive, and then pay publications
| to broadcast that headline to the world to increase sales.
| We should have some means to hold publications responsible
| for this as well.
| cutemonster wrote:
| > destroying the reputation and career of people who fake
| science would be a great start
|
| When so many reports are faulty or fraudulent, might that
| instead destroy the careers of those who would have revealed
| the fraudulent research?
|
| I wonder what'd happen if researchers got compensated and
| funded based on other things, unrelated to papers published.
| But what would that be?
| shaburn wrote:
| Really curious how a profitable industry like medical research
| ranks against less profitable ones.
| JacobThreeThree wrote:
| >Actually, I'd feel better if someone could convince me I'm
| completely unfounded here.
|
| Unfortunately, your skepticism is not unfounded. Those in the
| industry conclude the same. Take, for instance, the editor in
| chief of The Lancet:
|
| >The case against science is straightforward: much of the
| scientific literature, perhaps half, may simply be untrue.
| Afflicted by studies with small sample sizes, tiny effects,
| invalid exploratory analyses, and flagrant conflicts of
| interest, together with an obsession for pursuing fashionable
| trends of dubious importance, science has taken a turn
| towards darkness.
|
| https://www.thelancet.com/journals/lancet/article/PIIS0140-6.
| ..
| cypress66 wrote:
| You don't need "big pharma" for this. Researchers at
| universities also do these kinds of things because it helps
| them advance their careers.
| smu3l wrote:
| > Researchers at universities also do these kinds of things
| because it helps them advance their careers.
|
| This is a huge problem and in my opinion is mostly due to bad
| incentive structures and bad statistical/methodological
| education. I'm sure there are plenty of cases where there is
| intentional or at least known malpractice, but I would argue
| that most bad research is done in good faith.
|
| When I was working on a PhD in biostatistics with a focus on
| causal inference among other things, I frequently helped out
| friends in other departments with data analysis. More often
| than not, people were working with sample sizes that are too
| small to provide enough power to answer their questions, or
| questions that simply could not be answered by their study
| design. (e.g. answering causal questions from observational
| data*).
|
| In one instance, a friend in an environmental science
| program had data from an experiment she conducted where she
| failed to find evidence to support her primary hypothesis.
| It's nearly impossible to publish null results, and she
| didn't have funding to collect more data and had to get a
| paper out of it.
|
| She wound up doing textbook p-hacking: testing a ton of post-
| hoc hypotheses on subsets of the data. I tried to reel things
| back but I couldn't convince her to stop because "that's how
| they do things" in her field. In reality she
| didn't really have a choice if she wanted to make progress
| towards her degree. She was a very smart person, and
| p-hacking is conceptually not hard to understand, but she was
| incentivized to not understand it or to not look at her
| research in that way.
|
| * Research in causal inference is mostly about rigorously
| defining the (untestable) causal assumptions you must make
| and developing methods to answer causal questions from
| observational data. Even if an argument can be made that you
| can make those assumptions in a particular case, there is
| another layer of modeling assumptions you'll end up making
| depending on the method you're using. In my experience it's
| pretty rare that you can have much confidence in your
| conclusions about a causal question if you can't run a real
| experiment.
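The subgroup-fishing failure mode described above can be demonstrated with a small simulation; all numbers here (sample size, number of subgroups, repetitions) are illustrative:

```python
# Toy Monte Carlo of the p-hacking pattern: test enough post-hoc
# subgroups of pure-noise data and something "significant" almost
# always turns up, even though the true effect is exactly zero.
import math
import random
import statistics

random.seed(0)

def null_pvalue(n=30):
    """Two-sided z-test p-value for the mean of pure noise (true mean 0, sd 1)."""
    sample = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(sample) * math.sqrt(n)  # SE of the mean is 1/sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def finds_something(n_subgroups=20):
    """True if at least one of n_subgroups independent null tests has p < 0.05."""
    return any(null_pvalue() < 0.05 for _ in range(n_subgroups))

# Fraction of "studies" on pure noise that yield a publishable subgroup result
rate = sum(finds_something() for _ in range(1000)) / 1000
print(rate)  # near 1 - 0.95**20, i.e. roughly 0.64
```

Roughly two thirds of pure-noise datasets produce at least one "significant" subgroup when twenty post-hoc tests are allowed, which is why uncorrected exploratory testing and selective reporting reliably manufactures results.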
| bigfudge wrote:
| It's so interesting to hear you say that. I became
| disillusioned with causal methods for observational data
| for similar reasons. You can't often model your way to
| interesting inferences without an experiment.
| jermaustin1 wrote:
| >helps them advance their careers
|
| ... into the multi-billion dollar companies GP is talking
| about.
| Tenoke wrote:
| A lot who stay purely in Academia do it as well, presumably
| for the prestige.
| Ilverin wrote:
| Well you need peer reviewed papers in highish impact
| journals to get tenure. For some people, the only way
| they're getting that is by cheating.
| BurningFrog wrote:
| Doing research has a lottery element. You explore
| something that might reveal an important discovery. And
| sometimes it just doesn't.
|
| That doesn't mean you're a bad scientist, just an unlucky
| one. But it does mean you can't get tenure.
|
| So it's easy to understand why people fake results to
| secure a career.
| autoexec wrote:
| > That doesn't mean you're a bad scientist, just an
| unlucky one. But it does mean you can't get tenure.
|
| That sounds like a really easy problem to solve. Just
| treat valid science as important regardless of the
| results. The results shouldn't matter unless they've been
| replicated and verified anyway.
| vitalurk wrote:
| Too easy. Define "valid" though.
| autoexec wrote:
| Valid as in meaningfully peer reviewed to avoid
| flawed/badly designed studies as well as total garbage
| (for example https://nerdist.com/article/fake-star-wars-
| midi-chlorian-pap...) but the gold standard should be
| replication.
|
| We should reward quality work, not simply the number of
| research papers (since it's easy to churn out trash) or
| what the results are (because until they are verified
| they could be faked).
| autoexec wrote:
| Peer review is a joke. Too often it's a rubber stamp
| because there's no accountability for journals that fail
| to do the job. Unless peer review means something, the
| standard should change so that published papers only
| count if they're independently replicated and verified.
| lmm wrote:
| It's worse than that: if enough people cheat, even the
| best people can't make the grade without cheating.
| rqtwteye wrote:
| I would think a successful study is way better for your
| academic career than a failed study.
| ethanbond wrote:
| Which is why you fake your data into looking successful,
| and you don't even go through the effort of publishing
| the failed studies (leading to its own _additional_
| problems in understanding what's true and what we know)
| LMYahooTFY wrote:
| What's your point? We have "big pharma" for this.
|
| Do you think CocaCola and the Sacklers had their own unique
| ideas shared by no one else? That we've filtered all
| scrupulous people out of industry?
|
| Scruples are an abstraction at that scale.
| smcin wrote:
| Is the Coca-Cola mention about their campaign to influence
| CDC research and policy [0], and pro-obesity astroturf
| [1][2]?
|
| [0]: Politico "Coca-Cola tried to influence CDC on research
| and policy, new report states"
| [https://www.politico.com/story/2019/01/29/coke-obesity-
| sugar...]
|
| [1]: "Evaluating Coca-Cola's attempts to influence public
| health 'in their own words': analysis of Coca-Cola emails
| with public health academics leading the Global Energy
| Balance Network"
| https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10200649/
|
| [2]: Forbes: "Emails Reveal How Coca-Cola Shaped The Anti-
| Obesity Global Energy Balance Network" https://www.forbes.c
| om/sites/nancyhuehnergarth/2015/11/24/em...
| paulusthe wrote:
| It's more complex than just that. Sure, there's the people
| trying to make a dollar who are willing to do bad science in
| order to get the result they want. But there's also the general
| publication bias against replication studies - who wants to
| read them, and who wants to do them? (They're not usually seen
| as prestigious academically: most academics want to test their
| own ideas, not those of others.)
|
| And then there's cultural differences in which people sometimes
| see a negative result as a "failure", don't publish it as a
| result, and instead skew the data and lie their asses off in
| order to gain prestige in their career. As long as nobody
| double checks you, you're good.
| autoexec wrote:
| > But there's also the general publication bias against
| replication studies - who wants to read them, and who wants
| to do them (they're not usually seen as prestigious
| academically: most academics want to test their own ideas,
| not those of others.)
|
| Academia seems like the ideal place for this. Why not require
| a certain number of replicated studies in order to get a
| degree? Universities could then be constantly churning out
| replication studies.
|
| More importantly, why do we bother taking anything that
| hasn't been replicated seriously? Anyone who publishes a
| paper that hasn't been verified shouldn't get any kind of
| meaningful recognition or "credit" for their discovery until
| it's been independently confirmed.
|
| Since anyone can publish trash, having your work validated
| should be the only means of gaining prestige in your career.
| NotHowStatsWork wrote:
| "So, 70% fake/flawed."
|
| I think you misread that section? Only 44% were fake or
| flawed according to this study.
|
| The 26% that were very flawed is a subset of the 44% that were
| flawed in general, so those percentages should not be added
| together.
| cycomanic wrote:
| Oh the irony!
| codingdave wrote:
| > Carlisle got access to anonymized individual participant data
| (IPD)
|
| I'm not in the industry so my question might have an obvious
| answer to those of you who are: How would one go about getting
| IPD if you wanted to run your own analysis of trial data or
| other data-driven research?
| misterdad wrote:
| You'll need to reach out to the study authors with a request.
| If they are interested (you're going to publish something
| noteworthy with a citation for them (low chance), you want to
| bring them in on some funded research (better chance), etc)
| then they'll push it to their Institutional Review Board (a
| group of usually faculty and sometimes administrative staff
| at a University/Hospital/Org) who will review the request,
| the conditions of the initial data collection, legal
| restrictions, and then decide if they'll proceed with setting
| up some sort of IRB agreement / data use agreement. Unless
| you're a tenured professor somewhere or a respected
| researcher with some outside group then you probably won't
| get past any of those steps. Even allegedly anonymized data
| comes with the risk of exposure (and real penalties) not to
| mention the administrative overhead (expense, time,
| attention) that you'll need to be able to cover the cost of
| through some funded research. That research, btw, will also
| need to be through some IRB structure. You can tap a private
| firm that acts as an IRB but that's another process entirely
| and most certainly requires fat stacks of cash. Legal privacy
| concerns, ethical concerns, careerism (nobody wants you to
| find the 'carry the two' you forgot so you can crash their
| career prospects), bloated expenses (somebody has to pay for
| all of that paperwork, all those IRB salaries, etc) and etc,
| etc, etc all keep reproducibility of individual data frozen.
| Even within the same institution. Within the same team! You
| have to tread lightly with reproduction.
| [deleted]
| waterheater wrote:
| Two relevant perspectives to share:
|
| "It is simply no longer possible to believe much of the
| clinical research that is published, or to rely on the judgment
| of trusted physicians or authoritative medical guidelines. I
| take no pleasure in this conclusion, which I reached slowly and
| reluctantly over my two decades as an editor of the New England
| Journal of Medicine." -Marcia Angell
|
| "If the image of medicine I have conveyed is one wherein
| medicine lurches along, riven by internal professional power
| struggles, impelled this way and that by arbitrary economic and
| sociopolitical forces, and sustained by bodies of myth and
| rhetoric that are elaborated in response to major threats to
| its survival, then that is the image supported by this study."
| -Evelleen Richards, "Vitamin C and Cancer: Medicine or
| Politics?"
| Zalastax wrote:
| Is there not full or at least partial overlap between the 44 %
| and the 26 %? Which would mean not 70 % but some smaller
| number?
| morelisp wrote:
| > The finding falls in line with other large scale replication
| studies in medicine, which have had replication success rates
| ranging from 11 to 44%.
|
| These numbers seem almost wholly unrelated. A perfectly good
| study may be extremely difficult to replicate (or even the
| original purpose of replication - the experiment _as described
| in the paper_ may simply not be sufficient); and an attempt at
| replication (or refutation), successful or not, is under the
| same pressure to be faked or flawed as the original paper.
| adasdasdas wrote:
| I also see similar findings in tech where most experiment
| results are "fudged". In some cases, people run the "same"
| experiment 5+ times until one is stat sig.
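The arithmetic behind "run the same experiment 5+ times until one is stat sig" is just the family-wise error rate; assuming independent runs at the conventional p < 0.05 threshold:

```python
# With no real effect, the chance that at least one of k independent
# runs clears p < 0.05 grows quickly with k.
def family_wise_error(k, alpha=0.05):
    """Probability at least one of k independent null tests is 'significant'."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(k, round(family_wise_error(k), 3))
# 1  0.05
# 5  0.226
# 10 0.401
# 20 0.642
```

Five retries of a null experiment already give better than one-in-five odds of a spurious "win", which is why corrections like Bonferroni exist and why rerunning until significance is a form of fraud even when each individual run is honest.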
| qt31415926 wrote:
| I don't think the statement reads that the 44% and the 26%
| should be additive. Especially given the zombie graphic where
| it looks like they overlap the 26 on top of the 44, where the
| orange bar is the 26% and the remaining yellow bar is the 44%
| costigan wrote:
| I think you misread the article slightly. The 26% of zombie
| papers are inside the 44% of papers that contained at least
| some flawed data. Look at the figure just below this quote
| where the blue bar, indicating papers he thought were ok,
| covers more than 50%.
| crazygringo wrote:
| Which, ironically, just shows how easy it is to make mistakes
| not just in complicated statistical methods, but in basic
| interpretation of what numbers mean in the first place.
| SubiculumCode wrote:
| not to mention that everyone here forgets to mention that
| the majority of submitted papers with suspected fraud came
| from countries with known issues with fraudulent science
| publishing. e.g. China. The USA, for example, looked much
| better (although not perfect) at 6% zombie.
| some_random wrote:
| While we're on the subject, how many studies period are faked or
| flawed to the point of being useless? It seems to me that the
| scientific community's reaction to the replication crisis has
| been to ignore it.
| systemvoltage wrote:
| > has been to ignore it
|
| This is putting it very mildly.
| treis wrote:
| My internal filter is:
|
| Sociology/Psych/Economics are almost all junk. Their
| conclusions may or may not be correct.
|
| Medical studies are mostly junk. There's way too much financial
| incentive to show marginal improvement. Theraflu and
| antidepressants come to mind. Both showed a small effect in
| studies and launched billion-dollar businesses.
|
| Hard science stuff tends to be pretty good. Mostly just
| outright fraud and they usually end up getting caught.
| fnikacevic wrote:
| Chemistry still has a ton of issues with people exaggerating
| numbers that can't be contested like yield and purity of
| reactions even among the top journals. Straight up fraud is
| rarer yes.
| RugnirViking wrote:
| > the scientific community's reaction to the replication crisis
| has been to ignore it
|
| What can they do? It's an incredibly hard problem to solve.
| It's like asking why the business community has done nothing to
| address the housing crisis.
|
| Large-scale changes to culture, or to the entire structure of
| the way science is conducted and funded, would be the only
| solutions.
| colechristensen wrote:
| If a university was actually in the business of seeking
| truth, there is much they could do to solve this.
|
| * working groups "red teams" whose whole job it is to find
| weaknesses in papers published at the university
|
| * post mortems after finding papers with serious flaws
| exposing the problem and coming up with constructive
| corrective actions
|
| * funding / forcing researchers to devote a certain amount of
| time to replicating significant results
|
| * working groups of experts in statistics and study design
| available to consult
|
| * systems to register studies, methodologies, data sets, etc.
| with the aim of eliminating error and preventing post-hoc
| fishing expeditions for results
|
| The whole-ass purpose of a university is seeking knowledge.
| They are fully capable of doing a better job of it but they
| don't because what they actually focus on are things like
| fundraising, professional sports teams, constructing
| attractive buildings, and advancing ranking.
|
| Most universities would be better off just firing and not
| replacing 90% of their administration.
| jltsiren wrote:
| Universities can't do that, because they lack the
| expertise. Most of the time, the only people in a
| university capable of judging a study are the people who
| did it.
|
| That's because universities are in the business of
| teaching. Apart from a few rare exceptions, universities
| don't have the money to hire redundant people. Instead of
| hiring many experts in the same topic, they prefer hiring a
| wider range of expertise, in order to provide better
| learning opportunities for the students.
| some_random wrote:
| Maybe brush up on undergraduate statistics and experimental
| design so that they don't execute and publish obviously
| flawed studies? Perhaps they could apply such basic knowledge
| to the peer review process which is supposed to catch these
| things. They could stop defending their peers who publish
| bogus research, stop teaching about obviously flawed nonsense
| like the Stanford Prison "Experiment" and hold "scientists"
| like Philip Zimbardo in contempt. It's like asking how
| individual cops can make things better, of course they can't
| fix systematic problems but that doesn't mean there's nothing
| they can do.
| LudwigNagasena wrote:
| It doesn't have to be large scale. It can start with a single
| journal, with a single university, with a single scientist.
|
| Make peer review public. Weight replication studies more.
| Make conducted peer reviews and replications an important
| metric for tenure and grants. Publish data and code alongside
| papers.
| JacobThreeThree wrote:
| >What can they do?
|
| There's literally thousands of different things they could
| do.
|
| But why do anything when the real business of academia is in
| the tax-free hedge funds they operate and the government-
| subsidized international student gravy train? There's no
| short-term incentive to change anything.
| adamsb6 wrote:
| The replication crisis is even a thing because of very strong
| career incentives.
|
| You might think that publishing about the replication crisis
| itself would be great for your career, but perhaps not. Maybe
| the incentives to be able to bullshit your way to a
| professorship are so great that no one wants to rock the boat.
| rqtwteye wrote:
| "Maybe the incentives to be able to bullshit your way to a
| professorship are so great that no one wants to rock the
| boat."
|
| Our whole economy is fueled by people bullshitting each
| other.
| wizzwizz4 wrote:
| Our economy is fuelled by people doing actual work. Large-
| ish, visible parts are _shaped_ by bullshitters, but that's
| a different thing.
| thereisnospork wrote:
| > It seems to me that the scientific community's reaction to
| the replication crisis has been to ignore it.
|
| They've always known so there hasn't actually been any new
| information from which to spur action. In the academic circles
| I've run in there has always been a strong mistrust of reported
| results and procedures based on past difficulties with internal
| efforts to replicate results. It's basically a rite of passage
| for a grad student to be tasked with replicating work from an
| impossible paper.
| HarryHirsch wrote:
| That's the correct answer, but what is baffling is that this
| is news to so many people frequenting this website. Nearly
| everyone posting here has been through a university, but
| hardly anyone has been involved in the pursuit of research.
| onthecanposting wrote:
| PIs may not be able to raise money for replication studies.
| Some of this is a consequence of guidelines for federal funds
| to prevent waste on duplication of effort.
| some_random wrote:
| This goes far beyond there not being enough replication
| studies, the problem is that when replication studies are
| done the results don't reproduce because the results were
| bogus. https://en.wikipedia.org/wiki/Replication_crisis
| whatshisface wrote:
| If the replication studies were done at a reasonable rate
| there would be no incentive to produce bogus results
| because you'd be caught before you could go through an
| entire career as a "successful scientist."
| bee_rider wrote:
| This points to the real problem and also where the
| responsibility and interest ought to be to fix it.
|
| There's no replication crisis for academics because they have
| a meatspace social network of academics; they go to
| conferences together and know each other. You can just ignore
| a paper if you know the author is an idiot.
|
| If medical studies are faked, is it a problem? Presumably
| regulatory agencies are using these studies or something,
| right? Looks like the FDA and NSF need to fund some more
| replication studies.
| tppiotrowski wrote:
| > While we're on the subject, how many studies period are faked
| or flawed to the point of being useless?
|
| Or how many studies are useless, period? It's like publishing a
| memoir to Amazon. You can now say "author" on your resume, or
| when you're introduced or at cocktail parties but nobody finds
| any value in what you have to say. You can also use ChatGPT
| because people might not notice.
| some_random wrote:
| There is always value in expanding the breadth and depth of
| human knowledge, even if it doesn't seem useful to you, right
| now. That of course assumes that knowledge is true, which is
| the crux of the problem now.
| yellowcake0 wrote:
| Unfortunately, the preponderance of very low value research
| in the literature puts a significant burden on the
| scientists who have to sift through a lot of garbage to
| find what they're looking for. Even if the work is
| ostensibly correct (much of it is not), it really doesn't
| do anyone much good, except for the authors of course. But
| now every undergraduate and every parent's little vanity
| project at Andover wants a first author contrib., so here
| we are.
___________________________________________________________________
(page generated 2023-09-19 23:00 UTC)