[HN Gopher] I tried to report scientific misconduct. How did it go?
___________________________________________________________________
I tried to report scientific misconduct. How did it go?
Author : ivank
Score : 816 points
Date : 2021-01-27 00:36 UTC (22 hours ago)
(HTM) web link (crystalprisonzone.blogspot.com)
(TXT) w3m dump (crystalprisonzone.blogspot.com)
| lanevorockz wrote:
| The belief that any research is automatically true is so bogus
| and so abused that industries and lobbyists came to rely on it.
| It's sad that people then blindly push it as "it's science".
|
| The main issue is the sheer number of papers being published and
| the lack of capacity of the body of experts to read all of it. I
| guess it's the professionalisation of research.
|
| People publish papers to improve their rankings and not because
| it's relevant.
| tgv wrote:
| There actually is more than enough capacity to peer review
| (_). It's just that nobody wants to do it. It costs time and
| money. Not compensated by the publisher, of course.
|
| (_) edit: that's raw body count. I wouldn't know how many
| people could actually spot the errors mentioned in the OP.
| dubbel wrote:
| From the article:
|
| > For example, one paper reported mean task scores of 8.98ms
| and 6.01ms for males and females, respectively, but a grand
| mean task score of 23ms.
|
| A 9th grader should be able to find that inconsistency, if
| you give them the table and tell them to find the number that
| is wrong.
|
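| A minimal sketch (mine, not from the article) of the kind of check
| a reviewer or a script could run on such a table: the grand mean of
| two groups is a weighted average of the two group means, so it has
| to lie between them, whatever the group sizes are.
|
|   # Hypothetical helper, using the numbers quoted above.
|   def grand_mean_possible(mean_a, mean_b, grand_mean):
|       # A grand mean is a weighted average of the group means,
|       # so it must fall between the smaller and the larger one.
|       lo, hi = sorted((mean_a, mean_b))
|       return lo <= grand_mean <= hi
|
|   print(grand_mean_possible(8.98, 6.01, 23))  # False: 23 > 8.98
|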
| (the other stuff is harder to detect, and I fully understand
| that you can't request and re-process the raw data for every
| paper you peer review. Some of these numbers....)
| mjburgess wrote:
| This comment really clarified an issue:
|
| This is a slow-moving disaster for scientific credibility, and
| therefore for national safety and security.
|
| There's going to be a point within two decades where
| "reproducibility crisis" is not a localised phenomenon, and
| "expert" misconduct is paraded out by the papers.
|
| Totally destroying our society's ability to govern itself based
| on expert information. The early stages are already here (anti-
| climate, anti-vax, etc.).
| rdtwo wrote:
| I think the outcome is more likely to be that papers from the
| US are just assumed to be highly suspect in quality, sort of
| like how papers from China and India are now.
| fiftyacorn wrote:
| I remember doing chemistry at university, and lab results quite
| frequently didn't match the expected results. So the first time
| you submit the results, you report what you've found and try to
| explain it, and get marked down.
|
| Lesson learned: in future you give them what they want and attach
| large error bars.
|
| I changed course after that, as part of science should be
| explaining bad results.
| liminal wrote:
| How do we prevent fraud? Is there a way to change the incentives?
| Statistical analysis of results to detect bogus numbers? Only
| cite research that has been independently replicated and apply a
| "provisional" label to the first results? Then only cite the
| replicating lab and not the first one, so the incentive is there
| for doing the grunt work?
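|
| One concrete example of such statistical screening (a sketch, not
| something from the article) is the GRIM test: for integer-valued
| survey items, a reported mean over n participants must be some
| whole number divided by n, so many "impossible" means can be
| flagged automatically. The helper and the numbers below are made
| up for illustration.
|
|   def grim_consistent(mean, n, decimals=2):
|       # Nearest achievable mean: some integer total divided by n.
|       nearest = round(mean * n) / n
|       return round(nearest, decimals) == round(mean, decimals)
|
|   print(grim_consistent(5.19, 28))  # False: no integer total
|                                     # divided by 28 rounds to 5.19
|   print(grim_consistent(5.18, 28))  # True: 145/28 = 5.1786 -> 5.18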
| leovailati wrote:
| I generally like the idea of rewarding replication studies.
| Replication/validation could be a required process that runs
| concurrently to peer review. The researchers who run the
| experiments to replicate results could be rewarded by being added
| as contributors to the original paper, so they also get the
| citations. And like you suggest, the paper would then be marked
| as "validated" or something similar. I wonder if any journals
| out there are already doing something like this.
|
| There is, of course, the danger of collusion among original
| authors and validators. Hopefully the fear of having your
| results rebutted would prevent people from trying to publish
| bullshit in the first place.
|
| Another problem is logistics. Research labs have their own
| ideas they want to push forward, so spending time and resources
| proving or (even worse) disproving someone else's idea doesn't
| sound that great. Also, even if it gives you citations, it
| probably wouldn't help you with your thesis.
|
| Anyways, it's a tough problem to solve.
| throwawayfrauds wrote:
| My previous account on HN was banned by dang after I called out a
| researcher for scientific misconduct. The researcher contacted
| dang and complained, and he banned me. So I guess calling out
| scientific misconduct didn't go so well here on HN either.
| physicsguy wrote:
| There's a group in Asia that worked in the same area I did my PhD
| in. In particular, there's a guy who published _18_ papers during
| his two-year Master's degree.
|
| Now, most of these papers were tiny. They effectively were "Run
| one simulation, get one interesting but tiny result, publish". To
| me, that's 'salami slicing', and journals should not accept
| papers that should have been larger studies. But he's carried on
| with this, has now completed a PhD and has a permanent position
| at a Japanese University.
| osamagirl69 wrote:
| This reminds me of the story from R. Trebino about trying to get
| a correction published--in 123 easy steps! (spoiler alert -- the
| comment didn't end up being published after the 123 steps)
|
| [1] https://frog.gatech.edu/Pubs/How-to-Publish-a-Scientific-
| Com...
| throw14082020 wrote:
| I can't imagine how tiring it is to do the investigation the
| author performed. He had to subscribe to literally garbage papers
| (from Qian Zhang) to investigate them. He had to do the policing.
| Because there is no quality control in research. Just citation
| counts and people-you-know.
| Method5440 wrote:
| My best friend went to UCSD for his PhD in biology. He was
| brilliant and had a nearly unique depth of insight into knotty
| problems and an incredible drive to progress the field.
| Unfortunately he was also a bit of an idealist with little real
| political sense.
|
| In grad school he selected a difficult problem in the cancer
| space and worked on it in the lab for 6 years. His advisor
| thought he was on track for a Nature paper. Around the end of
| year 6 a very famous scientist who was on his larger committee
| decided he wanted the research for himself (apparently). He had
| one of his floater grad students (in their last years of grad
| school without any research of their own to publish) literally
| steal his data from his desk. They eventually published their
| 'stolen' paper in Nature themselves, before my friend could have
| finished writing it up by himself.
|
| My friend found he was unable to compete with the reputation of
| this scientist and was repeatedly told to just move on - even his
| own advisor suggested that there was nothing to do about it and
| complaining to the university ethics committee would only hurt
| his career. He tried anyway, entirely unsuccessfully.
|
| My friend was not able to move on. He left grad school with an
| exit Masters. He spent a few years in his parents' house, lost,
| then in an institution, really, really lost. He eventually got himself
| together and built up the courage to try again (roughly 10 years
| later). He got into a good bioinformatics program on the opposite
| coast of the country. Eventually the same exact thing started
| happening to him again. Things got bad. I left work early one day
| to go cheer him up and long story short I found his body in the
| bathtub of his apartment. He was just not ready to go through it
| all again.
|
| I still think about him every single day more than a year out
| from his funeral. I find myself unable to understand why some
| humans treat each other the way that they do or how they are able
| to get away with it. I've asked around and it seems like this is
| a fairly common occurrence, especially in circles around the
| original 'famous' scientist. These people basically killed my
| friend, don't know it and probably wouldn't care. They likely
| rationalize their behavior as the cost of doing science.
|
| The system is absolutely disgustingly broken and much of
| published and celebrated science is, in one way or another, a
| lie. We need to stop making scientists into rockstars, especially
| those who somehow publish more papers in a year than physically
| possible. Each one of these untouchable individuals is followed
| by an unseen trail of ruined careers and ruined lives.
|
| The field would not have suffered. My friend's work would still
| have been published. The difference is that it wouldn't have
| added to the myth of exceptionalism of this particular scientist
| - and maybe the floater grad student would not have gotten her
| PhD... but in the end my friend didn't get his PhD either and now
| he isn't here any more. Scientific prestige is not a limited
| resource and should not be subject to the tragedy of the commons.
|
| My young daughter asks about him a lot and I have no idea what to
| say.
| avsteele wrote:
| Good article. For more on this subject try:
|
| https://fantasticanachronism.com/2020/08/11/how-many-undetec...
|
| https://www.goodreads.com/book/show/52199285-science-fiction...
| LockAndLol wrote:
| How are we, the laymen, supposed to trust published, peer-
| reviewed papers? They seem to be just a bunch of words now with
| little meaning.
|
| Before, I used to consider scientific papers... well, scientific
| and would try to base an informed opinion on the abstracts and
| conclusions. After reading blog entries like these and actually
| reading some papers (especially soft science papers which are
| easier to understand), even as a layman, some glaring mistakes
| can be spotted.
|
| Popular scientists like Dawkins and that black astronomer are
| happy to point out the glaring problems in other areas of life,
| but it's looking more and more like the scientific field doesn't
| have its shit together either.
|
| On a scale of bullshit to trustworthy, where stuff like Breitbart
| and ThePinkNews live in swamps of bullshit, scientific
| publications and papers seem to barely reach "believable". One
| always has to question "who paid for this research", "who
| reviewed it", "which country is this from", "what reasons could
| there have been to do this research", "are these results too good
| to be true", "who would benefits from these results", etc.
|
| It seems like one really cannot trust anybody or anything and has
| to constantly keep their wits about themselves.
|
| Will we ever be able to clean up our act? What can we do?
| psim1 wrote:
| For one, consider the institutions and nations that the
| research is from. It's too bad that journals are not more
| discriminating, but we as consumers of articles can be.
| searine wrote:
| >How are we, the laymen, supposed to trust published, peer-
| reviewed papers?
|
| You are not supposed to.
|
| The layman really isn't the intended audience of academic
| publications. The literature is always in flux and inherently
| unreliable as discoveries are claimed and, over decades, proven
| true or false. Taking a snapshot of the literature at any one
| time is to accept that a proportion of the claimed truth will
| be false. Unfortunately the layman doesn't get this, and they
| believe that published=true.
|
| As a layman, you should be looking to sources of information
| that have been vetted for truth, like textbooks. Textbooks are
| made to distill the most reliable information from the
| literature by a team of experts.
|
| One paper isn't truth; a dozen independent papers, all pointing
| to the same thing, is. That is what we call the "scientific
| consensus".
| Ace17 wrote:
| > I was curious to see how the self-correcting mechanisms of
| science would respond [...] > I was disappointed by the response
| from Southwest University. Their verdict has protected [a
| fraudulent researcher] and enabled him to continue publishing
| suspicious research at great pace.
|
| The self-correcting mechanisms of science can only correct
| _knowledge_. Those mechanisms work mainly by requiring the
| research to be checkable by others. Self-correction
| emerges from the accumulation of checks on the same topic, all
| leading to the same conclusion, and from the progressive
| retraction of bad research ... not from the elimination of "bad
| researchers".
|
| Efficiently "correcting" _people_, whatever that means, is a
| different beast. Such a mechanism belongs to an administrative
| entity that can issue decisions - and, by construction, can
| make errors.
| marcus_holmes wrote:
| How does bad research get retracted and corrected then?
|
| As the author points out, the "data" in these papers is large
| enough to contaminate meta-analyses for years to come. And if
| the Bad Scientist continues to produce more of them, then
| decades to come. The consensus of the entire discipline will be
| swayed. Self-correcting this will be very difficult, require
| lots of data, and be unrewarding. It probably won't happen.
| Politicians consulting The Science on this subject will get
| erroneous conclusions and make erroneous decisions.
|
| The Scientific Method is self-correcting. Academia, not so
| much.
| SiempreViernes wrote:
| The usual way is someone writing a better paper and everyone
| going "yeah, that's a better reading of the data".
|
| Not sure why you say correcting an established consensus
| would be unrewarding? Sure, for unimportant details getting a
| correction out isn't much fun, but correcting an important
| point is basically the career goal of every scientist
| precisely because it _is_ important and rewarding.
| marcus_holmes wrote:
| From the author, who specifically stated that this task of
| correcting bad science is unrewarding.
| Jolter wrote:
| She stated it is unrewarding because the university
| didn't reprimand or remove the bad researcher, and
| because most journals didn't retract the bad papers. She
| didn't produce new studies disproving the results of the
| existing one, which might have been a more rewarding
| pursuit for someone who had that inclination. I'm sure
| the author has other ideas about what research she should
| do, and it's not for us to say.
| marcus_holmes wrote:
| Yeah, I hear this a lot. That the goal of every scientist
| is to disprove the consensus and overturn bad science.
| Yet every time I read a blog article by someone who has
| actually tried to overturn bad science, they say it's an
| uphill battle against bad incentives, vested interests
| and academic politics.
|
| If you know of a scientist who has written about
| succeeding in achieving this objective and had a great
| time (in the last 20 years), can you point me to their
| writing, please?
| PeterisP wrote:
| It's probably very different in different disciplines - in
| social studies like the ones in the main article this is a big
| problem because, as you say, meta-studies will likely include
| these papers simply because they exist.
|
| However, in more practical sciences if someone fakes data to
| show that their method A works better than baseline B, then
| other people building on that find out that method A doesn't
| really work well for them for a weird reason, shrug, and
| ignore the bad paper, so it doesn't get used and cited, while
| the correct assertions persist, get replicated, repeated and
| cited.
| cosmotic wrote:
| If journals retract, they lose reputation. Thus they have
| incentives to ignore the problems, which explains the author's
| results.
| SubiculumCode wrote:
| but most likely they'll never publish that author again.
| ALittleLight wrote:
| That doesn't seem to be the case in the story in the OP. The
| problematic author is still publishing at a great pace.
| nxpnsv wrote:
| Which is weird... I guess reviewers are lacking in stats
| education.
| SubiculumCode wrote:
| I mean, those are pretty low tier journals but...
| SubiculumCode wrote:
| I did mean at the same journal.
| toxik wrote:
| Unrelated to the subject, but titles like these are immensely
| unhelpful to the reader.
| greesil wrote:
| I really like this work. Her methodology is remarkably simple.
| Which begs the question, is peer review so broken that simple
| things like this can't be caught? Should we have a government
| ministry of academic papers?
| dnautics wrote:
| that's kind of what we have already, though responsibility is
| divided between DARPA, NSF, NIH, DOE, NCI.
| jojobas wrote:
| The model of science that worked from Newton to Landau seems to
| be falling apart at today's scale.
|
| There are millions of people whose livelihood depends on
| publishing, so they will publish anything they'll get away with.
| The amount of noise is beyond any researcher's ability to pick
| through. True incremental improvements in all areas are drowned
| in a steady flow of bad research.
|
| Top tier institutions seem to survive in some sort of bubbles.
| raverbashing wrote:
| I agree (though we're probably looking back with rose-tinted
| glasses, and with all the BS that went on at that time filtered
| out by time)
|
| There's an over-reliance on p-values and publishing results and
| "peer-review" that can get pedantic, gate-keeping or useless
| real quickly.
| thaumasiotes wrote:
| > There are millions of people whose livelihood depends on
| publishing, so they will publish anything they'll get away
| with.
|
| This is also the problem with the iOS App Store and the Google
| Play store. They are modeled off Linux repositories, and those
| do not cause major problems. The phone app stores are riddled
| with problems, because you can charge for apps in those stores.
|
| If you're willing to pay people to lie to you, they will. You
| have to make the tradeoff between higher participation with
| lots of fraud, and lower participation with not that much
| fraud.
|
| > Top tier institutions seem to survive in some sort of
| bubbles.
|
| Not really; compare Brian Wansink at Cornell. (
| https://en.wikipedia.org/wiki/Brian_Wansink )
|
| Research output is reliable where it is actively relied on by
| engineers -- and not elsewhere. At this point, fraud is the
| norm and research is the exception in academia overall.
| unishark wrote:
| > Research output is reliable where it is actively relied on
| by engineers -- and not elsewhere.
|
| I'd word it this way: if the person producing the output is
| not responsible for it really working, it almost certainly
| won't. Even innocently this will be the case with anything
| complex, people get things backwards, miss a scale factor,
| etc. Finding that last bug can take more work than the rest
| of the project combined, much easier to publish what appears
| to work and move on. Much more so when there are direct career
| benefits to "hacking" the system over competing honestly.
| Especially considering the internet is awash with people
| trying to cheat through every other competition (exam
| questions, interview questions, etc.)
| thaumasiotes wrote:
| > if the person producing the output is not responsible for
| it really working, it almost certainly won't. Even
| innocently this will be the case with anything complex
|
| Indeed, there's a great example in the article itself, in a
| totally unrelated area:
|
| > I felt these journals generally did their best, and the
| slowness of the process likely comes from the bureaucracy
| of the process and the inexperience editors have with that
| process.
|
| In other words, these reasonable journals weren't able to
| use their retraction process even though they wanted to,
| because the process never gets used and therefore isn't in
| a usable state.
| thaumasiotes wrote:
| > these reasonable journals weren't able to use their
| retraction process even though they wanted to, because
| the process never gets used and therefore isn't in a
| usable state.
|
| Actually, this made me think about a journal purposefully
| running intentionally spurious papers, with a challenge
| to the readership to identify which paper was fake. If
| the system worked, that would cause every paper published
| in the journal to be investigated adversarially.
|
| The obvious problem with the idea seems to be that so
| much of the process is voluntary; people might be
| unwilling to submit papers to that journal.
| esja wrote:
| Interesting idea... reminds me of Chaos Monkey.
| unishark wrote:
| The government pays bounties to whistleblowers who expose
| grant fraud under the False Claims Act, along with big
| fines for the perpetrators. Not sure how much it extends
| to research fraud itself, but it certainly seems like
| something they should do. Perhaps they might even extend
| it to publishing stuff that can't be reproduced.
| kenjackson wrote:
| Fraud is the norm? This seems like a likely false statement.
| I'd love to see data supporting this.
| rurban wrote:
| You are not fighting science, you are fighting politics. China
| obviously wants to forbid violent movies and games for minors, so
| they come up with these fantasy studies.
|
| Very similar phenomena also just happened in the west, where
| news and politics wanted to suppress critical scientists. News
| was stronger.
|
| You also have to fight corruption all the time. Paid studies are
| constantly published to support some companies' goals, with much
| better tricks and not so obvious flaws. Best is just to study the
| background of the authors and only accept independent research.
| hoseja wrote:
| Finally someone is talking about the content of Zhang's
| "studies". Seems motivated to support predetermined policy,
| probably so the CCP can boast about their decisions being
| backed by science.
| morpheuskafka wrote:
| Makes sense overall, but why would they care about getting it
| published in an English-language Western journal? If they just
| want to convince their own people they can publish it in state-
| owned media, and it's not a democracy so it's not like they
| really need to convince anyone.
| salamanderman wrote:
| In a graduate product design class I took, our semester project
| was to design and build and make cost estimates for development
| of an IOT product. "Internet of things" wasn't a phrase yet, but
| that's what you'd call it today. We had to incorporate these
| ultra low power sensor/processor things the professor had his
| name on and he was a big promoter of. At the beginning of the
| semester his grad assistant presented her invention from a
| previous year, which she won awards for and had presented and
| written about and was part of her PhD work. It was a home health
| monitoring device and she showed plots of a month of data from
| sampling herself (pointing out she was the only woman on her
| project). It was very inspiring and I was very impressed by her.
| Jump to mid-semester, I randomly have a team of MBA students and
| me; the three of them were going to do all of the writing and I
| just had to do all of the engineering by myself (yay). I'm
| battling in the lab for hours trying to get the damn thing to
| read a voltage. I keep putting time on the GA's calendar for
| help, and she keeps blowing me off or passing me in the hall and
| saying "ummm maybe try this?" or she'd give me another device to
| see if the last was defective. In principle, she should have been
| able to point out whatever I was doing wrong in 15 minutes or
| less, but weeks of this avoidance went on. Eventually, after
| asking everyone in the department where she was and letting
| people know I was trying to meet with her and just sitting at her
| desk at our appointed time for over an hour waiting, she caught
| me in a hall, conspicuously looked both ways to see that nobody
| was around, and said "look, the things don't work. They've never
| worked. My device never worked. I made up the plots based on what
| they theoretically should have been if the product worked. I'm
| grading the projects. Just focus on the write-up of the
| business plan." So my work was done. And at the end of the
| semester, nobody's product worked, but most people acted like
| theirs did. Ours obviously didn't work, but we made up some shit
| about it being a mock-up because we didn't have the budget for
| some of the components. ... A professional photographer took
| shots of my team that were used in promotional material for the
| school.
| cornel_io wrote:
| Chinese researcher is full of shit and fabricates results, news
| at 11.
|
| For the haters, this is not racism but nationalism, China super
| incentivizes bullshit research at a high level these days, and
| it's gotten bad enough that we're starting to distrust any "work"
| that comes out of it.
|
| I don't know what the solution is, other than to subject Chinese
| submissions to more stringent and specifically non-Chinese
| review.
|
| That's absolutely nationalist, and arguably racist, but it's also
| smart.
| Blikkentrekker wrote:
| Why would it be racist?
|
| I have noticed in English-language discourse that often, shall-
| we-call-it, "non-white countries" are "races" but "white
| countries" are "nations".
|
| Also, Christianity and Judaism are religions, but Islam is a
| race.
|
| Explain that to me.
| captain_price7 wrote:
| > Islam is a race.
|
| Muslim here, that sounds absurd. In fact one of my biggest
| annoyances is when people view all Muslims around the world
| as a single entity. Every stupid trait of every Muslim majority
| culture gets blamed on entire Muslim world.
| Blikkentrekker wrote:
| I find it absurd too, but it is often how it is phrased in
| English-language discourse, even in Dutch discourse the
| anti-Islam branch phrases it as such: the language suggests
| that one can identify a "Muslim" by some kind of physical
| phaenotype of his body.
|
| The difference is that in general in Dutch discourse, such
| statements are considered racist or betraying such a
| mentality, and frequently protested, but, in English-
| language literature, even the "left" that claims to
| champion the causes of all these "races" and "religions"
| still very often writes in a way that betrays a mentality
| that some religions and countries are "races" and others
| are not.
| dlvktrsh wrote:
| It's built into their culture; it's hard to even blame them. I
| say this as politely as I can.
| Blikkentrekker wrote:
| "their culture" being the Chinese culture, or the Anglo-
| Saxon culture?
| 12ian34 wrote:
| >China super incentivizes bullshit research at a high level
| these days
|
| Just as you'd expect from a "Chinese researcher", you're going
| to have to qualify this statement for your point to hold any
| weight.
| remram wrote:
| In particular the incentives are set up the same way in the
| US, and reproducibility is a problem in many fields.
| temikus wrote:
| Dr. Elizabeth Bik is making efforts to detect image fraud in
| scientific papers, and reading her findings has made me quite
| worried not only about the general accuracy of the data from
| lesser-known universities, but also about how difficult it is to
| retract/correct those papers through journals.
|
| Check out her Twitter if you're interested in the topic:
| https://mobile.twitter.com/microbiomdigest
| samvher wrote:
| Not sure why you're particularly worried about lesser known
| universities? These issues seem to have more to do with the
| state of science and the review process than the prestige/fame
| of any particular university. I'm pretty worried about the well
| known universities as well, in particular the expectations and
| pressure to perform can affect people's work and decision
| making.
| temikus wrote:
| Because that may introduce new biases against smaller
| universities from India, Russia, and China, which already have
| issues getting things published. As a former scientist at a
| smaller Russian uni, I found publishing already difficult, and it
| saddens me to see a couple of bad apples ruining it for
| everyone.
| Bucephalus355 wrote:
| I once read an article that theorized that as a covert means of
| pre-war, countries would publish bogus disease, human health, and
| pathology data along with fake stats on "how poisonous is XYZ
| things".
|
| Wish I could find it. Struck me as creepy.
| darig wrote:
| In a world of more scientific misconduct than meaningful science,
| reporting scientific misconduct IS scientific misconduct.
| dandare wrote:
| Two of the mentioned studies seem to be related to the question
| of whether violent games and movies influence our behaviour.
|
| Assuming Qian Zhang's work is fraudulent, what was his agenda?
| That violent games are OK, or the opposite?
| JW_00000 wrote:
| His agenda could also just be to advance his career, and make
| as much money with as little work as possible.
| ogogmad wrote:
| "Social psychology" has had a serious problem with scientific
| misconduct and statistical incompetence. Is this related in some
| way to that subject area?
| ogogmad wrote:
| What about the rest of academic psychology for that matter? Is
| this endemic to that field?
| walleeee wrote:
| The field is over-crowded, for one. Two, its participant pool
| is almost exclusively undergraduate students on university
| campuses in WEIRD (western, educated, industrialized, rich,
| and democratic) nations. Three, experimental designs may rely
| on questionably reliable, possibly retrospective self-report
| to infer patterns of affect and cognition. Four, experiments
| are often conducted in highly contrived laboratory
| environments. (In-situ data collection as people go about
| their lives is becoming more common with the rise of
| smartphones and wearables, however.)
|
| As in most domains of science, researchers are also pressured
| to publish regardless of the quality of the work, and
| replicated findings are usually not considered interesting or
| valuable enough to publish.
|
| This is anecdata, so take it with a grain of salt. (I minored
| in psychology and worked as a research assistant with a
| social/cognitive psych lab as an undergrad.) Also, there is a
| ton of _extremely good science_ in psychology. It's a shame
| the recent profusion of low-quality work obscures it.
| lldbg wrote:
| What is an example of extremely good science in psychology?
| What's a result we can hold up to scrutiny and believe
| truthfully with conviction?
| shatnersbassoon wrote:
| Cognitive psychology topics such as memory and reading
| have a strong paradigm, and established results. In
| general, their effects are much easier to replicate
| because you require far fewer participants. This is
| because you can validly make multiple observations of the
| same participant.
| walleeee wrote:
| As the other commenter mentioned, cognitive psychology is
| well-developed (working memory, the relationship between
| action impulses and our subjective awareness thereof,
| etc). And there is very good social psychological
| research on a number of useful topics including diffusion
| of responsibility, the fundamental attribution error, the
| effect of dissenting voices on groupthink, the influence
| of perceived authority on human decision-making
| (Milgram), etc. Kahneman and Tversky's contributions (and
| all of "behavioral economics") could also be considered
| social psychology.
|
| Anyways, though, I'm not sure it's super useful to
| "believe with conviction" in science. Shouldn't we hold
| all results up to scrutiny, and weigh them on the
| evidence?
|
| Along these lines, by far the most important takeaway
| from the last 50 years of academic psychology, imho, is
| that we are usually far too willing to trust our own
| personal judgment. The brain is as much a self-deception
| engine as it is a reasoning machine. We have a far hazier
| view of what actually goes on in our minds than we
| usually think we do.
| shatnersbassoon wrote:
| I decided not to pursue an academic career after completing a
| Ph.D in Experimental Psychology. My main issue with it was:
|
| 1. You shouldn't know _a priori_ whether you will be able to
| reject your null hypothesis (otherwise what is the point of
| doing the experiment). So what you need is luck, luck that
| your hunch turned out to be true.
|
| 2. If careers live and die by published results, then those
| who are _lucky_ with finding significant effects early on in
| their career will win out.
|
| 3. Running a well-controlled experiment at scale is difficult
| in a way that I haven't found matched in the tech fields I
| have been in since leaving academia. I mean mega-hassle
| difficult.
|
| 4. You therefore have an incredible amount riding on the
| outcome of that experiment, because of its enormous
| opportunity cost.
|
| 5. The likelihood of being caught faking data is low
| (especially if you are halfway-competent which this
| researcher clearly is not).
|
| 6. The penalties of being caught faking data (as set out in
| this article) are relatively low.
|
| 7. The payoff of getting away with faking data can include a
| lot, up to and including a high profile academic career and
| tenure.
|
| 8. So from a game theoretic perspective, it's almost
| inevitable that quite a few people at the top have faked
| their way there.
|
| This is not to say that good work is not being done - there
| is some amazing work out there. I just think that like
| athletes taking steroids before they reach the big leagues,
| many academics succumbed to temptation to get an edge in
| order to get to the top.
|
| See the Dutch Social Psychology scandal for more on this.
| ricksunny wrote:
| I'm wondering how many folks are aware that a similar case of
| research misconduct is actually affecting the covid origin
| investigation?
|
| Essentially a lot of scientists in 2020/2021 cite the same two
| research papers on impounded pangolins to support that covid-19's
| virus, SARS-CoV-2, had a close cousin(s) infecting pangolins.
|
| However, this analysis from an MIT Broad Institute genomics
| researcher, Alina Chan PhD, implicates research misconduct on the
| part of those two articles' authors.
|
| https://twitter.com/ayjchan/status/1320344055230963712?s=21
|
| Turns out the authors of the pangolin papers can't provide the
| complete pangolin-infecting coronavirus sample genome (i.e. they
| can't provide 'the source code' if you will), and they profess
| not having coordinated with each other or even knowing each
| other, even though authors from both papers published a paper
| (notably also pangolin CoV genome oriented) together just a couple
| of months before the outbreak.
|
| Mainstream virologists like Angela Rasmussen PhD now call the
| pangolin cov genomes 'a mess',
|
| https://twitter.com/angie_rasmussen/status/13498414893842841...
|
| and yet these papers continue to get cited to help prop up the
| natural origin line.
|
| U.S. Right to Know published the email traffic between the Nature
| Medicine & PLOS Pathogens editors and the two sets of authors of
| the research papers in question:
|
| https://twitter.com/ayjchan/status/1354455267656785925?s=21
|
| . . . and after all that those authors still come up short. In
| other words, there's 'weirdness' around the provenance of those
| pangolin cov datasets, and the lack of formal retractions from
| Nature and PLOS Pathogens (despite those journals' editors'
| posted Q&A with the authors) means that the natural origin line
| continues to gain unearned steam (beyond being reasonably treated
| as simply the pandemic origin's null hypothesis).
| Johnny555 wrote:
| The article mentions Brandolini's Law:
|
| _"The amount of energy needed to refute bullshit is an order of
| magnitude larger than to produce it."_
|
| It's the first time I've heard it, but it's a very appropriate
| observation in today's world where misinformation travels faster
| and wider than correct information. If you're just making stuff
| up, it's much faster than looking up sources.
| Yizahi wrote:
| Also related: https://en.wikipedia.org/wiki/Gish_gallop
|
| I see this widely used by antivaxxers now.
| Blikkentrekker wrote:
| Information travels very quickly through a medium that wishes
| it to be true.
|
| You will find that the ability of the human mind to be
| critical, to refute with very salient arguments is suddenly
| acute when the mind doesn't wish it to be true, and this
| definitely also applies to _H.N._ comments.
|
| That _H.N._ in this case is so accepting of this one side of
| the story suggests to me that this is the side it seems to want
| to be true, notwithstanding it might entirely be, or not be,
| true.
| remram wrote:
| In your mind, is that "one side" that misconduct happens? Do
| you think the opposite side "it never happens" is reasonable?
|
| No one here is trying to argue that it happens all the time
| or more often than not, I'm wondering if that's what you
| think we're reading.
| Blikkentrekker wrote:
| I find that rarely when a side has an "opposing side" that
| either side is reasonable
|
| In this case, "misconduct happens" is not opposite to "it
| never happens" and I do not find the comments to echo the
| former sentiment as much as " _Academia has become so rife
| with either outright malice, or an inability to catch
| earnest mistakes, that virtually no research can be
| trusted._ "
|
| > _No one here is trying to argue that it happens all the
| time or more often than not, I'm wondering if that's what
| you think we're reading._
|
| No one is indeed arguing that, but what many, including me,
| are arguing is that nothing can really be trusted any more
| because it's a coinflip whether data is even reproducible.
| remram wrote:
| There are definitely two sides to _your_ story then :)
| Not necessarily to the top comments I have read. It might
| be a more important story though.
| Aeolun wrote:
| On the other hand, we are presented with the raw data that
| makes the author of this article suspicious.
| Blikkentrekker wrote:
| Even in the face of such evidence, it often turns out that
| when the other side tells its story, it's more reasonable
| than that and there are explanations.
| jiofih wrote:
| #bothsides
| m12k wrote:
| You're absolutely right - the big question of the day is not
| just "how do we counter disinformation?" but "how do we counter
| it at scale?" The bad-faith actors of the world have realized
| how incredibly cost effective disinformation is in an online
| world that can massively amplify messages, and which
| algorithmically selects for divisiveness and "engagement"
| rather than factuality and utility. We need a CAPTCHA for truth
| - but I'm not sure such a thing is even possible without AGI.
| So what does that leave us with - making algorithmic message
| amplification illegal? Putting that genie back in the bottle
| isn't going to be easy, so we'd need to be damn sure it's the
| right thing to do, to ever drum up enough support to get
| legislation like that passed.
| hashkb wrote:
| Aligning the incentives is the solution. There's no downside
| to bullshitting because our institutions are built on it.
| hobofan wrote:
| Isn't the answer well known (improving education and
| encouraging critical thinking), but unsatisfying because it
| is hard to implement and changes only crystalize slowly? If
| everyone were to e.g. request sources for claims instead of
| taking them at face value that would act similar to a CAPTCHA
| and prevent the spread of misinformation.
| TeaDrunk wrote:
| I don't know if improving education and encouraging
| critical thinking is actually the silver bullet one may
| hope it is. HN considers itself a cohort with significantly
| more critical thinking ability, but it will make wide and
| unsubstantiated, highly upvoted comments whenever a
| political subject comes up involving censorship,
| hallucinogens, systemic oppression, or unions.
|
| Of course, HN is also an invaluable resource when it comes
| to tech and sometimes other STEM subjects. It's just
| significantly less valuable for areas completely outside of
| it. I wouldn't trust HN as a neutral or critically thinking
| source for, say, the usefulness (or lack of usefulness) of
| gender studies.
| m12k wrote:
| Sure, but again, we need a solution that scales, and
| there's no empirical evidence that this is a scalable
| solution. We also wouldn't need this if everyone would just
| be nice to each other and not spread disinformation in the
| first place, but that's not really much of a realistic
| solution either, just wishful thinking.
| m463 wrote:
| This goes double for privacy.
| SulfurHexaFluri wrote:
| It's no wonder there is a trend now of just outright rejecting
| information presented when our "trusted" sources of information
| are very susceptible to malice and error without any real tools
| to combat it.
|
| My current view is that academic research should not be used as
| proof of anything and only as the starting point for your own
| research. And by your own research I mean your own actual
| tests. The papers can point you in the right direction but
| their findings should not be taken as fact.
| obscura wrote:
| I don't see how this is practical. You could spend a lifetime
| testing the work of others and still not get through it all,
| let alone get to working on anything original. Progress is
| made by building on the work of others.
| canofbars wrote:
| The point is not to test everything ever published, it's
| that when you want to do X, you look for papers on X
| understanding that they are likely flawed but better than
| starting from scratch.
| kenjackson wrote:
| This still doesn't make sense. For example, I want to paint
| my house with a less toxic paint. I can't trust any
| academic research. I have to now research what is toxic
| in paint? Then I have to find ways to measure various
| chemicals and gases? Etc...
|
| This seems like a complete utter waste of time.
|
| In real life most life-impacting academic research is
| much more right than wrong. You are far better served
| assuming so. Unless you want to waste your time going
| back to basic science and rebuilding all the academic
| knowledge in most things you wish to do.
| peytn wrote:
| I think what you're missing is that academic research
| focuses on novelty, not basic facts. Ultimately not
| trusting novelty can save time. Basic facts can be found
| in reference material.
|
| So it's more like suppose you want to paint your house
| green, and you read that somebody says you can mix red
| and blue paint to make a really cool green paint. Instead
| of immediately going out and buying enough red and blue
| paint to cover your whole house, first buy a small amount
| of red and blue paint, mix them together, and see if you
| get that neat green paint.
|
| It's common sense, but the window dressings of academia
| can lead you to burn time and money on things that are
| totally silly because somebody important-sounding said
| they did it once.
|
| Where people get burned is that there's an enormous power
| imbalance---junior scientists can end up stuck trying and
| failing to make green paint out of red and blue paint
| because nobody senior is going to take them seriously if
| they can't make green paint. This presents a serious
| ethical challenge if making green paint is impossible.
| kenjackson wrote:
| It's not as simple as buying paint. You're not going to
| use any treatment where research came from a medical
| school or associated institute without personally proving
| it works first? Good luck!
|
| If making green paint is impossible I think that it will
| eventually self correct, or is simply inconsequential. In
| some instances it may take a while, but if the
| alternative is to reprove a result before using it --
| that seems like something only a fool would do or someone
| with infinite time.
| obscura wrote:
| What are "basic facts"? Surely the point of most research
| is to uncover new facts? And what is "reference material"
| if not other research - research that you're using as a
| foundation for your own?
|
| It's fair to question things, especially if they don't
| make sense to you and even if acknowledged authorities
| are behind them. However, (1) something that you may
| question is not necessarily something I may question, and
| (2) questioning may be a waste of time.
|
| If a paper that says mixing red and blue paint makes
| green paint has a thousand citations, perhaps you don't
| need to question it because others already have. If you
| can't reproduce it, the simplest thing to do is ask an
| expert who says it is possible to do it.
| rkagerer wrote:
| Reminds me of " _a lie can run around the world before the
| truth has got its boots on_ ".
|
| That precise quote is from Pratchett but there are similar,
| earlier citations
| https://quoteinvestigator.com/2014/07/13/truth/
| tolbish wrote:
| Aren't these all variations on well known aphorisms such as
| "publish or perish"?
| [deleted]
| toxik wrote:
| Publish or perish is the opposite of "silence is golden"
| more than a statement on the speed of lies and truths.
| tsimionescu wrote:
| I think the ones above are much more general than 'publish
| or perish'.
| soulofmischief wrote:
| That aphorism applies to publishing within the scientific
| community and the need for grants, tenure, results, etc.
| mxcrossb wrote:
| I read retraction watch every day, so I'm used to seeing
| stories like this. But I'm always surprised at the effort people
| go through over articles published in garbage journals. You're
| never going to even make a dent.
| obscura wrote:
| The garbage journals are a plague. You publish one article in
| a legitimate journal and in next to no time these worthless
| journals (and conferences) start spamming you, even if what
| you wrote has nothing to do with what they're supposedly
| publishing.
| noisy_boy wrote:
| It should be a requirement to be able to provide the anonymized
| raw data for publication in peer-reviewed journals, and multiple
| retractions should have an effect on the credibility of the
| authors and their chances of being able to publish such research.
| Without the penalty, there is no chance of reduction in such
| fraud.
| runsWphotons wrote:
| There have been several discussions about this topic this year
| and there are lots of interesting anecdotes in the responses. Is
| anyone aware of a good longer form review of the difficulties
| obstructing better reproducibility and more honesty in academic
| publishing? Something attempting to collect the different
| theories and observations people make in discussions like these?
| cies wrote:
| Many stories like this and people wonder why the general public
| is less "believing" in science lately. I think, sadly, the general
| public is right not to believe as strongly in science as it maybe
| once did; the results, though, are dramatic, in "baby with the
| bathwater" kind of proportions.
|
| I think science should fix itself. Just publishing papers should
| not be the metric to reward. A retraction should seriously reward
| the flaw finder (like sometimes with exploits), and really harm
| the flaw author/publisher: both scientist and journal.
| 1MachineElf wrote:
| Last week in a university course, I was surprised to read in _A
| Short History of Physics in the American Century_ (Cassidy,
| 2013) that, at least in physics, US public perception of
| science had been tumultuous following WWI, WWII, and the
| Cold War. As a scientific discipline, it only reached maturity
| through the war-effort, which earned it infamy for bringing
| about terrorizing nuclear weapons.
| Blikkentrekker wrote:
| It is very good that the general public is less believing in
| science.
|
| I remember well when the public was very believing, including
| me, and in hindsight it was always undeserving of such faith.
|
| It was a very misguided thing to take a conclusion as fact, so
| long as it be called "science", for often upon closer
| inspection the methodology was dubious, and it was never
| attempted to be reproduced, so even if the methodology were
| sound, the data could either be a fluke, or outright
| fabricated.
|
| This is not a new development; if anything, the critical stance
| is the new development. It has been going on for centuries most
| likely that completely fabricated data stood the test of time
| because no one bothered to replicate it. When I was at
| university in the 2000s, we were already told of respected
| researchers that fell from grace as it was found they had been
| fabricating data for decades and it took this long for someone
| to catch wind of it, as no one bothers to replicate research in
| this world.
|
| The only new development is that now, some are starting to.
|
| "Science" is not enough to believe it; the methodology must be
| inspected and found to be salient, and the data must have been
| replicated at least once, praeferably more, by another
| independent group.
| kube-system wrote:
| To be credible does not require infallibility. The broader
| social consequence of the general public losing faith in
| science is not that they will suddenly become enlightened in
| the nuances of the scientific discovery process -- it is that
| they will turn to alternative sources of truth. Science isn't
| a perfect source of truth but it is a heck of a lot better
| than seeking truth through mythology, tribalism and the
| opinions of ideologues. Scientific literacy is the ideal
| state, but the world is not that.
| Blikkentrekker wrote:
| I find that much of the newly inspired criticism on science
| after the appearance of the replication crisis did not go
| to alternative sources of truth but started to admit that
| there is much that men don't know and won't know.
|
| The problem is man's arrogance that it knows, that it can
| find a solution to every quaestion it asks.
|
| "science" is also not even close to "not infallible" it is
| a complete coinflip whether any peer-reviewed result is
| even worth the paper it's printed on.
|
| Dare I say it's under that, because it's a coinflip whether
| the data are even reproducible, but the conclusions derived
| from the data, even if they be reproducible, are almost
| invariably involving bigger leaps of faith than making data
| up.
| mam2 wrote:
| > Many stories like this and people wonder why the general
| public is less "believing" in science lately.
|
| Eh, I'm not sure bad studies are the cause.
|
| Scientists, especially doctors, wanting to use their authority
| in some debates while two of them can be saying completely
| opposed things may, however, contribute...
| speeder wrote:
| Problem is more serious than that.
|
| What is happening is that the bad studies are being used for
| policymaking.
|
| Examples: the "nutrition pyramid" that encouraged
| carbohydrates and blamed health issues on animal-based food,
| was later found out to be based on research that was
| blatantly corrupt, with researchers getting bribes from food
| industry to manipulate or hide results (a case of hiding
| results: one researcher who found that vegetable oil
| causes a decrease in blood cholesterol also found WHY it
| happened, but omitted that part from his paper... the reason
| is that cholesterol is needed for cell maintenance, and
| consuming only vegetable oils causes a deficit of it, so the
| body pulls cholesterol from the blood to repair itself, and
| even that might not be enough, with some people suffering
| damage).
|
| Or a lot of pharma circlejerking that turns into law or
| regulations.
|
| Or the paper mentioned in the article, that was about video-
| games and aggression, with many countries passing laws
| regulating video game consumption based on such papers.
|
| Or the original reason Cannabis was banned (long story short:
| part of the reason is that they wanted to ban hemp fibers,
| that was being an obstacle to some newly invented synthetic
| fibers, some of the government people involved, had stocks of
| Dupont and other fiber companies, and "accidentally" banned
| hemp fibers while "trying" to ban the drug, based on
| manipulated and fraudulent science).
|
| Or more seriously: the papers that recommended "Austerity"
| and basically destroyed the livelihoods of millions of people
| were later found to have math errors that changed the
| conclusion completely.
|
| And the list goes and goes on.
| ncmncm wrote:
| Hemp fiber was in competition with wood fiber harvested
| from Hearst-owned western-US forest land. Hearst also owned
| a newspaper chain, and found using it an easy way to
| eliminate the competition. Hemp is both cheaper and better-
| quality than the wood fiber for paper, but had no
| newspaper-chain backing.
| dlvktrsh wrote:
| I sometimes think it's just a population problem. There are so
| many of everything and there's so much competition to be the
| best and succeed; the rules and customs we have in place for
| most things are just ancient in comparison, and they keep
| getting older by the minute.
| known wrote:
| I think there should be a bounty for revealing the truth
| motohagiography wrote:
| Was just thinking that if sci-hub were more like a git repo or
| wiki with editors and peer comments, it would add a great deal
| of transparency, and pretty much destroy this anti-pattern.
|
| FOSS still has a lot of vulnerabilities, but it has also caught
| a lot of vulnerabilities and more people know what to look for.
| Perhaps this is why there is so much resistance to sci-hub,
| because there are so many compromised editors and academics who
| risk exposure?
| jononor wrote:
| sci-hub has not tried to move anything in that direction. I
| would guess that opposition to sci-hub is mainly because it
| could reduce barriers to entry, and de-value traditional
| journals. For established researchers the high level of
| gatekeeping caused by journals is in their favor, since they
| know how to play the game, and many other want-to-be-
| researchers do not.
| pototo666 wrote:
| Hell. This is disgusting.
|
| I used to think of joining the academic world; my interest was in
| Political Science. But I had many doubts over the academic
| mechanics in China, and a lot of bad news spread. (Besides, I
| doubted my passion and intelligence.) So I chose to join the
| business world, then the startup world.
|
| Still, reading such news disgusts me. There should not be such a
| thing as _Fake it until you make it_ in academics.
| UShouldBWorking wrote:
| As a PhD student I found some data that made no sense. Basically,
| the amount of a compound added to a plate of cells EXACTLY matched
| the increase in yield; the former student had just multiplied the
| yield by the amount of the added compound. I reported it to my advisor
| but (of course) she swept it under the rug and did nothing.
|
| I tried to contact the former student but also got nothing. There
| were a few more similar instances before I became completely
| disillusioned and left the phd program after 4 years, totally
| burned out with little to show.
|
| To this day I hate that lab and the whole institution. Rotten to
| the core.
| yason wrote:
| Scientists are people. Taking the path of least resistance and
| skipping over details is the nature of some people, and thus of
| some scientists as well.
|
| It's not unknown in other professions. Anecdotally, I haven't
| had my car serviced in a shop for years because mechanics even
| with supposedly good reputations would fail to do even trivial
| maintenance jobs properly, invariably requiring me to
| partially redo the job myself to ensure the quality of the
| installation, which defeats the point of having a job done by a
| paid professional in the first place. It's the boring details and
| following the process carefully that these people skip, in hopes
| of getting results quicker and moving on sooner. I just take it
| that some people are like that, and the lower barrier of entry to
| sciences than before means there are more corner-cutters in
| academia as well.
|
| The question is: where in the academic journey from student to
| master's, grad student, and doctor are people vetted for the
| work in terms of their psychology and personality, not only
| knowledge and intelligence?
|
| The danger signs with this sort of personality are undoubtedly
| visible in early stages. The market pressure to maintain one's
| reputation doesn't seem to work in the academia as illustrated by
| the article. Thus, it would be better to start explicitly culling
| this attitude off the field before these people get to establish
| themselves despite their bad workmanship.
| Aeolun wrote:
| > ... _a swift two-month process_ ...
|
| O, M, G. I'm really happy I'm not in a research field.
|
| In my experience, China will do a _lot_ to look good in research.
| Apparently up to and including falsifying data.
|
| My experience is mostly with gaming university ranking mechanisms
| (though arguably publishing lots of bogus articles helps there
| too), but the increase in "research" output from China has been
| nothing short of amazing.
| speedgoose wrote:
| I think there are at least one or two platforms for rating and
| reviewing articles after they are published, but I can't find
| them. Does anyone remember their names?
| timbargo wrote:
| Are you thinking of PubPeer? https://pubpeer.com/
|
| Which I recommend the browser extension for. It provides a
| banner at the top of the page anytime the site displays a paper
| with PubPeer comments on it. https://addons.mozilla.org/en-
| US/firefox/addon/pubpeer/
|
| Plenty of comments when you visit this thread's site. :)
| fareesh wrote:
| What if you had a webpage with the paper at the top and a
| comments section at the bottom where anyone in the world could
| post a rich-media post of their findings that cast doubt on some
| findings with regard to the original paper.
|
| We currently judge academic research by a metric of "citations";
| why not have a new metric by which the world can be cautioned
| about potential issues with the result? You could use the bug
| bounty model and invite crowdfunded contributions to pay out
| bounties to those who participate in debunkings.
|
| This does not need to be accepted as canonical by the scientific
| community at large.
| ramraj07 wrote:
| That's already a thing - https://pubpeer.com/static/about
|
| It has definitely brought out many demons in papers, but it's
| still not widely subscribed to or referred to in academia.
| renewiltord wrote:
| Well, I suspect that comments section could be autogenerated by
| a bot that randomly generates comments that say "Correlation !=
| causation" and "Sample size only 11". The bot needn't even
| parse out the true sample size. It can always use a constant
| number.
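|
| A minimal sketch of such a bot (hypothetical Python; the canned
| phrases and the fixed sample-size number are made up for
| illustration):
|
|     import random
|
|     # Canned criticisms; the sample size is a constant, never
|     # parsed from the paper.
|     CANNED_COMMENTS = [
|         "Correlation != causation",
|         "Sample size only 11",
|     ]
|
|     def generate_comment():
|         # Pick a canned criticism at random, ignoring the paper.
|         return random.choice(CANNED_COMMENTS)
|
|     if __name__ == "__main__":
|         print(generate_comment())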
| throwaway2245 wrote:
| > ...anyone in the world could post...
|
| This would be ultimately filled with trolls, nonexperts, and
| people with an ulterior motive or grudge: so any such effort
| would be less trustworthy than the original publication.
|
| If you want to find mistakes, you can find one in every paper.
| The character of the mistakes is important - is it fraud, is it
| incompetence, does it negate the results? Or is it good science
| being done by a human? An open website doesn't seem to me
| likely to be able to draw that out.
| grumple wrote:
| > any such effort would be less trustworthy than the original
| publication.
|
| The idea that scrutiny would be less reliable than blind
| trust is absurd. The question in the OP, for example, could
| have been in the comments sections of these papers.
| throwaway2245 wrote:
| I understood that this was about scrutinising papers in
| academic journals - the academic journal's value is quite
| literally its trustworthiness. (The journal exists and is
| employed to do the scrutiny)
|
| A comment from any random person (in general) holds a lower
| level of trustworthiness.
| grumple wrote:
| This post - and many other conversations we've had on the
| subject on HN - is about the lack of integrity of
| academic journals. More broadly, this contributes to
| discussions about fraud in academia, the reproducibility
| crisis, and the pressure to publish.
|
| "Any random person" includes many researchers, PhD
| holders, and just random people with time on their
| hands, whose commentary could be judged on its own
| merits rather than by credentials or a stamp of approval
| from journals that don't even examine the data used by
| the studies they publish. This does not mean that
| comments should go completely unmoderated.
|
| As far as I've seen, no journal does a thorough
| examination of data referenced by studies it publishes.
|
| Credentials, papers, citations, and studies do little to
| increase the levels of trustworthiness precisely because
| papers like these are not publicly scrutinized.
| pjc50 wrote:
| > What if you had a webpage with the paper at the top and a
| comments section at the bottom where anyone in the world could
| post a rich-media post of their findings that cast doubt on
| some findings with regard to the original paper.
|
| That just enables your field to be destroyed by cranks who
| themselves have no accountability.
| terranstyler wrote:
| A friend of mine reported scientific misconduct (p-hacking) and,
| together with a few colleagues, left the research group due to
| moral harassment by the head of that group.
|
| The university removed all of them from the research group and
| said they could continue working on the data because it belongs
| to the university.
|
| 3 months later:
|
| - investigations of scientific fraud against the people who left
| (neglecting authorship, because the data could after all not be
| used and the head wanted a say in the articles, i.e., to change
| them completely). Also some other random allegations that didn't
| stick.
|
| - police investigation of defamation (because they reported the
| scientific misconduct and some other misleading statements used
| by the head in sales for a research-related product)
|
| - the university now expects them to contact the head of the ex-
| research group to clarify questions of authorship
|
| - the head meanwhile continues as before
| Prcmaker wrote:
| I've reported similar misconduct before. Was told the claims were
| very concerning, and that there was a quite clear problem that
| would be investigated. A few months later, I was informed the
| problem had been resolved, though on inspection, nothing had
| changed. I learned their investigation involved asking the person
| of interest if everything was legitimate, to which they said yes.
| Investigation closed. I am truly disappointed at what has become
| of the industry I used to appreciate so much.
| rdtwo wrote:
| I'm surprised it didn't involve retaliation against you. I know
| of a few cases at UW where the outcome was retaliation against
| the reporting party
| Prcmaker wrote:
| It hasn't been. My research is now being actively sabotaged.
| dnautics wrote:
| In graduate school, in my lab there was a grad student who was
| kind of an unlikely "professor's pet". He was tall and had
| surfer's long hair with a bit of a hippie aesthetic. Anyways, he
| was also really completely clueless about how to do science
| correctly, but also, I guess, really good about playing politics
| (there was a time when he asked me to put some bacterial plasmid
| DNA on my mammalian cells. I told him "it doesn't work that way",
| but I did it anyways and handed over the cells, and he got the
| observation he was expecting). On his main project he was teamed
| up with a super sketchy foreign postdoc that I was convinced
| would say anything to get high profile papers out.
|
| So they did a series of experiments and reported results that
| screamed "artefact". On one of them, for example, the postdoc got
| trained to use the electron microscope and they went through
| thousands and thousands of images to pick out the one that had
| "just the right morphology" (I am pretty sure they were snapping
| photos of salt crystals). On another, they reported that their
| research subject protein was so fast at the process we were
| studying that everything occurred IN MIXING TIME. That, to me,
| screams "you are not doing your experiments carefully".
|
| Meanwhile I was sweating balls working on a very careful
| preparation of similarly finicky proteins (you agitate them and
| they do bad things since they're metastable) and finally got it
| to produce reproducible results. I suggested they adapt my
| preparation to their protein but they couldn't give a damn, they
| had already published their paper and had moved on to sexier
| proteins.
|
| But then an intern was put on the project, and she could not
| reproduce their results, after working on it for six months (she
| is careful and honest). At the end, I felt so bad for her, I
| offered to train her on my technique, but she passed. I think she
| was burned out on the project. I asked if I could get a sample of
| the protein that she had prepped, and she agreed.
|
| I ran the protein through my preparatory technique and observed
| that there was a contamination that could have seeded the
| kinetics of their process. Upon isolating an uncontaminated
| sample, I carefully but briskly rushed the sample over to the
| machine. Nothing. Curious, I jacked the temperature up to get it
| going faster. Nothing. I left it in the machine overnight.
| Nothing. Finally, convinced that I had likely done something
| wrong, I dropped the sample in a shaker at temperature, came back
| the next day and recorded amazingly high signal. In short, the
| observation that it was "super fast" was entirely an artefact.
|
| As I, too, was trained on the Electron Microscope, I quickly
| spotted my sample onto an EM disc, reserved some time and hopped
| on the 'scope. The first grid sector I looked at, there was
| literally TEXTBOOK morphology in front of my eyes.
|
| I stapled together my results, gave it to the grad student, and
| told him that the general gist of his paper was probably still
| correct, but that he should be careful about characterizing his
| protein as exceptional. I then said it was in his hands to do the
| right thing.
|
| What do you think he did? Nothing, of course. He kept on the
| talks circuit, still talking about how exceptional his discovery
| was, and to date there have been no retractions. He even won the
| NIH grad student of the year award.
|
| The epilog is that after a decade of floundering I realized that
| even though I am pretty good at science, I was no good at playing
| academic politics and quit the pursuit; I drove for lyft/uber for
| a bit, and now I'm a backend dev. I am certain that my
| experiences are not unique. Amazingly, the intern returned to our
| lab and had her own three-year stint chasing ghosts that turned
| out to be an overoptimistic interpretation of results reported by
| a postdoc.
|
| Oh. What happened to the grad student? He's a professor in the
| genomics department at UW.
| stanford_labrat wrote:
| I did my undergrad at UW and was heavily involved with the GS
| department.
|
| There's only so many people this could be lol, really makes me
| wonder.
|
| Edit: found who it is. Why am I not surprised?
| [deleted]
| asveikau wrote:
| I don't know anything about this field and I certainly don't
| know any people involved, but this comment made me curious
| and after a few google searches I think I can tell who it is
| too.
|
| Which leads me into some thoughts about not rushing to
| judgement. I believe the commenter above is doing his best to
| be a reliable narrator, but it's always possible there was
| more to this story that was not visible to him at the time,
| that might exculpate a bit. It's also notable that people
| change over time, can improve on their faults, and might have
| learned something in the years since. Best not to view their
| past mistakes as forever damaging.
| ncmncm wrote:
| Those people can still be useful:
|
| When there is an open question, with important consequences
| but unclear resolution, it is hard to know the right answer.
| Somehow, it is easier to know the _wrong_ answer, and that
| person will reach for it immediately. So, watch him and
| choose the opposite.
|
| In any group there is such a person, called the Oracle of
| Wrong, and almost anybody can tell you who it is. He is the
| one most likely to wear a trilby, and no wrong choice he has
| made has ever caused him any personal discomfort.
| garmaine wrote:
| You should tell UW about this.
| selimthegrim wrote:
| This is the sign of someone who has never been in grad school
| rscho wrote:
| Also describes medical research. Only it's even worse. Problem
| is, you have no choice if you want to work in a university
| hospital. The system essentially tells you "you'll be doing
| shit science... or you'll leave!". Been at it for more than
| 10y, and no hint of change in sight. This is going to be
| really, really hard to change unfortunately.
| Aeolun wrote:
| How is medical research shit science?
| rscho wrote:
| At least my research is. Mainly due to hierarchical
| pressure. And from what I see around me, most medical
| papers must be read with a healthy dose of skepticism. I've
| personally witnessed incredible feats of dishonesty that I
| won't describe here.
|
| There are multiple factors degrading research quality. An
| important one is spreadsheet incompetence. Another is
| that medical research goes hand in hand with academic
| achievement, which in medicine also means money and power
| (probably more than in most other fields). I guess we have
| the same kinds of problems as everyone else, overall.
|
| One thing people often miss is that clinical data is of
| abysmal quality and reliability, so honest analysis is
| really difficult.
| valarauko wrote:
| I'm a postdoc at a medical school, and this hasn't been
| my experience. At least in our setup, clinical data tends
| to be channeled into a collaboration with a computational
| lab who are better stewards of data handling. Is there
| cherry picking and over selling the results? Sure.
| Outright dishonesty is something I have yet to see in my
| current institution (I did see a fair deal of fudging in
| my graduate institute, though)
| rscho wrote:
| Well, I think things are beginning to be better managed
| in some centers. If that's your case, then good for you.
| In my center, it's basically the wild west and data
| management is a catastrophe.
|
| But are you working the clinical wards? Because things
| are definitely much better managed in places such as
| epidemiology units. The true horrors mostly come from
| clinical researchers digging into excel spreadsheets
| without knowing a mean from a median.
| valarauko wrote:
| I'm in a computational lab, but I think I understand what
| you're describing. My medical school was acquired a few
| years ago by a hospital network, encouraging us to
| collaborate with our new clinical researchers. The
| medical school itself had a strong background in rigorous
| basic research with animal models, and the transition to
| clinical samples has been relatively smooth. The data is
| obviously nowhere near as clean or plentiful as with
| animal models, but that's to be expected.
|
| So for example, my lab's expertise was in single-cell
| developmental models, primarily for organ development in
| mice. Extending that to tumors from clinical samples was
| relatively straightforward. One of my colleagues is
| working on an autism dataset, but I wouldn't expect that
| to be anywhere near as clean.
| agumonkey wrote:
| I think a lot of people have deep enough pockets to fund
| a side lab. It's worth trying.
| cbozeman wrote:
| No wonder we have a crisis of faith in our institutions.
| dnautics wrote:
| The irony is that the intern, after three years of suffering,
| twisted the arm of our PI, convinced him to do the right
| thing, and got an 11-page retraction identifying and
| confirming the source of her artifact. This diligence got her
| a job at a big pharma company.
|
| Of course her paper should have been a cautionary tale, but
| there are still people using the flawed technique for high-
| throughput studies to this day.
| mpoteat wrote:
| I just wanted to say that this resonated with me, as a former
| grad student involved in protein research who is now doing dev
| work. I hope you're doing well these days!
| dnautics wrote:
| I'm doing pretty good, thank you! I enjoy coding; I've always
| been a better coder than a biochemist (likely on account of
| having started coding at age 5 and biochemistry at age 19). I'm
| still doing garage science here and there, and being a
| programmer gives me both the means to afford that and _some_
| time to do it.
| panabee wrote:
| i have been thinking a lot about this problem. society has
| innumerable unsolved problems in healthcare, and many
| talented people who would like to contribute but cannot.
|
| in software, the open-source model allows people to advance
| critical initiatives without quitting their day jobs or
| making onerous commitments.
|
| how can we achieve the same in healthcare, that is let
| outsiders contribute and advance the state of the art?
| Melting_Harps wrote:
| > how can we achieve the same in healthcare, that is let
| outsiders contribute and advance the state of the art?
|
| The Biohacking community is actually really adept and
| has made a lot of progress in making Science
| accessible. Prior to COVID you already had teams
| working together across continents and different time
| zones, so when someone like Josiah Zayner wanted to
| tackle a COVID vaccination trial on himself and other
| biohackers, they already had the means and methods
| ready to go.
|
| The problem is that if you want to play by their
| (academia's) rules you're never going to make any
| inroads: you can't publish, no one will give you a
| grant for your work, and you're not going to be a
| chair of anything for your work even if it pans out.
| But certain therapies now in development started off
| as Biopunk/Biohacker projects.
|
| It's super exciting and hard, but also way more work
| than just BSing your way into a professor role in
| academia, which is an all-too-common occurrence.
| Professional students becoming mediocre professors was
| a far worse problem in the Sciences than I could have
| ever imagined. The ones I really felt bad for were the
| postdocs with actual meaningful research, often with
| severe social anxiety and poor speaking skills, who
| were forced to teach undergrad and simply read the
| book aloud as 'lecture.' My Organic Chem professor
| comes to mind. My inorganic professor (he did his MSc
| at Cambridge!) was a rockstar to us undergrads and
| would hold office hours during his lunch break between
| lab research, yet the university made him protest
| before they'd release back pay during the cuts and
| layoffs... it was pathetic and I felt so bad for him.
| My review of the University was scathing as I left,
| and I've never really forgiven them for that.
|
| Obviously, with no VC model in Science to follow for
| anything but the most brazen outliers (Theranos), it's
| unlikely to happen. Personally, I'd volunteer to help
| middle school or HS kids get involved in plant and Ag
| science, take some on in culinary if such an industry
| still exists in the US after COVID, and help them
| bypass the University track altogether. That is what I
| focused on after I left working in a lab, but there
| aren't many avenues for this model to scale to take on
| massive projects, due to a lack of funding. And the
| money and stability are abysmal, but the Science and
| the fraternity of actual Scientists doing meaningful
| work are probably more than half of the reason most of
| us decided to study it in the first place.
|
| Chamath needs to stop pretending to care about
| politics and, with his billions, solve real problems
| like funding Community Science wet-labs next to
| libraries, to help the youth care about Science in a
| meaningful way instead of wasting their time on TikTok
| or Instagram.
| px43 wrote:
| Josiah's videos about demystifying the COVID vaccine all
| got taken down, then the entire channel got shut down.
| Super disappointing move by YouTube. It was definitely
| one of my favorite channels for science education.
| brandmeyer wrote:
| (squinting suspiciously) ... exactly why did they get
| taken down? The Algorithm has a well-earned reputation
| for being capricious, but there's also a ton of good-
| sounding bullshit out there.
| dnautics wrote:
| I don't know if it will "allow outsiders to contribute"
| but I would like to see a biotech that makes patent-free
| drugs. I tried to make a nonprofit out of that, but there
| was a lot I didn't understand about how I work, how the
| world works, and how to get things done, so I will take
| another crack at it in 5-10 years.
| panabee wrote:
| i believe this is not only possible, but will happen
| sooner rather than later because of advancing
| capabilities in software, machine learning, and
| collaboration. we simply need the right people providing
| capital and launching these biotechs.
|
| re patents, the key is to drive down costs for research
| and testing. research seems like the low-hanging fruit,
| comparatively speaking, but it's unclear how to reduce
| the costs of clinical trials in an uncontroversial way.
| wesleywt wrote:
| I am also from a molecular biology background and saw this
| often. We call these guys the "Golden Boys". They are super
| successful, but completely useless. If you still believe life
| is fair, wake up, sunshine.
| noobermin wrote:
| This is so dastardly common sometimes I'm surprised science
| works at all.
| [deleted]
| qqj wrote:
| this is incredible. And I thought I had it bad with politics in
| tech companies... this is some next level not giving a fuck
| right there - people who cheat like that should be punished
| severely, and work as supermarket cashiers, not become fricking
| professors. Unfortunately, I too, were I in your shoes,
| wouldn't pursue it much further than filing a formal complaint
| or two: the game is asymmetrical, and it's much harder to nail
| someone for wrongdoing than it is for them to fudge up some lab
| results. Not to mention the emotional toll, the waste of time,
| and the potential political blowback the would-be whistleblower
| would suffer...
|
| Pretty depressing stuff.
| dnautics wrote:
| About half of my friends in grad school have had their
| careers damaged to varying extents by academic fraud. Some
| have wasted lives chasing bad results (one friend lost
| several years chasing a bad result by Homme Hellinga), and
| some have had bad stuff perpetrated upon them by bad actors
| with big names (one had her result - we suspect - stolen by
| Carolyn Bertozzi via the review process; luckily her boss
| was a member of NAS and PNAS track-III'd the paper ahead of
| Bertozzi's publication).
| gspr wrote:
| Just be aware that it's not always like this, and that some
| fields are less prone to it than others.
|
| In my 8 years in research mathematics, I didn't see a single
| case that would come close to this horror show (not that
| mathematics is free of unethical behavior, of course).
| Collaborating with biologists, however, I got exposed to a
| world far more backstabby than I've since experienced in the
| corporate world.
| read_if_gay_ wrote:
| I think this is largely because results in math are easily
| verifiable compared to chemistry, or as an even worse
| example, the social sciences. The latter are also suffering
| from the replication crisis the most.
| fingerlocks wrote:
| Math has a different problem. Because of the wide breadth
| of the field, and highly specialized nature of problems,
| it can take a very long time for anyone to actually
| verify a result with confidence. If ever. Unless you're
| doing something famous like P!=NP, there might not be
| many people capable of checking your work in a reasonable
| amount of time.
|
| The story of Fermat's Last Theorem is a great example:
| what would have happened if that hadn't been a famous
| problem?
| read_if_gay_ wrote:
| I agree, even proofs are wrong more often than you'd
| think, but I'm not sure whether math is actually so
| uniquely broad that other fields don't suffer from this
| problem.
| gspr wrote:
| Maybe it's not its breadth, but its depth. That isn't to
| say that other fields aren't deep, don't get me wrong.
| But the more tightly coupled with the high-level physical
| world a field is (think for example medicine or biology),
| the more it is prone to having technological advances
| from the outside make new sub-fields crop up and old ones
| die. Think of for example the multitude of research areas
| made possible by gene editing, or high-resolution NMR
| imaging.
|
| Of course this happens to some extent in math too, but a
| lot of subfields aren't killed or born due to outside
| technological changes. Number theory remains number
| theory, and still builds directly on centuries of work,
| even if computer verification has helped in some cases
| (disclaimer: I'm not a number theorist).
|
| For most subfields of mathematics, you have a lot of
| depth to cover before you get to the forefront of
| research. That isn't to say that it's by any means easy
| to get to the forefront of more high-level physical
| sciences, but there are certainly subfields in biology or
| medicine that didn't exist a mere 40 years ago (also true
| in math, but in _general_ far more rare there).
| renewiltord wrote:
| My cousin was a student at a lab where a sketchy grad student
| doctored results too. She was majorly sketched out by the whole
| thing and by the fact that the PI supported the whole op. It
| was very painful and set her back a bit, but she managed to
| switch to a different lab and do proper work, defend her
| thesis, graduate, and get far away from those people. Now we
| just shake our heads in disbelief at them, but at that point it
| was fairly existential. It's not easy to switch labs after some
| time there, and people will sort of distrust you and everything.
|
| My impression is that some large number of 'results' are fake
| results. I can't even imagine what the fakery is like in the
| non-hard sciences when the hard sciences have this stuff.
| DarkWiiPlayer wrote:
| Every now and then I go through some of Wikipedia's sources on
| certain social topics because I don't trust them at all. The
| amount of BS I've found in papers, even though I don't even
| have any research background at all, is impressive.
|
| My favourite was probably this one paper where the author
| essentially made a reddit-post asking a community about
| themselves, then cherry-picked (the post is still up, with
| timestamps and all) a few comments and came to a conclusion
| that didn't really fit those hand-picked comments.
|
| In conclusion: Wikipedia is a dumpster fire and shouldn't be
| used for anything other than hard facts like dates and for
| entertainment.
| MaxBarraclough wrote:
| Did you edit the Wikipedia page or challenge the
| interpretation in the talk page?
| auganov wrote:
| I wanted to edit an article about an obscure religious
| group that included some blatantly wrong statements
| about the espoused ideology. They had an academic
| tertiary source making these claims, extrapolated from
| research by the same author that made both plausible
| claims but also included similar inferences. Being a
| very obscure group, there aren't many other academic
| sources discussing it. All the literature that could
| disprove these claims comes from non-academics
| affiliated with the group, who are a no-go.
|
| As per Wikipedia rules (which took hours to figure out),
| there's not much one can do short of getting some
| impartial or friendly academic to publish a more
| reasonable article.
| DarkWiiPlayer wrote:
| I already spend a lot of time trying to "fix the
| internet" and just don't have the stamina to also start
| fixing wikipedia now. I'm also being turned away by the
| constant stories of edit-wars that tend to happen about
| certain controversial topics.
| roel_v wrote:
| Wikipedia itself has descended into the same pathologies
| you see in 'science' today: a bunch of gatekeepers who,
| on account of having been there the longest, have set up
| a moat of rules and 'culture' and such, to the point
| where newcomers are shut down or drowned out. I'm not
| saying it's impossible to get in; but only those that
| sufficiently mould themselves to the existing people and
| structures will last long enough to become fully
| accepted. And so the system sustains itself.
| aspaceman wrote:
| What are you even talking about?
|
| I genuinely have no idea. You mind tangibly identifying
| how Wikipedia has descended into chaos like you say?
| MaxBarraclough wrote:
| They're not the only person to have made an honest effort
| to improve Wikipedia, only to be met with legalistic
| hostility and inertia.
| pdpi wrote:
| Wikipedia is an encyclopaedia. It's a secondary source that
| aggregates knowledge from primary sources.
|
| For all the problems wikipedia does have, this isn't one of
| them. It's not their job to second guess published
| research.
| DarkWiiPlayer wrote:
| > Wikipedia is an encyclopaedia
|
| An encyclopaedia with rather low standards that many
| people sadly treat as an absolute source of truth.
|
| You're right that this isn't really a wikipedia problem
| though. It's a matter of education because an
| overwhelming majority of the population isn't competent
| enough to fact-check memes on facebook, let alone
| wikipedia, and if wikipedia doesn't do it either, then
| that responsibility is pushed all the way back to the
| scientists doing the actual research.
|
| This is an incredible lack of redundancy if you consider
| how important wikipedia has become in shaping public
| opinion. It's a system where the scientific publication
| process is the single point of failure and this article
| clearly shows that it _does_ fail rather often.
|
| So what way is there to make this process safer? There
| needs to be at least another link in the chain that
| confirms information, preferably two or three.
| yholio wrote:
| > An encyclopaedia with rather low standards
|
| ... that somehow manages to have articles on prominent
| subjects that are more in-depth and factual than any
| competing encyclopedic endeavor, while, at the same time,
| far surpassing them by orders of magnitude in breadth on
| obscure and less academic topics.
|
| Wikipedia is not an encyclopedia in the traditional
| sense, and can't be judged by the same standards. It is
| simply in a league of its own: it fails in different ways
| than traditional editor-controlled projects, and it is a
| fantastic repository of human knowledge and educational
| resource.
| [deleted]
| missingrib wrote:
| Is Wikipedia really as bad as you are claiming? Can you
| give some examples?
|
| It seems to me like a lot of the articles are accurate,
| and some of the check marked or featured articles are
| downright great.
| Uberphallus wrote:
| Wikipedia is overall great, though highly politicized on
| some topics; for science/engineering topics there's
| actual review by other Wikipedians and the sourcing is
| solid.
|
| Moreover, the talk page always has anything that might be
| controversial about the article that you might be
| interested in.
|
| Sure, on rare occasions it will display incorrect data,
| but that happens less and less as anti-vandalism bots
| become smarter.
| pdpi wrote:
| > Moreover, the talk page always has anything that might
| be controversial about the article that you might be
| interested in.
|
| This, plus access to the per-article revision history,
| ensures a much higher degree of transparency than any
| other comparable work.
| angry_octet wrote:
| Wikipedia actually aims to use secondary or tertiary
| sources, because of the likely bias in primary (and to
| some extent, secondary) sources. Statements of fact
| shouldn't be supported only by primary sources
| (publications), though they may be referenced for the
| historical context. However, quality control on something
| as big as Wikipedia is essentially impossible.
| qwantim1 wrote:
| Attention and effort will go to "well-presented fake".
|
| Marie Curie believed that radioactivity might have been
| caused by ghosts or the paranormal because of such things.[1]
| While there may actually be ghosts or other things
| paranormal, I'd bet that Marie Curie was fooled.
|
| The good part is that Curie's work persists, and we think we
| have more understanding about radioactive substances.
|
| I'm not sure whether she had to spend time specifically
| debunking the ghost-of-radioactivity theory; that just
| happened because of her work studying radioactive substances
| and their effects.
|
| [1]- https://www.famousscientists.org/scientists-who-
| believed-in-...
| throw998 wrote:
| I find this fascinating. If we allow ourselves to entertain
| the idea of quantum time paradoxes, could it be that the
| radioactivity was in fact caused by _the ghost of Marie
| Curie herself_? She would have a very strong and obvious
| reason to haunt the science.
| prewett wrote:
| That sounds like self-imposed slavery: every time someone
| wants radioactivity, Marie Curie's ghost needs to show up
| and produce it. What with all the nuclear reactors and
| RTGs on far-flung spacecraft, she's a busy ghost.
| [deleted]
| xiaolingxiao wrote:
| The exact same thing happened to me with a high-profile
| professor who sits on the editorial boards of top conferences.
| He was interested in shilling his dataset far and wide, and
| would do whatever it took to show the "value add" of his stuff
| to other tasks. The smart ones just gave him the number he
| wanted. I did the work and told him the value add was dubious
| at best. He found someone else to give him the number he
| wanted to tell the story he had pre-determined, and made
| claims in the paper that didn't even line up with the
| published results in that very paper. The music goes on, and
| the whole experience was a complete waste of life.
| Melting_Harps wrote:
| > In graduate school, in my lab there was a grad student who
| was kind of an unlikely "professor's pet". He was tall and had
| surfer's long hair with a bit of a hippie aesthetic. Anyways,
| he was also really completely clueless about how to do science
| correctly, but also, I guess, really good about playing
| politics (there was a time when he asked me to put some
| bacterial plasmid DNA on my mammalian cells. I told him "it
| doesn't work that way", but I did it anyways and handed over
| the cells, and he got the observation he was expecting). On his
| main project he was teamed up with a super sketchy foreign
| postdoc that I was convinced would say anything to get high
| profile papers out.
|
| God damn, just this paragraph alone made me remember why I ran
| like hell after my undergrad, even during the horrible job
| market of the 2008 financial crisis and while up to my eyeballs
| in debt; I had seen the politicking behind what it took just to
| get a department to give a nod to a tenured professor's
| peer-reviewed paper.
|
| It was fucking pathetic and I've never been more ashamed of
| what would have been my _profession_ than that, but it set the
| tone for what to expect and made me realize just how
| irreparably marred that system is. It was followed by a sense
| of dread that nothing I could do would ever change that, so I
| turned down the offer to work in said professor's lab to carry
| things on into grad school (MS), and just worked as hard as
| possible to pay off my debts and pivot my life entirely. I'd
| rather sweep and clean floors helping a small business grow
| into something real than ever go back to that despicable
| environment.
|
| Academia is definitely a mind-prison, and a trap for so many
| brilliant minds that may not have the ability or wherewithal to
| try their hand at a startup, or the necessary paperwork
| (citizenship) to take on private sector work, which itself
| carries a ton of pitfalls.
|
| There are some benefits to the University model, but I really
| hope COVID disrupts the monopoly Universities have over this
| domain for good! Ed-tech really should be a much bigger source
| of funding and development, but FAANG just keeps suckering in
| people who could otherwise do something actually useful for
| Society.
|
| > What do you think he did? Nothing, of course. He kept on the
| talks circuit, still talking about how exceptional his
| discovery was, and to date there have been no retractions. He
| even won the NIH grad student of the year award.
|
| > Oh. What happened to the grad student? He's a professor in
| the genomics department at UW.
|
| He is literally the academic 'Big Head' character from Silicon
| Valley that every lab/department has. I'd speak of my own
| experiences further (nothing as bad as yours), but I really
| don't feel like ruining my evening any further.
|
| > I am also from a molecular biology background and saw this
| often. We call these guys the "Golden Boys". They are super
| successful, but completely useless. If you still believe life
| is fair, wake up, sunshine.
|
| Same; I should have made the leap to Microbiology in junior
| year, but I just wanted to GTFO and even abandoned my double
| major (Biochemistry) work just to speed up the process.
| dnautics wrote:
| I would say he is more of an Erlich Bachman than a Bighetti.
| Melting_Harps wrote:
| The guy teaches at UW now; it's not Stanford, but it's
| definitely Big Head's plot.
| jdsalaro wrote:
| I'm sorry, but this saddens me to no end; even I did better
| science during my BSc and MSc. It's not just disheartening,
| it's frightening. Reading this almost made me feel sick to my
| stomach. I don't know what else to say in the face of such
| blatant disregard for scientific ethics and sense of duty.
|
| And we complain that the public at large doesn't trust us
| "educated" folk, well I can't see why...
| radiator wrote:
| This is true, and in my opinion there is one more tendency
| which you also imply.
|
| Not only the public at large, but even University graduates
| are starting, to an extent, to distrust those who are
| "professionals" in academia. It is simply a whole other
| world, where you are judged only by the number of papers
| under your name, perhaps never having contributed to
| anything practical - it seems so detached from real life.
| TheSpiceIsLife wrote:
| How does that old saying go?
|
| Those that can, do. Those that can't, teach.
|
| In my experience this is accurate in the overwhelming
| majority of cases.
| mlang23 wrote:
| It is worse. If you mention that you don't trust every
| scientific result per se, you're labeled as _stupid_ and
| _uneducated_. This sort of absolute reasoning is making the
| distrust even worse. How can you have trust in a system that
| is unwilling to publicly admit its shortcomings? Trust and
| honesty come as a pair.
| Radim wrote:
| This science-bro movement scares me too. "But SCIENCE said
| so! You're a SCIENCE denier!"
|
| It feels like a religion, with its own T-shirts and all.
| Appeals to authority, intellectual posturing... often from
| people with little understanding of the actual science.
| Honest insiders are way more careful with any absolute
| statements.
|
| No wonder there's an (also scary) rise of conspiracy
| theories.
|
| How do people not observe those as two sides of the same
| coin?
| dennis_jeeves wrote:
| > "But SCIENCE said so! You're a SCIENCE denier!"
|
| This is a reincarnation of what used to be religion.
| Religion is alive and well, just not in the form our
| predecessors were familiar with.
| at-fates-hands wrote:
| I found that the movement you talk about is more about
| putting your faith in the "scientist" as opposed to the
| actual "science".
|
| It seems much easier to find scientists who will tow your
| political viewpoint, and then people can use them as a
| resource to claim that unless you take this person's
| "expertise" as gospel, you are a science "denier".
| ncmncm wrote:
| "toe"
| xkde wrote:
| I think people have to get more serious about separating
| science as a procedure from scientism (that is,
| philosophical issues that are often discussed in tandem).
| When one uses the phrase, "science denier", it often
| means, "you don't agree with my
| philosophy/metaphysics/economic policy" rather than "you
| deny these particular facts", which causes people to be
| rightly concerned. I'm not optimistic that this is going
| to change anytime soon, but this, I think, accounts for
| many of the issues in current discourse.
| moomin wrote:
| In the UK we've seen a fascinating evolution from skeptic
| societies to science denial conspiracy theorists. To
| _massively_ simplify what's a relatively complex piece of
| sociological weirdness: using your intuition about how
| the world works is a good heuristic for spotting
| charlatans, but it fails you badly when the science tells
| you something that doesn't accord with your intuition.
| analog31 wrote:
| I tend to limit my use of "science denier" to cases where
| an organization or its followers _systematically_ deny
| scientific knowledge on multiple unrelated fronts.
|
| Interestingly, I have read that in the 1920s and 30s
| there was actually an organized relativity denialist
| movement that wrote articles and held public protests.
| antonvs wrote:
| Relativity was a huge philosophical shift from the
| comparative simplicity of Newton's laws. It's not
| surprising that there was resistance to it.
|
| Tesla was famously against relativity, telling the New
| York Times, "Einstein's relativity work is a magnificent
| mathematical garb which fascinates, dazzles and makes
| people blind to the underlying errors. The theory is like
| a beggar clothed in purple whom ignorant people take for
| a king".
| analog31 wrote:
| Indeed, and the anti-relativity movement also had a very
| strong undercurrent of antisemitism.
|
| Chances are, most of the people marching against
| relativity had no clue about Newtonian mechanics, and
| were told stuff such as relativity leading to moral
| relativism.
| jjoonathan wrote:
| I don't _like_ the science-bro movement, but I also think
| they might fill an important niche. The anti-science
| movement has too many people and too much time and too
| high of an (answer time / question time) gish-gallop
| ratio for scientists to possibly engage with. If
| scientists try to fight the anti-science crowd, they will
| lose.
|
| Science bros, for all their faults, can trade blows on
| more even footing, and that's something. Perhaps even a
| vitally important something. Even if science bros aren't
| great at science proper, their contribution to societal
| consensus formation might be as important as the
| underlying science itself!
| bnralt wrote:
| Science bros often misuse "anti-science" to try to
| shut down opinions they disagree with. Hence, people
| worried about the unlikely event of being killed by a
| nuclear power plant are anti-science, but people worried
| about the even more unlikely event of being killed by a
| super intelligent AI aren't. Misusing the word "science"
| (particularly by people who don't seem to have a good
| grasp on it) and turning it into a rhetorical cudgel is
| harmful, and pushes the idea that science is ideological.
| jjoonathan wrote:
| Do you have an alternative for addressing the gish gallop
| issue?
|
| We agree that science bros have problems, but unless you
| have an alternative I see them as a net positive, and not
| by a small margin.
|
| Consensus formation is always messy, but that's not
| solved by losing.
| arethuza wrote:
| My own, rather bitter, experiences with academic research in
| the early 1990s led me to suspect that trying to "manage"
| academic research at a large scale was utterly
| counterproductive, optimising for all the wrong things
| (publications, career progression, money, politics) and
| actually dramatically reducing the amount of actual science
| being done.
|
| I left, co-founded a startup and never regretted it for a
| moment.
|
| Edit: The point where I was sure I had to leave was when I
| was actually starting to play the "publications" game _too_
| well - when I found myself negotiating with colleagues to get
| my name on their paper in exchange for a bit of help, I
| decided things weren't really for me.
|
| Edit2: I'd wanted to be an academic research scientist since
| I was about 5 or so, so when I actually got what I thought
| was my dream job I was delighted - it took me a couple of
| years to work out why almost nothing in the environment
| seemed to work the way I expected it to ("Why is everyone so
| conservative?"), and I became, as one outsider described me,
| "hyper cynical".
| pas wrote:
| What does conservative mean in this context? Could you
| explain it a bit? Thanks!
| arethuza wrote:
| Apologies, I meant conservative in the sense of being
| resistant to contemplating new ideas, rather than in the
| political sense. Somewhat naively I had assumed that
| academic research was where people would be _most_
| welcoming of at least discussing new ideas, whereas I
| found the opposite to be true.
| andi999 wrote:
| Actually in some sciences they are. But anything touching
| medicine... forget it.
| pas wrote:
| Thanks! Could you give a few examples? About what were
| folks so conservative? Were they stubbornly
| proving/supporting their own paradigm/hypothesis, or ...
| were they just simply not open to any ideas? About
| methods or about theory? Both?
| gryn wrote:
| If I had to guess, in the academic context it would mean
| no actual novel thinking, just churning out more papers
| on the same `winning` theories in the field, things where
| before even starting you have a clear idea of what the
| result would look like.
| mjburgess wrote:
| Is that small-c conservative? Or do you mean rightwing?
| (curious, I assume the former...)
|
| In either case, pretty much all humans are profoundly
| small-c conservative; "big change projects" on a societal
| scale do often end in war/death/etc. At least, it's
| probably 50/50 whether it's a "National Health Service" or
| a "World War".
|
| However the reason is deeper than that: evolution does not
| care if you're thriving, it cares that you are breeding. So
| you're optimized for "minimum safety" not "maximum
| flourishing".
|
| So if things are stable then you will prefer to stay in
| them for as long as possible. It is why people often need to
| "hit rock bottom" before they can be helped, i.e., their
| local minimum needs to become unstable so that they will
| prefer the uncertainty of change.
| jhbadger wrote:
| The problem with going to a startup is it is kind of like
| going from the frying pan into the fire. As someone who has
| worked in both academia and industry, while academia and
| its pursuit of publications leads to bad behavior, industry
| and its pursuit of money is even more unprincipled. While
| it might not be that hard to fool peer reviewers with
| nonsense, it is way easier to fool venture capitalists, who
| often know no science and are just listening for the
| hot buzzwords.
| mercer wrote:
| I had the 'luck' of being a research assistant at a
| prestigious academic collaboration involving multiple
| equally prestigious universities. This was in my bachelor
| years, and I still hadn't decided whether to pursue a
| career in academia or elsewhere.
|
| While the day-to-day experience was definitely fun, it
| destroyed any desire I had of entering the field. A lot of
| politics, a lot of statistically suspect stuff (even to me,
| in my third year of a bachelor's), and a lot of busywork.
|
| After that experience I went into web development (full-
| stack). What I like about it is that even though there IS
| politics, even though there IS taking shortcuts, and god
| forgive me for some of the code I delivered, in the end
| whatever I work on has to actually do the thing it's
| supposed to do. It doesn't remove the aforementioned
| problems, but it grounds everything in a way that is mostly
| acceptable to me.
|
| As frustrating as it can be to build some convoluted web
| app that feels like it's held together by scotch tape, it's
| nice to know that it eventually has to do whatever the
| client asks for, however flawed.
| jjoonathan wrote:
| That's exactly why I left science, too. I saw people around me
| publishing artifacts and not getting caught. I realized I
| couldn't compete and left.
| agumonkey wrote:
| > The epilog is that after a decade of floundering I realized
| that even though I am pretty good at science, I was no good at
| playing academic politics and quit the pursuit; I drove for
| lyft/uber for a bit, and now I'm a backend dev.
|
| I'm really sorry. It seems a lot of people are hit by a wall of
| cruelty. More is less in our lives.
|
| Have you thought of joining some biohacklab to keep enjoying
| your talent and curiosity in your original field?
| twic wrote:
| A friend joined a group studying some cell behaviour. They had
| previously had a big result that they could stimulate this
| behaviour in defined, serum-free culture by adding a specific
| factor.
|
| Friend was to work on characterising this effect, so his first
| job was to reproduce the result as a base case. He couldn't.
| The factor didn't stimulate the behaviour.
|
| He asked around, comparing his execution of the protocol with
| that of the the postdoc who had done the original work.
|
| The method involved growing a feeder layer of cells, in serum,
| then lysing them and washing the plate, leaving a serum-free
| layer of extracellular matrix behind, as a foundation for the
| serum-free cell culture (this is a pretty standard technique).
|
| Turns out the previous postdoc's idea of washing a plate was a
| lot less thorough than my friend's. Couple of quick changes of
| PBS. So they were almost certainly leaving a lot of serum
| factors behind on the matrix. Their serum-free culture was
| nothing of the sort.
|
| The supervisor insisted that the previous postdoc's work was
| fine, and that my friend just didn't have good technique. The
| supervisor had him repeat this work for months in an attempt to
| make it work. But he's a careful worker, so it never did.
| 8note wrote:
| This feels like an area where automation would have great
| benefits.
|
| Instead of relying on people getting the technique right, you
| load in their program, dump the chemicals into the right
| vials, then let it run and check the results.
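|
| A minimal sketch of what such a protocol-as-code could look
| like (hypothetical Python; the step names, reagents, volumes
| and wash counts are illustrative assumptions, not a real lab
| API):
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class Step:
|         action: str          # e.g. "wash", "add", "incubate"
|         reagent: str = ""
|         volume_ml: float = 0.0
|         repeats: int = 1
|
|     # The wash count is explicit, so a couple of quick changes
|     # vs. a thorough wash is recorded in the protocol itself
|     # rather than left to individual technique.
|     PROTOCOL = [
|         Step("lyse", reagent="lysis buffer", volume_ml=2.0),
|         Step("wash", reagent="PBS", volume_ml=5.0, repeats=5),
|         Step("add", reagent="factor X", volume_ml=0.1),
|         Step("incubate"),
|     ]
|
|     def run(protocol):
|         # A real robot driver would go here; this just logs
|         # each step so the exact procedure is reproducible.
|         for step in protocol:
|             for i in range(step.repeats):
|                 print(step.action, step.reagent, step.volume_ml,
|                       "ml, repeat", i + 1, "of", step.repeats)
|
|     if __name__ == "__main__":
|         run(PROTOCOL)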
| valarauko wrote:
| Well, a lot of these tasks are already automated (e.g.,
| shakers), but most bench workers have their own quirks on
| existing protocols. Most labs have their own 'dialect' of
| common mol bio techniques that 'work' in their particular
| setup. Perhaps the reagent from their particular supplier
| requires a longer incubation time, or the enzymes are wonky
| and you need to add more. Everybody I know does washing
| steps their own way - they say the "official" protocol is
| too long/cumbersome/wasteful. More often than not, their
| own variant of the protocol is documented in their lab
| books, but not in the publications, where it cites the
| original protocol.
| andrewon wrote:
| I imagine the postdoc would have a negative control of not
| adding the vector? Otherwise it's hard to convince people the
| effect was coming from the vector.
| twic wrote:
| Right, so the effect must have been from the added factor
| plus some mystery factor in the serum.
| refurb wrote:
| This is the worst situation when the supervisor (professor)
| "sees no evil, hears no evil".
|
| In a similar situation, a prior student's work couldn't be
| repeated and it was pretty clear the student had made up the
| results. "Water under the bridge, let's move on." Of course
| the publication still counted for the prof.
| SubiculumCode wrote:
| I am not doubting your story, but I know plenty of solid
| careful scientists doing honest work and being successful. One
| of these successful scientists self-retracted a high impact
| paper after he discovered that he had made a data coding
| mistake. It was painful, but he did the right thing of his own
| accord, even though it halted work on several follow-up papers
| that he was drafting.
| Ecstatify wrote:
| How is their story related to you knowing scientists?
| SubiculumCode wrote:
| I am simply providing a counter-example of academic
| integrity to make the point that one's personal
| experiences, good or bad, may not reflect what is generally
| true of academia/science.
| mlyle wrote:
| I think that probably most people show integrity... but
| it's a problem if review processes, editorial mechanisms,
| and culture reward those who don't show it.
| robertlagrant wrote:
| The problem is a lack of data publishing. All data
| should be published; all conclusions published
| (preferably with the code that generated the conclusions)
| so corrections can be made and improved conclusions drawn
| easily.
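|
| As a minimal sketch of that idea (hypothetical Python; the
| file name, column name and reported value are assumptions for
| illustration), the published analysis could be a script anyone
| can rerun against the published raw data and compare with the
| number in the paper:
|
|     import csv
|
|     RAW_DATA_FILE = "raw_scores.csv"  # published with the paper
|     REPORTED_MEAN = 12.3              # the value the paper claims
|
|     def recompute_mean(path):
|         # Re-derive the statistic directly from the raw data.
|         with open(path, newline="") as f:
|             rows = csv.DictReader(f)
|             scores = [float(row["score"]) for row in rows]
|         return sum(scores) / len(scores)
|
|     if __name__ == "__main__":
|         mean = recompute_mean(RAW_DATA_FILE)
|         print("recomputed:", round(mean, 2),
|               "reported:", REPORTED_MEAN)
|         # Any mismatch is immediately visible and correctable.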
| jpeloquin wrote:
| > The problem is a lack of data publishing
|
| Agreed. If you have any recommendations for long-term
| public data archival they would be greatly appreciated.
| OSF recently instituted a 50 GB cap which rules out
| publishing many types of raw data, and subscription
| options (AWS, Dropbox, etc.) will lead to link rot when
| the uploading author changes jobs or retires, or the
| project's money runs out. Sure, publishing summary
| spreadsheets is a good first step, but there should be a
| public place for video and other large data files. IPFS
| was previously suggested but the data still needs to be
| hosted somewhere. Maybe YouTube is the best option,
| despite transcoding?
| cryptica wrote:
| The modern financial system is making a butchery out of honest
| people. I've seen it happen over and over at many different
| companies and industries.
|
| Educational institutions are rotting from the inside. Idiots
| were being rewarded at the expense of intelligent people, and
| now the idiots have taken control and are rewarding other
| idiots. If you want to know what happens next, watch
| 'Idiocracy' or 'Planet of the Apes'. At this rate, it will
| certainly take less than 500 years to get there.
|
| You can see it based on how slow scientific development has
| gotten; there are very few major new breakthroughs compared to
| before... Most of the ones that get attention are BS.
| dnautics wrote:
| > Most of the ones that get attention are BS
|
| Arsenic life was the big one when I was a postdoc
|
| Tardigrade DNA is a new one, so popular that it became a
| major plot point in Star Trek. Turned out it was probably
| just a sloppy grad student not being careful with their
| samples/not taking into account microbes physically hitching
| a ride on the tardigrade
| foobarian wrote:
| > You can see it based on how slow scientific development has
| gotten
|
| I feel that the cause and effect are reversed; while the
| low-hanging fruit was available and getting discovered, it
| was a lot harder to get away with fraudulent results. But now
| that we're facing diminishing returns, and more fish in the
| pond due to years of overtraining, fraud is easier to sell.
| op03 wrote:
| Don't worry about it. It becomes a trap, and many of them lose
| their minds as they dig themselves into deeper and deeper
| holes, not knowing what else to do. Source: a shrink in the
| family at a university counselling center. At the end of the
| day they are misguided people.
|
| Even though there is a high price, their function is to train
| the survival skills of the honest folk who rise up the food
| chain. And don't have any doubt: those folk have survived these
| types of people (usually thanks to the right networks and
| mentors), have developed their own tricks, and exist in large
| numbers.
| foobarian wrote:
| Reminds me of the protagonist in the movie Fargo played by
| William H. Macy.
| wesleywt wrote:
| No they don't. In my experience they become professors or get
| hired by top tier companies like Roche.
| dnautics wrote:
| I don't particularly care about him per se (though I'm sorry
| to burst your model of society, from what I hear over my
| residual science network, I'm pretty sure he's oblivious or
| doesn't care), I'm a bay area dev, and I'm making enough
| money and have made good investments in friend's startups
| that my only regret is not having started sooner. Hopefully
| I'll be able to cash out with enough to do my own biotech, so
| I'm just biding my time for now. But what does concern me is
| that this is endemic in chemistry. It's not talked about much
| outside.... Which makes me wonder if other sciences are just
| as bad, "we just don't hear it". The incentive structure and
| nearly nonexistent self-reporting accountability is just the
| same; and meanwhile everything operates under a general
| social deification of the sciences.
| op03 wrote:
| We hear it all the time. It's an ancient story. The history
| of science is full of these stories.
|
| Misguided/driven/ambitious people are always looking for
| shortcuts, and they will find them. It's like dealing with
| mosquitos, cockroaches, weeds, software bugs and cancer. It
| never ends.
| TeMPOraL wrote:
| I think the point of GP's story is that this isn't just
| one or two bad apples here and there, but it's endemic in
| that domain - and most likely in others too (I'm leaning
| to believe that; it's not the first story like this I've
| read in recent years).
|
| Being an endemic problem means you have to switch your
| assumptions; when reading a random scientific paper,
| you're no longer thinking, "this is probably right, but I
| must be wary of mistakes" - you're thinking, "this is
| most likely utter bullshit, but maybe there's some
| salvageable insight in it".
| hobofan wrote:
| I think once you've seen a few papers in high-tier
| journals that turn out to be bullshit when you start to
| dig a bit deeper, there is no other choice but to adopt
| this harsh stance on random scientific papers. Especially
| if you want to do work that expands on the findings of
| other papers that roughly look good, "trust but verify"
| seems to be the way to go.
|
| I've only recently dipped my toes into academic life in a
| lab, but it very much seems that PIs generally know which
| are the bad apples. E.g. when discussing whether some
| data was good enough to be publishable, the PI's reaction
| was something along the lines of "If we were
| FAMOUS_LAB_NAME it would be, but we want to do it in a
| way that holds up". So it seems like there are at least
| some barriers limiting how much incompetence can hurt the
| whole field.
|
| I'm also surprised that there is no mention of the PI in
| GP's story. As it's a paper published by the lab, it's
| not just on the grad student "to do the right thing", but
| even more on the more senior scientist, whose reputation
| is also at stake.
| TeMPOraL wrote:
| > _I think once you've seen a few papers in high-tier
| journals that turn out to be bullshit when you start to
| dig a bit deeper, there is no other choice but to adopt
| this harsh stance on random scientific papers. Especially
| if you want to do work that expands on the findings of
| other papers that roughly look good, "trust but verify"
| seems to be the way to go._
|
| Yeah, but I meant that in general case, you no longer
| "trust but verify", but "assume bullshit and hope there's
| a nugget of truth in the paper".
|
| This has interesting implications for consuming science
| for non-academic use, too. I've been accused of being
| "anti-science" when I said this before, but I no longer
| trust arguments backed by citations around soft-ish
| fields like social sciences, dietetics or medicine. Even
| if the person posting a claim does a good job of selecting
| citations (so they're not all saying something
| tangentially related, or much more specific, or "in
| mice!"), if the claim is counterintuitive and papers seem
| complex enough, I mentally code this as _very weak
| evidence_ - i.e. most likely bullshit, but there were
| some papers claiming it, so if that comes up again, many
| times in different contexts, I may be willing to
| entertain the claim being true.
|
| And stories like this make me extend this principle to
| biology and chemistry in general as well. I've burned
| myself enough times, getting excited about some result,
| only to later learn it was bunk.
|
| The same pattern of course repeats outside academia, but
| more overtly - you can hardly trust any commercial
| communication either. At this point, I'm wondering how
| we are even managing to keep a society running. It's
| _very hard work_ to make progress and contribute if you
| have to assume everyone is either bullshitting or
| repeating bullshit they've heard elsewhere.
| dnautics wrote:
| Funny story, PI noticed an error in one of my papers and
| I (happily) issued a very minor retraction. Also in one
| of the threads I talked about how he did retract several
| years' worth of work done on a different project by the
| intern when she joined later. So he was alright. Plus, as
| a junior (2nd year grad student) you really don't want to
| tattle on the NIH grad student of the year. Who do you
| think wrote the recommendation?
| rdtwo wrote:
| There is a reason there is a huge replication crisis in
| academia, and it's exactly what you say above. When folks
| in industry need to develop a product based on a published
| paper, more often than not it's bullshit.
| dnautics wrote:
| it's endemic in biology, and it's endemic in chemistry (I
| had a foot in both). The sentiment you wrote in the
| last sentence is exactly what I feel whenever I read a
| paper; you hit the nail on the head.
|
| The crazy thing is that the honest scientists are
| working at middling universities. It is worse the higher up
| you go. I have had the opportunity to work at an upper-
| midrange research university [time-] sandwiched between
| two very high profile institutes. The institutes were way
| more corrupt. Like inviting the lab and the DARPA PM to
| hors d'oeuvres and cocktails at the institute leader's
| private mansion type of stuff. (It turned out that that
| DARPA PM also had some weird scientific-overinterpretation
| skeletons / PI-railroading-the-whistleblower stuff in her
| closet, and for a stint was the CTO of a microsample blood
| diagnostics company. I can't make this shit up; I guess
| after Theranos it got too weird. She's now the CEO of
| another biotech startup -- how TF do people like this get
| VC money, and yet I can't get investors to back some
| buddies' growth-industry company, and had to make the
| entire first investment myself?)
|
| Of course working at an upper-midrange university sucks
| for other reasons. Especially at state universities, the
| red tape is astounding. And the support staff is truly
| incompetent. Orders would fail to get placed, or would
| arrive and then disappear (not even theft, just
| incompetence), all the time.
| Radim wrote:
| While the "host" (people who pay, often with minimal
| decision power over their resources) turns a blind eye,
| "parasites" (cheaters who profit disproportionately)
| proliferate. Is that really so surprising?
|
| When somebody else foots the bill, it's feast time!
|
| To be clear, I'm with you. Also a PhD-turned-industry,
| for much the same reasons. But I realize what you
| describe is a completely rational strategy. The options
| always come down to:
|
| 1) Try not to be a _host_ - if you have the wherewithal
|
| 2) Try to be a _parasite_ - if you have the stomach
|
| 3) Suck it up & stay _salty_ - otherwise. You can call it
| a balance, equilibrium, natural order of things -
| whatever helps you sleep at night.
|
| Take your pick and then choose appropriate means.
| Romantic resolutions and wishful thinking - kinda like
| the Atlas Shrugged solution to option 1) - rarely work.
| disgruntledphd2 wrote:
| Yup, I was taught this as part of graduate school.
|
| Nobody ever said it was fraud, they said things like they
| wouldn't share the data and I couldn't replicate.
|
| In general, the incentives for shoddy science (get Nature
| papers or find a new career) tend to reward bad
| behaviour, and I just wasn't able to find something
| unexpected and pretend it had been my hypothesis all
| along (it's almost impossible to publish a social science
| paper where you disconfirm your major hypothesis).
| ALittleLight wrote:
| The problem isn't that such people are getting away with
| unearned good feelings, so the fact that some may feel bad
| later isn't a solution or a reason not to worry. The problem
| is that they are wasting scientific resources (e.g. the time
| of the careful intern trying to reproduce flawed results),
| polluting research by publishing misleading findings, and
| discouraging legitimate research.
| JesseMReeves wrote:
| The problem is that there is no working system in place
| that makes such abuses of scientific truth visible.
|
| We would need to get away from inefficient communication
| via publications and set a system in place that tracks
| findings in detail, and whether they can be replicated
| first.
|
| But there is no willingness to do so after the US of A
| deeply harmed the scientific mission and academics by
| introducing infuriatingly dumb economic incentives into
| science.
| AgentMatt wrote:
| > But there is no willingness to do so after the US of A
| deeply harmed the scientific mission and academics by
| introducing infuriatingly dumb economic incentives into
| science.
|
| What are you referring to here?
| lumost wrote:
| The number of incentives driving this kind of activity in
| science is disheartening.
|
| So many papers get published, few are read widely, and even
| fewer are replicated; yet they'll still get citations if the
| talk circuit is played right. Citations are what advance a
| scientist in their career, and anything that can be brushed
| off as an unfortunate statistical anomaly or error is unlikely
| to end a career.
|
| In such a world, "optimal play" would be to intentionally or
| unintentionally p-hack, or just slightly embellish results such
| that the work is interesting to cite, but not interesting
| enough to replicate. People who do this will eventually move
| up ahead of everyone else, ultimately favoring incremental but
| bogus results.
| foobarian wrote:
| The thing I find disheartening is that if fraudulent results
| are being cited, it must mean that the mechanism of "standing
| on the shoulders of giants" is not working. One would expect
| that these papers would be contributions that citing
| scientists could benefit from and use in their own work with
| impact. For example if scientist A truly developed an O(N)
| sorting algorithm, then a scientist B might use it in their
| work to derive some other result.
|
| I guess in some fields of science the effective dependency
| graph of academic work is very flat, and the true results get
| plucked and developed by industry (being true results it is
| actually possible to meet the higher reproducibility bar
| there). And the citations don't actually reflect the true
| dependencies, but some political/social graph instead. Too
| bad.
| lumost wrote:
| > And the citations don't actually reflect the true
| dependencies, but some political/social graph instead. Too
| bad.
|
| I think this gets to the major concern with Academia today,
| as it becomes somewhat of a self-reinforcing feedback loop.
| Curry citations with political savvy, get awarded grants
| due to citations and political savvy, show that you are
| productive due to citations, grants, and political savvy -
| earning yet more political capital.
|
| This will probably become my go-to explanation for why
| Academic CS research has largely become decoupled from
| industrial application and industrial research. While
| political savvy is important in a large corporation,
| eventually you need to produce results.
| mooseburger wrote:
| Shouldn't you name and shame this guy in this comment? It
| doesn't seem like he deserves anonymity.
| academonic wrote:
| I think the grad student was this guy:
| https://twitter.com/dougfowler42 (Douglas M Fowler)
|
| His work at Scripps matches the same research group and
| timeframe of when dnautics was there, and he's now a
| professor in UW's genomics lab. The topic described seems to
| fit what he was researching then, and he received a
| prestigious grad student award for it.
| [deleted]
| havish wrote:
| I am taking up a master's (Materials Science) after a spell in
| the corporate world. I was really hoping to go into research and
| academia. This is quite disappointing to hear. Then again, it
| helps to remove any expectation that any field would be devoid
| of politics in general. Bit relieved to be disillusioned now,
| rather than much later.
| andrewon wrote:
| Having worked in both academia and industry in the biotech
| field, I have to say that the bar of reproducibility is a lot
| higher in industry.
|
| In academia, the goal is to publish. The peer-review process
| won't care to repeat your experiments. And the chance of
| another lab repeating your experiments is slim -- why spend
| time repeating other people's success?
|
| In contrast, in industry, an experiment has to be bullet-proof
| reproducible in order to end up in a product. That includes
| materials from multiple manufacturing batches of reagents, at
| multiple customer sites with varying environmental conditions,
| and operators with vastly different skills.
| hinkley wrote:
| I could sense this sort of problem even in CS and could not
| wait to get into an applied position as soon as possible. If
| you cannot build the thing you're an expert in, you're no
| kind of expert that I understand.
| tinyhouse wrote:
| It's true CS has this problem. But that doesn't mean you
| cannot do reproducible research in academia. It's up to
| you.
| lr1970 wrote:
| In many academic disciplines there are no real incentives
| for reproducible research. On the contrary,
| reproducibility helps your colleagues/competitors poke
| holes in your papers. It is quite perverse that being
| secretive and sneaky is better for career advancement
| than being open and honest. This is the underlying root
| of the problem.
| ziotom78 wrote:
| Well, I believe that the biggest problem is that there
| is very little incentive to do that. Everybody (your
| university, the Government, the funding agencies...)
| rushes you to publish as many papers as possible, get
| zillions of citations, and boost your h-index; however,
| they do not give a damn about the reproducibility of the
| results you are publishing.
| toomim wrote:
| If you are outcompeted by people with lower morals, then
| is it really up to you? You either have to succumb to
| taking shortcuts, or lose your funding.
| nikanj wrote:
| Theranos showed us that's sadly not true. A good story beats
| reliable results.
| andrewon wrote:
| I would say the industrial incentive still works pretty
| well. Theranos didn't follow it, eventually couldn't sell
| products, and went bust.
| singhrac wrote:
| I can second this. Working in industry, the bar is quite high
| for rigor. The general attitude of industrial researchers is
| to be very very skeptical of academia, since a lot of things
| just don't reproduce (cherry-picked data, p-hacking, only
| work in a narrow domain, etc., etc.). These researchers are
| almost all people with PhDs in various science fields, so not
| exactly skeptics.
| jfengel wrote:
| The bar is different, but so are the aims.
|
| Industry works solely on stuff that's reproducible because
| it wants to put these things into practice. That makes for
| an admirable level of rigor, but constrains its freedom to
| look at unprofitable and unlikely ideas. Academia keeps that
| freedom, and it inevitably results in inadvertent p-hacking.
| The first attempt to look at something unexpected is always
| "This might be nothing, but..."
|
| They call in other people earlier because they're not
| protecting trade secrets or trying to get an advantage.
| They do want priority, and arguably it would be better if
| they could wait longer and do more work first, but the
| funding goes to the ones who discover it first.
|
| So there's no real reason for either academics or industry
| scientists to look askance at each other. They're doing
| different things, with standards that differ because
| they're pursuing different goals. They both need each
| other: applications result in money that pushes for new
| ideas, and ideas result in new applications.
| singhrac wrote:
| I agree with you, and I don't want my comment to be read
| as an indictment of academia exactly - we couldn't live
| without it, it has huge returns on investment, etc. It's
| worth reading 10 bad papers to find 1 with a kernel of a
| good idea (and worth spending research funding on 100 bad
| experiments to get 1 useful result).
|
| I think what I mean to say is that the skills required in
| industrial research (which can be quite speculative in
| well-funded companies, by which I mean a 5% chance of
| success or so) are somewhat different from those required
| in academia.
| winstont wrote:
| > a super sketchy foreign postdoc
|
| I'm unsure of how the term "foreign" is being used above. Is it
| implied as a pejorative there? For example, if OP had written
| "a super sketchy white postdoc", or "a super sketchy black
| postdoc", would the HN community tolerate that?
| fireattack wrote:
| I agree that's unnecessary detail.
| read_if_gay_ wrote:
| It's not about race, it's about the quality of the academic
| system, which _is_ bad in many countries. I suppose GP
| intended it to compound - as in the guy was sketchy per se,
| _and_ from a sketchy place.
| winstont wrote:
| > the academic system, which is bad in many countries. I
| suppose GP intended it to compound - as in the guy was
| sketchy per se, and from a sketchy place.
|
| is it good here? is here considered less sketchy?
| read_if_gay_ wrote:
| Evidently it's not perfect here but yes, it's less
| sketchy, because there's far worse out there.
| remram wrote:
| All you're arguing is that it's "not the most sketchy",
| not that it's less sketchy than average. And you're
| offering no proof or example of that either.
| read_if_gay_ wrote:
| If you're going to get hung up on technical details, note
| that GP didn't ask whether it's less sketchy than
| average, just whether it's less sketchy.
|
| Finding concrete proof or examples is obviously hard in
| this subject matter (how are you going to _prove_
| something as abstract as sketchiness), but here's one
| observation: predatory conferences mostly only exist
| outside the West. To be even more concrete, two of the
| most infamous predatory publishers (WASET and OMICS) are
| based in Turkey and India respectively. You generally
| won't find something nearly as sketchy in the West.
| valarauko wrote:
| >two of the most infamous predatory publishers (WASET and
| OMICS) are based in Turkey and India respectively
|
| Well, as an Indian postdoc working in the US, I can speak
| to some of these sketchy behaviours. In terms of the
| predatory publishers, my Indian institution had its own
| filters, and most labs have their own as well. For
| example, for a while we had an institutional restriction
| on submitting manuscripts to conference proceedings, with
| the justification that the hard time limit equals
| substandard peer review. In addition, for the longest
| time we were not allowed to submit anything to open
| access journals, with similar justifications. Publishing
| in a journal with an IF < 8 was also frowned upon, and
| the institution would not cover publication expenses.
| AFAIK other institutions had similar filters for
| publications. I would regard my institution as decent,
| but nowhere near the best in my field in India.
|
| Who does publish in these predatory journals? Smaller,
| less well funded universities with desperate students,
| ever since the government mandated first author
| publications as a requirement for receiving PhD degrees.
| dnautics wrote:
| Sure, I guess I should have been more specific: he was a
| postdoc who was from a country where getting one really
| awesome paper in a lab with a moderately good name (which
| ours was) would be an instant ticket to a tenured
| professorship at the top academic facility in the country.
| That should give you an idea of the incentives at play.
| That doesn't necessarily make him sketchy. But he was also
| a sketchy human.
| candiodari wrote:
| I think in this case it's probably relevant, as it does not
| exactly make fixing these things easier. For example, later
| he points out he can't speak the language of the institution
| this person works at. I don't think it's meant to accuse
| foreigners of fraudulent science here.
| Cocktail wrote:
| What an interesting but sadly somewhat common story. Thanks for
| sharing! I'm an undergrad electronics student, so basically a
| world apart in terms of skill and department, but this is one
| of the reasons I do not wish to pursue academia and instead
| focus on interesting jobs.
| leephillips wrote:
| An inspector general in the U.S. Navy informed me that the Navy
| does not have a rule against plagiarism.
|
| Take this as career advice if you want.
| drummer wrote:
| It is difficult to trust scientific writings these days,
| especially in medicine. Just listen to what Kary Mullis
| (inventor of PCR) explains here:
| https://archive.org/details/nobelprizewinnerchallengesthemyt...
| ssn wrote:
| Downvoted due to the "clickbait style" headline. You can rewrite
| it when posting to Hacker News.
| hmwhy wrote:
| > Science is supposed to be self-correcting.
|
| I admire the author's belief, not least because I used to be
| like that, but I personally think that couldn't be further from
| the truth for contemporary scientific research, and it's no
| better in the evidence-based physical sciences. I personally
| know many people who used to be, or still are, in scientific
| research who wouldn't hesitate to agree with me that scientific
| research is mostly just a job for most people, not too different
| from any other job that earns you a salary.
|
| I always ended up not posting my comment in related topics, but
| since this is getting so much traction, I might as well try not
| to appear bitter about my own experience and give my anecdote
| another go. If nothing else, at least this will become a piece
| of history (albeit an insignificant one) that stays on the
| Internet.
|
| A long time ago I received a prestigious postdoctoral fellowship
| to work with someone very well-known in the field on studying
| the mechanisms of a then relatively new type of chemical
| reaction. I spent a couple of months meticulously preparing
| everything I needed for the study, and when I finally got
| everything ready, I began by reproducing the first breakthrough
| that was produced in the group that started it all -- and it
| didn't work.
|
| Since I was new to that particular type of chemistry at the
| time, I spent the next few months trying to reproduce the
| reaction while getting others, both within and outside the
| group, to check my work. Nobody seemed to be able to figure out
| what I did wrong, but one particular thing stood out at the
| time: nobody I had spoken to had actually tried to reproduce the
| results of the "first" reaction, ever, which was super strange
| to me. I had also spoken with my advisor, who basically became
| well-known because of that first reaction, and he couldn't offer
| any solutions; the conversation always ended up being about
| something completely unrelated to the irreproducible results. I
| spent most of that time blaming myself and suffering from some
| form of imposter syndrome, too, simply because I have the
| tendency to do that.
|
| Up till that point I had been following the procedure published
| in a journal article, but I thought I would dig up the first
| author's PhD thesis to check what I had done wrong. I started by
| casually scrolling through the experimental section, and a C-13
| NMR spectrum of the catalyst that I was working with caught my
| eye immediately because of some very unnatural signal truncation
| that I thought was only possible with data manipulation. I sent
| the data to a few of my friends who are experts in NMR, and they
| also confirmed that those "artifacts" are most certainly
| unnatural. I immediately e-mailed my advisor about it, but he
| never responded -- and that was the only e-mail from me (which
| obviously required a response) that he never responded to.
|
| I did find a few manipulated spectra in the same PhD thesis, but
| none of that really helped, because I still couldn't reproduce
| the results that _nobody_ had ever mentioned anything wrong
| about. Then one night, when I was drinking with the group,
| someone working on a different floor whom I don't usually talk
| to about my work asked me how things were going; after I told
| him my problems he immediately said that he'd met someone from
| industry at a conference complaining that the reaction doesn't
| work. He also said that a few people who came before me had also
| tried to reproduce that reaction but none of them got it to
| work.
|
| At that point I was just angry, because *I thought "science is
| supposed to be self-correcting"* and there was no way that this
| stuff could have been in the literature for 10 years with nobody
| ever saying anything about it. In fact, it's impossible for my
| advisor not to have known that something was wrong with it,
| because he is very well connected to both academia and industry,
| and so many people in the 10 years before I arrived must have
| worked on it.
|
| During the time I was unable to get anything to work, I was
| constantly assigned work that seemed somewhat related to what I
| did but wouldn't help me with my career in any way. In the end I
| had a hunch about what had really happened and determined that
| the procedures in the original paper and the PhD thesis that
| first reported the reaction were all out by a factor of 10. I
| was already on anti-depressants at that point and was drunk
| every night, but was working 10+ hours a day, which was well
| known in the group. When I had finally gotten the reaction to
| "work" (and had explained it to people I trust and had them
| double-check my work) and brought it to my advisor, he said
| "that's great"; I don't remember too well what else he said in
| between, because none of it was either an apology or a solution,
| but he said at the end that maybe I should have deferred my
| fellowship because of my depression (which, frankly, wasn't
| affecting my ability to work).
|
| This is not an isolated case, and not the only type of academic
| misconduct. The thing that upsets us the most is that at the end
| of the day, it's not about how good and meticulous you are: for
| most of us it's mostly about how good you are at gaming the
| system. The way we fund scientific research is mostly broken,
| the way we disseminate scientific research is mostly broken, and
| the way we assess potentially great scientists and appoint them
| is also mostly broken. It's only natural that, for most people,
| the experience is nothing but shit.
|
| Edit: typo.
| fabian2k wrote:
| Basing your own work on something that doesn't work is
| incredibly frustrating and can lead to enormous amounts of
| wasted effort. It doesn't even have to be fraud, there are so
| many factors you often can't fully control, and reactions can
| depend on very subtle details or minor impurities.
|
| My impression is that usually the informal communication about
| stuff "that everyone knows doesn't actually work" is far more
| efficient than in your case. But this is something the PI has
| to do, as a new PhD student won't be connected enough for this,
| and your PI seriously failed you there.
|
| It would be nice if someone published that this method doesn't
| work, but that doesn't seem to be how this works. The amount of
| effort to actually demonstrate that it really doesn't work is
| so much higher than the reward.
|
| In a healthy environment people should have been much more
| sceptical much earlier, at the latest when you saw potential
| manipulations in the NMR. I'm curious what kind of artifacts
| you saw there -- did they just remove or add signals?
| mncharity wrote:
| > the informal communication about stuff "that everyone knows
| doesn't actually work"
|
| Informal communication, as an important part of the system of
| "science", seems very underappreciated in nearby threads.
|
| Science in quotes because even subfields can be very diverse.
|
| Often the corrective mechanism isn't retractions or demotion,
| it's the hallway gossip at conferences, the "don't believe it
| - he (high-profile PI) sees what he wants to see". And
| associated differential aging-out of relevance. There can be
| a lot of science system state that isn't captured by the
| short-term state of the research literature.
|
| But regrettably, as the stories here of smashed careers and
| lives illustrate, it can be very far from "everyone" that
| "knows". And a big difference between someone "knowing", and
| that being well expressed in their mentorship and leadership.
| thaumasiotes wrote:
| This is pretty interesting stuff, but one note:
|
| > The correction explains away the failures of randomization as
| an error in translation; the authors now claim that they let
| participants self-select their condition. This is difficult for
| me to believe. The original article stressed multiple times its
| use of random assignment and described the design as a "true
| experiment."
|
| > They also had perfectly equal samples per condition ("n = 1,524
| students watched a 'violent' cartoon and n = 1,524 students
| watched a 'nonviolent' cartoon.") which is exceedingly unlikely
| to happen without random assignment.
|
| This is actually unlikely to happen _with_ random assignment
| either. The only way you're reliably going to get equal numbers
| in each bin is if your process is intentionally constrained to
| do that. If assignment were random, the odds of assigning 1,524
| to one bin and 1,524 to the other bin would be C(3048, 1524) /
| 2^3048, or about 1.4%.
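|
| That figure is easy to check; a minimal Python sketch using
| only the numbers quoted above:
|
|       from math import comb
|
|       n, k = 3048, 1524
|       p = comb(n, k) / 2**n   # C(3048, 1524) / 2^3048
|       print(f"{p:.4f}")       # ~0.0145, i.e. roughly 1.4%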
| redis_mlc wrote:
| > This is actually unlikely to happen with random assignment
| either. The only way you're reliably going to get equal numbers
| in each bin is if your process is intentionally constrained to
| do that.
|
| There are several CS shuffle/Fisher-Yates algorithms that can
| do this. Instead of calling the usual rand() on a mathematical
| interval multiple times, they do selection over the remaining
| elements (i.e. constrained).
|
| https://dev.to/babak/an-algorithm-for-picking-random-numbers...
|
| But I would expect CS people to have awareness about that, not
| social scientists, unless somebody wrote a paper with examples
| for that field.
|
| I've seen Fisher-Yates used in an SRE interview before, which
| is pedantic - it's just whiteboard hazing, at a very high cost
| to your recruiting and interviewing staff.
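|
| A minimal Python sketch of that constrained approach (the
| 1,524-per-condition figure is the one from the paper discussed
| above, and random.shuffle is a Fisher-Yates implementation):
|
|       import random
|
|       def balanced_assignment(n_per_group=1524):
|           # exactly n labels per condition; shuffling the
|           # pool guarantees a perfectly even split
|           pool = (["violent"] * n_per_group
|                   + ["nonviolent"] * n_per_group)
|           random.shuffle(pool)
|           return pool  # pool[i] = condition of participant i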
| chaoxu wrote:
| How about the following process? Each person gets randomly
| assigned to one of the two groups; when one group is full, the
| rest go to the other group. Does this make sure every equal
| partition has the same probability of showing up?
| thaumasiotes wrote:
| > Does this make sure every equal partition has the same
| probability of showing up?
|
| I'm not sure, but I wouldn't bet against it. But what is the
| value of having an exactly equal partition?
|
| On second thought, the algorithm you describe processes
| people in a particular order, and it is much more likely to
| put two people who both occur near the end of the list into
| the same bucket than to put them in different buckets. So if
| that processing order is constant, the algorithm cannot
| produce every equal partition with equal probability.
| asheldon wrote:
| I agree. It would be simpler to shuffle the list of people,
| then split the list in half.
|
| Here's a proof this algorithm doesn't work by counter-
| example (N=6)
|
| Consider a list of 6 elements split into two buckets of 3.
| Under a uniform distribution over equal partitions, elements
| 5 and 6 end up in the same bucket 8/20 = 2/5 of the time
| (place element 5 in a bucket; element 6 then takes one of the
| 2 remaining slots in that bucket out of the 5 remaining slots
| overall). For the algorithm to match this, after we place the
| first 4 elements into their buckets according to this
| algorithm, there must be space left in both buckets 3/5 of
| the time and in only one bucket 2/5 of the time.
|
| Sequences of the first 4 coin flips where neither bucket is
| filled, followed by possible ending sequences, and the odds
| of the prefix.
|
| AABB(AB, BA) = 1/16th
|
| ABAB(AB, BA) = 1/16th
|
| ABBA(AB, BA) = 1/16th
|
| BBAA(AB, BA) = 1/16th
|
| BABA(AB, BA) = 1/16th
|
| BAAB(AB, BA) = 1/16th
|
| Total: 3/8ths
|
| Sequences of the first 3-4 coin flips where one bucket is
| filled, followed by possible ending sequences, and the odds
| of the prefix:
|
| AAA(BBB) = 1/8th
|
| BBB(AAA) = 1/8th
|
| AABA(BB) = 1/16th
|
| ABAA(BB) = 1/16th
|
| ABBB(AA) = 1/16th
|
| BBAB(AA) = 1/16th
|
| BABB(AA) = 1/16th
|
| BAAA(BB) = 1/16th
|
| Total: 5/8ths
|
| Since one bucket is filled 5/8ths of the time after 4
| elements are processed according to this algorithm (and when
| both buckets still have space, the 2-2 split forces elements
| 5 and 6 into different buckets), the final two elements will
| be in the same bucket 5/8ths of the time, not the 2/5
| required for every equal partition to be equally likely.
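|
| A quick Monte Carlo check of that 5/8 figure, as a small
| Python sketch of the fill-then-overflow assignment (N=6,
| bucket capacity 3):
|
|       import random
|
|       def overflow_split(n=6, cap=3):
|           # coin-flip each element into A or B; once a
|           # bucket is full, the rest go to the other one
|           counts = {"A": 0, "B": 0}
|           out = []
|           for _ in range(n):
|               pick = random.choice("AB")
|               if counts[pick] == cap:
|                   pick = "A" if pick == "B" else "B"
|               counts[pick] += 1
|               out.append(pick)
|           return out
|
|       trials = 200_000
|       same = 0
|       for _ in range(trials):
|           s = overflow_split()
|           same += s[4] == s[5]
|       print(same / trials)  # ~0.625 = 5/8, vs the 2/5 a
|                             # uniform equal split would give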
| sterlind wrote:
| couldn't you just assign each student an ID, get a random
| permutation of the array of students and assign violence to
| even indices and non-violence to odd? what am I missing here?
| thaumasiotes wrote:
| You can do that, but it requires all of the assignments to be
| done simultaneously at the beginning of the study, which will
| cause problems for e.g. medical trials where not everyone
| enrolls at once.
|
| But why bother? There's no special statistical value in
| having two exactly equal buckets as opposed to one bucket
| with 1,621 people in it and another with 1,427.
| OscarCunningham wrote:
| If you did want an exactly even split, you could assign
| every even-numbered student randomly and every odd-numbered
| student to the opposite group of the student before them.
| That guarantees an even split and doesn't require all the
| participants to be known in advance.
|
| It also guarantees that you split evenly any group of
| people arriving at similar times, so no correlation between
| arrival time and outcome will affect the study.
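|
| A minimal sketch of that rule in Python, with 0 and 1
| standing in for the two conditions:
|
|       import random
|
|       def paired_assignment(n):
|           groups = []
|           for i in range(n):
|               if i % 2 == 0:
|                   # even-numbered arrival: random condition
|                   groups.append(random.randrange(2))
|               else:
|                   # odd-numbered arrival: opposite of previous
|                   groups.append(1 - groups[-1])
|           return groups
|
| For an even number of participants this always yields an
| exactly even split, and every consecutive pair is split
| across the two groups, as described above.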
| [deleted]
| maximilianroos wrote:
| Here, "random assignment" means randomly assigning half the
| participants to each of two bins, I think.
| dxdm wrote:
| How do participants get assigned to the respective halves?
| thaumasiotes wrote:
| The simplest possible algorithm would be:
|
| 1. Shuffle the list of participants.
|
| 2. Put the front half of the list into one half of the
| trial, and the back half of the list into the other half.
|
| Generalizing this to more than two groups is
| straightforward. This algorithm is mentioned sidethread, by
| sterlind, with the (meaningless) modification of splitting
| the list even-and-odd instead of front-and-back. As I
| mentioned there, you can only do this if the list of
| participants is fixed before the beginning of the study,
| which is not in general the case.
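|
| A sketch of those two steps in Python (assuming, as noted,
| that the full participant list is known up front):
|
|       import random
|
|       def shuffle_and_split(participants):
|           # 1. shuffle the list of participants
|           order = list(participants)
|           random.shuffle(order)
|           # 2. front half -> one condition,
|           #    back half -> the other
|           half = len(order) // 2
|           return order[:half], order[half:]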
| davetannenbaum wrote:
| Right --- it's called block randomization.
| archi42 wrote:
| I have friends (now M.Sc.s in CS) who supported people writing
| psychology stuff (PhD & Master's theses mostly, as well as
| papers) at my alma mater. The key takeaway was: many of the
| students at that department were either bad at statistics, or
| outright abused it to "prove" the desired effect via
| manipulation of the data or by intentionally using the wrong
| method. Both faculty and students would not listen to experts
| telling them that their statistical method was weak. The most
| amusing/saddening point was when one of them was fired because
| he said he couldn't solve the Halting Problem for them (& I
| mean that literally, it was a crucial part of an experiment).
|
| Now a family member works as a data scientist, supporting
| students with statistical analysis (for thesis/papers). Same
| thing there, a lot of students seek her help because they're bad
| with statistics (well, at least they don't fabricate data...),
| some want their thesis written by her (she drops that kind of
| job) and some expect her to hammer the data until it fits their
| hypothesis (which seems to be the most annoying/exhausting,
| because she has to convince them their method is wrong and the
| result pointless).
|
| Overall takeaway: I'm sorry, but for some fields I simply have
| to classify a PhD as worthless unless I've read the work myself
| :(
| realradicalwash wrote:
| We are dealing with 5+ papers that are fraudulent and another 5+
| newer papers that are most likely fraudulent, too. That is, there
| have been 20, 25+ reviewers looking at those papers. Their job
| was to carefully read them and double check the numbers. All of
| them gave those papers a pass. I am at a loss here.
|
| The authors' behaviour is outrageous, but this story is also
| about a broken reviewing process, partly due to wrong incentives.
| rapht wrote:
| ^ This exactly!
|
| "Peer-reviewed" by whom?...
| chalst wrote:
| Journals typically list their referees per issue, but they do
| not say who reviewed which paper.
| remus wrote:
| > Their job was to carefully read them and double check the
| numbers.
|
| That's the theory. The reality is that there is no in-depth
| review. You're lucky if a reviewer actually reads the paper all
| the way through, let alone checks the numbers and applies a
| level of critical thought to the methodology, analysis and
| conclusions.
| roel_v wrote:
| "Peer review" is not "have someone else re-do the experiment".
| That's just not feasible, especially since reviews are done
| without pay. It's not realistic to expect people to spend more
| than a few hours reviewing a paper. That amount of time is
| barely enough to check for overall conceptual issues and maybe
| flag some really glaring deficiencies. (And then conclude with
| 'accepted with minor revisions', those 'revisions' preferably
| being 'add these three citations to my paper, that'll push your
| paper into 'acceptable' territory'.)
| driverdan wrote:
| But there were glaring deficiencies in the stats. Anyone
| reviewing the papers should have caught them.
| roel_v wrote:
| Well yeah, in the papers of the OP maybe, I don't know. I
| meant more to address several commenters in this thread
| who seem to think in general that peer review is 'redo the
| research' and/or 'validate that it's correct'. It's not.
|
| Nowadays when you see articles about new covid-19 research
| in the media, those articles often include 'hasn't
| been peer reviewed yet' or 'reviewed by other scientists'
| or any such verbiage, either as a disclaimer or as 'now it
| must be true'. But that's not how it works; it's not
| because something has been 'peer reviewed' that it's 'The
| Truth' or 'Real Science'. Peer review, in reality, just
| weeds out (most) quacks (although in the OP's case it seems
| it didn't even do that) and checks that the paper is not
| completely out of touch with what is happening in and known
| about the field. It's not QA of the work itself.
|
| (I don't care to debate if it should be, and if more money
| should be spent on replication etc., just providing some
| real world context on something that is quite opaque to and
| often misunderstood by those not in academia)
| KirillPanov wrote:
| > They said, no, "It's a China thing."
|
| This needs to stop being an acceptable answer.
|
| Not just in academia, but also in politics, in business, in the
| naming of viral strains, ...
| marcus_holmes wrote:
| It's tricky. There are huge differences in culture that we
| don't appreciate. What we think of as ethical and honest
| doesn't match what that culture thinks of as ethical and
| honest. And there's no reason for them to think that we're
| right and they're wrong.
| sn41 wrote:
| Please do not bring relativism into science. There is no
| eastern science and western science. I am an Indian, and work
| very hard to conform to high standards of scruples and
| ethics. There are asymmetries like paucity of travel
| opportunities, lab equipment shortages etc. but we struggle
| just the same to provide good and trustworthy results. Please
| encourage everyone to do the same.
| physicsguy wrote:
| If you want to publish in western science journals, then you
| should be held to western standards for science, whether
| that's convenient or not. Not that we're perfect by any
| means!
| Traster wrote:
| I don't know if it's true that this is a particularly Chinese
| thing. But I don't think it's impossible that a country with a
| very different culture has a very different attitude towards
| fraud and cheating, and that this manifests. If there is more
| scientific fraud in one group of publishers, we need to be aware
| of it and tackle that.
| cccc4all wrote:
| Unfortunately, social politics infect group dynamics, even in
| supposedly scientific settings.
|
| People sometimes ask about historical scientific issues, like
| how the historical scientific consensus concluded that the sun
| revolves around the earth, and why it took Copernicus to right
| the wrongs.
|
| Simply look at the kind of scientific shenanigans happening
| now: false results, outright fraud, huge reproducibility issues
| in scientific studies. And many scientific communities just go
| along with the shenanigans. It explains many things in science.
| andi999 wrote:
| I do not think that receiving the raw data gives any right to
| analyse the data and publish it on one's own webpage.
| qqj wrote:
| Chinese science, lmao.
| raister wrote:
| What about Editor's ethics or lack thereof?
|
| I got accepted in a Chinese-oriented journal (i.e. most of the
| Editorial Board were Chinese) - I am not just 'saying' this; I
| mention it because the OP brought up "it's a Chinese thing" over
| results and datasets. Anyway, I digress.
|
| On the last revision round, the Editor told me that I was
| lacking some references, which he promptly sent me. Turned out
| that 6 out of 6 of his 'recommendations' were papers HE WAS ONE
| OF THE AUTHORS OF.
|
| Since the paper was not OFFICIALLY accepted, I caved in and
| cited the guy (3 times), to my UTTER DISMAY.
|
| If you don't play the game, other Chinese are playing the game
| and getting the results.
|
| I don't mean to insult Chinese people, but this is what is
| happening...
| lrem wrote:
| Oh, this is not a China thing. I've had a paper have a bunch of
| reviewers suggest a bunch of references each. Every bunch had
| every paper share at least one author. Every bunch was pairwise
| disjoint in the author sets. Draw your own conclusions.
|
| Edit: just to be clear, I didn't at the time read that as a
| "submission tax". It was more that they were trying to be
| helpful and using things they personally were familiar with.
| Most, if not all, of the extra references would have made our
| paper better... If we weren't fighting that damned page limit,
| that is.
| jacquesm wrote:
| More and more evidence that the magazine-based publication
| route is a net negative for science.
| DarkWiiPlayer wrote:
| Question is, why don't scientists just put everything on
| public platforms (read: github) and call it a day? Is it
| only a matter of funding, or do other factors also play a
| role in that?
| Jolter wrote:
| GitHub is not a great engine for driving discovery of
| quality content.
| Vinnl wrote:
| Because nobody reads it there and, more importantly,
| funders don't recognise the work you've done there. The
| "prestige" (as indicated by the scientific-looking but
| mostly inaccurate "Impact Factor") of the journal you
| publish in determines how good they think your work is.
|
| I wrote about that a while ago here:
| https://medium.com/flockademic/the-ridiculous-number-
| that-ca...
| DarkWiiPlayer wrote:
| > Because nobody reads it there
|
| That's a problem that would fix itself the moment most
| useful research was mainly available on such platforms.
|
| > more importantly, funders don't recognise the work
| you've done there
|
| Once again, that sounds like mostly a problem that would
| disappear if a large migration to open platforms was to
| happen.
|
| ----
|
| So it seems the main problem is that there's no
| incentive to be among the first to make the move? IIRC
| it's often the journals that don't want content to be
| published elsewhere, so I guess just doing both is also
| not that simple.
| Vinnl wrote:
| Yep exactly, it's a classic coordination problem.
| jpeloquin wrote:
| Scientific data does not fit on most public platforms.
| GitHub in particular has tight limits on file size, push
| size (100 MB), bandwidth, and storage ($100 / TB /
| month). Which isn't that surprising; git is designed for
| code, not data.
|
| Even if funders gave large sums of money dedicated to
| data publication, if recurring billing is involved it
| will eventually break as attention wanes. Data archives
| need to be managed by an institution or purchased with a
| single up-front fee, otherwise they won't stick around.
|
| There's also the aspect that, even if you as an
| individual take it upon yourself to publish your data
| without institutional support, anyone who reads your
| paper will most likely ignore your dataset. Which is
| somewhat demotivating.
|
| https://docs.github.com/en/github/managing-large-
| files/condi...
| captain_price7 wrote:
| For all its faults, peer review is still the best
| mechanism to keep science on the right track.
|
| What you propose would mean Twitter or Facebook would
| replace those journals; people with huge Twitter
| followings, or "celebrity" scientists, would dominate
| science, and the work of people without such marketing
| skills would get drowned out.
|
| (This is sort of true for the current system too, but I
| think the situation would be much worse in the new
| system.)
| jpeloquin wrote:
| > For all its faults, peer review is still the best
| mechanism to keep science on the right track.
|
| Peer review is often effective, but it can't reliably
| block fraudulent publications like those described in the
| posted article. Most bad papers are rejected, but the
| authors can always try again at another journal. Any
| paper will probably get published somewhere, eventually,
| even if only in a Hindawi or MDPI journal. The journals
| aren't accountable to anyone, and as long as they have
| enough good articles to serve as cover, academics will
| need to pay for access because citing relevant prior work
| is obligatory. The publishing system is very weak against
| fraud.
| nxpnsv wrote:
| I found this behavior in Europeans and Americans too. It is not
| a Chinese-specific thing...
| iagovar wrote:
| I've never seen it in Spanish publications, although I've
| been told it happens (social sciences).
|
| I know about the politics too, that's the main reason why I
| never went to pursue an academic career, but being honest I
| never witnessed such plain fraud in my UNI. It was more of a
| friends-get-all scheme.
| nxpnsv wrote:
| I think it depends on your field a little. I did not see
| this during my years in particle physics...
| hasjekyll wrote:
| Unfortunately this happens in astronomy in non-Chinese journals
| too.
| idoubtit wrote:
| I don't think this is related to the country of the editor. The
| lack of ethics is more prevalent in low-quality journals (many
| junk journals are Asian) and in some domains (more in medical
| journals than mathematics).
|
| Here is an example showing that even the highest-profile
| journal can lack ethics: circa 2005, Nature published a paper
| comparing a selection of scientific articles from Wikipedia and
| the Encyclopedia Britannica. The editorial board of Nature
| selected the articles and sent them to reviewers. They only
| published metrics and a few quotes of their data (the list of
| selected articles and the reviews). The results were surprising
| and made a lot of buzz. But Britannica noted that one of these
| quotes was a sentence that was not in their encyclopedia.
| Nature had to admit that they had selected some Wikipedia
| articles, and when they could not find the equivalent
| Britannica article, they sometimes built it by mixing articles
| and adding a few sentences of their own. Obviously, the process
| was totally biased, from the selection to the publication.
| sn41 wrote:
| Oh boy, this is very common. This is not specific to a country
| or ethnicity, unfortunately. You also see grad students
| shilling for their guides.
| crististm wrote:
| So you lowered your bar, huh? Who am I to judge, but I would
| have preferred a story with something more than "the game is
| rigged and that's what I get to play with".
| raister wrote:
| The paper was not accepted at that point. He could have just
| denied publication out of spite. I played the odds, and got
| published, despite doing that.
|
| I'm sorry the story ended badly :) and yes - I've lowered the
| bar, sadly.
| sn41 wrote:
| I will not judge you. Citation indices are horrible and
| perpetuate this fraud. I was telling a student of mine
| yesterday, 10 years ago the game was to get publications in
| prestigious venues. Now the game is to have a stellar
| scholar.google.com profile. The two games are perhaps
| correlated, but the correlation coefficient is not very
| high.
| austinjp wrote:
| As others have noted, this is a global problem, not just
| Chinese.
|
| The version that is more difficult to detect is when a cabal of
| colleagues agree to push each others' papers in this way. So
| editor A says "you should really quote authors B, C and D." And
| somewhere else, editor B is saying "you should really quote
| authors A, C and D."
|
| Machine learning might be a way to tackle this at scale, by
| teasing out these associations. Of course, this relies on a
| degree of transparency. Some journals publish all editors'
| comments and all revisions of a paper. This is a Good Thing,
| but humans aren't reading all published research, let alone
| all the metadata.
|
| If someone with relevant ML skills wants to address this, and
| fancies starting a project, do get in touch :)
|
| A note on the Chinese insinuations that have been mentioned: As
| always, it's a bit more complex. There may well be reasons that
| some states might sponsor or 'encourage' gaming of intellectual
| institutions. If the world is viewed as a zero-sum game, and
| the currency is power, this unfortunately seems inevitable.
| Science tends away from this and towards collaboration, but
| 'politics' often seems to tend toward competition. I've seen
| university heads explicitly declare to all staff how they
| intend to game the national rankings, and nobody bats an
| eyelid, it's business as usual. It's daft and harmful, and
| frankly I think it requires hard effort from idealistic
| grassroots activists to address it. Societal improvements are
| often won through struggle, they're not given away, they don't
| happen by incremental evolution.
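|
| As a very rough sketch of the kind of association mining
| meant here -- not machine learning yet, just the simplest
| reciprocity check one could run if citation requests made
| during review were openly available (the records below are
| entirely made up):
|
|       from collections import defaultdict
|
|       # hypothetical records: (who asked for the citation,
|       #                        whose work they pushed)
|       requests = [("A", "B"), ("A", "C"), ("B", "A"),
|                   ("B", "D"), ("C", "A")]
|
|       pushed = defaultdict(set)
|       for who, whom in requests:
|           pushed[who].add(whom)
|
|       # flag pairs that push each other's work
|       mutual = {tuple(sorted((x, y)))
|                 for x, ys in pushed.items() for y in ys
|                 if x in pushed.get(y, ())}
|       print(mutual)  # {('A', 'B'), ('A', 'C')}
|
| Reciprocity alone obviously can't distinguish a cabal from a
| small subfield where mutual citation is perfectly legitimate;
| that's where the transparency and scale mentioned above would
| matter.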
| petschge wrote:
| How do you propose to detect if A, B, C and D are a cabal
| that push their own papers or if they are the people who
| actually know the subject and want to improve the quality of
| the papers that new people produce?
| johncessna wrote:
| Oh My Science!
|
| I don't understand where this ideal that Science is infallible
| and beyond corruption, influence, and politics comes from.
___________________________________________________________________
(page generated 2021-01-27 23:02 UTC)