[HN Gopher] Peer review is an honor-based system
       ___________________________________________________________________
        
       Peer review is an honor-based system
        
       Author : ibobev
       Score  : 90 points
       Date   : 2024-01-13 18:26 UTC (4 hours ago)
        
 (HTM) web link (lemire.me)
 (TXT) w3m dump (lemire.me)
        
       | tptacek wrote:
       | Stefan Savage put it best, I think: a paper accepted in a journal
       | is part of a conversation that science is having; it's the start
       | of a debate, not the conclusion. What's important about a paper
       | is whether the ideas in it are validated and get built on by
       | other scientists. Peer review is just a sanity check before that
       | process starts, nothing more.
        
         | HarryHirsch wrote:
          | Well, yes. But what's worrying is that so many people who have
          | seen a university from the inside and even have advanced
          | degrees do not know this. Every time someone comes up with
          | grants for reproducing published stuff, the silly idea finds
          | enthusiastic support. How come? People should know about the
          | practice of science but don't.
        
           | BalinKing wrote:
           | It seems like you're agreeing with the parent, but then the
           | sentence
           | 
            | > Every time someone comes up with grants for reproducing
            | > published stuff, the silly idea finds enthusiastic support.
           | 
           | suggests that you think replication isn't useful. Or, am I
           | misunderstanding what "the silly idea" refers to?
        
             | wizzwizz4 wrote:
             | The silly idea is the thing they're trying to replicate
             | (likely out of suspicion that it's a _false_ silly idea).
        
         | j7ake wrote:
          | The value of a paper depends on how much it influences the
          | thinking of other scientists multiplied by the number of
          | scientists it influences.
         | 
          | Therefore the recent superconductivity papers may well end up
          | being very important, if only to stir the community into
          | action.
        
         | loceng wrote:
          | I think we humans are flawed enough that we need a reminder of
          | timescale, e.g. how time-tested is a theory, and how long did a
          | different belief exist before a new understanding arose?
         | 
         | Einstein for example had public resistance to his theory of
         | relativity in the beginning.
        
           | dctoedt wrote:
           | > _Einstein for example had public resistance to his theory
           | of relativity in the beginning._
           | 
           | More recently, it took _years_ for the medical community to
           | get over its opposition to the evidence that common peptic
           | ulcers are caused by bacteria (which earned two Aussie
           | physicians the 2005 Nobel Prize in Medicine). [0] A NY Times
            | correspondent (and physician) wrote in 2002, "I've never
           | seen the medical community more defensive or more critical of
           | a story."
           | 
           | (Much of the opposition was interest-based, e.g., surgeons
           | who didn't want to lose lucrative fees for operating on ulcer
           | patients who could instead be treated with inexpensive
           | antibiotics without the pain and recovery time of major
           | surgery.)
           | 
           | [0] https://www.nobelprize.org/prizes/medicine/2005/7693-the-
           | nob...
           | 
           | [1] https://en.wikipedia.org/wiki/Timeline_of_peptic_ulcer_di
           | sea...
        
         | photochemsyn wrote:
         | A funny story from academia is about the cohort of peer
         | reviewers whose primary standard for acceptance is that their
         | own work is cited in the bibliography of the submitted paper.
         | 
          | Peer review often fails to catch cheaters, too, as the infamous
          | Jan Hendrik Schön case demonstrated (in 2000-2001 the Bell
          | Labs researcher published 9 papers in Science and 7 in Nature,
          | all of which made it through peer review, and all of which were
          | later retracted on grounds of fraudulent data manipulation). In
          | the long run, the scandal did improve the field, as all
          | publications on microelectronic graphite etc. devices now
          | require electron microscopy proof that the claimed devices
          | actually exist. Note, though, that current AI technology allows
          | for data fraud and image manipulation that's much harder to
          | detect than in the past.
        
           | spookie wrote:
           | Yes, this is indeed a big problem. If anything, more eyes are
            | needed. Therefore, fewer walls.
        
           | neilv wrote:
           | > _A funny story from academia is about the cohort of peer
           | reviewers whose primary standard for acceptance is that their
           | own work is cited in the bibliography of the submitted
           | paper._
           | 
            | In some circles, the first two pieces of feedback on a paper
           | draft, from a nominal co-author:
           | 
           | 1. Cite [my people's various loosely related work].
           | 
           | 2. Cite [particular researcher in this niche], who'll
           | probably be a reviewer.
        
         | RcouF1uZ4gsC wrote:
          | Then devalue published papers.
          | 
          | Don't use them as the basis for rewards.
          | 
          | Don't use them as the basis for policy.
          | 
          | If they are nothing more than the start of a conversation
          | among scientists, then we should not put much value in them.
          | 
          | Let's not play the have-it-both-ways game where scientific
          | papers are given deference, and then when there is cheating
          | just say that scientific papers are merely a conversation
          | starter.
        
           | tptacek wrote:
           | People doing science professionally already understand the
           | value of a paper.
        
           | concordDance wrote:
            | If we're not using papers as a basis for policy, then what?
            | Making a committee of academics from a selection of
            | prestigious institutions every time you have a question that
            | needs answering?
        
         | Almondsetat wrote:
          | We should start framing some journals as "here's what the
          | authors have found convincing from bleeding-edge submissions,
          | please take a look and try to confirm these findings" for other
          | researchers, and others as "here's a conservative and
          | comprehensive list of results which have been solidly
          | reproduced" for professionals in the field who want dependable
          | stuff.
        
           | zamfi wrote:
           | > here's a conservative and comprehensive list of results
           | which have been solidly reproduced
           | 
           | This sounds more like textbooks.
           | 
           | Journals are usually intended to be venues for experts to
            | talk to _each other_, not even "professionals in the field".
           | 
           | There are some journals (sometimes called "translational")
           | that try to bring scientific results into practice, though
           | these are often limited to fields where "practice" is its own
           | huge body of knowledge that doesn't always overlap with
           | "research" in that field (e.g., Medicine).
        
       | loceng wrote:
       | Dr. Christopher Essex in a recent interview
       | [https://www.youtube.com/watch?v=jpjpBWxvamA] highlighted that
       | it's called "peer review" and not "expert review" - which I think
        | is an important detail that should not put anyone on a pedestal;
        | of course the "peer review" system has been hijacked and
        | corrupted to some degree, where there are gatekeepers to getting
        | published in the "most reputable" journals.
        
         | hinkley wrote:
          | I think the layman assumes that the authors _are_ experts and
          | thus that the peers are too.
          | 
          | Journalists are a huge problem here. I blame about half of the
          | anti-science backlash on breathless journalism writing checks
          | science can't cash.
        
           | logifail wrote:
           | > I blame about half of the anti science backlash on
           | breathless journalism [..]
           | 
           | This isn't a science-specific problem. Journalism has
           | _serious_ issues with incentives.
        
       | TheAceOfHearts wrote:
       | Peer reviews should be done in public and considered an ongoing
       | process rather than a one-time thing. If an expert thinks some
       | questions are worth asking, then the resulting discussion should
        | be available for younger generations to learn from as well. I
        | think the open source ecosystem has shown the effectiveness of
        | such a system. As more data is made public, it becomes easier and
       | more effective to sniff out bad actors as well.
       | 
       | We could also take steps towards helping establish high
       | reputation for certain papers by introducing a mechanism other
       | than citation count. Maybe some kind of stamp of approval.
        
         | nextos wrote:
         | Yes, this is more or less how the non-profit journal
         | https://elifesciences.org (funded by HHMI, Wellcome, Max
         | Planck, and Wallenberg Foundation) works.
         | 
         | A problematic aspect of non-public peer reviews is waste and
         | political battles. I have witnessed first-hand how big names in
         | a particular field reject articles from incumbents that are
         | perfectly sound just to delay their publication and/or to copy
         | them.
         | 
         | A public review introduces some skin in the game and avoids
         | this kind of behavior, as well as rejections or requests to
         | make changes because of reviewer incompetence. It also avoids
         | the opposite thing, blind acceptance of flawed studies.
        
           | sampo wrote:
           | > Yes, this is more or less how the non-profit journal
           | https://elifesciences.org (funded by HHMI, Wellcome, Max
           | Planck, and Wallenberg Foundation) works.
           | 
            | Also, all 19 journals published by the EGU (European
            | Geosciences Union):
           | 
           | https://www.egu.eu/publications/open-access-journals/
        
         | taeric wrote:
          | I think that's a view of peer review that has been lost. It
          | used to be that citations and continued exploration of a topic
          | were a vital part of the peer review.
        
         | adtac wrote:
          | I'd take it even further: make peer review public _and open_ to
          | all members of that community. Imagine a forum-like discussion
          | where anybody can anonymously review any submission and all
          | reviews are public.
         | 
         | Becoming a reviewer should still be invite-only and the system
         | should keep track of the reviewer's identity behind the scenes
         | to monitor for abuse, of course. The review can include coarse-
         | grained reputation signals like "has reviewed 100+ papers in
         | the last 5 years".
         | 
          | It might be worth embargoing reviews with a fixed time delay
          | before making them public, though, to prevent bandwagon effects
          | and disincentivise review plagiarism. The reviewer's identity
          | should also be deanonymised, on roughly the same time scale as
          | when the paper author's identity is revealed.
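          | 
          | A rough sketch of what I mean, in Python (all names and the
          | 14-day embargo are hypothetical, just to make the shape of the
          | system concrete):
          | 
          |     from dataclasses import dataclass
          |     from datetime import datetime, timedelta
          | 
          |     EMBARGO = timedelta(days=14)  # hypothetical fixed delay
          | 
          |     @dataclass
          |     class Review:
          |         submission_id: str
          |         body: str
          |         reviewer_id: str  # tracked privately to monitor abuse
          |         created_at: datetime
          |         # coarse reputation signal shown in place of identity
          |         reviewer_badge: str = "reviewed 100+ papers in 5 years"
          | 
          |         def public_view(self, now: datetime):
          |             # hidden until the embargo lapses; reviewer stays
          |             # anonymous until later deanonymisation
          |             if now - self.created_at < EMBARGO:
          |                 return None
          |             return {"submission": self.submission_id,
          |                     "review": self.body,
          |                     "badge": self.reviewer_badge}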
        
           | blackbear_ wrote:
           | Fortunately this is already happening in some fields such as
           | machine learning. Check this out: https://openreview.net/grou
           | p?id=ICLR.cc/2024/Conference#tab-...
        
       | niceice wrote:
       | Consensus review is on the way out. I don't know what replaces
       | it, perhaps a Github-like system? Whatever it is, hopefully it
       | includes a focus on replication.
        
       | photochemsyn wrote:
       | There are two kinds of peer review in academic science - at the
       | point of publication, and the point of funding. While quantities
       | of ink have been spilled on the former, the latter gets far less
       | attention - though as you might guess, this is a much more
       | contentious issue because millions of dollars of funding may be
       | on the line. A good discussion is here:
       | 
       | "Is there hard evidence that the grant peer review system
       | performs significantly better than random?"
       | 
       | https://academia.stackexchange.com/a/128343
       | 
       | In short, 'freedom of research direction', aka 'blue skies
       | research'[1] is steadily becoming a thing of the past, as grant
       | managers and politicians and academic administration teams
       | increasingly take the view that they're the ones who should be
       | directing what kinds of research are done, rather than the
       | academic researchers themselves. This is enforced by a grant
       | system that narrowly defines how the funds can be spent, meaning
       | that your average academic researcher has been transformed into a
       | corporate drone following orders from the executive floor.
       | 
       | [1] https://en.wikipedia.org/wiki/Blue_skies_research
       | 
       | This transformation of academic research began around the same
       | time Bayh-Dole legislation granted exclusive licensing of
       | university patents developed with taxpayer funds to private
       | interests (1980).
        
       | apwheele wrote:
       | Based on personal experience, I have a very different opinion
       | than Daniel.
       | 
       | I do not think peer review adds much value over self-publishing.
       | Consumers still need to read and verify the work themselves. Bad
       | stuff gets published in peer review so often you still need to
       | verify the integrity of everything yourself. For a simple
       | hypothetical, say peer review is "good quality" 80% of the time,
       | and self-published is good quality 50% of the time. These are
        | made-up numbers, but I am saying "80% is too low for the pain of
        | peer review to provide much value". As a consumer I find 0 value
        | in peer review (I can use Google and read what I want; being
       | published in peer review is an annoying paywall if anything).
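        | 
        | To make those made-up numbers concrete, a toy calculation (not
        | data, just the hypothetical 80%/50% rates above):
        | 
        |     # chance that at least one paper in a batch you rely on is
        |     # bad, given a per-paper "good quality" rate
        |     def p_any_bad(good_rate, n_papers):
        |         return 1 - good_rate ** n_papers
        | 
        |     p_any_bad(0.8, 10)  # ~0.89 for peer-reviewed papers
        |     p_any_bad(0.5, 10)  # ~0.999 for self-published papers
        | 
        | Either way, you cannot skip checking the work yourself.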
       | 
        | I believe the majority of comments in peer review are not about
        | technical accuracy (what an outsider may think peer review is
        | about, verifying whether something is right or wrong), but tend
        | to be matters of opinion. So from a writer's standpoint, for
        | people who say "peer review improves my work", that does not
        | jibe with my experience.
       | 
        | There are so many other negatives with peer review in academia
        | (people bean-counting pubs, paywalls, the club issue Daniel
        | mentions) that I just don't think it adds much of any value. If
       | everyone decided tomorrow "I am just going to publish stuff on
       | ArXiv" (or whatever preprint server), the world would not be
       | worse off. I think we would be better off actually.
        
         | matthewdgreen wrote:
         | Right now in my field (cryptography and security) the number of
         | papers is exploding. The real battle at this point is not only
         | finding time to read the papers, it's even finding time to
         | learn about them. Unfortunately self-publishing (AKA preprints)
         | just means we're producing a lot of papers nobody has time to
         | read, many of them with obvious flaws and bad presentation that
         | could easily be fixed. Peer-review is _one_ rating system that
         | helps to filter some signal from the noise.
        
       | sfryxell wrote:
        | This applies to code review, another honor-based system.
        
         | orm wrote:
         | I've done both, and been on the receiving end of both, but
         | hadn't thought of this similarity. I think it's a good analogy.
        
       | T-A wrote:
       | This seems topical:
       | 
       | https://tvtropes.org/pmwiki/pmwiki.php/Main/NoHonorAmongThie...
        
       | timkam wrote:
       | What I disagree with in the article is the _I never make
       | mistakes_ attitude; it could be worse, but I still think it's
       | good to discuss. The author writes that because they are
       | "serious", they can essentially rebut all criticism "easily". We
       | are all human and even excellent scientists make honest mistakes.
       | Strong theorists can sometimes make "hard" math mistakes. In the
       | best cases, peer review gives us some assurance that we at least
       | did not make obvious mistakes that can be relatively easily
       | spotted by other specialized researchers. I think the _never make
       | mistakes_ attitude is dangerous, because it means that
       | researchers need to be very cautious when admitting honest
       | mistakes and their own intellectual fallibility in order to not
       | lose face.
        
       | 2cynykyl wrote:
       | The statements made in the original post are so foreign to me. It
       | sounds like the author is digging really deep to try and say
       | something nice about the peer review process. To be fair, it
       | _might_ be true for CS where they are stuck in the weird trap of
       | publishing in conferences, but in other circles everything goes
        | into journals, and in this case the peer-review process is
        | definitely not there to "help the authors".
       | 
        | Peer review is there to help the journal maintain its
       | reputation by preventing the publication of sub-standard stuff.
       | Period. Sub-standard can mean uninteresting, incomplete, poorly
       | written, or whatever the journal is aiming for. It is _not_ there
        | to safeguard the integrity of the literature against erroneous
        | results... it's purely self-interest on the part of the journal.
       | 
       | In reality, a rejected paper will just be submitted elsewhere
       | until it is eventually accepted. The authors cannot afford to
        | spend 1-2 years' worth of work on a project then have nothing to
       | show for it, just because a reviewer didn't "get it". So authors
       | will keep submitting it (hopefully with some improvements based
       | on past reviewer comments, but maybe not) until it "gets through"
       | somewhere, and eventually nearly everything gets the seal of
       | "peer review".
       | 
       | > There is SO much more I could write on this subject, but I'm
       | trying to stay on point (-:
        
         | slimsag wrote:
         | Isn't this almost identical to the general admissions process,
         | too?
         | 
         | It seems strange to me that we built institutions on the idea
         | of filtering in/out applicants based on relatively arbitrary
         | criteria, and then express shock/surprise when the reward
          | systems inside that institution are... basically the same?
         | 
         | There are parallels everywhere, e.g. scientists feeling they
         | must get positive 'groundbreaking discovery!' news reporting
         | about their publications, not just actually doing impactful
         | work, in the same way good grades aren't enough and you need
         | some other impactful story to tell in order to be accepted to
         | many schools.
         | 
         | All of it can be traced back to money, money, money.
        
         | Blahah wrote:
         | True for a particular dominant but antiquated and rapidly aging
         | out model of peer review. Peer review as practiced at PeerJ,
         | eLife, F1000, etc. is collaborative, productive, and maintains
         | integrity in a visible way.
         | 
         | Peer review is not inherently terrible. Exploitative rent
         | seeking publishers that commoditize academic careers and
         | outputs, and hold knowledge to ransom, are the problem.
         | 
         | Please, everyone, stop publishing in journals that do harm.
          | Think about the impact of where you publish a paper, and align
          | your choices with your values.
        
         | tptacek wrote:
         | Right, it's a low bar, and it's meant to be a low bar, and
         | that's fine.
         | 
         | It's not _no_ bar. Peer review adds _some_ credibility to a
          | paper. And the venue does as well. It's just less credibility
         | than the popular imagination assumes.
        
       | thsksbd wrote:
       | "Peer review is an honor-based system"
       | 
       | But we are not an honor society anymore
        
       | pacbard wrote:
       | Maybe a better descriptor would be that peer review is a
       | reputation-based system.
       | 
       | The peers that will review your work likely know about the paper
       | you submitted already, because they work on related work
       | themselves and sat through your conference presentations. Most of
       | them want you to publish your work and will provide a good/non-
       | adversarial review of a paper.
       | 
        | Sometimes, though, your paper hits too close to home for them;
        | then they will try to keep it from getting published, or will
        | slow-walk the review so that their own work can come out before
        | yours or at the same time.
       | 
       | On top of that, you have journal editors who can see everything
       | about the process and can decide to ignore a good/bad review to
       | fit their ideas about the paper itself and to fit the overall
       | vision they have for the journal for the coming publication
       | schedule.
        
         | concordDance wrote:
          | > Sometimes, though, your paper hits too close to home for
          | > them; then they will try to keep it from getting published,
          | > or will slow-walk the review so that their own work can come
          | > out before yours or at the same time.
         | 
         | How often does this actually happen? Can't say I've heard of
         | people doing this.
        
           | tovej wrote:
           | I have heard from colleagues that this has happened to them.
        
       | chrchang523 wrote:
       | (2008)
        
       | bsdpufferfish wrote:
       | Reminder that "peer review" used to mean sending a letter to your
       | friend to see what they think of your work.
       | 
       | The formalized system of opaque and unaccountable criticism that
       | gatekeeps science had to have been invented in the 20th century.
        
       | johnchristopher wrote:
       | Meanwhile:
       | 
       | > It has come to my attention that academics are now using
       | generative AI (Chat GPT or whatever) to conduct their peer
       | reviews.
       | 
       | > [..]
       | 
       | > If ever there were a compelling argument to totally abandon the
       | intellectually dishonest notion of "blind" peer review - a system
       | everyone knows is broken, rarely fully "blind" and frequently not
       | "blind" at all, but allowing unscrupulous or mediocre yet
       | established scholars to sabotage promising work - then this
       | finally is it: accountability and transparency in the face of the
       | robot takeover. Every person who reviews another person's
       | scholarship must be willing to sign their name to their own
       | evaluation, to stand by it and assert it was not the work of the
       | machines.
       | 
       | Unethical academics, AI, and peer review
       | https://nicospage.eu/blog
        
         | bee_rider wrote:
          | It is far too early to say there's some ethics convention
          | against using ML models to write peer reviews.
         | 
         | If you used a machine to write your peer review, you've staked
         | your reputation on its output being correct (in the same way
         | that you stake your reputation on not producing bullshit peer
         | reviews, which is to say... eh, probably not a make-or-break
         | thing but it is in the mix). So you need to check the output.
         | That's the skill we as a society value.
         | 
         | We don't employ scientists for their literature skills, but for
         | their ability to build and evaluate theories and data, that
         | sort of thing. And hey, less brainpower remembering grammar
         | rules means more for the study of science. This is good.
         | 
         | Some people will torpedo their careers by putting too much
         | faith in the ML model, but the way to avoid being them is to
         | apply the same level of diligence to the model's output as any
         | other tool.
         | 
         | I'm more worried that ML models will enable some continued
         | silliness. There's something odd going on if people type their
          | actual arguments into an ML model, then it produces the extra
          | filler text necessary to get into a journal, then we use ML
          | models to review that text and hope to distill it back to the
          | actual arguments, and then the general public again uses ML
          | models to do the same. It seems like we'd just be better off
          | sharing the prompts or something, hahaha.
        
       | shermantanktop wrote:
       | I'm outside this world but what always strikes me is the
       | risk/reward gamble that cheaters are taking.
       | 
        | By the time they are caught cheating, they have invested years
        | if not decades into a career that is now pretty much dead. Is
        | there a life-after-cheating story in the relevant field? I can't
        | imagine much of one. Part-time lecturer/tutor in a fourth-rate
        | school, perhaps.
       | 
       | Of course, that presumes a moment in time where they begin
       | cheating, risking it all. If they were cheating all along, from
       | age 12 onward, maybe stopping is the problem.
        
       | omeze wrote:
        | This is a good characterization of why peer review is valuable -
        | it makes honest scientists better. A lot of progress is driven
        | by "jumps" from influential papers, and we want those to be as
        | good as they can be. It may not stop frauds, but the frauds
        | weren't going to help us anyway. I think fraudulent research
        | mostly hurts by
       | distracting honest scientists and new scientists, not by
       | convincing them of something untrue.
        
       | Whoppertime wrote:
        | Peer review seems like a system designed to encourage groupthink.
        | The finding of Copernicus that the Earth revolves around the Sun,
        | instead of vice versa, would not pass peer review. Nor would
        | Louis Pasteur's germ theory.
        
         | frozenport wrote:
         | Yeah I agree. The only good system is one where I judge the
          | validity of the work. Especially if it's something I don't know
          | about, because then I have no biases.
        
           | bjornsing wrote:
           | You may think it's a joke. But it's pretty much the core idea
           | of the Enlightenment (that everyone can think for themselves
            | and doesn't need priests to tell them what to think). The motto
           | of the Royal Society is "Nullius in verba", "take nobody's
           | word for it".
        
         | fasterik wrote:
         | Physicists have been using preprint servers for decades, which
         | means anyone can put a paper on the internet for everyone in
         | the field to read and evaluate. So this idea that Copernicus
         | would be suppressed under the current system is absurd.
         | Einstein and Bohr's ideas passed peer review, as radical as
         | they were. Every physicist is _hoping_ for data that contradict
         | our current theories. It was a huge disappointment when the
         | only result of the LHC was to confirm the standard model.
         | 
         | For all the flaws of academic publishing, we are still in a
         | much better place than when church dogma was the gatekeeper of
         | knowledge.
        
           | puzzledobserver wrote:
           | If I understand correctly, only one of Einstein's papers was
           | ever subjected to peer review. He didn't like it. [0]
           | 
           | There are some situations where peer review has led to
           | groupthink. The one that comes to mind is the amyloid
           | hypothesis. [1]
           | 
           | [0] https://theconversation.com/hate-the-peer-review-process-
           | ein...
           | 
           | [1] https://www.science.org/content/article/potential-
           | fabricatio...
        
       | alpineidyll3 wrote:
       | If people would actually _measure_ the outcomes of peer review
       | instead of talking about it, I think it would meet a swift end.
       | 
       | Empirically, I find no improvement in reproducibility between
       | arxiv and journals. The costs are incredibly high too.
       | 
        | Like many things in our world, peer review is a short-lived
        | extrapolation which doesn't resemble its origins but is regarded
       | as immutable gospel. It matters most if what you need is the
       | respect of academics.
        
       | dataangel wrote:
       | He says peer review is for the authors, but I think science being
       | peer reviewed is one of the key points used to convince the
       | public to trust scientific results.
        
         | fasterik wrote:
         | If someone thinks that a scientific paper is true because it
         | passed peer review, they need to change their mental model of
         | how science works. Peer review ensures that a given paper meets
         | a minimum standard of quality. Trust of scientific results
         | emerges gradually as the broader field forms a consensus based
         | on dozens or hundreds of papers.
        
       | currymj wrote:
       | where things seem to really go wrong with peer review is when
       | entities outside a scientific community want to use published
       | research to set public policy, decide where to invest, or make
       | high-stakes hiring decisions.
       | 
       | doing this actually requires a costly investment in deep
       | understanding of the research and the literature around it, to
       | know if it is sound and high-impact. but nobody wants to make
       | this investment, and they've mostly convinced themselves that
       | just free-riding off peer review is good enough.
       | 
       | of course it is not, flawed papers make it through peer review
       | all the time. also this outside use of peer review introduces
        | extremely strong distorting incentives -- an even greater
       | desire to be published in specific prestigious outlets -- in the
       | face of which peer review is not really adequate to catch
       | misconduct.
       | 
       | I think the ideal form of peer-reviewed journal is unfortunately
       | logically impossible: its contents would be completely open
       | access, costing nothing for scientists who want to build on the
       | results. But it would require an extremely expensive subscription
       | to find out who has published in it, the money going to fund
       | extremely thorough peer review, so that outside decision-makers
       | can't try to free ride and distort the incentives.
        
         | bee_rider wrote:
          | I think this is not even really a problem with the peer review
          | system, but a problem with the (mostly nonexistent) field of
          | in-depth science journalism. Peer review just means it is ok to
          | stick something in a scientific journal and have peers read
          | it... people with advanced bullshit detectors and nuance
          | parsers in a particular domain.
         | 
         | A paper is a brick. A policy is a building. It is not a problem
         | in the brick manufacturing field, if people keep trying to
         | build houses without any mortar.
         | 
         | (Just to be explicit, I think you are right on the money and
         | just wanted to elaborate/rant).
        
       | cs702 wrote:
       | In fields such as Math, Physics, EE, CS, and AI, "peer review" is
       | already being replaced by open debate online. Cutting-edge
       | research in those fields is now routinely posted _first_ on
       | repositories like arXiv, with supporting code and data made
       | public _first_ on sites like Github. The work is reviewed _first_
       | online, in the open, in a variety of forums, including X.com
       | (formerly Twitter). Anyone with something to contribute can
        | participate, regardless of pedigree. It's _so much better_ than
        | the outdated system of "peer review," which actually isn't that
       | old.[a]
       | 
       | The question is: _Who_ benefits the most from preserving the
       | dying  "peer review" system? _Who_ loses the most from the
       | transition to open debate? Short answer: Publishing houses and
       | conference organizations. Neither wants the status quo to change.
       | 
       | [a] https://michaelnielsen.org/blog/three-myths-about-
       | scientific...
        
         | jltsiren wrote:
         | If you are well known in your field, you don't need peer
         | review. People will read your preprints, invite you to
         | conferences, and follow you on your preferred platforms anyway.
         | If you are in a well-known project, people will pay attention
         | to your work, even if you are not famous yourself.
         | 
          | For everyone else, there is peer review. The attention economy
          | is unforgiving and benefits the elite. With peer review,
         | everyone's work gets at least a minimum level of attention.
         | Which may then lead to more attention if the work is worth it.
         | 
         | Every serious proposal for replacing the current peer review
         | practices must have the same feature. When someone submits new
         | work, there must be people who have to review it. Fully
         | voluntary reviews don't work.
        
       | donatj wrote:
        | This is what I was trying to say a couple of weeks ago and kept
        | getting downvoted into oblivion. History has shown that anything
        | that depends on human actors will be manipulated.
        
       ___________________________________________________________________
       (page generated 2024-01-13 23:00 UTC)