[HN Gopher] Why isn't preprint review being adopted?
___________________________________________________________________
Why isn't preprint review being adopted?
Author : dbingham
Score : 52 points
Date : 2024-03-24 18:06 UTC (4 hours ago)
(HTM) web link (www.theroadgoeson.com)
(TXT) w3m dump (www.theroadgoeson.com)
| timr wrote:
| Even if academics could review all the papers on a preprint
| server (which the article argues -- rightly -- that they can't),
| it wouldn't solve the perceived problems (or the actual problems)
| with scientific peer review.
|
| The vast majority of irreproducible papers aren't detectable as
| irreproducible at time of publication. They look fine, and many
| actually _are fine_. They just don't reproduce. That's an
| expected outcome in science. The system will self-correct over
| time.
|
| IMO, the main _actual problem_ with peer review is that non-
| practitioners put too much faith in it. Nobody in science
| actually takes a paper on faith because it's been published, and
| you shouldn't either. Peer review is little more than a
| lightweight safeguard against complete nonsense being published.
| It barely works for that. Just because you found a paper doesn't
| mean you should believe it. You have to understand it.
|
| A secondary actual problem is that it's _impossible_ to reproduce
| a lot of papers, or they're methodologically broken from the
| start (e.g. RCTs that are not pre-registered, or observational
| studies without control groups). These are problems we could
| actually solve. For example, just requiring that every paper
| publish its raw study data would help the system police itself.
| There are high-profile researchers out there, right now, who do
| little more than statistically mine the same secret data set --
| these people are likely publishing crap, but we have no way to
| prove it, because the data is secret.
| s0rce wrote:
| There is no place to make a note that something doesn't
| reproduce, so it's extremely time-consuming to find out, or you
| need some source of tribal knowledge. In my postdoc I was trying
| to make some porous films and a bunch of papers' methods didn't
| seem to work; maybe I did it wrong, maybe some detail wasn't
| described, who knows. I couldn't get it to work and there was no
| way to document that failure.
| bumby wrote:
| I wonder if there's a way to document such replicability
| failures as an erratum to the original manuscript. I feel
| like this would help in at least two major ways:
|
| 1) It provides a reproducibility filter. If a method isn't
| shown to be reproducible, publicly documenting that adds to
| the body of knowledge, and this would help drive an incentive
| towards reproducing work rather than just searching for
| novelty. It would document work that would otherwise be lost
| because there's no incentive to showcase it. When the lack of
| reproducible results isn't public, it's now more likely that
| others may waste considerable effort in the same vein.
|
| 2) It may enlist the original authors to help understand why
| the work didn't reproduce well. Maybe the secondary effort
| lacked some crucial step or understanding. The people best
| positioned to remedy this are the original authors, and this
| secondary publication incentivizes them to dialogue with
| those who couldn't reproduce the outcome. It doesn't mean
| they have to engage, but it at least gives them some reason
| to involve themselves in the process.
| Retric wrote:
| Peer review is a spam filter, and it's quite useful in that
| context. But it's a spam filter for people who filter out 99.9+%
| of papers by having such a narrow scope they probably recognize
| several names on a given paper.
| pacbard wrote:
| To your second point, I always go back to this quote: "You can't
| fix by analysis what you bungled by design" (Light, Singer and
| Willett, 1990).
|
| If a paper is broken by design, there isn't much to do after
| the fact. It's just broken.
|
| The problem is that doing a good RCT takes both time and
| effort, with the huge risk of having null results, which
| usually results in a desk rejection from most top journals.
|
| So, either you are a top fundraising researcher who can fund
| both multiple RCTs and the people to support them, or you just
| try your best with what you have and hope to squeeze a paper
| out of what you did.
|
| Releasing the data won't really help much if the data
| generating process is flawed. Sure, other people will be able
| to run different kinds of analyses (e.g., jackknife your
| standard errors instead of just using a robust correction), but
| I'm not sure how helpful that will be.
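|
| As a purely illustrative sketch of that kind of re-analysis (the
| data and statistic below are made up, and this assumes the raw
| observations were actually released):
|
| import numpy as np
|
| def jackknife_se(x, stat=np.mean):
|     # Recompute the statistic with each observation left out in
|     # turn, then use the spread of those estimates as a standard
|     # error.
|     x = np.asarray(x)
|     n = len(x)
|     loo = np.array([stat(np.delete(x, i)) for i in range(n)])
|     return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
|
| rng = np.random.default_rng(0)
| data = rng.normal(loc=1.0, scale=2.0, size=50)  # stand-in for released data
| print(jackknife_se(data))  # compare against the paper's reported SE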
|
| A third issue that I have also encountered is that journal
| editors have an agenda when putting together an issue, which
| sometimes overrides the "quality" of the research with "fit"
| to the issue. This could lead to "lower quality" articles being
| published because they fit the (often unspoken) direction of
| the journal. Most editors see their role as steering the field
| towards new directions (a sort of a meta service to the field)
| and sometimes that comes at the expense of the quality of the
| work.
| YeGoblynQueenne wrote:
| >> (Light, Singer and Willett, 1990)
|
| A citation like the one above should normally point to a full
| reference in a bibliography section. Did you forget the
| \bibliography{} command at the end of your comment?
| barfbagginus wrote:
| In AI work - which naturally lends itself to replicability
| improvements - we could get truly solid replicability by
| ratcheting up the standards for code quality, testing, and
| automation in AI projects. And I think LLMs can start doing a
| lot of that kind of QA and engineering / pre-operationalization
| work, because the best LLMs are already better at software and
| value engineering than the average postdoc AI researcher.
|
| Most AI codebases are missing key replicability factors - either
| the training data/trainer are missing, the code has a poor
| testing / benchmark automation strategy, the code documentation
| is meager, or there's no real CI/CD practice for advancing the
| project or operationalizing it against problems caused by the
| anthropocentric collapse.
|
| Some researchers are even hardened against such things, seeing
| them as false worship of harmful business metrics, rather than
| a fundamental duty that could really improve the impact of
| their research, and its applicability towards a universal
| crisis that faces us all.
|
| But we can put the lie to this view with just one glance at
| their code. Too much additional work is necessary to turn it
| into anything useful, either for further research iterations or
| productive operationalization. The gaps in code quality exist
| not because that form of code is optimal for research aims, but
| because researchers lack software engineering expertise, and
| cannot afford software engineering labor.
|
| But thankfully the amount of software engineering labor required
| is not even that great - LLMs can now help shoulder that effort.
|
| As a result I believe that we should work to create standards
| for AI assisted research repos that correct the major deficits
| of replicability, usability, and code quality that we see in
| most AI repos. Then we should campaign to adopt those standards
| into peer review. Let one of the reviewers be an AI that really
| grills your code on its quality. And actually incorporate the
| PRs that it proposes.
|
| I think that would change the situation, from the current
| standard where academic AI repos are mainly nonreplicating
| throw-away code, to an opposite situation where the majority of
| AI research repos are easy to replicate, improve, and mobilize
| against the social and environmental problems facing humanity,
| as it navigates through the anthropocene disaster.
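|
| To make the testing/automation point concrete, here is a minimal
| sketch of the sort of determinism smoke test a repo could run in
| CI (the toy training loop, seeds, and thresholds are all
| placeholders, not part of any existing standard):
|
| import numpy as np
|
| def train_tiny_model(seed: int, steps: int = 100) -> np.ndarray:
|     # Stand-in for a real training loop: fit y = 2x + 1 by
|     # gradient descent on synthetic data.
|     rng = np.random.default_rng(seed)
|     x = rng.normal(size=200)
|     y = 2 * x + 1 + rng.normal(scale=0.1, size=200)
|     w, b, lr = 0.0, 0.0, 0.1
|     for _ in range(steps):
|         pred = w * x + b
|         w -= lr * np.mean((pred - y) * x)
|         b -= lr * np.mean(pred - y)
|     return np.array([w, b])
|
| def test_training_is_seed_deterministic():
|     # Same seed, same result: the minimal bar for replicability.
|     assert np.array_equal(train_tiny_model(seed=0),
|                           train_tiny_model(seed=0))
|
| def test_training_hits_known_benchmark():
|     # Pin the expected result so silent regressions fail CI.
|     w, b = train_tiny_model(seed=0)
|     assert abs(w - 2.0) < 0.1 and abs(b - 1.0) < 0.1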
| ramblenode wrote:
| > The vast majority of irreproducible papers aren't detectable
| as irreproducible at time of publication. They look fine, and
| many actually are fine. They just don't reproduce. That's an
| expected outcome in science.
|
| This is not entirely true. A power analysis is how you estimate
| whether a result is likely to reproduce, and researchers should
| be doing it before they begin collecting data. Reviewers can do
| it post hoc with assumptions about the expected effect size
| (which might come from similar studies). False positives produce
| inflated effect sizes, so if a result is marginally significant
| but shows a large effect, that is a good heuristic that the
| result will not reproduce.
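|
| A minimal sketch of that post-hoc check, assuming a two-sample
| t-test design (the effect size and sample size are made-up
| placeholders, and statsmodels is just one way to do it):
|
| from statsmodels.stats.power import TTestIndPower
|
| analysis = TTestIndPower()
|
| # Cohen's d expected from similar prior studies, and the
| # per-group sample size reported in the paper under review.
| assumed_d = 0.3
| n_per_group = 20
|
| power = analysis.power(effect_size=assumed_d, nobs1=n_per_group,
|                        alpha=0.05)
| print(f"Power to detect d={assumed_d} at n={n_per_group}: {power:.2f}")
|
| # Low power means a "significant" finding will tend to overstate
| # the effect -- the inflated-effect-size heuristic above.
| needed = analysis.solve_power(effect_size=assumed_d, power=0.8,
|                               alpha=0.05)
| print(f"Per-group n needed for 80% power: {needed:.0f}")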
| llm_trw wrote:
| Because I don't want to spend two years fighting the second
| referee when she's fundamentally misunderstood the point of my
| paper.
|
| I'm no longer in academia. Either take what I put up on arxiv or
| leave it. I _really_ don't care.
| s0rce wrote:
| While not always true, my metric for clear/understandable is
| whether other people can understand it. This usually supports my
| argument when people show me a document and I have no idea what
| it's saying, yet they argue it's perfectly clear... my definition
| is that people other than the author can grasp the intended
| meaning.
| Muller20 wrote:
| Peer review goes beyond simple issues about clarity or
| misunderstanding. In particular, peer review is sometimes
| seen as an adversarial process.
|
| Often, the reviewer will not understand because he is not the
| intended audience. Other times, he will understand but he
| just doesn't like your method, because he is working in an
| opposite direction. Or maybe your method is a direct
| competitor of his and yours works better, which incentivizes
| some people to block your work.
| wrycoder wrote:
| Or your paper goes against the current paradigm or is
| otherwise politically unpalatable.
| bumby wrote:
| If the intended audience is reading it, I would agree.
| However, many reviewers seem to be assigned and agree to
| review a topic that they are ill-equipped to understand
| without a substantial amount of background knowledge.
|
| A good reviewer in this situation will review the referenced
| papers to bolster their understanding. A bad reviewer will
| expect everything to be spelled out within the manuscript,
| and, unfortunately, the length limits often don't permit that
| kind of write-up.
| queuebert wrote:
| Precisely. The continued hyperspecialization of science
| shrinks the potential pool of qualified reviewers to zero.
| I rarely get knowledgeable reviews these days. Instead of
| disputing my methods, they complain about the font or
| figure styles, or raise questions that weren't answered in
| the paper because they are common knowledge in the field.
| matwood wrote:
| You actually touch on an interesting point. Is peer review
| still necessary? When there is limited journal space, sure. But
| now that we have effectively unlimited space, let it be
| reviewed when people go to cite it. We've already seen a lot of
| peer reviewed papers be retracted later, so should we just
| accept the reality?
| dazed_confused wrote:
| Yes. The crap people publish even with peer review is bad
| enough. A few subfields in CS have been bitten a couple of times
| by relying on non-peer-reviewed ecosystems.
| gwerbret wrote:
| Peer review is the backbone of journals, and (some) journals
| serve an absolutely critical function in academia: they
| increase the signal:noise ratio. Most of what is published is
| noise; without the curation provided by (certain) journals,
| anything of significance is very likely to be drowned out.
|
| As a casual example in the biomedical sciences, the _Journal
| of Biological Chemistry_ has an output of ~30,000 pages per
| year, most of which is 'noise'. That's just ONE journal. The
| journal _Cell_, on the other hand, publishes an order of
| magnitude less, most of which is 'signal'.
|
| EDIT: This is not to say the peer review approach doesn't
| need work, and lots of it. The whole current approach to
| research needs an overhaul. I'm just saying it's a bit hasty
| to throw the baby out with the bath water.
| freedomben wrote:
| Peer review is an important signal to potential citers. If
| everyone has to fully read and understand every paper in
| order to (responsibly) cite it, there's gonna be a lot less
| research. Given the exposure of how much bad research there
| is, maybe there does need to be a slowing and focus on
| quality, but I think we still need peer review, although it
| definitely needs to be reformed somehow because it's clearly
| broken.
|
| We need to take a serious look at the incentive structure in
| academia because it's not guaranteeing the scientific results
| that we expected it to. I don't think we should just abandon
| the system though.
| adastra22 wrote:
| No, it is not necessary. Just as it was not necessary for the
| first 300 years of the Royal society's existence. "Peer
| review" used to be just a stand in phrase for the marketplace
| of ideas--seeing how your peers evaluate your work in a
| public process of getting published and sending letters to
| the editor. Only in living memory has it been warped into
| this pre-publication gatekeeping process.
| daft_pink wrote:
| I would worry that preprint review would turn into another front
| of the culture wars for certain fields and science by consensus.
| zer00eyz wrote:
| The review process is broken.
|
| Reviewing preprint papers isn't any more effective than reviewing
| printed papers. Review and publication are a meaningless bar.
|
| Publish -> people find insight and try to pick it apart -> You
| either have flaws or you get reproduced... Only then should your
| paper be of any worth to be quoted or cited.
|
| The current system is glad-handing, intellectual protectionism
| and masturbation.
|
| Academia has only itself to blame for this, and they are
| apparently unwilling to fix it.
| bumby wrote:
| I think we need to find ways of giving status to reproducing
| studies. Maybe not as much as novelty, but definitely something
| greater than what it is currently.
| glial wrote:
| IMO reproducing findings could/should be a mandatory part of
| PhD training.
| ajmurmann wrote:
| "Publish -> people find insight and try to pick it apart -> You
| either have flaws or you get reproduced... Only then should
| your paper be of any worth to be quoted or cited."
|
| This is already how it's supposed to work. The review before
| publication is a fairly superficial check that just confirms
| that what you describe follows basic scientific practices.
| There is no validation of the actual research. A proper
| reproduction is what's supposed to come after publication.
|
| IMO the real problems are that a) there isn't much glamour or
| funding for reproducing others' studies and b) "science
| journalists", university PR departments, and now in part people
| on social media are picking up research before people in the
| field have looked at it, or misrepresent it. Suddenly the
| audience is a lot of folks who were never the intended audience
| of the process.
| zer00eyz wrote:
| There is a pretty simple way to change all of that.
|
| Academic standards: You are no longer allowed to cite a non-
| reproduced paper in yours.
|
| Citations matter as much as publication; put the hurdle there
| and all of a sudden things will change real quick.
| freedomben wrote:
| > _You are no longer allowed to cite a non-reproduced
| paper in yours._
|
| I fully agree that in an ideal world, that would be the
| case. But some reproductions (especially now with machine
| learning) could cost millions of dollars and years to do. I
| don't think that's a reasonable or feasible thing to
| require.
| BlueTemplar wrote:
| Well, then, at least there should be pressure around
| having to mention this: a non-replicated study (by an
| independent group) is after all inherently suspect.
| ramblenode wrote:
| > Academic standards: You are no longer allowed to cite a
| non-reproduced paper in yours.
|
| > Citations matter as much as publication; put the hurdle
| there and all of a sudden things will change real quick.
|
| The reproducibility crisis is just a symptom of the
| publish-or-perish culture that pushes academics to churn
| out bad research. Academia already over-emphasizes
| publishing positive results at the expense of studying
| important questions. Your solution would further
| incentivize low risk, low impact research that we have too
| much of.
|
| Aside from that, there are a lot of edge cases that would
| make this difficult. If I do five studies that are
| modifications of each other, and all show the same basic
| effect, but I publish it as one paper, does that count as
| being reproduced? What if a result has only ever been
| reproduced within a single research group? Does the Higgs
| boson need to be confirmed at a collider outside the
| LHC?
| rhelz wrote:
| One of my profs once remarked, "All of science is done on a
| volunteer basis." He was talking about peer review, which--as
| crucial as it is--is not something you get paid for.
|
| Writing a review--a good review--is 1) hard work, 2) can only be
| done by somebody who has spent years in postgraduate study, and
| 3) takes up a lot of time, which has many other demands on it.
|
| The solution? It's obvious. In a free market, how do you signal if
| you want more of something to be produced? Bueller? Bueller?
|
| Yeah, that's right, you gotta pay for it. This cost should just
| be estimated and factored into the original grant proposals--if
| it's not worth $5k or $10k to fund a round of peer review, and
| perhaps also funds to run confirming experiments--well, then it's
| probably not research worth doing in the first place.
|
| So yeah, write up the grants to include the actual full cost of
| doing and publishing the research. It would be a great way for
| starving grad students to earn some coin, and the experience
| gained in running confirming experiments would be invaluable to
| help them get that R.A. position or postdoc.
| bumby wrote:
| I don't disagree with the proposed idea of paying for review,
| but I would prefer also to have guardrails to ensure a _good_
| review. I would be willing to pay for a good review because it
| makes the paper/investigation better. But let's face it: under
| the current paradigm, there are also a lot of really bad
| reviews. It's one thing when it's apparent that a reviewer
| doesn't understand something because of a lack of clarity in
| the writing. But it's also extremely frustrating when it's
| obvious the reviewer hasn't even bothered to carefully read the
| manuscript.
|
| Under a payment paradigm, we need mechanisms to limit the
| incentive to maximize throughput as a means of getting the most
| pay and instead maximize the review quality. I assume there'd
| be good ways to do that, but I don't know what those would be.
| heikkilevanto wrote:
| So, we just need a meta-review to review the reviews. At a
| cost, of course. And in order to keep that honest, we need a
| meta-meta-review...
| notatoad wrote:
| we need twitter community notes for science
| currymj wrote:
| many CS conferences have something literally called a
| "meta-review" and then there are further senior people who
| read and oversee the meta-reviews. it stops there though.
| nitwit005 wrote:
| Unfortunately, what you'll actually incentivize is spending as
| little effort as possible to get the money paid out.
| matwood wrote:
| As opposed to now, where it appears lots of science is peer
| reviewed with all the problems found later?
| nitwit005 wrote:
| Replacing a broken system, with a broken and also expensive
| system, does not sound like an improvement.
| andyferris wrote:
| It might cost $100k - $1m (or more) to repeat the work and
| run replications. The $5k - $10k mentioned earlier would be
| enough to allow time for reading and thinking and checking some
| stuff on pen-and-paper.
| michaelt wrote:
| _> The $5k - $10k mentioned earlier would be enough to
| allow time for reading and thinking and checking some stuff
| on pen-and-paper._
|
| The average postdoc in the US earns $32.81/hour,
| according to the results of some googling. Even taking
| overheads into account, $5k should cover more than a
| week's full time work.
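|
| Rough arithmetic behind that claim (the 2x overhead multiplier
| is my own guess, not a quoted figure):
|
| hourly = 32.81       # average US postdoc pay cited above
| loaded = hourly * 2  # assumed fully-loaded cost (benefits, overhead)
| weekly = loaded * 40
| print(f"~${weekly:,.0f}/week loaded; $5,000 buys ~{5000 / weekly:.1f} weeks")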
| ramblenode wrote:
| Is that different from any other job?
| aashiq wrote:
| It depends! There should probably also be a process by which
| reviewers themselves get graded. Then paper writers can
| choose whether to splurge for fewer really amazing reviewers,
| or a larger quantity of mediocre reviewers. Also, readers
| will be able to see the quality of the reviewers that looked
| at a preprint.
| BlueTemplar wrote:
| How do you have all three of anonymous authors, anonymous
| reviewers, and reviewer ratings?
| hiddencost wrote:
| I hear this sentiment a lot.
|
| There was a time when academia was intensely driven by
| culture. People did stuff because they cared about the
| community surviving.
|
| It is, in fact, possible to choose the "cooperate" corner
| of the prisoner's dilemma, but it takes a lot of work, and is
| vulnerable to being wrecked by people who don't care /
| believe in the goals.
| eviks wrote:
| The outcome would most likely be exactly this: "it's probably
| research not worth doing in the first place" (and why would you
| want to signal you want more busy work?)
| geysersam wrote:
| Or, research just gets published online free of charge for
| everyone to access, and important work will prove itself over
| time by being discovered, discussed and by becoming
| influential.
|
| If anyone wants to change something about an article (the
| writing, the structure of the paper, or anything else a
| reviewer might want to edit) they can just do it and publish a
| new version. If people like the new version better, good, if
| they don't they can read the old version.
|
| Peer review as a filter for publishing is terrible in a time
| when making a few megabytes of text and images accessible is
| literally free. If anyone wants to run a content aggregator
| (aka a Journal) they just do it. If they want to change
| something about the article before it's approved for the
| aggregator they can contact the authors or ask someone to
| review it or whatever.
|
| Just make it accessible.
| rhelz wrote:
| Whether or not your papers pass peer review--and which
| journals they are published in--are important criteria for
| hiring, tenure, whether your grants are funded, etc.
|
| If you get rid of peer review, it's not science. It's just a
| vanity press.
| ska wrote:
| > Just make it accessible.
|
| We already have that system, it's called the internet.
| Nothing stops you or I from putting our ideas online for all
| to read, comment on, update, etc.
|
| The role of the publishers, flawed as it is, has little to do
| with the physical cost of producing or providing an article,
| and is filling (one can argue badly) a role in curation and
| archival that is clearly needed. Any proposal to change the
| system really has to address how those roles are met --
| hopefully cheaper than currently, but definitely not more
| expensive, because mostly people don't get paid (in $ or
| career or anything) now -- or it has to provide a funding
| source for it.
|
| I don't really see how your outlined scenario addresses that,
| at least not in a way that's functionally different than
| today. Can you expand?
| frozenport wrote:
| They should just open a comment field on arXiv.
|
| Then I can anonymously critique the paper without fear of the
| authors rejecting my career-making Nature paper.
| patel011393 wrote:
| I know someone working on a plugin for that currently.
| _delirium wrote:
| Does openreview.net not count as preprint review in the sense the
| author means? It has substantial uptake in computer science.
| dbingham wrote:
| Nice catch! I was going from the data shared in that paper[1]
| and didn't notice that it excluded OpenReview.net (which I'm
| aware of). The paper got their data[2, 3] from Sciety and it
| looks like OpenReview isn't included in Sciety's data.
|
| It may have been excluded because OpenReview (as I understand
| it) seems to be primarily used to provide open review of
| conference proceedings, which I suspect the article puts in a
| different category than generally shared preprints.
|
| But it would be worth analyzing OpenReview's uptake separately
| and thinking about what it's doing differently!
|
| [1]
| https://journals.plos.org/plosbiology/article?id=10.1371/jou...
|
| [2] https://zenodo.org/records/10070536
|
| [3]
| https://lookerstudio.google.com/u/0/reporting/b09cf3e8-88c7-...
| _delirium wrote:
| I do agree it's a bit different. How close maybe depends on
| what motivates you to be interested in the preprint review
| model in the first place? Could imagine this varies by
| person.
|
| In a certain sense, the entire field of comp sci has become
| reorganized around preprint review. The 100% normal workflow
| now is that you first upload your paper to arXiv, circulate
| it informally, then whenever you want a formal review, submit
| to whatever conference or journal you want. The conferences
| and journals have basically become stamp-of-approval
| providers rather than really "publishers". If they accept it,
| you edit the arXiv entry to upload a v2 camera-ready PDF and
| put the venue's acceptance stamp-of-approval in the comments
| field.
|
| A few reasons this might not fit the vision of preprint
| review, all with different solutions:
|
| 1. The reviews might not be public.
|
| 2. If accepted, it sometimes costs $$ (e.g. NeurIPS has an
| $800 registration fee, and some OA journals charge APCs).
|
| 3. Many of the prestigious review providers mix together two
| different types of review: review for technical quality and
| errors, versus review for perceived importance and impact.
| Some also have quite low acceptance rates (due to either
| prestige reasons or literal capacity constraints).
|
| TMLR [1] might be the closest to addressing all three points,
| and has some similarity to eLife, except that unlike eLife it
| doesn't charge authors. It's essentially an overlay journal
| on openreview.net preprints (covers #1), is platinum OA
| (covers #2), and explicitly excludes "subjective
| significance" as a review criterion (covers #3).
|
| [1] https://jmlr.org/tmlr/
| adastra22 wrote:
| PREPUBLICATION REVIEW IS BAD! STOP TRYING TO REINTRODUCE IT.
|
| Sorry for the all caps. Publishing papers without "peer review"
| isn't some radical new concept--it's how all scientific fields
| operated prior to ca. 1970. That's about when the pace of article
| writing outstripped available pages in journals and this system
| of pre-publication review was adopted and formalized. For the
| first 300 years of science you published papers by sending them
| off as letters to the editor (sometimes via a sponsor if you were
| new to the journal), and they either accepted or rejected them
| as-is.
|
| The idea of having your intellectual competitors review your work
| and potentially sabotage your publication prospects as a standard
| process is a relatively recent addition. And one that has not
| been shown to actually be effective.
|
| The rise of Arxiv is a recognition by researchers that we don't
| need or want that system, and we should do away with it entirely
| in this era of digital print where page counts don't matter. So
| please stop trying to force it back on us!
| freedomben wrote:
| > _The idea of having your intellectual competitors review your
| work and potentially sabotage your publication prospects as a
| standard process is a relatively recent addition. And one that
| has not been shown to actually be effective._
|
| If this is true (and I'm not doubting you, just acknowledging
| that I'm taking your word for it) then why abandon the entire
| system? Why not just roll it back to the state before we added
| the intellectual competitor review?
| frozenport wrote:
| >> Why not just roll it back to the state before we added the
| intellectual competitor review?
|
| Journals don't add much value outside of their peer review.
|
| Most researchers don't care about the paper copies, or
| pagination, or document formatting services provided by
| publishers. Their old paper based distribution channels are
| simply not used.
| adastra22 wrote:
| The prior state was a situation of no pre-publication review
| other than the editorial staff of the journal. We should go
| back to that, yes. By disbanding entirely the "peer review"
| system that currently exists.
| salty_biscuits wrote:
| The whole process comes from a time when publishing was
| expensive, should be cheap as chips now. The system needs a
| rethink so "quality" can somehow bubble up to the surface given
| mass publication is so simple.
| BlueTemplar wrote:
| Also, the author IMHO failed to clearly explain how "preprint
| review" wasn't a contradiction in terms (though they do seem to
| gesture towards the main issue being commercialization of
| journals, in the first post).
|
| In the same vein, the positive mentions of GitHub and an "open
| source platform" (another contradiction in terms) were at first
| red flags in the third article, but at least they then
| mentioned the threat of corporate takeover.
| SkyPuncher wrote:
| I've found the entire peer review concept in academia to be
| extremely odd. I'm not entirely sure what problem it solves. It
| seems like you have one of two types of people reading these
| articles:
|
| * People who are already specialists/familiar enough with
| concepts. They'll either call BS upfront or run experimentation
| themselves to validate results.
|
| * People who aren't specialists and will need to corroborate
| evidence against other sources.
|
| My entire life as a software engineer has been built on blogs,
| forums, and discussion from "random" people doing exactly the
| above.
| nextos wrote:
| You are right. It is often a problem that established
| competitors get to review articles that contradict their work
| and, unsurprisingly, try to sabotage them. Incentives are not
| well aligned.
|
| A good middle ground is something like the non-profit journal
| eLife, where articles are accepted by an editor and published
| before review, then reviewed publicly by selected reviewers.
|
| Very transparent, and it also leaves room for article updates.
| See the whole process here:
| https://elifesciences.org/about/peer-review
| adastra22 wrote:
| That's a better system. But why involve the journal in review
| at all?
|
| Journals should go back to just publishing papers and any
| unsolicited letter-to-the-editor reviews, reproductions, or
| commentary they receive in response. Why add a burden of
| unpaid work reviewing every single paper that comes through?
| nextos wrote:
| I believe eLife may eventually get deeper paid reviews.
| That is a reason to involve the journal in this process.
| The way reviews at eLife work can be seen as solicited
| letter-to-the-editor reviews, as these do not influence the
| outcome. Your article is already published.
| coolhand2120 wrote:
| https://en.wikipedia.org/wiki/Replication_crisis
|
| The reputation of scientific researchers has been greatly
| harmed by the current system. Please, help find a way to fix
| it, or at the very least don't hinder people trying to fix it.
| Thanks to the way we do things now a coin flip is _better_ than
| peer review. Public trust in science is at an all time low. I
| really hope you don't think "this is fine".
| dbingham wrote:
| Hi! Author here.
|
| Preprint review as it is being discussed here is post-
| publication. The preprint is shared first, and review layered
| on top of it later. Click through to the paper[1] I'm
| responding to and give it a read.
|
| But, also, prepublication review doesn't need to be
| "reintroduced". It's still the standard for the vast majority
| of scholarship. By some estimates there are around 5 million
| scholarly papers published per year. There are only about 10 -
| 20 million preprints published _total_ over the past 30 years
| since arXiv's introduction.
|
| There are a bunch of layered institutions and structures that
| are maintaining it as the standard. I don't have data for it to
| hand, but my understanding is that the vast majority of
| preprints go on to be published in a journal with pre-
| publication review. And as far as most of the institutions are
| concerned, papers aren't considered valid until they have
| been published in a journal with prepublication review.
|
| There is a significant movement pushing for the adoption of
| preprint review as an alternative to journal publishing with
| the hope that it can begin a change to this situation.
|
| The idea is that preprint review offers a similar level of
| quality control as journal review (which, most reformers would
| agree is not much) and could theoretically replace it in those
| institutional structures. That would at least invert the
| current process, with papers being shared immediately and
| review coming later after the results were shared openly.
|
| [1]
| https://journals.plos.org/plosbiology/article?id=10.1371/jou...
| ccppurcell wrote:
| Peer review is pretty unpopular round these parts. In
| mathematics/TCS I've had mostly good experiences. Actually most
| of the time the review process improved my papers.
|
| Clearly something is rotten about the way peer review is
| implemented in the empirical sciences. But think of all those
| high profile retractions you read about these days. Usually that
| comes about by a sort of post hoc peer review, not by anything
| resembling market forces.
| YeGoblynQueenne wrote:
| Not to be mean to the HN community but at least a substantial
| minority of people who complain about peer review on here have
| no experience of peer review, even in applied CS and AI and
| machine learning, which are the hot topics today. But they've
| read that peer review is broken and, by god, they'll let the
| world know! For science!
|
| I publish in machine learning and my experience is the same as
| yours: reviews have mainly helped me to improve my papers.
| Though to be fair this is mainly the case in journals; in
| conferences it's true that reviewers will often look for
| reasons to reject and don't try to be constructive (I always
| do; I still find it very hard to reject).
|
| This is the result of the field of AI research having
| experienced a huge explosion of interest, and therefore
| submissions, in the last few years, so that all the conferences
| are creaking under the strain. Most of the new entrants are
| also junior researchers without too much experience -- and that
| is true for both authors and reviewers (who are only invited to
| review after they publish in a venue). So the conferences are a
| bit of a mess at the moment, and the quality of the papers that
| get submitted and published is, overall, low.
|
| But that's not because "peer review is broken", it's because
| too many people start a PhD in machine learning thinking
| they'll get immediately hired by Google, or OpenAI I guess.
| That too shall pass, and then things will calm down.
| milancurcic wrote:
| "True peer review begins after publication." --Eric Weinstein
| cachemoi wrote:
| The solution to the dated model exists: it's git/GitHub. I'm
| trying to build a "git journal" (essentially a GitHub org where
| projects/research gets published, paired with a Substack
| newsletter), details here [0]
|
| Let me know if you have a project you'd like to get on there!
| Here's what it looks like, a paper on directed evolution [1]
|
| [0] https://versioned.science/
| [1] https://github.com/versioned-science/DNA_polymerase_directed...
___________________________________________________________________
(page generated 2024-03-24 23:00 UTC)