[HN Gopher] One of my papers got declined today
       ___________________________________________________________________
        
       One of my papers got declined today
        
       Author : GavCo
       Score  : 801 points
       Date   : 2025-01-01 19:12 UTC (1 day ago)
        
 (HTM) web link (mathstodon.xyz)
 (TXT) w3m dump (mathstodon.xyz)
        
       | remoquete wrote:
       | I find it refreshing when researchers disclose their own
       | failures. Science is made of negative results, errors, and
       | rejections, though it's often characterized in a much different,
       | unrealistic way.
       | 
       | By the way, even though some of you may know about it, here's the
        | link to the Journal of Negative Results:
        | https://www.jnr-eeb.org/index.php/jnr
        
       | amichail wrote:
       | Sure, even top mathematicians have paper rejections.
       | 
       | But I think the more important point is that very few people are
       | capable of publishing papers in top math journals.
        
       | jraph wrote:
       | I was confused by the title because paper rejection is incredibly
       | common in research, but that's the point and one of the goals is
       | to fight imposter syndrome.
       | 
       | It's a good initiative. Next step: everybody realizes that
       | researchers are just random people like everybody. Maybe that
       | could kill any remaining imposter syndrome.
       | 
        | A rejection, although common, is quite tough during your PhD,
        | even ignoring the imposter syndrome, because in a short time
        | you are expected to have a bunch of accepted papers, in
        | prestigious publications if possible. It feels like a rejection
        | slows you down, and the clock is still ticking. If we could
        | kill some of this nefarious system, that'd be good as well.
        
         | bisby wrote:
         | It's especially important coming from someone like Terence Tao.
         | If one of the best and brightest mathematicians out there can
         | get a paper declined, then it can happen to literally anyone.
        
         | arrowsmith wrote:
         | It's noteworthy because it's from Terence Tao, regarded by many
         | as the world's greatest living mathematician.
         | 
         | If you read the full post he's making the exact same point as
         | you: it's common and normal to get a paper rejected even if
         | you're Terence Tao, so don't treat a rejection like the end of
         | the world.
        
           | vitus wrote:
           | > regarded by many as the world's greatest living
           | mathematician.
           | 
           | Oh?
           | 
           | Perelman comes to mind (as the only person who has been
           | eligible to claim one of the Millennium prizes), although he
           | is no longer actively practicing math AFAIK. Of Abel prize
           | winners, Wiles proved Fermat's last theorem, and Szemeredi
           | has a number of number-theoretic and combinatorial
           | contributions.
           | 
            | The recently deceased (past ~10 years) include figures
            | such as John Nash, Grothendieck, and Conway.
           | 
           | Tao is definitely one of the most well-known mathematicians,
           | and he's still got several more decades of accomplishments
           | ahead of him, but I don't know that he rises to "greatest
           | living mathematician" at this point.
           | 
           | That said, I do appreciate that he indicates that even his
           | papers get rejected from time to time.
        
             | xanderlewis wrote:
              | Having been a child prodigy somehow gives one renown (in
              | the wider public consciousness) beyond anything one can
              | achieve as an adult.
        
               | vanderZwan wrote:
                | He's a child prodigy who _didn't_ burn out, which does
                | make him quite rare.
        
               | xanderlewis wrote:
               | Well, true.
        
             | flocciput wrote:
             | To add to your list, you can also find Richard Borcherds
             | teaching math on YouTube.
        
             | kenjackson wrote:
             | He said "regarded by many", which I think is probably an
             | accurate statement.
        
           | jraph wrote:
           | > It's noteworthy because it's from Terence Tao, regarded by
           | many as the world's greatest living mathematician.
           | 
           | I didn't know :-)
           | 
           | > If you read the full post he's making the exact same point
           | as you
           | 
           | Oh yeah, that's because I did read the full post and was
           | summarizing. I should have made this clear.
        
           | motorest wrote:
           | > It's noteworthy because it's from Terence Tao, regarded by
           | many as the world's greatest living mathematician.
           | 
            | I think it's worth clarifying that papers are reviewed
            | under a double-blind peer review process, so who the
            | author is shouldn't be a factor.
            | 
            | Also, the author clarified that the paper was rejected on
            | the grounds that the reviewer felt the topic wasn't a good
            | fit for the journal. This has nothing to do with the
            | quality of the paper, only with upholding editorial
            | guidelines on the subject. Trying to file a document in
            | the wrong section and being gently nudged to file it under
            | another section hardly matches the definition of a
            | rejection that leads authors to question their life
            | choices.
        
             | ccppurcell wrote:
             | Just a quick point: double blind is not common for
             | mathematics journals, at least in my area. Some TCS
             | conferences have started it.
        
         | jonathan_landy wrote:
         | I guess it is nice to know that he is also not perfect. But
         | it's still the case that his accomplishments outshine my own,
         | so my imposter syndrome remains intact.
        
         | 2-3-7-43-1807 wrote:
         | terence tao is suffering from imposter syndrome? if anything,
         | imposter syndrome is suffering from terence tao ... do you
         | maybe not know who terence tao is?
        
           | danielmarkbruce wrote:
           | It's terence tao trying to help others with imposter
           | syndrome. It seems quite unlikely he himself would suffer
           | from it...
        
             | NooneAtAll3 wrote:
             | on the contrary, that's exactly what he states in comment
             | discussion below the thread
             | 
             | having higher reputation means higher responsibility not to
             | crush someone with it in the sub-fields you aren't as
              | proficient in
        
               | danielmarkbruce wrote:
                | yeah... he's telling a white lie of sorts... reread the
               | comment. That doesn't sound like someone lacking self
               | confidence. "the other members _collectively_ ... ". He's
                | basically saying "get the world-leading experts in some
               | area of math that I'm sort of interested in in a room and
               | between them they'll know more than I do myself". Lol.
               | And, that's happened "several times".
               | 
               | I'm sure he's a genuinely nice, friendly person trying to
               | do the right thing. But he is also likely confident as
               | hell and never felt like an imposter anywhere.
        
               | jraph wrote:
               | I don't think it's a white lie. Whether he has imposter
               | syndrome is beside the point. It shows he has sympathy
                | for his colleagues who might have it. Maybe he himself
                | had it before, which would let him understand even
                | better what it is; and if he doesn't anymore, that
                | would motivate him to make this point.
               | 
                | The point he is making is all the more convincing
                | given that he is seen as very good, whether he had
                | imposter syndrome or not.
        
               | danielmarkbruce wrote:
               | Yes, that's the point.
        
         | firesteelrain wrote:
          | At my college, you only need one paper, not many
        
           | jraph wrote:
            | In mine, I don't think there was a hard requirement, but
            | your PhD would be seen as weak with zero papers, and only
            | one, while common enough I guess, would still be seen as a
            | bit weak. It's not very important for the grade, but it's
            | important for what follows: your career, including getting
            | a position.
        
         | TheRealPomax wrote:
         | I'd counter the "like everybody": they're not. They spent a
         | decade or more focused on honing their skills and deepening
         | their knowledge to become experts in their subfield, and
         | sometimes even entire fields. They are very much not random
         | people like everybody in this context.
        
       | tinktank wrote:
          | I wish I had an IQ that high...
        
         | revskill wrote:
         | IQ means interesting questions.
        
         | aleph_minus_one wrote:
         | If you want to become smarter in math, read and attempt to
         | understand brutally hard math papers and textbooks. Torture
         | yourself harder than any time before in your life. :-)
        
       | abetusk wrote:
       | The second post in that thread is gold:
       | 
       | """
       | 
       | ... I once almost solved a conjecture, establishing the result
       | with an "epsilon loss" in a key parameter. We submitted to a
       | highly reputable journal, but it was rejected on the grounds that
       | it did not resolve the full conjecture. So we submitted
       | elsewhere, and the paper was accepted.
       | 
       | The following year, we managed to finally prove the full
       | conjecture without the epsilon loss, and decided to try
       | submitting to the highly reputable journal again. This time, the
       | paper was rejected for only being an epsilon improvement over the
       | previous literature!
       | 
       | ...
       | 
       | """
        
         | YouWhy wrote:
          | While I'm not a mathematician, I think such an attitude on
          | the part of the journal does not encourage healthy community
          | dynamics.
         | 
         | Instead of allowing the community to join forces by breaking up
         | a larger problem into pieces, it encourages siloing and camper
         | mentality.
        
           | abetusk wrote:
           | I agree. This is also a lack of effort on the journal's part
           | to set expectations of what the reviewers should be looking
           | for in an accepted paper.
           | 
            | In the journal's defense though, what most likely happened
            | is that the reviewers were different between submissions
            | and they didn't know about the context. Ultimately, I
            | think, this type of rejection comes down mostly to the
            | reviewers' discretion, and that can lead to this type of
            | situation.
           | 
           | I cut off the rest of the post but Tao finished it with this:
           | 
           | """
           | 
           | ... Being an editor myself, and having had to decline some
           | decent submissions for a variety of reasons, I find it best
           | not to take these sorts of rejections personally,
           | 
           | ...
           | 
           | """
        
       | asah wrote:
       | Non-zero failure rate is indeed often optimal because it provides
       | valuable feedback toward finding the optimal horizon for various
       | metrics, e.g. speed, quality, LPU[1], etc.
       | 
       | That said, given the labor involved in academic publishing and
       | review, the optimal rejection rate should be quite low, i.e. find
       | a lower cost way to pre-filter papers. OTOH, the reviewers may
       | get value from rejected papers...
       | 
       | [1] least publishable unit
        
       | dwaltrip wrote:
       | Hilarious irony:
       | 
       | > With hindsight, some of my past rejections have become amusing.
       | With a coauthor, I once almost solved a conjecture, establishing
       | the result with an "epsilon loss" in a key parameter. We
       | submitted to a highly reputable journal, but it was rejected on
       | the grounds that it did not resolve the full conjecture. So we
       | submitted elsewhere, and the paper was accepted.
       | 
       | > The following year, we managed to finally prove the full
       | conjecture without the epsilon loss, and decided to try
       | submitting to the highly reputable journal again. This time, the
       | paper was rejected for only being an epsilon improvement over the
       | previous literature!
        
         | bradleyjg wrote:
         | This seems reasonable?
         | 
         | Suppose the full result is worth 7 impact points, which is
         | broken up into 5 points for the partial result and 2 points for
         | the fix. The journal has a threshold of 6 points for
         | publication.
         | 
         | Had the authors held the paper until they had the full result,
         | the journal would have published it, but neither part was
         | significant enough.
         | 
         | Scholarship is better off for them not having done so, because
         | someone else might have gotten the fix, but the journal seems
         | to have acted reasonably.
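          | 
          | Spelled out as a minimal sketch (using the made-up point
          | values and threshold from above, just to make the
          | arithmetic concrete):
          | 
          |     def journal_accepts(impact_points, threshold=6):
          |         # the journal publishes only above its threshold
          |         return impact_points >= threshold
          | 
          |     partial, fix = 5, 2
          |     journal_accepts(partial)        # False: 5 < 6
          |     journal_accepts(fix)            # False: 2 < 6
          |     journal_accepts(partial + fix)  # True: 7 >= 6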
        
           | remus wrote:
           | > This seems reasonable?
           | 
           | In some sense, but it does feel like the journal is missing
           | the bigger picture somewhat. Say the two papers are A and B,
           | and we have A + B = C. The journal is saying they'll publish
           | C, but not A and B!
        
             | cubefox wrote:
             | ... A and B separately.
        
             | Nevermark wrote:
             | How many step papers before a keystone paper seems
             | reasonable to you?
             | 
             | I suspect readers don't find it as exciting to read partial
             | result papers. Unless there is an open invitation to
             | compete on its completion, which would have a purpose and
             | be fun. If papers are not page turners, then the journal is
             | going to have a hard time keeping subscribers.
             | 
             | On the other hand, publishing a proof of a Millennium
              | Problem as several installments is probably a fantastic
             | idea. Time to absorb each contributing result. And the
             | suspense!
             | 
             | Then republish the collected papers as a signed special
             | leather limited series edition. Easton, get on this!
        
               | slow_typist wrote:
               | Publishing partial results is always an invitation to
               | compete in the completion, unless the completion is
               | dependent on special lab capabilities which need time and
               | money to acquire. There is no need to literally invite
               | anyone.
        
               | Nevermark wrote:
               | I meant if the editors found the paper's problem and
               | progress especially worthy of a competition.
        
               | remus wrote:
               | > I suspect readers don't find it as exciting to read
               | partial result papers. Unless there is an open invitation
               | to compete on its completion, which would have a purpose
               | and be fun. If papers are not page turners, then the
               | journal is going to have a hard time keeping subscribers.
               | 
               | Yeah I agree, a partial result is never going to be as
               | exciting as a full solution to a major problem. Thinking
               | on it a little more, it seems more of a shame the journal
               | wasn't willing to publish the first part as that sounds
               | like it was the bulk of the work towards the end result.
               | 
               | I quite like that he went to publish a less-than-perfect
               | result, rather than sitting on it in the hopes of making
               | the final improvement. That seems in the spirit of
               | collaboration and advancing science, whereas the journal
               | rejecting the paper because it's 98% of the problem
               | rather than the full thing seems a shame.
               | 
                | Having said that, I guess as a journal editor you have
                | to make these calls all the time, and I'm sure every
                | author pitches their work in the best light ("There's
                | a breakthrough just around the corner...") and I'm
                | sure there are plenty of ideas that turn out to be
                | dead ends.
        
           | Ar-Curunir wrote:
           | I don't think that's a useful way to think about this,
            | especially when there's so little information provided
            | about it. Reviewing is a capricious process.
        
           | tux3 wrote:
           | If people thought this way - internalizing this publishing
           | point idea - it would incentivize sitting on your incremental
           | results, fiercely keeping them secret if and until you can
           | prove the whole bigger result by yourself. However long that
           | might take.
           | 
            | If a series of incremental results were as prestigious as
            | holding off to bundle them, people would have reason to
            | collaborate and complete each other's work more eagerly.
            | Delaying an almost complete result for a year so that a
            | journal will think it has enough impact points seems
            | straightforwardly net bad; it slows down both progress and
            | collaboration.
        
             | bradleyjg wrote:
              | The big question here is whether journal space is a
              | limited resource. Obviously it was at one point.
             | 
             | Supposing it is, you have to trade off publishing these
             | incremental results against publishing someone else's
             | complete result.
             | 
              | What if it had taken ten papers to get there instead of
              | two? For a sufficiently important problem, sure, but the
              | interesting case is a problem that's just barely
              | interesting enough to publish complete.
        
               | parpfish wrote:
                | The limiting factor isn't journal space, but attention
                | among the audience. (In theory) the journal's
                | publishing restrictions help to filter and condense
                | information so the audience is maximally informed,
                | given that they will only read a fixed amount.
        
               | btilly wrote:
               | Journal space is not a limited resource. Premium journal
               | space is.
               | 
               | That's because every researcher has a hierarchy of
               | journals that they monitor. Prestigious journals are read
               | by many researchers. So you're essentially competing for
               | access to the limited attention of many researchers.
               | 
               | Conversely, publishing in a premium journal has more
               | value than a regular journal. And the big scientific
               | publishers are therefore in competition to make sure that
               | they own the premium journals. Which they have multiple
               | tricks to ensure.
               | 
               | Interestingly, their tricks only really work in science.
                | That's because in the humanities, it is harder to
                | establish objective opinions about quality. By
                | contrast, everyone in science can agree that _Nature_
                | generally has the best papers. So attempting to raise
                | the price on a prestigious science journal works.
                | Attempting to raise the price on a prestigious
                | humanities journal results in its circulation going
                | down. Which makes it less prestigious.
        
               | waldrews wrote:
               | Space isn't a limited resource, but prestige points are
                | deliberately limited, as a proxy for the publications'
               | competition for attention. We can appreciate the irony,
               | while considering the outcome reasonable - after all, the
               | results weren't kept out of the literature. They just got
               | published with a label that more or less puts them lower
               | in the search ranking for the next mathematician who
               | looks up the topic.
        
             | YetAnotherNick wrote:
              | Two submissions in a medium-reputation journal do not
              | have significantly lower prestige than one in a
              | high-reputation journal.
        
             | chongli wrote:
             | The reasonable thing to do here is to discourage all of
             | your collaborators from ever submitting anything to that
             | journal again. Work with your team, submit incremental
             | results to journals who will accept them, and let the picky
             | journal suffer a loss of reputation from not featuring some
             | of the top researchers in the field.
        
             | gwerbret wrote:
             | > If people thought this way - internalizing this
             | publishing point idea - it would incentivize sitting on
             | your incremental results, fiercely keeping them secret if
             | and until you can prove the whole bigger result by
             | yourself. However long that might take.
             | 
             | This is exactly what people think, and exactly what
             | happens, especially in winner-takes-all situations. You end
             | up with an interesting tension between how long you can
             | wait to build your story, and how long until someone else
             | publishes the same findings and takes all the credit.
             | 
             | A classic example in physics involves the discovery of the
              | J/psi particle [0]. Samuel Ting's group at MIT discovered it
             | first (chronologically) but Ting decided he needed time to
             | flesh out the findings, and so sat on the discovery and
             | kept it quiet. Meanwhile, Burton Richter's group at
             | Stanford also happened upon the discovery, but they were
             | less inclined to be quiet. Ting found out, and (in a spirit
             | of collaboration) both groups submitted their papers for
             | publication at the same time, and were published in the
             | same issue of _Physical Review Letters_.
             | 
             | They both won the Nobel 2 years later.
             | 
             | 0: https://en.wikipedia.org/wiki/J/psi_meson
        
               | jvanderbot wrote:
               | Wait, how did they both know that they both discovered
               | it, but only after they had both discovered it?
        
               | davrosthedalek wrote:
               | People talk. The field isn't that big.
        
               | ahartmetz wrote:
               | They got an optimal result in that case, isn't that nice.
        
             | JJMcJ wrote:
             | Gauss did something along these lines and held back
             | mathematical progress by decades.
        
               | lupire wrote:
                | Gauss had plenty of room for slack, giving people time
                | to catch up on his work.
               | 
               | Every night Gauss went to sleep, mathematics was held
               | back a week.
        
             | slow_typist wrote:
             | Don't know much about publishing in maths but in some
             | disciplines it is clearly incentivised to create the
             | biggest possible number of papers out of a single research
             | project, leading automatically to incremental publishing of
             | results. I call it atomic publishing (from Greek atomos -
             | indivisible) since such a paper contains only one result
             | that cannot be split up anymore.
        
               | hanche wrote:
               | Or cheese slicer publishing, as you are selling your
               | cheese one slice at a time. The practice is usually
               | frowned upon.
        
               | lupire wrote:
               | Andrew Wiles spent 6 years working on 1 paper, and then
               | another year working on a minor follow-up.
               | 
                | https://en.m.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27...
        
               | dataflow wrote:
               | I thought this was called salami slicing in publication.
        
             | SoftTalker wrote:
             | Science is almost all incremental results. There's far more
             | incentive to get published now than there is to "sit on" an
             | incremental result hoping to add to it to make a bigger
             | splash.
        
             | jvanderbot wrote:
             | Hyper focusing on a single journal publication is going to
              | lead to absurdities like this. A researcher is judged by
              | the total delta of his improvements, at least by his
              | peers and future humanity (the sum of all points, not
              | the max).
        
             | bennythomsson wrote:
             | To supply a counter viewpoint here... The opposite is the
             | "least publishable unit" which leads to loads and loads of
             | almost-nothing results flooding the journals and other
             | publication outlets. It would be hard to keep up with all
             | that if there wasn't a reasonable threshold. If anything
             | then I find that threshold too low currently, rather than
             | too high. The "publish or perish" principle also pushes
             | people that way.
        
               | lupire wrote:
               | That's much less of a problem than the fact that papers
               | are such poor media for sharing knowledge. They are
               | published too slowly to be immediately useful versus just
               | a quick chat, and simultaneously written in too rushed a
               | way to comprehensively educate people on progress in the
               | field.
        
               | ahartmetz wrote:
               | The educational and editorial quality of papers from
               | before 1980 or so beats just about anything published
               | today. That is what publish or perish - impact factor -
               | smallest publishable unit culture did.
        
               | bennythomsson wrote:
               | > versus just a quick chat,
               | 
                | Everybody is free to keep a blog for this kind of
                | informal chat/brainstorming communication. Paper
                | publications should be well-written, structured,
                | thought-through results that make it worthwhile for
                | the reader to spend their time. Anything else belongs
                | in a blog post.
        
             | krick wrote:
              | It is easy to defend either side of the argument by
              | inflating the "pitfalls of the other approach" ad
              | absurdum. This is silly. Obviously, balance is the key,
              | as always.
             | 
             | Instead, we should look at which side the, uh, _industry_
             | currently tends to err. And this is definitely not the
             | "sitting on your incremental results" side. The current
             | motto of academia is to publish more. It doesn't matter if
             | your papers are crap, it doesn't matter if you already have
             | significant results and are working on something big, you
             | have to publish to keep your position. How many crappy
             | papers you release is a KPI of academia.
             | 
              | I mean, I can imagine a world where it would have been a
             | good idea. I think it's a better world, where science
             | journals don't exist. Instead, anybody can put any crap on
             | ~arxiv.org~ Sci-Hub and anybody can leave comments,
             | upvote/downvote stuff, papers have actual links and all
             | other modern social network mechanics up to the point you
             | can have a feed of most interesting new papers tailored
             | specially for you. This is open-source, non-profit, 1/1000
             | of what universities used to pay for journal subscriptions
             | is used to maintain the servers. Most importantly, because
             | of some nice search screens or whatever the paper's
             | metadata becomes more important than the paper itself, and
              | in the end we are able to assign a simple 10-word
              | summary of what the current community consensus on the
              | paper is: if it
             | proves anything, "almost proves" anything, has been 10
              | times disproved, 20 research teams failed to reproduce
              | the results, or 100 people (see names in the popup) tried to
             | read and failed to understand this gibberish. Nothing gets
             | retracted, ever.
             | 
             | Then it would be great. But as things are and all these
             | "highly reputable journals" keep being a plague of society,
             | it is actually kinda nice that somebody encourages you to
             | finish your stuff before publishing.
             | 
              | Now, should this paper of Tao's have been rejected? I
              | don't know; I think not. Especially the second one. But
              | it's somewhat refreshing.
        
             | Too wrote:
             | Academic science discovers continuous integration.
             | 
             | In the software world, it's often desired to have a steady
              | stream of small, individually reviewable commits that
              | each deliver an incremental set of value.
             | 
             | Dropping a 20000 files changed bomb "Complete rewrite of
             | linux kernel audio subsystem" is not seen as prestigious.
             | Repeated, gradual contributions and involvement in the
             | community is.
        
           | pinkmuffinere wrote:
           | I agree this is reasonable from the individual publisher
           | standpoint. I once received feedback from a reviewer that I
           | was "searching for the minimum publishable unit", and in some
           | sense the reviewer was right -- as soon as I thought the
           | result could be published I started working towards the
           | publication. A publisher can reasonably resist these kinds of
           | papers, as you're pointing out.
           | 
           | I think the impact to scholarship in general is less clear.
           | Do you immediately publish once you get a "big enough"
           | result, so that others can build off of it? Or does this
           | needlessly clutter the field with publications? There's
           | probably some optimal balance, but I don't think the right
           | balance is immediately clear.
        
             | nextn wrote:
             | Why would publishing anything new needlessly clutter the
             | field?
             | 
                | Discovering something is hard, proving it correct is
                | hard, and writing a paper about it is hard. Why delay
                | all this?
        
               | bumby wrote:
               | Playing devils advocate, there isn't a consensus on what
               | is incremental vs what is derivative. In theory, the
               | latter may not warrant publication because anyone
               | familiar with the state-of-the-art could connect the dots
               | without reading about it in a publication.
        
             | SilasX wrote:
              | Ouch. That would hurt to hear. It's like they're
              | effectively saying, "yeah, _obviously_ you came up with
              | something more significant than this, which you're
              | holding back. No one would be so incapable that _this_
              | was as far as they could take the result!"
        
               | pinkmuffinere wrote:
               | Thankfully the reviewer feedback was of such low quality
               | in general that it had little impact on my feelings,
               | haha. I think that's unfortunately common. My advisor
               | told me "leave some obvious but unimportant mistakes, so
               | they have something to criticize, they can feel good, and
               | move on". I honestly think that was good advice.
        
           | cvoss wrote:
           | The idea that a small number of reviewers can accurately
           | quantify the importance of a paper as some number of "impact
           | points," and the idea that a journal should rely on this
           | number and an arbitrary cut off point to decide publication,
           | are both unreasonable ideas.
           | 
            | The journal may have acted _systematically_, but the
            | system is arbitrary and capricious. Thus, the journal did
            | not act reasonably.
        
           | Arainach wrote:
           | These patterns are ultimately detrimental to team/community
           | building, however.
           | 
           | You see it in software as well: As a manager in calibration
           | meetings, I have repeatedly seen how it is harder to convince
           | a committee to promote/give a high rating to someone with a
           | large pile of crucial but individually small projects
           | delivered than someone with a single large project.
           | 
           | This is discouraging to people whose efforts seem to be
           | unrewarded and creates bad incentives for people to hoard
           | work and avoid sharing until one large impact, and it's
           | disastrous when (as in most software teams) those people
           | don't have significant autonomy over which projects they're
           | assigned.
        
             | mlepath wrote:
             | Hello, fellow Metamate ;)
        
           | saghm wrote:
           | If this was actually how stuff was measured, it might be
           | defensible. I'm having trouble believing that things are
           | actually done this objectively rather than the rejections
           | being somewhat arbitrary. Do you think that results can
           | really be analyzed and compared in this way? How do you know
           | that it's 5 and 2 and not 6 and 1 or 4 and 3, and how do you
           | determine how many points a full result is worth in total?
        
           | Brian_K_White wrote:
           | It's demonstrably (there is one demonstration right there)
           | self-defeating and counter-productive, and so by definition
           | not reasonable.
           | 
           | Each individual step along the way merely has some rationale,
           | but rationales come in the full spectrum of quality.
        
           | omoikane wrote:
           | But proportionally, wouldn't a solution without an epsilon
           | loss be much better than a solution with epsilon?
           | 
            | I am not sure what the exact conjecture the author solved
            | was, but if the epsilon difference is between an
            | approximate solution and an exact solution, and the
            | journal rejected the exact solution because it was "only
            | an epsilon improvement", I might question how reputable
            | that journal really was.
        
           | sunshowers wrote:
           | Given the current incentive scheme in place it's locally
           | reasonable, but the current incentives suck. Is the goal to
           | score the most impact points or to advance our understanding
           | of the field?
        
             | mnky9800n wrote:
             | In my experience, it depends on the scientist. But it's
             | hard to know what an advance is. Like, people long searched
             | for evidence of aether before giving up and accepting that
             | light doesn't need a medium to travel in. Perhaps 100 years
              | from now people will laugh at the "attention is all you
              | need" paper that led to the LLM craze. Who knows. That's
              | why it's
             | important to give space to science. From my understanding
             | Lorenz worked for 5 years without publishing as a research
             | scientist before writing his atmospheric circulation paper.
             | That paper essentially created the field of chaos. Would he
             | be able to do the same today? Maybe? Or maybe counting
             | papers or impact factors or all these other metrics turned
             | science into a game instead of an intellectual pursuit.
             | Shame we cannot ask Lorenz or Maxwell about their times as
             | a scientist. They are dead.
        
         | gxs wrote:
         | Are you sure this wasn't an application to the DMV or an
         | attempt to pull a building permit?
        
         | stevage wrote:
         | It actually seems reasonable for a journal that has limited
          | space and too many submissions. What's the alternative? To
          | accept one or two of the half proofs, and bump one or two
          | other papers in the process?
        
           | jiggawatts wrote:
           | Wow, it's so sad that their budget doesn't stretch to
           | purchasing hard drives with capacities measured in gigabytes.
           | It must be rough having to delete old files from the floppies
           | they're still forced to use in this day and age.
        
             | y1n0 wrote:
             | That logic is absurd. You might as well consider the whole
             | internet a journal and everything is already published, so
             | there is nothing to complain about.
        
               | jiggawatts wrote:
               | It pretty much _is_ the logic -- except replace digital
               | media with paper.
               | 
               | It's also "why" research papers can't have color pictures
               | or tables of raw data -- because they're expensive to
               | print.
               | 
               | Scientists internalised their limitations and treat these
               | as virtues now.
               | 
               | Limited space in printing means you have to "get in", and
               | that exclusivity has a cachet. They also now advise each
               | other that photos are "not real science" (too much
               | color!) and raw data shouldn't be published at all.
               | 
               | I was making a joke to highlight how inane this is in an
               | era where I can keep every paper ever published on one
               | hard drive.
               | 
               | The same people that complain about negative results or
               | reproductions not getting published will defend these
               | limitations to the death.
        
               | stevage wrote:
               | Just because the storage is free doesn't mean there's no
               | cost. It costs everyone time to read: the editorial
               | staff, people who subscribe to the journal, etc. It costs
               | copyediting time. More content creates more work.
        
         | JJMcJ wrote:
         | Do Reddit mods also edit math journals?
        
         | bumby wrote:
         | A lot of the replies make it seem like there is some great
         | over-arching coordination and intent between subsequent
         | submissions, but I'll offer up an alternative explanation:
         | sometimes the reviewer selection is an utter crap shoot. Just
         | because the first set of reviewers may offer a justification
         | for rejection, it may be completely unrelated to the rationale
         | of a different set of reviewers. Reviewers are human and bring
         | all kinds of biases and perspectives into the process.
         | 
         | It's frustrating but the result of a somewhat haphazard
          | process. It's also not uncommon to get conflicting comments
          | within the same review cycle. Some of this may be attributed
          | to a lack
         | of clear communication by the author. But on occasion, it leads
         | me to believe many journals don't take a lot of time selecting
         | appropriate reviewers and settle for the first few that agree
         | to review.
        
           | grepLeigh wrote:
           | What's the compensation scheme for reviewers?
           | 
           | Are there any mechanisms to balance out the "race to the
           | bottom" observed in other types of academic compensation?
           | e.g. increase of adjunct/gig work replacing full-time
           | professorship.
           | 
           | Do universities require staff to perform a certain number of
           | reviews in academic journals?
        
             | acomjean wrote:
              | I know from some of my peers who reviewed biology
              | (genetics) papers that they weren't compensated.
             | 
             | I was approached to review something for no compensation as
             | well, but I was a bad fit.
        
             | hanche wrote:
             | Normally, referees are unpaid. You're just supposed to do
             | your share of referee work. And then the publisher sells
             | the fruits of all that work (research and refereeing) back
             | to universities at a steep price. Academic publishing is
             | one of the most profitable businesses on the planet! But
              | universities and academics are fighting back. Have been for
             | a few years, but the fight is not yet over.
        
               | throwaway2037 wrote:
               | If unis "win", what is the likely outcome?
        
               | bumby wrote:
               | More/easier/cheaper dissemination of research.
        
             | paulpauper wrote:
             | It's implicitly understood that volunteer work makes the
             | publishing process 'work'. It's supposed to be a level
             | playing field where money does not matter.
        
             | davrosthedalek wrote:
             | Typically, at least in physics (but as far as I know in all
             | sciences), it's not compensated, and the reviewers are
             | anonymous. Some journals try to change this, with some
             | "reviewer coins", or Nature, which now publishes reviewer
             | names if a paper is accepted and if the reviewer agrees. I
             | think these are bad ideas.
             | 
             | Professors are expected to review by their employer,
             | typically, and it's a (very small) part of the tenure
             | process.
        
             | tokinonagare wrote:
              | I don't think it's a money problem. It's more like a
              | framing issue, with some reviewers being too
              | narrow-minded, or lacking background knowledge on the
              | topic of the paper. It's not uncommon to have a full lab
              | with people focusing on very different things; when you
              | look at the details, the exact researchers' interests
              | don't overlap too much.
        
             | canjobear wrote:
             | There is no compensation for reviewers, and usually no
             | compensation for editors. It's effectively volunteer work.
             | I agree to review a paper if it seems interesting to me and
             | I want to effectively force myself to read it a lot more
             | carefully than normal. It's hard work, especially if there
             | is a problem with the paper, because you have to dig out
             | the problem and explain it clearly. An academic could
             | refuse to do any reviews with essentially no formal
             | consequences, although they'd get a reputation as a "bad
             | citizen" of some kind.
        
             | SJC_Hacker wrote:
             | > Do universities require staff to perform a certain number
             | of reviews in academic journals?
             | 
              | No. Reviewers mostly do it because it's expected of them,
             | and they want to publish their own papers so they can get
             | grants
             | 
             | In the end, the university only cares about the grant
             | (money), because they get a cut - somewhere between 30-70%
              | depending on the institution/field - for "overhead"
             | 
              | It's like the mafia - everyone has a boss they kick up to.
             | 
              | My old boss (PI on an R01) explained it like this:
             | 
             | Ideas -> Grant -> Money -> Equipment/Personnel ->
             | Experiments -> Data -> Paper -> Submit/Review/Publish
             | (hopefully) -> Ideas -> Grant
             | 
              | If you don't review, go to conferences, etc., it's much
              | less likely your own papers will get published, and you
              | won't get approved for grants.
             | 
              | Sadly there is still a bit of a "junior high popularity
              | contest", scratch-my-back-I'll-scratch-yours dynamic
              | present in even "highly respected" science journals.
             | 
             | I hear this from basically every scientist I've known. Even
             | successful ones - not just the marginal ones.
        
               | davrosthedalek wrote:
                | While most of what you write is true to some extent, I do
               | not see how reviewing will get your paper published,
               | except maybe for the cases the authors can guess the
               | reviewer. It's anonymous normally.
        
               | SJC_Hacker wrote:
               | The editor does though, they all know each other. They
               | would know who's not refereeing - and word gets around.
        
             | jasonfarnon wrote:
              | > Do universities require staff to perform a certain
              | number of reviews in academic journals?
             | 
             | Depends on what you mean by "require". At most research
             | universities it is a plus when reviewing tenureship files,
             | bonuses, etc. It is a sign that someone cares about your
             | work, and the quality of the journal seeking your review
              | matters. If it were otherwise, faculty wouldn't list the
              | journals they have reviewed for on their CVs. If no one
              | would ever find out about a reviewer's efforts, e.g. if
              | the process were double blind to everyone involved, the
              | setup wouldn't work.
        
           | hanche wrote:
           | > sometimes the reviewer selection is an utter crap shoot
           | 
           | Indeed, but when someone of Tao's caliber submits a paper,
           | any editor would (should) make an extra effort to get the
           | very best researchers to referee the paper.
        
             | httpsterio wrote:
              | depending on the publication, the reviewers might not
              | even know who the authors are.
        
               | sharth wrote:
               | But the journal editor should.
        
             | crote wrote:
              | But isn't that _exactly_ why the submission should be
              | anonymous to the reviewer? It's science; the paper
              | should speak for itself. You don't want a reviewer to be
              | biased by the previous accomplishments of the author. An
              | absolute nobody can make groundbreaking and unexpected
              | discoveries, and a Nobel prize winner can make stupid
              | mistakes.
        
               | hoten wrote:
               | The reviewer wouldn't need to know, just the one
               | coordinating who should review what.
        
               | sokoloff wrote:
                | Inherent in the editor trying to "get the very best
                | researchers to [review] the paper" is likely to be a
                | leak of signal. (My spouse was a scientific journal
                | editor for years; reviewers decline to review for any
                | number of reasons, often just being too busy, and the
                | same reviewer is often asked multiple times per year.
                | Taking the extra effort to say "but _this specific
                | paper_ is from a really respected author" would be
                | bad, but so would "but please make time to review
                | _this specific paper_ for reasons that I can't tell
                | you".)
        
               | bumby wrote:
               | I didn't read the comment to mean the editor would
               | explicitly signal anything was noteworthy about the
               | paper, but rather they would select referees from a
               | specific pool of experts. From that standpoint, the
               | referee would have no insight into whether it was
               | anything special (and they couldn't tell if the other
               | referees were of distinction either).
        
               | sokoloff wrote:
               | The editor is _already_ selecting the best matched
               | reviewers though, for any paper they send out for review.
               | 
               | They have more flexibility on how hard they push the
               | reviewer to accept doing the specific review, or for a
               | specific timeline, but they still get declines from some
               | reviewers on some papers.
        
               | bumby wrote:
               | I know that's the ideal but my original post ends with
               | some skepticism at this claim. I've had more than a few
               | come across my desk that are a poor fit. I try to be
               | honest with the editors about why I reject the chance to
               | review them. If I witness it more than a few times, they
                | obviously aren't being as judicious in their assignments
               | as the ideal assumes.
        
               | wslh wrote:
               | When submitting papers to high-profile journals, the
               | expectations are very high for all authors. In most
               | cases, the editorial team can determine from the abstract
               | whether the paper is likely to meet their standards for
               | acceptance.
        
               | taneq wrote:
               | Doesn't that just move the source of bias from the
               | reviewer to the coordinator? Some 'nobody' submitting a
               | paper would get a crapshoot reviewer while a recognisable
               | 'somebody' gets a well regarded fair reviewer.
        
               | derefr wrote:
                | Full anonymity _may_ be valuable if the set of a
                | paper's reviewers has to stay fixed throughout the
                | review process.
               | 
               | If peer review worked more like other publication
               | workflows (where documents are handed across multiple
               | teams that review them for different reasons), I think
               | partial anonymity (e.g. rounding authors down to a
               | citation-count number) might actually be useful.
               | 
               | Basically: why can't we treat peer review like the
               | customer service gauntlet?
               | 
               | - Papers must pass all levels from the level they enter
               | up to the final level, to be accepted for publication.
               | 
               | - Papers get triaged to the inbox of a given level based
               | on the citation numbers of the submitter.
               | 
                | - Thus, papers from people with no known previous
                | publications go _first_ to the level-1 reviewers, who
                | exist purely to distinguish and filter off
                | crankery/quackery. They're just there so that everyone
                | else doesn't have to waste time on this. (This level
                | is what non-academic publishing houses call the "slush
                | pile.")
                | _However_, they should be using criteria that give
                | only false-positives [treating bad papers as good] but
                | never false-negatives [treating good papers as bad].
                | The positives pass on to the level-2 ("normal") stream.
               | 
                | - Likewise, papers from pre-eminent authors are
                | assumed to not _often_ contain stupid obvious
                | mistakes, and therefore, to avoid wasting the
                | submitter's time _and_ the time of reviewers in levels
                | 1 through N-1, these papers get routed straight to
                | final level-N reviewers. This group is mostly made up
                | of pre-eminent authors themselves, who have the
                | highest likelihood of catching the smallest, most
                | esoteric fatal flaws. (However, they're still also
                | using criteria that require them to be extremely
                | critical of any _obvious_ flaws as well. They just
                | aren't supposed to bother looking for them _first_,
                | since the assumption is that they won't be there.)
               | 
               | - Papers from people with an average number of citations
               | end up landing on some middle level, getting reviewed for
               | middling-picky stuff by middling-experienced people, and
               | then either getting bounced back for iteration at that
               | point, or getting repeatedly handed up the chain with
               | those editing marks pre-picked so that the reviewers on
               | higher levels don't have to bother looking for those
               | things and can focus on the more technically-difficult
               | stuff. It's up to the people on the earlier levels to
               | make the call of whether to bounce the paper back to the
               | author for revision.
               | 
                | (Note that, under this model, no paper is ever
                | _rejected for publication_; papers just get trapped in
                | an infinite revision loop, under the premise that, in
                | theory, even a paper fatally flawed in its premise
                | could be ship-of-Theseus-ed during revision into an
                | entirely different, non-flawed paper.)
               | 
               | You could compare this to a software toolchain -- first
               | your code is "reviewed" by the lexer; then by the parser;
               | then by the macro expansion; then by any static analysis
               | passes; then by any semantic-model transformers run by
               | the optimizer. Your submission can fail out as invalid at
                | any step. More advanced / lower-level code (hand-written
                | assembler) skips the earlier steps entirely, but that
                | also means talking straight to something that expects
                | the pre-checked output of those earlier stages, and
                | that will give you very terse, annoyed-sounding,
                | unhelpful errors if it _does_ encounter a flaw that
                | would have been caught earlier in the toolchain for HLL
                | code.
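                | 
                | To make that concrete, here is a minimal runnable
                | sketch of the triage logic (Python; the thresholds,
                | level count, and the "flaw check" are all made-up
                | placeholders, not anything a real journal does):
                | 
                |   def entry_level(citations, max_level=5):
                |       # Route a submission based on the submitter's
                |       # citation count (thresholds are arbitrary).
                |       if citations == 0:
                |           return 1          # slush pile: crank filter
                |       if citations >= 10_000:
                |           return max_level  # straight to level N
                |       # Everyone else lands on a middle level.
                |       return 2 + min(citations // 1_000, max_level - 3)
                | 
                |   def review(paper, citations, max_level=5):
                |       # Never rejected: a paper either climbs the
                |       # levels or bounces back for another revision.
                |       level = entry_level(citations, max_level)
                |       while True:
                |           if "flaw" in paper:               # toy check
                |               paper = paper.replace("flaw", "fix", 1)
                |           elif level == max_level:
                |               return paper                  # published
                |           else:
                |               level += 1    # hand up the chain
                | 
                |   print(review("idea flaw proof flaw", citations=42))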
        
               | satellite2 wrote:
                | Assuming citations follow a Zipf distribution, almost
                | all papers would have to go through all levels, since
                | the vast majority of submitters have very few citations.
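                | 
                | A quick simulation illustrates that (a sketch; the Zipf
                | exponent and the level-1 cutoff are assumptions picked
                | purely for illustration):
                | 
                |   import numpy as np
                | 
                |   rng = np.random.default_rng(0)
                |   # Citation counts for 100k hypothetical submitters.
                |   citations = rng.zipf(2.0, size=100_000)
                |   # Fraction triaged to the bottom level if the
                |   # cutoff is 10 citations: roughly 0.9 and up.
                |   print((citations < 10).mean())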
        
               | davrosthedalek wrote:
               | Typically, papers are reviewed by 1 to 3 reviewers. I
               | don't think you realistically can have more than two
               | levels -- the editor as the first line, and then one
               | layer of reviewers.
               | 
               | You can't really blind the author names. First, the
               | reviewers must be able to recognize if there is a
               | conflict of interest, and second, especially for papers
               | on experiments, you know from the experiment name who the
               | authors would be.
        
               | bumby wrote:
               | I agree with a lot of this premise but this gave me
               | pause:
               | 
               | > _under this model, no paper is ever rejected for
               | publication; papers just get trapped in an infinite
               | revision loop_
               | 
                | This could mean a viable paper never gets published. Most
                | journals require that you submit to only one journal at a
                | time. So if it didn't meet the criteria for whatever
                | reason (even a bad scope fit), it would never get a
                | chance at a better fit somewhere else.
        
               | melagonster wrote:
                | Unfortunately, reviewers do not get a salary for this...
        
               | aj7 wrote:
               | In subfields of physics, and I suspect math, the
               | submitter is never anonymous. These people talk at
               | conferences, have a list of previous works, etc., and
                | fields are highly specialized. So the reviewer knows with
                | 50-95% certainty whom they are reviewing.
        
               | gus_massa wrote:
                | I agree; also, many papers near the beginning say
                | 
                | > _We are extending our previous work in [7]_
               | 
               | or cite a few relevant papers
               | 
               | > _This topic has been studied in [3-8]_
               | 
               | Where 3 was published by group X, 5 by group Y, 7 by
               | group Z and 4, 6 and 8 by group W. Anyone can guess the
               | author of the paper is in group W.
               | 
               | Just looking at the citations, it's easy to guess the
               | group of the author.
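                | 
                | As a toy illustration of that guessing heuristic
                | (hypothetical groups and reference list):
                | 
                |   from collections import Counter
                | 
                |   # Which group wrote each cited reference [3-8].
                |   cited_groups = ["X", "W", "Y", "W", "Z", "W"]
                |   # The most-cited group is the likely author group.
                |   print(Counter(cited_groups).most_common(1))
                |   # [('W', 3)]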
        
               | hexane360 wrote:
               | In many subfields, the submitter isn't even attempted to
               | be hidden from the reviewers. Usually, even the reviewers
               | can be guessed with high accuracy by the submitters
        
           | foxglacier wrote:
           | Or maybe it doesn't matter. He got them published anyway and
           | just lost some prestigious journal points on his career.
           | Science/math was the winner on the day and that's the whole
            | point of it. Maybe some of those lower-ranked journals are
            | run better and are legitimately chipping away at the
            | prestige of the top ones, thanks to the latter's
            | carelessness.
        
             | bumby wrote:
             | Research and publication incur opportunity costs. For every
             | manuscript that has to be reworked and submitted elsewhere,
             | you're losing the ability to do new research. So a
             | researcher is left trying to balance the cost/benefit of
             | additional time investment. Sometimes that results in a
             | higher quality publication, sometimes it results in
             | abandoning good (or bad) work, and sometimes it just wastes
             | time.
        
               | melagonster wrote:
                | foxglacier offered a very good point! If someone is as
                | talented as Tao, perhaps this is the time to use that
                | influence to improve the journals (like what he did
                | here).
        
           | wrsh07 wrote:
           | Right - it's somewhat similar to code review
           | 
           | Sometimes one person is looking for an improvement in this
           | area while someone else cares more about that other area
           | 
           | This is totally reasonable! (Ideally if they're contradicting
           | each other you can escalate to create a policy that prevents
           | future contradictions of that sort)
        
           | dhosek wrote:
            | Luck plays a large role in many vaguely similar things. I
           | regularly submit fiction and poetry for publication (with
           | acceptance rates of 2% for fiction and 1.5% for poetry) and
           | so much depends on things well out of my control (which is
           | part of why I'm sanguine about those acceptance rates--given
           | the venues I'm submitting to, they're not unreasonable
            | numbers and more recent years' stats are better than that).[1]
           | In many cases the editors like what they read, but don't have
           | a place for it in the current issue or sometimes they're just
           | having a bad day.
           | 
           | [?]
           | 
            | [1] For those who care about the full messy details, I have
           | charts and graphs at https://www.dahosek.com/2024-in-
           | reejctions-and-acceptances/
        
           | keepamovin wrote:
           | And this is how we do science? How is that a good basis for
           | scientific reality? Seems there should at least be
            | transparency and oversight, or maybe the whole system is
            | broken: open reviews on the web, not limited to a small
            | committee, sound better.
           | 
           | Science is about the unknown, building testable models and
           | getting data.
           | 
           | Even an AI review system could help.
        
             | n144q wrote:
             | It is not a good way of doing science, but it is the best
             | we have.
             | 
             | All the alternatives, including the ones you proposed, have
              | their own serious downsides, which is why we've kept the
             | status quo for the past few decades.
        
               | fastball wrote:
               | What is the serious downside of open internet centric
               | review?
        
               | reilly3000 wrote:
               | The open internet.
               | 
               | i.e. trolls, brigades, spammers, bots, and all manner of
               | uninformed voices.
        
               | bruce511 wrote:
               | To expand on this - because if the barrier to publishing
               | is zero, then the "reputation" of the publisher is also
               | zero.
               | 
               | (Actually, we already have the "open publishing" you are
               | suggesting - it's called Blogging or social media.)
               | 
               | In other words, if we have open publishing, then someone
               | like me (with zero understanding of a topic) can publish
               | a very authentic-looking pile of nonsense with exactly
               | the same weight as someone who, you know, has actually
               | done some science and knows what they're talking about.
               | 
               | The common "solution" to this is voting - like with
               | StackOverflow answers. But that is clearly trivial to
               | game and would quickly become meaningless.
               | 
               | So human review it is - combined with the reputation that
               | a journal brings. The author gains reputation because
               | some reviewers (with reputation) reviewed the paper, and
               | the journal (with reputation) accepted it.
               | 
               | Yes, this system is cumbersome, prone to failure, and
               | subject to outside influences. It's not perfect. Just the
               | best we have right now.
        
               | eru wrote:
               | > To expand on this - because if the barrier to
               | publishing is zero, then the "reputation" of the
               | publisher is also zero.
               | 
               | That's fine. I don't read eg Astral Codex Ten because I
               | think the reputation of Substack is great. The blog can
               | stand entirely on its own reputation (and the reputation
               | of its author), no need for the publisher to rent out
               | their reputation.
               | 
               | See also Gwern.net for a similar example.
               | 
               | No need for any voting.
        
               | ricksunny wrote:
               | Reviewers could themselves have reputation levels that
               | weight how visible their review is. This would make
               | brigading more costly. There might still be a
               | pseudoscientific brigade trying to take down (or boost) a
               | particular paper, one that clusters so much that it
                | builds its own competing reputation, but that's okay.
               | The casual reader can decide which high-vote reviews to
               | follow on their own merits.
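                | 
                | A toy sketch of such reputation-weighted visibility
                | (the scoring rule and all the numbers are hypothetical):
                | 
                |   from dataclasses import dataclass
                | 
                |   @dataclass
                |   class Review:
                |       reputation: float  # earned from past reviews
                |       votes: int
                | 
                |   def visibility(r):
                |       # Votes are scaled by reputation, so a swarm of
                |       # fresh zero-reputation accounts adds almost
                |       # nothing.
                |       return r.votes * r.reputation
                | 
                |   brigade = Review(reputation=0.05, votes=400)
                |   expert = Review(reputation=3.0, votes=12)
                |   assert visibility(expert) > visibility(brigade)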
        
               | daemontus wrote:
               | As others have mentioned, the main problem is that open
               | systems are more vulnerable to low-cost, coordinated
               | external attacks.
               | 
               | This is less of an issue with systems where there is
               | little monetary value attached (I don't know anyone whose
               | mortgage is paid for by their Stack Overflow reputation).
               | Now imagine that the future prospects of a national lab
                | with a multi-million yearly budget are tied to a system
               | that can be (relatively easily) gamed with a Chinese or
               | Russian bot farm for a few thousand dollars.
               | 
               | There are already players that are trying hard to game
               | the current system, and it sometimes sort of works, but
               | not quite, exactly because of how hard it is to get into
               | the "high reputation" club (on the other hand, once
               | you're in, you can often publish a lot of lower quality
               | stuff just because of your reputation, so I'm not saying
               | this is a perfect system either).
               | 
               | In other words, I don't think anyone reasonable is
               | seriously against making peer review more transparent,
               | but for better or worse, the current system (with all of
               | its other downsides) is relatively robust to outside
               | interference.
               | 
               | So, unless we (a) make "being a scientist" much more
               | financially accessible, or (b), untangle funding from
               | this new "open" measure of "scientific achievement", the
               | open system would probably not be very impactful. Of
               | course, (a) is unlikely, at least in most high-impact
               | fields; CS was an outlier for a long time, not so much
               | today. And (b) would mean that funding agencies would
               | still need something else to judge your research, which
               | would most likely still be some closed, reputation-based
               | system.
               | 
               | Edit TL;DR: Describe how the open science peer-review
               | system should be used to distribute funding among
                | researchers while being reasonably robust to coordinated
               | attacks. Then we can talk :)
        
               | Al-Khwarizmi wrote:
               | If by "open" you mean that the paper is there and people
               | just voluntarily choose to review it, rather than having
               | some top-down coordinated assignment process, the problem
               | is that papers by the superstars would get hundreds of
               | reviews while papers from unknown labs would get zero.
               | 
               | You could of course make it double blind, but that seems
               | hard to enforce in practice in such an open setup, and
               | still, hyped papers in fashionable topics would get many
               | reviews while papers that are hardcore theoretical, in an
               | underdog domain, etc. would get zero.
               | 
               | Finally, it also becomes much more difficult to handle
               | conflicts of interest, and the system is highly
               | vulnerable to reviewer collusion.
        
               | Panoramix wrote:
               | We kept that mostly due to inertia and because it's the
               | most profitable for the journals (everybody does their
               | work for free and they don't have to invest in new
               | systems), not because it's the best for science and
               | scientists.
        
               | eru wrote:
               | > It is not a good way of doing science, but it is the
               | best we have.
               | 
               | What makes you think so? We already have and had plenty
               | of other ways. Eg you can see how science is done in
               | corporations or for the military or for fun (see those
               | old gentlemen scientists, or amateurs these days), and
               | you can also just publish things on your own these days.
               | 
               | The only real function of these old fashioned journals is
               | as gatekeepers for funding and career decisions.
        
               | n144q wrote:
                | I've heard firsthand accounts from multiple people of
                | running into a different set of problems (than in
                | academia) when publishing papers in corporations.
                | Publishing is never simple or easy. If you have
                | concrete examples, or better, generally recognized
                | studies that show there is an objectively better way to
                | do research, I'd very much like to know.
                | 
                | Because, as a PhD who knows dozens of other PhDs in both
                | academia and industry, and who has never heard of this
                | magic new approach to doing science, it would be quite a
                | surprise.
        
               | bumby wrote:
                | I think the distinction is that in the examples given
                | (corporations, military), science is being done, but
                | much less openly.
        
               | eeeeeeehio wrote:
               | Peer review is not designed for science. Many papers are
               | not rejected because of an issue with the _science_ -- in
               | fact, reviewers seldom have the time to actually check
               | the science! As a CS-centric example: you 'll almost
               | never find a reviewer who reads a single line of code (if
               | code is submitted with the paper at all). There is
               | artifact review, but this is never tied to the acceptance
               | of the paper. Reviewers focus on ideas, presentation, and
               | the _presented_ results. (And the current system is a
               | good filter for this! Most accepted papers are well-
               | written and the results always look good on paper.)
               | However, reviewers never take the time to actually verify
               | that the experiment code matches the ideas described in
                | the paper, and that the results reproduce. Ask any
                | CS/engineering PhD student how many papers (in top venues)
               | they've seen with a critical implementation flaw that
               | invalidates the results -- and you might begin to
               | understand the problem.
               | 
               | At least in CS, the system _can_ be fixed, but those in
                | power are unable and unwilling to fix it. Authors don't
               | want to be held accountable ("if we submit the code with
               | the paper -- someone might find a critical bug and reject
               | the paper!"), and reviewers are both unqualified (i.e.
               | haven't written a line of code in 25 years) and unwilling
               | to take on more responsibility ("I don't have the time to
               | make sure their experiment code is fair!"). So we are
               | left with an obviously broken system where junior PhD
               | students review artifacts for "reproducibility" and this
               | evaluation has no bearing whatsoever on whether a paper
               | gets accepted. It's too easy to cook up positive results
               | in almost any field (intentionally, or unintentionally),
               | and we have a system with little accountability.
               | 
               | It's not "the best we have", it's "the best those in
               | power will allow". Those in power do not want
               | consequences for publishing bad research, and also don't
               | want the reviewing load required to keep bad research
               | out.
        
               | DiogenesKynikos wrote:
               | > It's not "the best we have", it's "the best those in
               | power will allow". Those in power do not want
               | consequences for publishing bad research, and also don't
               | want the reviewing load required to keep bad research
               | out.
               | 
               | This is a very conspiratorial view of things. The simple
               | and true answer is your last suggestion: doing a more
               | thorough review takes more time than anyone has
               | available.
               | 
               | Reviewers work for free. Applying the level of scrutiny
               | you're requesting would require far more work than
               | reviewers currently do, and maybe even something
               | approaching the amount of work required to write the
               | paper in the first place. The more work it takes to
               | review an article, the less willing reviewers are to
               | volunteer their time, and the harder it is for editors to
               | find reviewers. The current level of scrutiny that papers
               | get at the peer-review stage is a result of how much time
               | reviewers can realistically volunteer.
               | 
               | Peer review is a very low standard. It's only an initial
               | filter to remove the garbage and to bring papers up to
               | some basic quality standard. The real test of a paper is
               | whether it is cited and built upon by other scientists
               | after publication. Many papers are published and then
               | forgotten, or found to be flawed and not used any more.
        
               | ksenzee wrote:
               | > Reviewers work for free.
               | 
               | If journals were operating on a shoestring budget, I
               | might be able to understand why academics are expected to
               | do peer review for free. As it is, it makes no sense
               | whatsoever. Elsevier pulls down huge amounts of money and
               | still manages to command free labor.
        
               | withinboredom wrote:
               | I think it has to be this way, right? Otherwise a paid
               | reviewer will have obvious biases from the company.
        
               | ksenzee wrote:
               | It seems to me that paying them for their time would
               | remove bias, rather than add it.
        
               | nativeit wrote:
               | How is that?
        
               | flir wrote:
               | I guess the sensible response is "what bias does being
               | paid by Elsevier add that working for free for Elsevier
               | doesn't add?"
               | 
               | The external bias is clear to me (maybe a paper
               | undermines something you're about to publish, for
               | example) but I honestly can't see much additional bias in
               | adding cash to a relationship that already exists.
        
               | ksenzee wrote:
               | Exactly. At least if the work is paid, the incentive to
               | do it is clearer.
        
               | vixen99 wrote:
               | https://www.science.org/content/article/fake-scientific-
               | pape...
        
               | Ar-Curunir wrote:
               | This is much too negative. Peer review indeed misses
               | issues with papers, but by-and-large catches the most
               | glaring faults.
               | 
               | I don't believe for one moment that the vast majority of
               | papers in reputable conferences are wrong, if only for
               | the simple reason that putting out incorrect research
               | gives an easy layup for competing groups to write a
               | follow-up paper that exposes the flaw.
               | 
               | It's also a fallacy to state that papers aren't
               | reproducible without code. Yes code is important, but in
               | most cases the core contribution of the research paper is
               | not the code, but some set of ideas that together
               | describe a novel way to approach the tackled problem.
        
               | withinboredom wrote:
               | I spent 3 months implementing a paper once. Finally, I
               | got to the point where I understood the paper probably
               | better than the author. It was an extremely complicated
               | paper (homomorphic encryption). At this point, I realized
               | that it doesn't work. There was nothing about it that
               | would ever work, and it wasn't for lack of understanding.
               | I emailed the author asking to clarify some specific
               | things in the paper, they never responded.
               | 
               | In theory, the paper could work, but it would be
               | incredibly weak (the key turned out to be either 1 or 0
               | -- a single bit).
        
               | Ar-Curunir wrote:
               | Do you have a link to the paper?
        
               | no_identd wrote:
               | +1
        
               | kortilla wrote:
               | They aren't necessarily wrong but most are nearly
               | completely useless due to some heavily downplayed or
               | completely omitted flaw that surfaces when you try to
               | implement the idea in actual systems.
               | 
               | There is technically academic novelty so it's not
               | "wrong". It's just not valuable for the field or science
               | in general.
        
               | franga2000 wrote:
               | I don't think anyone is saying it's not reproducible
               | without code, it's just much more difficult for
                | absolutely no reason. If I can run the code of an ML
               | paper, I can quickly check if the examples were cherry-
               | picked, swap in my own test or training set... The new
               | technique or idea was still the main contribution, but I
               | can test it immediately, apply it to new problems,
               | optimise the performance to enable new use-cases...
               | 
                | It's like a chemistry paper for a new material (think the
                | recent superconductor thing) not including the amounts
               | used and the way the glassware was set up. You can
               | probably get it to work in a few attempts, but then the
               | result doesn't have the same properties as described, so
               | now you're not sure if your process was wrong or if their
               | results were.
        
               | pastage wrote:
                | More code should be released, but code is dependent on
                | the people or environment that run it. When I release
                | buggy code I will almost always have to spend time
                | supporting others in how to run it. This is not what
                | you want to be doing with a proof of concept meant to
                | prove an idea.
                | 
                | I am not published, but I have implemented a number of
                | papers as code, and it works fine (hashing, protocols
                | and search mostly). I have also used code dumps to test
                | something directly. I think I spend less time on code
                | dumps, and if I fail I give up more easily. That is the
                | danger: you start blaming the tools instead of your own
                | understanding of the ideas.
                | 
                | I agree with you that more code should be released. It
                | is not a solution for good science, though.
        
               | cauch wrote:
                | Sharing the code may also propagate the biases of an
                | incorrect implementation.
               | 
               | It's a bit like saying that to help reproduce the
               | experiment, the experimental tools used to reach the
               | conclusion should be shared too. But reproducing the
               | experiment does not mean "having a different finger
               | clicking on exactly the same button", it means "redoing
               | the experiment from scratch, ideally with a _different
               | experimental setup_ so that it mitigates the unknown
               | systematic biases of the original setup".
               | 
                | I'm not saying that sharing code is always bad; you give
                | examples of how it can be useful. But sharing code has
                | pros and cons, and I'm surprised to see how often people
                | fail to understand that.
        
               | HPsquared wrote:
               | If they don't publish the experimental setup, another
               | person could use the exact same setup anyway without
               | knowing. Better to publish the details so people can
               | actually think of independent ways to verify the result.
        
               | cauch wrote:
                | But they will not make the same mistakes. If you ask two
                | people to build a piece of software, they can use the
                | same logic and build the same algorithm, but what are
                | the chances they will introduce exactly the same bugs?
                | 
                | Also, your argument seems to be "_maybe_ they will use
                | the exact same setup". So it already looks better than
                | the solution where you provide the code and they _will
                | for sure_ use the exact same setup.
                | 
                | And "publish the details" means explaining the logic,
                | not sharing the exact implementation.
                | 
                | Also, I'm not saying that sharing the code is bad; I'm
                | saying that sharing the code is not a perfect solution,
                | and people who think not sharing the code is very bad
                | usually don't understand the dangers of sharing it.
        
               | pegasus wrote:
               | Nobody said sharing the code "is the perfect solution".
               | Just that sharing the code is way better and should be
               | commonplace, if not required. Your argument that not
                | doing so will force other teams to re-write the code
               | seems unrealistic to me. If anyone wants to check the
               | implementation they can always disregard the shared code,
               | but having it allows other, less time-intensive checks to
               | still happen: like checking for cherry-picked data, as GP
               | suggested, looking through the code for possible pitfalls
               | etc. Besides, your argument could be extended to any
               | specific data the paper presents: why publish numbers so
               | people can get lazy and just trust them? Just publish the
               | conclusion and let other teams figure out ways to
               | prove/disprove it! - which is (more than) a bit
               | ridiculous, wouldn't you say?
        
               | cauch wrote:
               | > Just that sharing the code is way better
               | 
                | And I disagree with that; I think you are overestimating
                | the gain brought by sharing the code and underestimating
                | the possible problems that sharing the code brings.
               | 
                | At CERN, there are 2 general-purpose experiments, CMS
                | and ATLAS. The policy is that people from one experiment
                | are not allowed to talk about ongoing work with people
                | from the other. Note that this is officially forbidden,
                | not "if some want to discuss, go ahead, others may
                | choose not to discuss". Why? Because sharing these
                | details ruins the independence of the 2 experiments.
               | If you hear from your CMS friend that they have observed
               | a peak at 125GeV, you are biased. Even if you are a nice
               | guy and try to forget about it, it is too late, you are
               | unconsciously biased: you will be drawn to check the
                | 125GeV region and possibly read a fluctuation as a peak
                | that you would not have noticed otherwise.
               | 
               | So, no, saying "I give the code but if you want you may
               | not look at it" is not enough, you will still de-blind
                | the community. As soon as some people look at the
                | code, they will be biased: if they try to reproduce it
                | from scratch, they will come up with an implementation
                | that is different from the one they would have come up
                | with without having looked at the code.
               | 
               | Nothing too catastrophic either. Don't get me wrong, I
               | think that sharing the code is great, in some cases. But
               | this picture of saying that sharing the code is very
                | important is just a misunderstanding of how science is
               | done.
               | 
                | As for the other "specific data": yes, some data is also
                | better not shared if it is not needed to reproduce the
                | experiment and can be a source of bias. The same could
                | be said about everything else in the scientific
                | process: why is sharing the code so important, but not
                | sharing all the notes of each and every meeting? I think
                | the people who don't understand that are often software
                | developers, who don't understand that the code a
                | scientist creates is not the science; it's not the
                | publication, it's just a tool, the same way a pen and a
                | piece of paper once were. Software developers are paid
                | to produce code, so for them code is the end goal.
                | Scientists are paid to do research, and code is not the
                | end goal.
               | 
               | But, as I've said, sharing the code can be useful. It can
               | help other teams working on the same subject to reach the
                | same level faster or to notice errors in the code. But in
                | both cases, the consequence is that these other teams are
                | not producing independent work, and this is the price to
                | pay. (And of course, there are layers of dependence: some
                | publications tend to share too much, others not, but it
                | does not mean some are very bad and others very good. Not
                | being independent is not the end of the world. The
                | problem is when someone considers that sharing the code
                | is "the good thing to do" without understanding that.)
        
               | izacus wrote:
               | What you're deliberately ignoring is that omitting
               | important information is material to a lot of papers
                | because the methodology was massaged into desired
                | results to create publishable content.
               | 
               | It's really strange seeing how many (academic) people
               | will talk themselves into bizarre explanations for a
               | simple phenomenon of widespread results hacking to
                | generate required impact numbers. Occam's razor and all
               | that.
        
               | cauch wrote:
               | If it is massaged into desired results, then it will be
                | invalidated by facts quite easily. Conversely, obfuscating
               | things is also easy if you just provide the whole package
               | and just say "see, you click on the button and you get
                | the same result, you have proven that it is correct".
                | Not providing code means that people will redo their own
                | implementation and come back to you when they see that
                | they don't get the same results.
               | 
                | So, no, there is no need to pretend that academics are
                | all part of some strange, crazy, evil group. Academics
                | debate and are skeptical of their colleagues' results
                | all the time, which already contradicts your idea that
                | the majority is motivated by fraud.
               | 
                | Occam's razor is simply that there are some good reasons
                | why code is not shared, ranging from laziness to lack of
                | expertise in code design to the fact that code sharing
                | is just not that important (or sometimes plainly bad)
                | for reproducibility; no need to assume that the main
                | reason is fraud.
        
               | izacus wrote:
               | Ok, that's a bit naive now. The whole "replication
               | crisis" is exactly the term for bad papers not being
               | invalidated "easily". [1]
               | 
                | Because - if you'd been in academia - you'd find out that
                | replicating papers isn't something that will allow you
                | to keep your funding, your job, and your path to the
                | next title.
                | 
                | And I'm not sure why you jumped to "crazy evil group" -
                | no one is evil; everyone is following their incentives
                | and trying to keep their jobs and secure funding. The
                | incentives are perverse. This willing blindness to
                | perverse incentives (which appears both in US academia
                | and the corporate world) is a repeated source of
                | confusion for me - is the idea that people aren't always
                | perfectly honest when protecting their jobs, career
                | success, and reputation really so foreign to you?
               | 
               | [1]:https://en.wikipedia.org/wiki/Replication_crisis
        
               | cauch wrote:
               | That's my point: people here link the replication crisis
               | to "not sharing the code", which is ridiculous. If you
               | just click on a button to run the code written by the
               | other team, you haven't replicated anything. If you
                | review the code, you have replicated "a little bit", but
                | it is still not as good as if you had recreated the
                | algorithm from scratch independently.
               | 
               | It's very strange to pretend that sharing the code will
               | help the replication crisis, while the replication crisis
               | is about INDEPENDENT REPLICATION, where the experience is
               | redone in an independent way. Sometimes even with a
               | totally perpendicular setup. The closer the setup, the
               | weaker is the replication.
               | 
                | It feels like watching the finger that points at the
                | moon: not understanding that replication does not mean
                | "re-running the experiment and reaching the same
                | numbers".
               | 
               | > noone is evil, everyone is following their incentives
               | and trying to keep their jobs and secure funding
               | 
               | Sharing the code has nothing to do with the incentives. I
                | will not lose my funding if I share the code. What you
               | are adding on top of that, is that the scientist is
               | dishonest and does not share because they have cheated in
               | order to get the funding. But this is the part that does
               | not make sense: unless they are already established
               | enough to have enough aura to be believed without proofs,
                | they will lose their funding, because the funding comes
                | from a peer committee that will notice that the facts
                | don't match the conclusions.
               | 
                | I'm sure there are people who downplay fraud in the
                | scientific domain. But pretending that fraud is a good
                | career strategy, and that it is why people commit fraud
                | so massively that sharing the code is rare, is just
                | ignorance of the reality.
                | 
                | I'm sure some people commit fraud and don't want to
                | share their code. But how do you explain why so many
                | scientists don't share their code? Is it because the
                | whole community is so riddled with cheaters? Including
                | cheaters who happen to present conclusions that keep
                | being proven correct when reproduced? Because yes, there
                | are experiments that have been reproduced and confirmed
                | even though the code, at the time, was not shared. How
                | do you explain that, if the main reason not to share the
                | code is to hide cheating?
        
               | izacus wrote:
                | I've spent plenty of time in my career doing exactly the
                | type of replication you're talking about, and easily the
                | majority of CS papers weren't replicable with the
                | methodology written down in the paper, on a dataset that
                | wasn't optimized and preselected by the paper's author.
                | 
                | I didn't care about sharing code (it's not common), but
                | about independently implementing and comparing ML and AI
                | algorithms. So I'm not sure why you're getting so hung
                | up on the code part: the majority of papers described
                | trash science even in their text, in an effort to get
                | published and show results.
        
               | cauch wrote:
                | I'm sorry that the area you are working in is rotten
                | and does not meet minimum scientific standards. But
                | please do not reach conclusions that are blatantly
                | incorrect about areas you don't know.
               | 
               | The problem is not really "academia", it is that, in your
               | area, the academic community is particularly poor. The
               | problem is not really the "replication crisis", it is
               | that, in your area, even before we reach the concept of
               | replication crisis, the work is not even reaching the
               | basic scientific standard.
               | 
                | Oh, I guess it is Occam's razor after all: "It's really
                | strange seeing how many (academic) people will talk
                | themselves into bizarre explanations for a simple
                | phenomenon of widespread results hacking to generate
                | required impact numbers". The Occam's razor explanation:
                | so many (academic) people do not talk about the
                | malpractice because so many (academic) people work in an
                | area where such malpractice is exceptional.
        
               | izacus wrote:
                | I spent a chunk of my career working on productionizing
                | code from ML/AI papers, and a huge share of them are
                | outright not reproducible.
                | 
                | Mostly they lack critical information (missing chosen
                | constants in equations, outright missing information on
                | input preparation, or chunks of "common knowledge
                | algorithms"). Those that don't have measurements that
                | outright didn't fit the reimplemented algorithms, or
                | that only held up on the handpicked, massaged dataset
                | of the author.
               | 
               | It's all worse than you can imagine.
        
               | tsurba wrote:
               | That's the difference between truly new approaches to
               | modelling an existing problem, or coming up with a new
               | problem. No set of a bit different results or missing
               | exact hyperparameter settings really invalidates the
               | value of the aforementioned research. If the math works,
               | and is a nice new point of view, its good. It may not
               | even help anyone with practical applications right now,
               | but may inspire ideas further down the line that do make
               | the work practicable, too.
               | 
               | In contrast, if the main value of a paper is a claim that
               | they increase performance/accuracy in some task by x%,
               | then its value can be completely dependent on whether it
               | actually is reproduceable.
               | 
               | Sounds like you are complaining about the latter type of
               | work?
        
               | izacus wrote:
                | I don't think there's much value in theoretical approaches
               | that lack important derivation data either, so no need to
               | try to split the papers like this. The academic CS
               | publishing is flooded with bad quality papers in any
               | case.
        
               | jeltz wrote:
               | Anecdotally it is not. Most papers in CS I have read have
               | been bad and impossible to reproduce. Maybe I have been
               | unlucky but my experience is sadly the same.
        
               | tuyiown wrote:
               | > It is not a good way of doing science, but it is the
               | best we have.
               | 
                | It may have been for some time, but there are human
                | social dynamics in play.
        
               | psychoslave wrote:
                | So the lesson is that there is no single good way to do
                | science (or anything, really), as whatever approach is
                | retained, there will be human biases involved.
                | 
                | The less brittle option might be to go through all
                | possible approaches, but this is obviously more
                | resource-demanding, plus we still have the issue of
                | creating some synthesis of all the accumulated insights
                | from the various approaches, which itself might be
                | tackled with various approaches. From that perspective,
                | it's more of an indefinitely deep spiral.
                | 
                | Another perspective is to consider what the expected
                | outcomes of the stakeholders are. A shiny academic
                | career? An attempt to bring some enlightenment on deep
                | cognitive patterns to the luckiest few who have the
                | resources at hand to follow your high-level intellectual
                | gymnastics? A pursuit of ways to improve humanity's
                | condition through relevant and sound bodies of
                | knowledge? There are definitely many others.
        
             | michaelt wrote:
             | _> And this is how we do science? How is that a good basis
             | for scientific reality?_
             | 
             | The journal did not go out empty, and the paper did not
             | cease to exist.
             | 
             | The incentives on academics reward them for publishing in
             | exclusive journals, and the most exclusive journals -
             | Nature, Science, Annals of Mathematics, The BMJ, Cell, The
             | Lancet, JAMS and so on - only publish a limited number of
             | pages in each issue. Partly because they have print
             | editions, and partly because their limited size is _why_
              | they're exclusive.
             | 
             | A rejection from "Science" or "Nature" doesn't mean that
             | your paper is wrong, or that it's fraudulent, or that it's
             | trivial - it just means you're not in the 20 most important
             | papers out of the 50,000 published this week.
             | 
             | And yes, if instead of making one big splash you make two
             | smaller splashes, you might well find neither splash is the
             | biggest of the week.
        
             | larodi wrote:
             | This is how we don't do papers.
             | 
              | Even though my pal implemented full Gouraud shading in
              | pure assembly using registers only (including the SP and
              | a dummy stack segment) - an absolute breakthrough back in
              | 1997.
              | 
              | We built a 4-server P3 farm seeding 40 Mbit of outbound
              | traffic in 1999. I myself wrote complete Perl-based
              | binary stream unpacking - before protobuf was a thing.
              | It's still live, handling POS terminals.
              | 
              | We discovered a much more effective teaching methodology
              | which almost doubled effectiveness. Time-series
              | compression with grammars... And many more as we keep
              | doing new R&D.
              | 
              | None of it is going to be published as papers on time (if
              | ever), because we really don't want to suffer this
              | process, which afterwards brings very little value for
              | someone outside academia, or even for people in academia
              | unless they pursue PhD and similar positions.
              | 
              | I'm struggling to force myself to write an article on
              | text2sql which has already been checked and confirmed to
              | contain a novel approach to RAG that works, but do I want
              | to suffer such rejection humiliation? Not really...
              | 
              | It seems this paper ground is reserved for academics and
              | mathematicians in a certain 'sectarian modus operandi',
              | and everyone else is a sucker. Sadly, after a while the
              | code is also lost...
        
               | tsurba wrote:
               | If you are not even going to bother writing them up
               | properly, no one is going to care. Seems fair to me.
               | 
                | You don't have to make a "paper" out of it; maybe write a
                | blog post or whatever if that is more your style. Maybe
                | upload a PDF to arXiv.
               | 
               | Half the job in science is informing (or convincing)
               | everyone else about what you made and why it is
               | significant. That's what conferences try to facilitate,
               | but if you don't want to do that, feel free to do the
               | "advertising" some other way.
               | 
               | Complaining about journals being selective is just a lazy
               | excuse for not publishing anything to help others. Sure
               | the system sucks, but then you can just publish some
               | other way. For example, ask other people who understand
               | your work to "peer review" your blog posts.
        
               | fhdjkghdfjkf wrote:
               | > Half the job in science is informing (or convincing)
               | everyone else about what you made and why it is
               | significant.
               | 
               | Additionally, writing is the best way to properly think
               | things through. If you can't write an article about your
               | work then most likely you don't even understand it yet.
               | Maybe there are critical errors in it. Maybe you'll find
               | that you can further improve it. By researching and
               | citing the relevant literature you put your work in
               | perspective, how it relates to other results.
        
               | pabs3 wrote:
               | > Sadly after a while the code is also lost...
               | 
               | Get it included in the archives of Software Heritage and
               | Internet Archive:
               | 
               | https://archive.softwareheritage.org/
               | https://wiki.archiveteam.org/index.php/Codearchiver
        
               | marvel_boy wrote:
               | >Discovered a much more effective teaching methodology
               | which almost doubled effectiveness.
               | 
               | Please, could you elaborate?
        
               | spenczar5 wrote:
               | "do I want to suffer such rejection humiliation? Not
               | really"
               | 
               | The point of Terence Tao's original post is that you just
               | cannot think of rejection as humiliation. Rejection is
               | not a catastrophe.
        
           | Salgat wrote:
            | This is all due to the perverse incentives of modern academia
            | prioritizing quantity over quality, flooding journals with
            | an unending churn of low-effort garbage.
        
             | bruce511 wrote:
             | There are easily tens of thousands of researchers globally.
             | If every one did a single paper per year, that would still
             | be way more than journals could realistically publish.
             | 
             | Since it is to some extent a numbers game, yes, academics
             | (especially newer ones looking to build reputation) will
             | submit quantity over quality. More tickets in the lottery
             | means more chances to win.
             | 
             | I'm not sure though how you change this. With so many
             | voices shouting for attention it's hard to distinguish
             | "quality" from the noise. And what does it even mean to
             | prioritize "quality"? Is science limited to 10 advancements
             | per year? 100? 1000? Should useful work in niche fields be
             | ignored simply because the fields are niche?
             | 
             | Is it helpful to have academics on staff for multiple years
             | (decades?) before they reach the standard of publishing
             | quality?
             | 
             | I think perhaps the root of the problem you are describing
             | is less one of "quantity over quality" and more one of an
             | ever-growing "industry" where participants are competing
             | against more and more people.
        
               | eru wrote:
               | > [...] way more than journals could realistically
               | publish.
               | 
               | In what sense? If you put it on a website, you can
               | publish a lot more without breaking a sweat.
               | 
               | People who want a dead tree version can print it out on
               | demand.
        
               | bruce511 wrote:
                | Publishing in the sense of reviewing, editing, etc.
               | Distribution is the easy part.
        
               | eru wrote:
               | Well, but that scales with the number of people.
               | 
               | The scientists themselves are working as reviewers.
               | 
               | More scientists writing papers also means more scientists
               | available for reviewing papers.
               | 
               | And as you say, distribution is easy, so you can do
               | reviewing after publishing instead of doing it before.
        
               | bumby wrote:
               | The featured article demonstrates that _good_ review may
               | not be a function of the number of reviewers available. I
               | personally think that with a glut of reviewers, there 's
               | a higher chance an editor will assign a referee who
               | doesn't have the capability (or time!) to perform an
               | adequate review and manuscripts will be rejected for poor
               | reasoning.
        
               | Salgat wrote:
                | Perhaps you have better insight into this: why do you
                | think it's appropriate for the primary incentive for
                | professors/researchers to be the quantity of papers
                | published? Or are you saying that it's simply unfixable
                | and we must accept this? As far as I'm aware, the
                | quantity of papers published has no relevance to the
                | value of the papers being published, with regard to
                | contributing to the scientific record, and focusing on
                | quantity is a very inappropriate and misleading metric
                | for a researcher's actual contributions. And don't
                | pretend that it isn't purely a numbers game for most
                | people. Your average professor has their entire career
                | tied to quantity, from getting PhD candidates through in
                | a timely manner to acquiring grants. All of it hinges on
                | quantity.
        
           | nine_k wrote:
           | It's as if big journals are after some drama. Or excitement
           | at least. Not just an important result, but a groundbreaking
           | result in its own right. If it's a relatively small
           | achievement that finishes a long chain of gradual progress,
            | it had better be some really famous problem, like Fermat's
            | last theorem, Poincaré's conjecture, etc.
           | 
           | I wonder if it's actually optimal from the journal's selfish
           | POV. I would expect it to want to publish articles that would
           | be cited most widely. These should be results that are
            | _important_, that is, are hubs for more potential related
            | work, rather than impressive but self-contained results.
        
         | paulpauper wrote:
         | Don't you hate it when you lose your epsilon, only to find it
         | and it's too late?
         | 
         | I wonder what the conjecture was?
        
         | dumbfounder wrote:
         | Sort of. But it makes sense. They missed out the first time and
         | don't want to be an also-ran. If he had gone for the glory from
         | the start it may have been different. The prestigious journals
         | probably don't want incremental papers.
        
         | pentae wrote:
         | So it's basically like submitting an iOS app to the app store.
        
         | generationP wrote:
          | To play devil's advocate: breaking a result up into little
         | pieces to increase your paper count ("salami-slicing") is
         | frowned upon.
         | 
         | Of course this is not what Terry Tao tried to do, but it was
         | functionally indistinguishable from it to the
         | reviewers/editors.
        
       | UniverseHacker wrote:
       | I am actually quite surprised Terence Tao still gets papers
       | rejected from math journals... but appreciate him sharing this,
       | as hearing this from him will help newer scientists not get
       | discouraged by a rejection.
       | 
       | I had the lucky opportunity to do a postdoc with one of the most
       | famous people in my field, and I was shocked how much difference
       | the name did make- I never had a paper rejection from top tier
       | journals submitting with him as the corresponding author. I am
       | fairly certain the editors would have rejected my work for not
       | being fundamentally on an interesting enough topic to them, if
        | not for the name. The mere fact that a big name is interested
        | in something can make it a "high impact subject."
        
         | vouaobrasil wrote:
         | > I am actually quite surprised Terence Tao still gets papers
         | rejected from math journals
         | 
         | At least it indicates that the system is working somewhat
         | properly some of the time...
        
           | scubbo wrote:
           | Could you elaborate on this statement? It sounds like you're
           | implying something, but it's not clear what.
        
             | monktastic1 wrote:
             | I interpret it as saying that at least the system hasn't
             | just degraded into a rubber stamp (where someone like Tao
             | can publish anything on name alone).
        
             | TN1ck wrote:
             | I think it's that a paper submitted by one of the most
             | famous authors in the math field is not auto approved by
             | the journals. That even he has to go through the normal
             | process and gets rejected at times.
        
           | 9dev wrote:
            | I find it bewildering that it wouldn't, actually. I would
            | have expected that one of the earliest steps in the review
            | process would be to black out the submitter's name and
            | university, to be revealed only after the review is
            | closed.
        
             | vouaobrasil wrote:
             | Well, the editor still sees the name of the submitter, and
             | can also push the reviewers for an easy publication by
             | downplaying the requirements of the journal.
        
         | jcrites wrote:
         | Could that also be because he reviewed the papers first and
         | made sure they were in a suitable state to publish? Or you
         | think it really was just the name alone, and if you had
         | published without him they would not have been accepted?
        
           | UniverseHacker wrote:
            | He only skimmed them- a scientist at his level is more like
            | a CEO than the stereotype of a scientist- with multiple large
           | labs, startups, and speaking engagements every few days. He
           | trusted me to make sure the papers were good- and they were,
           | but his name made the difference between getting into a good
           | journal in the field, and a top "high impact" journal that
           | usually does not consider the topic area popular enough to
           | accept papers on, regardless of the quality or content of the
           | paper. At some level, high impact journals are a popularity
           | contest- to maintain the high citation rate, they only
           | publish from people in large popular fields, as having more
           | peers means more citations.
        
       | aborsy wrote:
       | Research is getting more and more specialized. Increasingly there
       | may not be many potential journals for a paper, and, even if
       | there are, the paper might be sent to the same reviewers (small
       | sub communities).
       | 
       | You may have to leave a year of work on arxiv, with the
       | expectation that the work will be rehashed and used in other
       | published papers.
        
       | atrettel wrote:
       | I agree with the discussion that rejection is normal and
       | researchers should discuss it more often.
       | 
       | That said, I do think that "publish or perish" plays an unspoken
       | role here. I see a lot of colleagues trying to push out "least
       | publishable units" that might barely pass review (by definition).
       | If you need to juice your metrics, it's a common strategy that
       | people employ. Still, I think a lot of papers would pass peer
       | review more easily if researchers just combined multiple results
       | into a single longer paper. I find those papers to be easier to
       | read since they require less boilerplate, and I imagine they
        | would pass peer review more easily by virtue of the fact that
        | they simply contain more significant results.
        
         | nextn wrote:
         | Longer papers with more claims have more to prove, not less. I
         | imagine they would be harder to pass peer review.
        
           | tredre3 wrote:
           | > Longer papers with more claims have more to prove, not
           | less. I imagine they would be harder to pass peer review.
           | 
            | Yes, a longer paper puts more work on the peer reviewers (a
            | handful of people). But splitting one project into multiple
            | papers puts more work on the reader (thousands of people).
            | There is a balance to strike.
        
           | atrettel wrote:
           | I agree with your first part but not your second. Most
           | authors do not make outrageous claims, and I surely would
           | reject their manuscript if they did. I've done it before and
           | will do it again without any issue.
           | 
           | To me, the point of peer review is to both evaluate the
           | science/correctness of the work, but also to ensure that this
           | is something novel that is worth telling others about. Does
           | the manuscript introduce something novel into the literature?
           | That is my standard (and the standard that I was taught). I
           | typically look for at least one of three things: new theory,
           | new data/experiments, or an extensive review and summation of
           | existing work. The more results the manuscript has, the more
           | likely it is to meet this novelty requirement.
        
         | paulpauper wrote:
         | Lots of co-authors. That is one surefire way to inflate it.
        
         | matthewdgreen wrote:
         | One of the issues is that we have grad students, and they need
         | to publish in order to travel through the same cycle that we
         | went through. As a more senior scientist I would be thrilled to
         | publish one beautiful paper every two years, but then none of
         | my students would ever learn anything or get a job.
        
       | ak_111 wrote:
       | - hey honey how was work today?
       | 
       | - it was fine, I desk rejected terence tao, his result was a bit
        | meh and the write up wasn't up to my standard. Then I had a bit
        | of a quiet office hour, anyway, ...
        
         | Der_Einzige wrote:
         | I've had the surreal moment of attending a workshop where the
          | main presenter (famous) is talking about their soon-to-be-
          | published work when I realize that I'm one of their reviewers
         | (months after I wrote the review, so no impact on my score). In
         | this case, I loved their paper and gave it high marks, and so
         | did the other reviewers. Not surprising when I found out who
         | the author was!!!
         | 
         | I have to not say a word to them as I talk to them or else I
         | could ruin the whole peer review thing!
         | 
         | "Hey honey, I reviewed X work from Y famous person today"
        
           | krisoft wrote:
           | > I have to not say a word to them as I talk to them or else
           | I could ruin the whole peer review thing!
           | 
           | In what sense would it ruin peer review to reveal your role
           | after you already wrote and submitted the review?
        
       | haunter wrote:
       | fwiw, editorial review =/= peer review
        
       | ak_111 wrote:
       | I always thought that part of the upside of being tenured and
       | extremely recognised as a leader of your field is the freedom to
       | submit to incredibly obscure (non-predatory) journals just for
       | fun.
        
       | d0mine wrote:
        | Why do journals exist at all? Could papers be published on
        | something like arxiv.org (like software is on github.com)?
       | 
       | It could support links/backref, citations(forks),
       | questions(discussions), tags, followers, etc easily.
        
         | bumby wrote:
         | Part of the idea is that journals help curate better
         | publications via the peer review process. Whether or not that
         | occurs in practice is up for some debate.
         | 
         | Having a curated list can be important to separate the wheat
         | from the chaff, especially in an era with ever increasing rates
         | of research papers.
        
           | d0mine wrote:
           | Eliminating journals as a corporate monopoly doesn't
           | eliminate peer review. For example, it should be easy to show
           | the number of citations and even their specific context in
           | other articles on the arxiv-like site. For example, if I like
           | some app/library implementation on github, I look at their
           | dependencies (a citation in a sense) to discover things to
           | try.
           | 
           | Curated lists can also exist on the site. Look at awesome*
           | repos on github eg https://github.com/vinta/awesome-python
           | 
            | Obviously, some lists can be better than others. The usual
            | social mechanics are adequate here.
        
             | bumby wrote:
             | I think citation is a noisy/poor signal for peer-review.
             | I've refereed a number of papers where I dig into the
             | citations and find the article doesn't actually support the
             | author's claim. Still, the vast majority of citations go
             | unchecked.
             | 
             | I don't think peer-review has to be done by journals, I'm
             | just not sure what the better solution is.
        
               | d0mine wrote:
                | I've definitely encountered such cases myself (when the
                | actual cited paper didn't support the author's claims).
                | 
                | Nothing prevents the site from introducing more direct
                | peer review (published X papers on a topic -> review a
                | paper).
                | 
                | Though if we compare the two cases, reading a paper to
                | leave an anonymous review vs reading a paper to cite
                | it, the latter seems more authentic and useful (less
                | perverse incentives).
        
         | sunshowers wrote:
         | I think in math, and in many other fields, it is pretty normal
         | to post all papers on arXiv. But arXiv has a lot of incorrect
         | papers on it (tons of P vs NP papers for example), so journals
         | are supposed to act as a filtering mechanism. How well they
         | succeed at it is debated.
        
           | d0mine wrote:
            | It is naive to think that "journal paper" means correct
            | paper. There are many incorrect papers in journals too
            | (remember the replication crisis).
           | 
            | Imagine you find a paper on an arxiv-like site: there can
            | be metadata that might help determine quality (author
           | credentials, citations by other high-ranked papers, comments)
           | but nothing is certain. There may be cliques that violently
           | disagree with each other (paper clusters with incompatible
           | theories). The medium can help with highlighting quality
           | results (eg by choosing the default ranking algorithm for the
           | search, introducing StackOverflow-like gamification) but it
           | can't and shouldn't do science instead of practitioners.
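            | 
            | As a purely illustrative sketch (the function and the toy
            | data are made up here, not any existing site's algorithm),
            | a default ranking could be as simple as PageRank run over
            | the citation graph:
            | 
            |   # Hypothetical: rank papers on an arxiv-like site by
            |   # running PageRank over who-cites-whom.
            |   def pagerank(citations, damping=0.85, iters=50):
            |       """citations maps a paper id to the ids it cites."""
            |       papers = set(citations)
            |       for cited in citations.values():
            |           papers.update(cited)
            |       n = len(papers)
            |       rank = {p: 1.0 / n for p in papers}
            |       for _ in range(iters):
            |           new = {p: (1.0 - damping) / n for p in papers}
            |           for src in papers:
            |               cited = citations.get(src, [])
            |               if cited:  # rank flows to the cited papers
            |                   share = damping * rank[src] / len(cited)
            |                   for dst in cited:
            |                       new[dst] += share
            |               else:  # dangling paper: spread uniformly
            |                   for dst in papers:
            |                       new[dst] += damping * rank[src] / n
            |           rank = new
            |       return rank
            | 
            |   # Toy graph: A and B cite C; C cites D.
            |   ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["D"]})
            |   for p in sorted(ranks, key=ranks.get, reverse=True):
            |       print(p, round(ranks[p], 3))
            | 
            | A real deployment would of course have to contend with
            | self-citation and citation rings, which is exactly the
            | noise the parent comment points out.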
        
       | bumby wrote:
       | Adam Grant once related an amusing rejection from a double-blind
       | review. One of the reviewers justified the rejection with
       | something along the lines of "The author would do well to
       | familiarize themselves with the work of Adam Grant"
        
         | Upvoter33 wrote:
          | This also happens pretty commonly. However, it's not even
          | unreasonable! Sometimes you write a paper and you don't do a
          | good enough job of putting it in the context of your own
          | related work.
        
           | CrazyStat wrote:
           | And sometimes the reviewer didn't read carefully and doesn't
           | understand what you're doing.
           | 
           | I once wrote a paper along the lines of "look we can do X
           | blazingly fast, which (among other things) lets us put it
           | inside a loop and do it millions of times to do Y." A
           | reviewer responded with "I don't understand what the point of
           | doing X fast is if you're just going to put it in a loop and
           | make it slow again." He also asked us to run simulations to
           | compare our method to another paper which was doing an
           | unrelated thing Z. The editor agreed that we could ignore his
           | comments.
        
         | Nevermark wrote:
         | Yes, funny the first time.
         | 
         | Not so much the fifth!
        
         | orthoxerox wrote:
          | Life imitates art. In the 1986 comedy "Back to School", Rodney
         | Dangerfield's character delegates his college assignments to
         | various subject matter experts. His English Lit teacher berates
         | him for it, saying that not only did he obviously cheat, but he
         | also copied his essay from someone who's unfamiliar with the
         | works of Kurt Vonnegut. Of course, the essay was written by
         | Vonnegut himself, appearing in a cameo role.
        
         | Metacelsus wrote:
         | I, as a reviewer, made a similar mistake once! The author's
         | initial version seemed to contradict one of their earlier
         | papers but I was missing some context.
        
           | j-krieger wrote:
            | I also made this mistake! I recommended that the author
            | read an adjacent work, which turned out to be by the very
            | same author. He had just forgotten to include it in his
            | work.
        
             | adastra22 wrote:
             | I've had it happen to me. Paper rejected because it was
             | copying and not citing a prior message to a mailing list...
             | the message from the mailing list was mine, and the paper
             | was me turning it into a proper publication.
        
         | Cheer2171 wrote:
         | Fair warning: I don't know enough about mathematics to say if
         | this is the case here.
         | 
         | I hear this all the time, but this is actually a real
         | phenomenon that happens when well-known senior figures are
         | rightfully cautious about over-citing their own work and/or are
         | just so familiar with their own work that they don't include
         | much of it in their literature review. For everybody else in
          | the field, it's obvious that the work of famous person X
          | should make up a substantial chunk of the lit review, and
          | that the new work should be explicit about how it builds on
          | X's prior, literally paradigm-shifting work. You can do a
          | bad job of writing about your own past work for a given
          | audience for many different reasons, and many senior
          | academics do so all the time, making their work
          | indistinguishable from that of graduate students -- hence
          | the rejection.
        
           | bumby wrote:
           | I totally understand the case when an author doesn't
           | sufficiently give context because they are so close to their
           | previous work that they take it for granted that it's obvious
            | (or, like you said, they are wary of self-citation).
           | 
           | I may be misremembering, but I believe the case with Grant
            | was that the referee was using his own work to discredit
            | his submission, i.e., "If the author were aware of the work
            | of Adam Grant, they would understand why the submitted work
            | is wrong."
        
         | GuB-42 wrote:
          | To play devil's advocate here: human memory is not flawless,
          | and people make mistakes, so maybe Adam Grant should have
          | read some of his previous work as a refresher. Even if not
          | wrong, it is possible that he missed some stuff he thought
          | he had published, but hadn't.
         | 
         | If, as a developer, you had the experience of looking at some
         | terrible code, angrily searching for whoever wrote that
         | monstrosity, only to realize that you did, that's the idea.
        
       | TZubiri wrote:
       | "Rejection is actually a relatively common occurrence for me,
       | happening once or twice a year on average."
       | 
       | This feels like a superhuman trying to empathize with a regular
       | person.
        
       | ziofill wrote:
       | This is his main point, and I wholeheartedly agree: _...a
        | perception can be created that all of one's peers are achieving
       | either success or controversy, with one's own personal career
       | ending up becoming the only known source of examples of "mundane"
       | failure. I speculate that this may be a contributor to the
       | "impostor syndrome"..._
        
       | ndesaulniers wrote:
       | The master has failed more than the beginner has tried.
        
       | 23B1 wrote:
       | A similar story.
       | 
        | I actively blogged about my thesis and it somehow came up in
        | one of those older-model plagiarism detectors (this was years
        | and years ago; it might have been just some ham-fisted google
        | search).
       | 
       | The (boomer) profs convened a 'panel' without my knowledge and
       | decided I had in fact plagiarized, and informed me I was in deep
       | doo doo. I was pretty much ready to lose my mind, my career was
       | over, years wasted, etc.
       | 
        | Luckily I was buddies with a Princeton prof who had dealt with
        | this sort of thing, and he guided me through the minefield. I
        | came out fine, but my school never apologized.
       | 
       | Failure is often just temporary and might not even be real
       | failure.
        
       | tetha wrote:
       | > Because of this, a perception can be created that all of one's
       | peers are achieving either success or controversy, with one's own
       | personal career ending up becoming the only known source of
       | examples of "mundane" failure.
       | 
       | I've found similar insights when I joined a community of
        | musicians and also discovered the twitch / youtube presences
        | of musicians I listen to. Some of Dragonforce's corona-era
        | streams are absolutely worth a watch.
       | 
       | It's easy to listen to mixed and finished albums and... despair
       | to a degree. How could anyone learn to become that good? It must
       | be impossible, giving up seems the only rational choice.
       | 
       | But in reality, people struggle and fumble along at their level.
       | Sure enough, the level of someone playing guitar professionally
       | for 20 years is a tad higher than mine, but that really, really
       | perfect album take? That's the one take out of a couple dozen.
       | 
       | This really helped me "ground" or "calibrate" my sense of how
       | good or how bad I am and gave me a better appreciation of how
       | much of a marathon an instrument can be.
        
       | cess11 wrote:
       | Journals are typically for-profit, and science is not, so they
       | don't always align and we should not expect journals to serve
       | science except incidentally.
        
       | justinl33 wrote:
       | It's okay Terence, it happens to the best of us.
        
       | slackr wrote:
       | Reminds me--I wish someone would make an anti-LinkedIn, where the
       | norm is to announce only setbacks and mistakes, disappointments
       | etc.
        
         | remoquete wrote:
         | Folks already do. They often turn them into inspirational
         | tales.
        
         | omoikane wrote:
         | There was a site where people posted company failures:
         | 
         | https://en.wikipedia.org/wiki/Fucked_Company
        
         | 77pt77 wrote:
         | Just like in academia, no one cares about negative results in
         | professional settings.
        
       | cperciva wrote:
       | In 2005, my paper on breaking RSA by observing a single private-
       | key operation from a different hyperthread sharing the same L1
       | cache -- literally the first publication of a cryptographic
       | attack exploiting shared caches -- was rejected from the
       | cryptology preprint archive on the grounds that "it was about CPU
       | architecture, not cryptography". Rejection from journals is like
       | rejection from VCs -- it happens all the time and often not for
       | any good reason.
       | 
       | (That paper has now been cited 971 times according to Google
       | Scholar, despite never appearing in a journal.)
        
         | davrosthedalek wrote:
         | Is it on the arxiv? If not, please put it there.
        
           | ilya_m wrote:
           | The paper is here: http://www.daemonology.net/hyperthreading-
           | considered-harmful...
           | 
            | As its author noted, the paper has done fine citation- and
            | impact-wise.
        
             | cperciva wrote:
             | Paper is here:
             | https://www.daemonology.net/papers/cachemissing.pdf
             | 
             | Your link is the website I put up for non-experts when I
             | announced the issue.
        
             | davrosthedalek wrote:
              | In this case, it's less about discoverability and more
              | about long-term archival. Will daemonology.net continue to
             | exist forever? Arxiv.org might perish, but I am sure the
             | community will make sure the data will be preserved.
        
               | cperciva wrote:
               | I'm not too worried about that -- this paper is
               | "mirrored" on hundreds of university websites since it's
               | a common reference for graduate courses in computer
               | security.
        
               | ht_th wrote:
               | In my experience, once teachers retire or move on, or a
                | course gets mothballed, it's only a matter of time
                | before course websites disappear or become
                | non-functional.
               | 
               | If the course website was even on the open web to begin
               | with. If they're in some university content management
               | system (CMS), chances are that access is limited to
               | students and teachers of that university and the CMS gets
               | "cleaned" regularly by removing old and "unused" content.
               | Let alone what will happen when the CMS is replaced by
               | another after a couple of years.
        
               | pabs3 wrote:
               | ArchiveTeam is trying to save some of that stuff to
               | archive.org, obviously it can't get the non-public stuff
               | though.
               | 
               | https://wiki.archiveteam.org/index.php/University_Web_Hos
               | tin...
        
         | fl4tul4 wrote:
         | The journal lost, as it would have increased their h-index and
         | reputation significantly.
        
         | informal007 wrote:
          | Time is always a better evaluator than anyone in any journal.
        
           | fliesskomma wrote:
            | [Add: controversy] Q: If time is an "always better"
            | evaluator, why do I see nobody out there writing about
            | "compressed-time"?
           | 
           | regards...
        
       | kzz102 wrote:
       | In academic publishing, there is an implicit agreement between
       | the authors and the journal to roughly match the importance of
       | the paper to the prestige of the journal. Since there is no
       | universal standard on either the prestige of the journal or the
       | importance of the paper, mismatches happen regularly, and
       | rejection is the natural result. In fact, the only way to avoid
       | rejections is to submit a paper to a journal of lower prestige
       | than your estimate, which is clearly not what authors want to do.
        
         | directevolve wrote:
         | It's not an accident - if academics underestimated the quality
         | of their own work or overestimated that of the journal, this
         | would increase acceptance rates.
         | 
         | Authors start at an attainable stretch goal, hope for a quick
         | rejection if that's the outcome, and work their way down the
         | list. That's why rejection is inevitable.
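          | 
          | To put rough, purely illustrative numbers on that (assuming
          | independent rounds with the same acceptance probability p at
          | each venue on the list), the number of submissions before an
          | acceptance follows a geometric distribution:
          | 
          |   # Illustrative only: if each journal on the list accepts
          |   # with probability p, the expected number of rounds is
          |   # 1/p (mean of a geometric distribution).
          |   p = 0.25
          |   expected_rounds = 1 / p
          |   expected_rejections = expected_rounds - 1
          |   print(expected_rounds, expected_rejections)  # 4.0 3.0
          | 
          | Under those assumptions, a few rejections per accepted paper
          | is the expected baseline, not a sign that anything is wrong
          | with the work.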
        
       | kittikitti wrote:
       | Academia is a paper tiger. The Internet means you don't need a
        | publisher for your work. Ironically, this self-published post
        | might be one of his most read works yet.
        
         | snowwrestler wrote:
         | You never needed a publisher; before the Internet you could
         | write up your findings and mail them to relevant people in your
         | field. Quite a lot of scientists did this, actually.
         | 
         | What publication in a journal gives you is context, social
         | proof, and structured placement in public archives like
         | libraries. This remains true in the age of the Internet.
        
       | lupire wrote:
        | It's important to remember that a journal's reputation is built
       | by the authors who publish there, and not vice versa.
        
       | kizer wrote:
       | Whether it's a journal, a university, a tech company... never
        | take it personally, because there's bureaucracy, policies,
        | etc., and information lost in the operation of the whole
        | process. Cast a
       | wide net and believe in the value you've created or bring.
        
       | jongjong wrote:
       | The high standards of those academic journals sound incredible in
       | this day and age when media is full of misinformation and
       | irrelevant information.
       | 
       | The anecdote about the highly reputable journal rejecting the
       | second of a 2-part paper which (presumably) would have been
       | accepted as a 1-part paper is telling.
        
       | iamnotsure wrote:
        | Please note that despite much work being done in the equality
        | department, being famous is nowadays still a requirement for
        | acquiring the status of impostor syndrome achiever. Persons who
        | are not really famous do not have impostor syndrome but are
        | just simple copycats in this respect.
        
         | lcnPylGDnU4H9OF wrote:
         | So the non-famous people who claim to have impostor syndrome
         | are actual impostors _because_ they claim to have impostor
         | syndrome. Honestly, that seems like a bit of a weird take but
         | to each their own.
        
       | j7ake wrote:
        | We can laugh at academia, but similar rejection stories exist
        | in nearly all domains.
       | 
        | AirBnB being rejected for funding, musicians like Schubert
        | struggling their entire lives, writers like Rowling in poverty.
       | 
        | Rejection will always be the norm in competitive
        | winner-take-all dynamics.
        
       | SergeAx wrote:
        | We often talk about how important it is to be a platform for
        | oneself, to self-host a blog under one's own domain, etc. Why
        | is that not the case for science papers, articles, and issues?
        | Wasn't the whole World Wide Web invented specifically for that?
        
       | soheil wrote:
       | Should we therefore also publicize everything else that lies
       | between success and failure?
        
       | drumhead wrote:
        | Saw the title and thought, nothing unusual in that really, then
        | saw the domain was maths based. It's not Terence Tao, is it?!
        | It was Terence Tao. If one of the greats can get rejected, then
        | there's no shame in you getting rejected.
        
       | PaulHoule wrote:
       | If you stick around in physics long enough you will submit a
       | paper to _Physical Review Letters_ (which is limited to about
        | four pages) that gets rejected because it isn't of general
       | enough interest, then you resubmit to some other section of _The
       | Physical Review_ and get in.
       | 
       | These days I read a lot of CS papers with an eye on solving the
       | problems and personally I tend to find the short ones useless.
       | (e.g. pay $30 for a 4-page paper because it supposedly has a good
       | ranking function for named entity recognition except... it isn't
       | a good ranking function)
        
       ___________________________________________________________________
       (page generated 2025-01-02 23:02 UTC)