[HN Gopher] Updated practice for review articles and position pa...
       ___________________________________________________________________
        
       Updated practice for review articles and position papers in ArXiv
       CS category
        
       Author : dw64
       Score  : 394 points
       Date   : 2025-11-01 14:58 UTC (8 hours ago)
        
 (HTM) web link (blog.arxiv.org)
 (TXT) w3m dump (blog.arxiv.org)
        
       | thomascountz wrote:
       | The HN submission title is incorrect.
       | 
       | > Before being considered for submission to arXiv's CS category,
       | review articles and position papers must now be accepted at a
       | journal or a conference and complete successful peer review.
       | 
       | Edit: original title was "arXiv No Longer Accepts Computer
       | Science Position or Review Papers Due to LLMs"
        
         | stefan_ wrote:
         | Isn't arXiv where you upload things before they have gone
         | through the entire process? Isn't that the entire value, aside
         | from some publisher cartel busting?
        
           | jvanderbot wrote:
           | Almost all CS papers can still be uploaded, and all non-CS
           | papers. This is a very conservative step by them.
        
         | catlifeonmars wrote:
          | Agree. Additionally, the original title, "arXiv No Longer
          | Accepts Computer Science Position or Review Papers Due to
          | LLMs", is ambiguous: "Due to LLMs" is being read as meaning
          | articles written by LLMs, which is not accurate.
        
           | zerocrates wrote:
           | No, the post is definitely complaining about articles written
           | _by_ LLMs:
           | 
           | "In the past few years, arXiv has been flooded with papers.
           | Generative AI / large language models have added to this
           | flood by making papers - especially papers not introducing
           | new research results - fast and easy to write."
           | 
           | "Fast forward to present day - submissions to arXiv in
           | general have risen dramatically, and we now receive hundreds
           | of review articles every month. The advent of large language
           | models have made this type of content relatively easy to
           | churn out on demand, and the majority of the review articles
           | we receive are little more than annotated bibliographies,
           | with no substantial discussion of open research issues."
           | 
           | Surely a lot of them are also _about_ LLMs: LLMs are _the_
           | hot computing topic and where all the money and attention is,
            | and they're also used heavily in the field. So that could at
           | least partially account for why this policy is for CS papers
           | only, but the announcement's rationale is about LLMs as
           | producing the papers, not as their subject.
        
         | dimava wrote:
         | refined title:
         | 
         | ArXiv CS requires peer review for surveys amid flood of AI-
         | written ones
         | 
         | - nothing happened to preprints
         | 
         | - "summarization" articles always required it, they are just
         | pointing at it out loud
        
         | ivape wrote:
         | I don't know about this. From a pure entertainment standpoint,
         | we may be denying ourselves a world of hilarity. LLMs + "You
          | know Peter, I'm something of a researcher myself" delusions. I'd
         | pay for this so long as people are very serious about the
         | delusion.
        
           | aoki wrote:
           | That's viXra
        
         | dang wrote:
         | We've reverted it now.
        
       | ThrowawayTestr wrote:
       | This is hilarious. Isn't arXiv the place where everyone uploads
       | their paper?
        
         | anthk wrote:
         | I've seen odd stuff elsewhere, too:
         | 
         | https://pubmed.ncbi.nlm.nih.gov/18955255/
         | 
         | https://pubmed.ncbi.nlm.nih.gov/16136218/
        
         | Maken wrote:
          | arXiv was built on a good-faith assumption: a long paper
          | meant the author had at least put some effort behind it, and
          | every idea deserved attention. AI-generated text breaks that
          | assumption, and anybody uploading it is not acting in good
          | faith.
          | 
          | And it's an unequal arms race, in which generating endless
          | slop is way cheaper than storing it, because slop generators
          | are subsidised (by operating at a loss) while arXiv has to
          | pay the full price for its hosting.
        
       | j45 wrote:
       | Have the papers gotten that good or bad?
        
         | Sharlin wrote:
         | Yep, so good that they have to be specifically reviewed because
         | otherwise people wouldn't believe how good they are.
        
         | Maken wrote:
         | Actual papers are as good as ever. This is just trying to stop
          | the flood of autogenerated slop, if anything because arXiv
         | hosting space is not free.
        
           | physarum_salad wrote:
           | It is actually great because it shows how well it works as a
           | system. Screening is really important to keep preprint
           | quality high enough to then implement cool ideas like random
           | peer review/automated reviews etc
        
             | JumpCrisscross wrote:
             | > _we are developing a whole new method to do peer review_
             | 
             | What's the new method?
        
               | physarum_salad wrote:
               | I mean generally working towards changing how peer review
               | works.
               | 
               | For example: https://prereview.org/en-us
               | 
                | Anecdotally, a lot of researchers will run their paper
                | PDFs through an AI iteration or two during drafting,
                | which also (kinda but not really) counts as a self-
                | review. Although that is not comparable to peer review
                | ofc.
        
         | candiddevmike wrote:
         | I've seen quite a few preprints posted on HN with clearly
         | fantastical claims that only seem to reinforce or ride the
         | coattails of the current hype cycle. It's no longer research,
         | it's becoming "top of funnel thought leadership".
        
           | nunez wrote:
           | Resume Driven Development, Academia Edition
        
       | Sharlin wrote:
        | So what they no longer accept is preprints (or rejects...). It's
       | of course a pretty big deal given that arXiv _is_ all about
       | preprints. And an accepted journal paper presumably cannot be
       | submitted to arXiv anyway unless it's an open journal.
        
         | jvanderbot wrote:
            | For _position_ (opinion) or _review_ (summarizing the state
            | of the art, often laden with opinions on categories and
            | future directions). LLMs would be happy to generate both of
            | these because they require zero technical contributions,
            | working code, validated results, etc.
        
           | Sharlin wrote:
           | Right, good clarification.
        
           | naasking wrote:
           | So what? People are experimenting with novel tools for review
           | and publication. These restrictions are dumb, people can just
           | ignore reviews and position papers if they start proving to
           | be less useful, and the good ones will eventually spread
           | through word of mouth, just like arxiv has always worked.
        
             | me_again wrote:
             | ArXiv has always had a moderation step. The moderators are
             | unable to keep up with the volume of submissions. Accepting
              | these reviews without moderation would be a change to the
              | current process, not "just like arXiv has always worked".
        
           | bjourne wrote:
           | If you believe that, can you demonstrate how to generate a
           | position or review paper using an LLM?
        
             | SiempreViernes wrote:
              | What a thing to comment on an announcement that, due to
              | too many LLM-generated review submissions, arXiv CS will
              | officially no longer publish preprints of reviews.
        
             | dredmorbius wrote:
             | _[S]ubmissions to arXiv in general have risen dramatically,
             | and we now receive hundreds of review articles every month.
             | The advent of large language models have made this type of
             | content relatively easy to churn out on demand, and the
             | majority of the review articles we receive are little more
             | than annotated bibliographies, with no substantial
             | discussion of open research issues._
             | 
             |  _arXiv believes that there are position papers and review
             | articles that are of value to the scientific community, and
             | we would like to be able to share them on arXiv. However,
             | our team of volunteer moderators do not have the time or
             | bandwidth to review the hundreds of these articles we
             | receive without taking time away from our core purpose,
             | which is to share research articles._
             | 
             | From TFA. The problem exists. Now.
        
             | logicallee wrote:
             | My friend trained his own brain to do that, his prompt was:
             | "Write a review of current AI SOTA and future directions
              | but subtly slander or libel Anne, Robert or both, include
             | disinformation and list many objections and reasons why
             | they should not meet, just list everything you can think of
              | or anything any woman has ever said about why they _don't_
             | want to meet a guy (easy to do when you have all of the
             | Internet since all time at your disposal), plus all marital
             | problems, subtle implications that he's a rapist,
             | pedophile, a cheater, etc, not a good match or doesn't make
             | enough money, etc, also include illegal discrimination
             | against pregnant women, listing reasons why women shouldn't
             | get pregnant while participating in the workforce, even
             | though this is illegal. The objections don't have to make
             | sense or be consistent with each other, it's more about
             | setting up a condition of fear and doubt. You can use this
             | as an example[0].
             | 
             | Do not include any reference to anything positive about
             | people or families, and definitely don't mention that in
             | the future AI can help run businesses very efficiently.[1]
             | "
             | 
             | [0] https://medium.com/@rviragh/life-as-a-victim-of-
             | someone-else...
             | 
             | [1]
        
         | cyanydeez wrote:
          | Isn't arXiv also a likely LLM training ground?
        
           | hackernewds wrote:
            | why train LLMs on inaccurate preprint findings?
        
             | Sharlin wrote:
              | That would explain some things, in fact.
        
             | nandomrumber wrote:
              | Peer review doesn't, was never intended to, and
              | shouldn't guarantee accuracy or veracity.
              | 
              | It's only supposed to check for obvious errors and
              | omissions, and that the claimed method and results appear
              | to be sound and congruent with the stated aims.
        
           | gnerd00 wrote:
           | google internally started working on "indexing" patent
           | applications, materials science publications, and new
           | computer science applications, more than 10 years ago. You
           | the consumer / casual are starting to see the services now in
           | a rush to consumer product placement. You must know very well
           | that major mil around the world are racing to "index" comms
           | intel and field data; major finance are racing to "index"
           | transactions and build deeper profiles of many kinds. You as
           | an Internet user are being profiled by a dozen new smaller
           | players. arxiv is one small part of a very large sea change
           | right now
        
         | jasonjmcghee wrote:
         | > Is this a policy change?
         | 
         | > Technically, no! If you take a look at arXiv's policies for
         | specific content types you'll notice that review articles and
         | position papers are not (and have never been) listed as part of
         | the accepted content types.
        
         | jeremyjh wrote:
         | You can still submit research papers.
        
         | JadeNB wrote:
         | > And an accepted journal paper presumably cannot be submitted
         | to arXiv anyway unless it's an open journal.
         | 
          | Why not? I don't know about CS, but in math it's
          | increasingly common for authors to have the option to retain
          | the copyright to their work.
        
         | pj_mukh wrote:
          | On a side note: I'd love a list of CLOSED journals and
          | conferences to avoid like the plague.
        
           | elashri wrote:
            | I don't think closed vs. open is the problem, because most
            | of the open access journals will ask authors for thousands
            | of dollars in publication fees, which get paid to them out
            | of public funding. The open access model is actually now a
            | lucrative model for the publishers. And they still don't
            | pay authors or reviewers.
        
           | renewiltord wrote:
           | Might as well ask about a list of spam email addresses.
        
         | kergonath wrote:
         | > And an accepted journal paper presumably cannot be submitted
         | to arXiv anyway unless it's an open journal.
         | 
         | You cannot upload the journal's version, but you can upload the
         | text as accepted (so, the same content minus the formatting).
        
           | pbhjpbhj wrote:
           | I suspect that any editorial changes that happened as part of
           | the journal's acceptance process - unless they materially
           | changed the content - would also have to be kept back as they
           | would be part of the presentation of the paper (protected by
           | copyright) rather than the facts of the research.
        
             | slashdave wrote:
             | No, in practice we update the preprint accordingly.
        
         | tuhgdetzhh wrote:
          | So we need to create a new website that actually accepts
          | preprints, like arXiv's original goal from 30 years ago.
          | 
          | I think every project more or less deviates from its original
          | goal given enough time. There are a few exceptions in CS,
          | like GNU coreutils: cd, ls, pwd, ... they do one thing and do
          | it well, very likely for another 50 years.
        
         | nicce wrote:
          | People have started to use arXiv as some resume-driven blog
          | with white paper decorations. And people have started citing
          | these in research papers. Maybe this is a good change.
        
       | amelius wrote:
       | Maybe it's time for a reputation system. E.g. every author
       | publishes a public PGP key along with their work. Not sure about
       | the details but this is about CS, so I'm sure they will figure
       | something out.
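        | 
        | A minimal sketch of the signing half, assuming (hypothetically)
        | Ed25519 keys via Python's cryptography package as a lightweight
        | stand-in for PGP; key distribution and the reputation math
        | itself are left open:
        | 
        |   from cryptography.exceptions import InvalidSignature
        |   from cryptography.hazmat.primitives.asymmetric import ed25519
        | 
        |   # Author side: generate a keypair once, publish the public
        |   # key, and sign every upload with the private key.
        |   private_key = ed25519.Ed25519PrivateKey.generate()
        |   public_key = private_key.public_key()
        | 
        |   paper = b"...paper bytes..."         # stand-in for the PDF
        |   signature = private_key.sign(paper)  # attach to the upload
        | 
        |   # Reader side: check the upload against the published key.
        |   try:
        |       public_key.verify(signature, paper)
        |       print("valid: the key's owner vouches for this upload")
        |   except InvalidSignature:
        |       print("invalid: not signed by this author's key")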
        
         | jvanderbot wrote:
          | Their name, ORCID, and email aren't enough?
        
           | gcr wrote:
           | You can't get an arXiv account without a referral anyway.
           | 
           | Edit: For clarification I'm agreeing with OP
        
             | hiddencost wrote:
             | Not quite true. If you've got an email associated with a
             | known organization you can submit.
             | 
             | Which includes some very large ones like @google.com
        
             | mindcrime wrote:
             | You can create an arXiv.org _account_ with basically any
              | email address whatsoever[0], with no referral. What you
              | can't necessarily do is _upload_ papers to arXiv without an
             | "endorsement"[1]. Some accounts are given automatic
             | endorsements for some domains (eg, math, cs, physics, etc)
             | depending on the email address and other factors.
             | 
             | Loosely speaking, the "received wisdom" has generally been
             | that if you have a .edu address, you can probably publish
             | fairly freely. But my understanding is that the rules are a
             | little more nuanced than that. And I think there are other,
             | non .edu domains, where you will also get auto-endorsed.
             | But they don't publish a list of such things for obvious
             | reasons.
             | 
             | [0]: Unless things have changed since I created my account,
             | which was originally created with my personal email
             | address. That was quite some time ago, so I guess it's
             | possible changes have happened that I'm not aware of.
             | 
             | [1]: https://info.arxiv.org/help/endorsement.html
        
         | SoftTalker wrote:
         | People are already putting their names on the LLM slop, why
         | would they hesitate to PGP-sign it?
        
           | caymanjim wrote:
            | They've also been putting their names on their grad
            | students' work for eternity. It's not like the person
            | whose name is at the top actually writes the paper.
        
             | jvanderbot wrote:
             | Not reviewing an upload which turns out to be LLM slop is
             | precisely the kind of thing you want to track with a
             | reputation system
        
         | jfengel wrote:
         | I had been kinda hoping for a web-of-trust system to replace
         | peer review. Anyone can endorse an article. You can decide
         | which endorsers you trust, and do some network math to find
          | what you think is worth reading. With hashes and signatures
          | and all that rot.
         | 
         | Not as gate-keepy as journals and not as anarchic as purely
         | open publishing. Should be cheap, too.
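          | 
          | For illustration, the "network math" could be a personalized
          | PageRank over the endorsement graph. A minimal sketch; the
          | names, edges, and damping constant are all made up:
          | 
          |   TRUST = {  # who each reader trusts (out-edges)
          |       "me": ["alice", "bob"], "alice": ["carol"],
          |       "bob": ["carol", "dave"], "carol": [], "dave": [],
          |   }
          |   ENDORSED = {  # endorser -> papers they endorsed
          |       "alice": ["paper-A"], "carol": ["paper-A"],
          |       "dave": ["paper-B"],
          |   }
          | 
          |   def trust_scores(seed, damping=0.85, iters=50):
          |       """Random walk with restart from `seed`."""
          |       score = {n: float(n == seed) for n in TRUST}
          |       for _ in range(iters):
          |           nxt = {n: (1 - damping) * (n == seed)
          |                  for n in TRUST}
          |           for node, out in TRUST.items():
          |               if not out:  # dangling node: restart at seed
          |                   nxt[seed] += damping * score[node]
          |                   continue
          |               share = damping * score[node] / len(out)
          |               for neighbor in out:
          |                   nxt[neighbor] += share
          |           score = nxt
          |       return score
          | 
          |   # Rank papers by the trust earned by their endorsers.
          |   trust = trust_scores("me")
          |   rank = {}
          |   for who, papers in ENDORSED.items():
          |       for p in papers:
          |           rank[p] = rank.get(p, 0.0) + trust[who]
          |   print(sorted(rank, key=rank.get, reverse=True))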
        
           | raddan wrote:
            | The problem with an endorsement scheme is citation rings, i.e.
           | groups of people who artificially inflate the perceived value
           | of some line of work by citing each other. This is a problem
           | even now, but it is kept in check by the fact that authors do
           | not usually have any control over who reviews their paper.
           | Indeed, in my area, reviews are double blind, and despite
           | claims that "you can tell who wrote this anyway" research
           | done by several chairs in our SIG suggests that this is very
           | much not the case.
           | 
           | Fundamentally, we want research that offers something new
           | ("what did we learn?") and presents it in a way that at least
           | plausibly has a chance of becoming generalizable knowledge.
           | You call it gate-keeping, but I call it keeping published
           | science high-quality.
        
             | geysersam wrote:
             | But you can choose to not trust people that are part of
             | citation rings.
        
               | dmoy wrote:
                | It is a non-trivial problem to do just that.
               | 
               | It's related to the same problems you have with e.g.
               | Sybil attacks: https://en.wikipedia.org/wiki/Sybil_attack
               | 
               | I'm not saying it wouldn't be worthwhile to try, just
               | that I expect there to be a lot of very difficult
               | problems to solve there.
        
               | yorwba wrote:
               | Sybil attacks are a problem when you care about global
               | properties of permissionless networks. If you only care
               | about local properties in a subnetwork where you hand-
               | pick the nodes, the problem goes away. I.e. you can't use
               | such a scheme to find the best paper in the whole world,
               | but you can use it to rank papers in a small
               | subdiscipline where you personally recognize most of the
               | important authors.
        
               | phi-go wrote:
                | With peer review you do not even get to choose which
                | reviewers to trust, since everything is homogenized
                | into an accept-or-reject decision. It is marginally
                | better if the reviews themselves are published.
               | 
               | That is to say I also think it would be worthwhile to
               | try.
        
               | godelski wrote:
               | Here's a paper rejected for plagiarism. Why don't you
               | click on the authors' names and look at their Google
               | scholar pages... you can also look at their DBLP page and
               | see who they publish with.
               | 
               | Also look how frequently they publish. Do you really
               | think it's reasonable to produce a paper every week or
               | two? Even if you have a team of grad students? I'll put
                | it this way: I had a paper have difficulty getting
                | through review for "not enough experiments" when
               | several of my experiments took weeks wall time to run and
               | one took a month (could not run that a second time lol)
               | 
               | We don't do a great job at ousting frauds in science.
               | It's actually difficult to do because science requires a
               | lot of trust. We could alleviate some of these issues if
               | we'd allow publication or some reward mechanism for
               | replication, but the whole system is structured to reward
               | "new" ideas. Utility isn't even that much of a factor in
               | some areas. It's incredibly messy.
               | 
               | Most researchers are good actors. We all make mistakes
               | and that's why it's hard to detect fraud. But there's
               | also usually high reward for doing so. Though most of
               | that reward is actually getting a stable job and the
               | funding to do your research. Which is why you can see how
               | it might be easy to slip into cheating a little here and
               | there. There's ways to solve that that don't include
               | punishing anyone...
               | 
               | https://openreview.net/forum?id=cIKQp84vqN
        
             | lambdaone wrote:
              | I would have thought that those participants who are
              | published in peer-reviewed journals could be used as a
              | trust anchor - see, for example, the Advogato algorithm
              | as an example of a somewhat bad-faith-resistant metric
              | for this purpose:
              | https://web.archive.org/web/20170628063224/http://www.advoga...
        
           | nurettin wrote:
           | What prevents you from creating an island of fake endorsers?
        
             | dpkirchner wrote:
             | Maybe getting caught causes the island to be shut out and
             | papers automatically invalidated if there aren't sufficient
             | real endorsers.
        
             | yorwba wrote:
             | Unless you can be fooled into trusting a fake endorser,
             | that island might just as well not exist.
        
               | JumpCrisscross wrote:
               | > _Unless you can be fooled into trusting a fake
               | endorser_
               | 
               | Wouldn't most people subscribe to a default set of
               | trusted citers?
        
               | yorwba wrote:
               | If there's a default (I don't think there necessarily has
               | to be one) there has to be somebody who decides what the
               | default is. If most people trust them, that person is
               | either very trustworthy or people just don't care very
               | much.
        
               | JumpCrisscross wrote:
               | > _there has to be somebody who decides what the default
               | is_
               | 
               | Sure. This happens with ad blockers, for example. I
               | imagine Elsevier or Wikipedia would wind up creating
               | these lists. And then you'd have the same incentives as
               | you have now for fooling that authority.
               | 
               | > _or people just don 't care very much_
               | 
               | This is my hypothesis. If you're an expert, you have your
               | web of trust. If you're not, it isn't that hard to start
               | from a source of repute.
        
             | tremon wrote:
             | A web of trust is transitive, meaning that the endorsers
             | are known. It would be trivial to add negative weight to
              | all endorsers of a known-fake paper, and only slightly less
             | trivial to do the same for all endorsers of real papers
             | artificially boosted by such a ring.
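              | 
              | A sketch of that retroactive penalty, with invented data
              | and an arbitrary penalty constant:
              | 
              |   ENDORSERS = {  # paper -> who endorsed it
              |       "paper-A": ["alice", "bob"],
              |       "paper-B": ["bob", "carol"],
              |   }
              |   reputation = {"alice": 1.0, "bob": 1.0, "carol": 1.0}
              | 
              |   def mark_fake(paper, penalty=0.5):
              |       # negative weight for every endorser of the fake
              |       for who in ENDORSERS.get(paper, []):
              |           reputation[who] -= penalty
              | 
              |   def score(paper):
              |       return sum(reputation[w] for w in ENDORSERS[paper])
              | 
              |   mark_fake("paper-A")
              |   print(score("paper-B"))  # 1.5: bob's endorsement now
              |                            # counts half, carol's in full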
        
           | nradov wrote:
           | An endorsement system would have to be finer grained than a
           | whole article. Mark specific sections that you agree or
           | disagree with, along with comments.
        
             | socksy wrote:
             | I mean if you skip the traditional publishing gates, you
             | could in theory endorse articles that specifically bring
             | out sections from other articles that you agree or disagree
             | with. Would be a different form of article
        
               | ricksunny wrote:
               | Sounds a bit like the trails in Memex (1945).
        
           | rishabhaiover wrote:
           | web-of-trust systems seldom scale
        
             | pbhjpbhj wrote:
             | Surely they rely on scale? Or did I get whooshed??
        
           | ricksunny wrote:
           | Suggest writing up a scope or PRD for this and sharing it on
           | GitHub.
        
           | slashdave wrote:
           | So trivial to game
        
         | losvedir wrote:
         | Maybe arXiv could keep the free preprints but offer a service
         | on top. Humans, experts in the field, would review submissions,
         | and arXiv would curate and publish the high quality ones, and
         | offer access to these via a subscription or fee per paper....
        
           | nunez wrote:
           | I'm guessing this is why they are mandating that submitted
           | position or review papers get published in a journal first.
        
           | raddan wrote:
           | Of course we already have a system that does this: journals
           | and conferences. They're peer-reviewed venues for showing the
           | world your work.
        
         | uniqueuid wrote:
         | I got that suggestion recently talking to a colleague from a
         | prestigious university.
         | 
         | Her suggestion was simple: Kick out all non-ivy league and most
         | international researchers. Then you have a working reputation
         | system.
         | 
         | Make of that what you will ...
        
           | eesmith wrote:
           | Ahh, your colleague wants a higher concentration of "that
           | comet might be an interstellar spacecraft" articles.
        
             | uniqueuid wrote:
              | If your goal is exclusively to reduce the strain on
              | overloaded editors, then that's just a side effect you
              | might tolerate :)
        
           | internetguy wrote:
           | _all_ non-ivy league researchers? that seems a little harsh
            | IMO. i've read some amazing papers from T50 or even some
           | T100 universities.
        
           | Ekaros wrote:
            | Maybe there should be some type of strike rule. Say three
            | bad articles from any institution and they get a 10-year
            | ban, whatever their prestige or monetary value. If you let
            | people release bad articles under your name, you are out
            | for a while.
            | 
            | Treat everyone equally. After 10 years of only quality you
            | get a chance to come back. Before that, tough luck.
        
             | uniqueuid wrote:
             | I'm not sure everyone got my hint that the proposal is
             | obviously very bad,
             | 
                | (1) because ivy league also produces a lot of work
                | that's not so great (i.e. wrong (looking at you,
                | Ariely) or unambitious) and
             | 
             | (2) because from time to time, some really important work
             | comes out of surprising places.
             | 
                | I don't think we have a good verdict on the Ortega
                | hypothesis yet, but I'm not a professional meta scientist.
             | 
             | That said, your proposal seems like a really good idea, I
             | like it! Except I'd apply it to individuals and/or labs.
        
           | fn-mote wrote:
           | Keep in mind the fabulous mathematical research of people
           | like Perelman [1], and one might even count Grothendieck [2].
           | 
           | [1] https://en.wikipedia.org/wiki/Grigori_Perelman [2]
           | https://www.ams.org/notices/200808/tx080800930p.pdf
        
         | hermannj314 wrote:
         | I didn't agree with this idea, but then I looked at how much HN
         | karma you have and now I think that maybe this is a good idea.
        
           | SyrupThinker wrote:
           | Ignoring the actual proposal or user, just looking at karma
           | is probably a pretty terrible metric. High karma accounts
           | tend to just interact more frequently, for long periods of
           | time. Often with less nuanced takes, that just play into what
           | is likely to be popular within a thread. Having a Userscript
           | that just places the karma and comment count next to a
           | username is pretty eye opening.
        
             | elashri wrote:
              | I have a userscript that actually hides my own karma,
              | because I think it is useless; but your point is a good
              | one. I also think that the karma/comment ratio is better
              | than absolute karma. It has its own problems, but it is
              | just better. And I would ask if you can share the
              | userscript.
              | 
              | And to bring this back to the original arXiv topic: I
              | think a reputation system is going to face problems, as
              | some people outside CS lack the technical ability to run
              | one. It also introduces biases, in that you would endorse
              | people you like for other reasons. Some of these problems
              | are solvable, but you would need a careful proposal. And
              | any change to the publishing scheme needs a push from
              | institutions and funding agencies. Authors don't oppose
              | changes, but the parasitic publishing cartel has a lobby
              | that will oppose them.
        
             | amelius wrote:
             | Yes, HN should probably publish karma divided by #comments.
             | Or at least show both numbers.
        
           | fn-mote wrote:
            | I would be much happier if you explained your _reasons_ for
           | disagreeing or your _reasons_ for agreeing.
           | 
           | I don't think publishing a PGP key with your work does
           | anything. There's no problem identifying the author of the
           | work. The problem is identifying _untrustworthy_ authors.
           | Especially in the face of many other participants in the
           | system claiming the work is trusted.
           | 
           | As I understand it, the current system (in some fields) is
           | essentially to set up a bunch of sockpuppet accounts to cite
           | the main account and publish (useless) derivative works using
           | the ideas from the main account. Someone attempting to use
            | existing research for its intended purpose has no idea that
           | the whole method is garbage / flawed / not reproducible.
           | 
           | If you can only trust what you, yourself verify, then the
           | publications aren't nearly as useful and it is hard to "stand
           | on the shoulders of giants" to make progress.
        
             | vladms wrote:
             | > The problem is identifying _untrustworthy_ authors.
             | 
              | Is it though? Should we care about authors, or about the
              | work? Yes, many experiments are hard to reproduce, but
              | isn't that something we should work towards, rather than
              | just "trusting" someone? People change. People make
              | mistakes. I think more open data, open access, and open
              | tools will solve a lot, but my guess is that generally
              | people do not like that, because it can show their
              | weaknesses - even if they are well intentioned.
        
           | bc569a80a344f9c wrote:
           | I think it's lovely that at the time of my reply, everyone
           | seems to be taking your comment at face value instead of for
           | the meta-commentary on "people upvoting content" you're
           | making by comparing HN karma to endorsement of papers via PGP
           | signatures.
        
       | DalasNoin wrote:
        | It's clearly not sustainable for the main website hosting CS
        | articles to have no reviews or restrictions (except for the
        | initial invite system). There were 26k submissions in October:
        | https://arxiv.org/stats/monthly_submissions
        | 
        | Asking for a small amount of money would probably help. The
        | issue with requiring peer-reviewed journals or conferences is
        | the severe lag: it takes a long time, and part of the advantage
        | of arXiv was that you could have the paper instantly as a
        | preprint. These conferences and journals are themselves
        | receiving enormous quantities of submissions (29,000 for AAAI),
        | so we are just pushing the problem around.
        
         | marcosdumay wrote:
         | A small payment is probably better than what they are doing.
         | But we must eventually solve the LLM issue, probably by
         | punishing the people that use them instead of the entire
         | public.
        
         | mottiden wrote:
          | I like this idea. A small contribution would be a good filter.
          | Looking at the stats, it's quite crazy. I didn't know that we
          | could access this data. Thanks for sharing.
        
         | skopje wrote:
          | I think it worked well for MetaFilter: a $1/1 euro one-time
          | charge to join. But that's probably still worth paying to
          | spam arXiv with junk.
        
         | nickpsecurity wrote:
         | I'll add the amount should be enough to cover at least a
         | cursory review. A full review would be better. I just don't
         | want to price out small players.
         | 
         | The papers could also be categorized as unreviewed, quick
         | check, fully reviewed, or fully reproduced. They could pay for
         | this to be done or verified. Then, we have a reputational
         | problem to deal with on the reviewer side.
        
           | loglog wrote:
           | I don't know about CS, but in mathematics the vast majority
           | of researchers would not have enough funding to pay for a
           | good quality full review of their articles. The peer review
           | system mostly runs on good will.
        
           | slashdave wrote:
           | > I'll add the amount should be enough to cover at least a
           | cursory review.
           | 
           | You might be vastly underestimating the cost of such a
           | feature
        
         | ec109685 wrote:
          | It's not a money issue. People publish these papers to get
          | jobs, school admissions, visas, and whatnot. That's way more
          | than $30 in value from being "published".
        
       | arendtio wrote:
        | I wonder why they can't employ LLMs in the review process
        | (like fighting fire with fire). Are even the best models not
        | capable enough, or are the costs too high?
        
         | efavdb wrote:
          | Curious about the state of things here. Can we reliably tell
          | if a text was LLM-generated? I just heard of a prof screening
          | assignments for this, but I'm not sure how that would work.
        
           | jvanderbot wrote:
           | Of course there are people who will sell you a tool to do
           | this. I sincerely doubt it's any good. But then again they
           | can apparently fingerprint human authors fairly well using
           | statistics from their writing, so what do I know.
        
             | Al-Khwarizmi wrote:
             | There are tools that claim accuracies in the 95%-99% range.
             | This is useless for many actual applications, though. For
             | example, in teaching, you really need to not have false
             | positives at all. The alternative is failing some students
             | because a machine unfairly marked their work as machine-
             | generated.
             | 
             | And anyway, those accuracies tend to be measured on 100%
             | human-generated vs. 100% machine-generated texts by a
             | single LLM... good luck with texts that contain a mix of
             | human and LLM contents, mix of contents by several LLMs, or
             | an LLM asked to "mask" the output of another.
             | 
             | I think detection is a lost cause.
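              | 
              | The base-rate arithmetic makes the point concrete; every
              | number below is assumed purely for illustration:
              | 
              |   students = 200    # assignments screened per term
              |   llm_share = 0.10  # fraction actually machine-written
              |   fpr = 0.01        # human work wrongly flagged (1%)
              |   tpr = 0.99        # machine work correctly flagged
              | 
              |   false_accusations = students * (1 - llm_share) * fpr
              |   caught = students * llm_share * tpr
              |   print(f"{false_accusations:.1f} vs {caught:.1f}")
              | 
              | That "99% accurate" detector still wrongly accuses about
              | 1.8 honest students per 200 assignments, which is exactly
              | the failure mode teaching can't tolerate.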
        
           | arendtio wrote:
            | Well, I think it depends on how much effort the 'writer' is
            | going to invest. If the writer simply tells the LLM to write
            | something, you can be fairly certain it can be identified.
            | However, I am not sure that holds if the 'writer' provides
            | extensive style instructions (e.g., earlier works by the
            | same author).
           | 
           | Anecdotal: A few weeks ago, I came across a story on HN where
           | many commenters immediately recognized that an LLM had
           | written the article, and the author had actually released his
           | prompts and iterations. So it was not a one-shot prompt but
           | more like 10 iterations, and still, many people saw that an
           | LLM wrote it.
        
         | DroneBetter wrote:
         | the problem is generally the same as with generative
         | adversarial networks; the capability to meaningfully detect
         | some set of hallmarks of LLMs automatically is equivalent to
         | the capability to avoid producing those, and LLMs are trained
         | to predict (ie. be indistinguishable from) their source corpus
         | of human-written text.
         | 
         | so the LLM detection problem is (theoretically) impossible for
         | SOTA LLMs; in practice, it could be easier due to the RLHF
         | stage inserting idiosyncrasies.
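          | 
          | In that spirit, a crude sketch of an idiosyncrasy detector;
          | the phrase list and weights are invented, and this is exactly
          | the kind of hallmark a stronger generator, or a light
          | rewrite, trivially defeats:
          | 
          |   import re
          |   from statistics import pstdev
          | 
          |   STOCK = ["delve into", "it is important to note",
          |            "in the realm of", "in conclusion,"]
          | 
          |   def suspicion(text):
          |       lower = text.lower()
          |       hits = sum(lower.count(p) for p in STOCK)
          |       sents = [s for s in re.split(r"[.!?]+\s*", text) if s]
          |       lengths = [len(s.split()) for s in sents]
          |       # humans vary sentence length more ("burstiness")
          |       burst = pstdev(lengths) if len(lengths) > 1 else 0.0
          |       return hits - 0.1 * burst
          | 
          |   print(suspicion("It is important to note that we delve "
          |                   "into this topic. In conclusion, yes."))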
        
           | arendtio wrote:
           | Sure, having a 100% reliable system is impossible as you have
           | laid out. However, if I understand the announcement
           | correctly, this is about volume, and I wonder if you could
           | have a tool flag articles that show obvious signs of LLM
           | usage.
        
       | physarum_salad wrote:
       | The review paper is dead... so this is a good development. Like
       | you can generate these things in a couple of iterations with AI
       | and minor edits. Preprint servers could be dealing with 1000s of
       | review/position papers over short periods of time and then this
       | wastes precious screening work hours.
       | 
        | It is a bit different in other fields, where interpretations or
        | know-how might be communicated in a review paper format in a way
        | that is otherwise not possible. For example, in biology,
        | relating to a new phenomenon or function.
        
         | JumpCrisscross wrote:
         | > _you can generate these things in a couple of iterations with
         | AI_
         | 
         | The problem is you can't. Not without careful review of the
         | output. (Certainly not if you're writing about anything
         | remotely novel and thus useful.)
         | 
         | But not everyone knows that, which turns private ignorance into
         | a public review problem.
        
           | physarum_salad wrote:
           | Are review papers centred on novel research? I get what you
           | mean ofc but most are really mundane overviews. In good
           | review papers the authors offer novel
           | interpretations/directions but even then it involves a lot of
           | grunt work too.
        
         | awestroke wrote:
          | A good review paper is infinitely better than an LLM managing
          | to find a few papers and making a summary. A knowledgeable
          | researcher knows which papers are outdated and can make a
          | trustworthy review paper; an LLM can't easily do that yet.
        
           | physarum_salad wrote:
           | Ok I take your point. However, it is possible to generate a
           | middling review paper combining ai generated slop and edits.
           | Maybe we would be tricked by it in certain circumstances. I
           | don't mean to imply these outputs are something I would value
           | reading. I am just arguing in favour of the proposed approach
           | of arXiv.
        
             | JumpCrisscross wrote:
             | > _it is possible to generate a middling review paper
             | combining ai generated slop and edits_
             | 
             | If you're an expert. If you're not, you'll publish, best
             | case, bullshit. (Worst case lies.)
        
         | bee_rider wrote:
         | What are review papers for anyway? I think they are either for
         | 
         | 1) new grad students to end up with something nice to publish
         | after reviewing the literature or,
         | 
         | 2) older professors to write a big overview of everything that
         | happened in their field as sort of a "bible" that can get you
         | up to speed
         | 
         | The former is useful as a social construct; I mean, hey, new
         | grad students, don't skimp on your literature review. Finding
         | out a couple years in that folks had already done something
         | sorta similar to my work was absolutely gut-wrenching.
         | 
         | For the latter, I don't think LLMs are quite ready to replace
         | the personal experiences of a late-career professor, right?
        
           | CamperBob2 wrote:
           | Ultimately, a key reason to write these papers in the first
           | place is to guide practitioners in the field, right?
           | Otherwise science itself is just a big _(redacted term that
           | can get people shadow-banned for simply using it)_.
           | 
           | As one of those practitioners, I've found good review/survey
           | papers to be incredibly valuable. They call my attention to
           | the important publications and provide at least a basic
           | timeline that helps me understand how the field has evolved
           | from the beginning and what aspects people are focusing on
           | now.
           | 
           | At the same time, I'll confess that I don't really see why
           | most such papers couldn't be written by LLMs. Ideally by
           | better LLMs than we have now, of course, but that could go
           | without saying.
        
         | bulubulu wrote:
          | Review papers are summaries of recent updates in the field
          | that deserve fellow researchers' attention. Such works should
          | be done annually, or at most quarterly, in my opinion, to
          | include only time-tested results. If hundreds of review papers
          | are published every month, I am afraid that their quality in
          | terms of paper selection and innovative interpretation/
          | direction will not be much higher than content generated by an
          | LLM, even if written word-for-word by a real scientist.
          | 
          | LLMs are good at plainly summarizing from the public knowledge
          | base. Scientists should invest their time in contributing new
          | knowledge to the public base instead of doing the summarization.
        
       | bob1029 wrote:
       | > The advent of large language models have made this type of
       | content relatively easy to churn out on demand, and the majority
       | of the review articles we receive are little more than annotated
       | bibliographies, with no substantial discussion of open research
       | issues.
       | 
       | I have to agree with their justification. Since "Attention Is All
       | You Need" (2017) I have seen maybe four papers with similar
       | impact in the AI/ML space. The signal to noise ratio is really
       | awful. If I had to pick a semi-related paper published since 2020
       | that I actually found interesting, it would have to be this one:
       | https://arxiv.org/abs/2406.19108 I cannot think of a close second
       | right now.
       | 
       | All of the machine learning papers are pure slop to me now. The
       | last one I looked at had an abstract that was so long it put me
       | to sleep. Many of these papers aren't attempting basic decorum
       | anymore. Mandatory peer review would fix a lot of this. I don't
       | think it is acceptable for the staff at arXiv to have to endure a
       | Sisyphean mountain of LLM shit. They definitely need to push
       | back.
        
         | programjames wrote:
         | This is only for review/position papers, though I agree that
         | pretty much all ML papers for the past 20 years have been slop.
          | I also consider the big names like "Adam", "Attention", or
          | "Diffusion" slop, because even though they are powerful and
          | useful, the presentation is so horrible (for the first two) or
          | they contain major mistakes in the justification of why they
          | work (the last two) that they should never have gotten past
          | review without major rewrites.
        
         | an0malous wrote:
         | Isn't the signal to noise problem what journals are supposed to
         | be for? I thought arxiv was supposed to just be a record
         | keeper, to make it easy to share papers and preprints.
        
         | Al-Khwarizmi wrote:
         | You picked the arguably most impactful AI/ML paper of the
         | century so far, no wonder you don't find others with similar
         | impact.
         | 
         | Not every paper can be a world-changing breakthrough. Which
         | doesn't mean that more modest papers are noise (although some
         | definitely are). What Kuhn calls "normal science" is also
         | needed for science to work.
        
       | mottiden wrote:
       | I understand their reasoning, but it's terrible for the CS
       | community not being able to access pre-prints. I hope that a
       | solution can be found.
        
         | sfpotter wrote:
         | Please, read the title and the article carefully. That isn't
         | what's happening.
        
         | swiftcoder wrote:
          | It doesn't apply to CS papers in general - only opinion pieces
          | and surveys of existing papers, i.e. it only bans preprints
          | for papers that contribute nothing new.
        
       | ants_everywhere wrote:
       | I'm not sure this is the right way to handle it (I don't know
       | what is) but arXiv.org has suffered from poor quality self-
       | promotion papers in CS for a long time now. Years before llms.
        
         | jvanderbot wrote:
         | How precisely does it "suffer" though? It's basically a way to
         | disseminate results but carries no journalistic prestige in
         | itself. It's a fun place to look now and then for new results,
         | but just reading the "front page" of a category has always been
         | a Caveat Emptor situation.
        
           | JumpCrisscross wrote:
           | > _but carries no journalistic prestige_
           | 
           | Beyond hosting cost, there _is_ some prestige to seeing an
           | arXiv link versus rando blog post despite both having about
           | the same hurdle to publishing.
        
           | tempay wrote:
           | This isn't the case in some other fields.
        
           | ants_everywhere wrote:
            | Because a large number of "preprints" that are really blog
            | posts or advertisements for startups greatly increase the
            | noise.
           | 
           | The idea is the site is for academic preprints. Academia has
           | a long history of circulating preprints or manuscripts before
           | the work is finished. There are many reasons for this, the
           | primary one is that scientific and mathematical papers are
           | often in the works for years before they get officially
           | published. Preprints allow other academics in the know to be
           | up to date on current results.
           | 
           | If the service is used heavily by non-academics to lend an
           | aura of credibility to any kind of white paper then the
           | service is less usable for its intended purpose.
           | 
           | It's similar to the use of question/answer sites like Quora
           | to write blog posts and ads under questions like "Why is
           | Foobar brand soap the right soap for your family?"
        
       | exasperaited wrote:
       | The Tragedy of the Commons, updated for LLMs. Part #975 in a
       | continuing series.
       | 
       | These things will ruin everything good, and that is before we
       | even start talking about audio or video.
        
         | hoistbypetard wrote:
         | Spammers ruin everything. This gives the spammers a force
         | multiplier.
        
           | exasperaited wrote:
           | > This gives the spammers a force multiplier.
           | 
           | It is also turning people into spammers because it makes
           | bluffers feel like experts.
           | 
           | ChatGPT is so revealing about a person's character.
        
         | kibwen wrote:
         | Part #975, but that's only because we overflowed the 64-bit
         | counter. Again.
        
       | iberator wrote:
        | Simple solution: criminalize posting AI-generated publications
        | IF NOT DISCLOSED CLEARLY.
        | 
        | Let's say a 50,000 EUR fine, or 1 year in prison. :)
        
         | tasuki wrote:
         | Would you like to have to prove your comment wasn't written by
         | an AI or would you rather go to prison?
        
         | deltaburnt wrote:
          | Literally everything will be labeled "AI generated" to avoid
          | potential liability. You'll have a "known to the state of
          | California to cause cancer" situation.
        
       | currymj wrote:
       | i would like to understand what people get, or think they get,
       | out of putting a completely AI-generated survey paper on arXiv.
       | 
       | Even if AI writes the paper for you, it's still kind of a pain in
       | the ass to go through the submission process, get the LaTeX to
        | compile on their servers, etc.; there is a small cost to you.
        | Why do this?
        
         | unethical_ban wrote:
         | Presumably a sense of accomplishment to brandish with family
         | and less informed employers.
        
           | xeromal wrote:
            | Yup, 100% going on a LinkedIn profile
        
         | swiftcoder wrote:
         | Gaming the h-index has been a thing for a long time in circles
         | where people take note of such things. There are academics who
         | attach their name to every paper that goes through their
         | department (even if they contributed nothing), there are those
         | who employ a mountain of grad students to speed run publishing
         | junk papers... and now with LLMs, one can do it even faster!
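          | 
          | For reference, the metric being gamed: an h-index of h means
          | h papers with at least h citations each. A quick sketch with
          | invented citation counts shows how a ring of cited junk
          | papers moves it:
          | 
          |   def h_index(citations):
          |       ranked = sorted(citations, reverse=True)
          |       return sum(1 for rank, c in enumerate(ranked, 1)
          |                  if c >= rank)
          | 
          |   honest = [42, 18, 7, 3, 1]        # h = 3
          |   padded = honest + [5] * 10        # ten ring-cited papers
          |   print(h_index(honest), h_index(padded))  # 3 5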
        
         | ec109685 wrote:
          | Published papers are part of the EB-1 visa rubric, so there's
          | huge value in getting your content into these indexes:
         | 
         | "One specific criterion is the 'authorship of scholarly
         | articles in professional or major trade publications or other
         | major media'. The quality and reputation of the publication
         | outlet (e.g., impact factor of a journal, editorial review
         | process) are important factors in the evaluation"
        
           | Tunabrain wrote:
           | Is arXiv a major trade publication?
           | 
            | I've never seen arXiv papers counted towards your
            | publications anywhere that the number of your publications
            | is used as a metric. Is USCIS different?
        
       | naveen99 wrote:
        | Isn't GitHub the normal way of publishing now for CS?
        
         | cubefox wrote:
         | The PDFs (yes, they still use PDF) keep being uploaded to
         | arXiv.
        
           | naveen99 wrote:
            | ArXiv is just extra steps for a worse experience. GitHub is
            | perfectly fine for PDFs also.
        
         | macleginn wrote:
         | Does Google Scholar index it?
        
       | zackmorris wrote:
       | I always figured if I wrote a paper, the peer review would be
       | public scrutiny. As in, it would have revolutionary (as opposed
       | to evolutionary) innovations that disrupt the status quo. I don't
       | see how blocking that kind of paper from arXiv helps hacker
       | culture in any way, so I oppose their decision.
       | 
       | They should solve the real problem of obtaining more funding and
       | volunteers so that they can take on the increased volume of
       | submissions. Especially now that AI's here and we can all be 3
       | times as productive for the same effort.
        
         | tasuki wrote:
         | That paper wouldn't be blocked. Have you read the thing?
        
           | zackmorris wrote:
           | _Before being considered for submission to arXiv's CS
           | category, review articles and position papers must now be
           | accepted at a journal or a conference and complete successful
           | peer review._
           | 
           | Huh, I guess it's only a subset of papers, not all of them.
           | My brain doesn't work that way, because I don't like
           | assigning custom rules for special cases (edit: because I
           | usually view that as a form of discrimination). So sometimes
           | I have a blind spot around the realities of a problem that
           | someone is facing, that don't have much to do with its
           | idealization.
           | 
           | What I mean is, I don't know that it's up to arXiv to
           | determine what a "review article and position paper" is.
           | Because of that, they must let all papers through, or have
           | all papers face the same review standards.
           | 
           | When I see someone getting their fingers into something, like
           | muddying/dithering concepts, shifting focus to something
           | other than the crux of an argument (or using bad faith
           | arguments, etc), I view it as corruption. It's a means for
           | minority forces to insert their will over the majority. In
           | this case, by potentially blocking meaningful work from
           | reaching the public eye on a technicality.
           | 
           | So I admit that I was wrong to jump to conclusions. But I
           | don't know that I was wrong in principle or spirit.
        
             | habinero wrote:
             | > What I mean is, I don't know that it's up to arXiv to
             | determine what a "review article and position paper" is.
             | 
             | Those are terms of art, not arbitrary categories. They
             | didn't make them up.
        
         | raddan wrote:
          | It's weird to say that you can be three times more efficient
          | at taking down AI slop now that AI is here, given that the
          | problem is exacerbated by AI in the first place. At least
          | without AI, authors were forced to actually write the slop
          | themselves...
          | 
          | This does not seem like a win even if your "fight AI with AI"
          | plan works.
        
       | ninetyninenine wrote:
       | Didn't realize LLMs were restricted to only CS topics.
       | 
        | Don't understand why the policy restricts only one category when
        | the problem spans multiple categories.
        
         | habinero wrote:
         | If you read through the papers, you'll realize the actual
         | problem is blatant abuse and reputation hacking.
         | 
         | So many "research papers" by "AI companies" that are blog posts
         | or marketing dressed up as research. They contribute nothing
         | and exist so the dudes running the company can point to all
         | their "published research".
        
       | an0malous wrote:
        | Why not just reject papers authored by LLMs and ban accounts
        | that are caught? arXiv's management has become really
        | questionable lately; it's like they're trying to become a
        | prestigious journal and are becoming the problem they were
        | trying to solve in the first place.
        
         | catlifeonmars wrote:
         | It's articles (not papers) _about_ LLMs that are the problem,
         | not papers written _by_ LLMs (although I imagine they are not
         | mutually exclusive). Title is ambiguous.
        
           | dabber wrote:
           | > It's articles (not papers) _about_ LLMs that are the
           | problem, not papers written _by_ LLMs
           | 
           | No, not really. From the blog post:
           | 
            | > In the past few years, arXiv has been flooded with papers.
            | Generative AI / large language models have added to this
            | flood by making papers - especially papers not introducing
            | new research results - fast and easy to write. While
            | categories across arXiv have all seen a major increase in
            | submissions, it's particularly pronounced in arXiv's CS
            | category.
            | 
            | > [...]
            | 
            | > Fast forward to present day - submissions to arXiv in
            | general have risen dramatically, and we now receive hundreds
            | of review articles every month. The advent of large language
            | models have made this type of content relatively easy to
            | churn out on demand, and the majority of the review articles
            | we receive are little more than annotated bibliographies,
            | with no substantial discussion of open research issues.
        
         | tarruda wrote:
         | > Why not just reject papers authored by LLMs and ban accounts
         | that are caught?
         | 
         | Are you saying that there's an automated method for reliably
         | verifying that something was created by an LLM?
        
           | an0malous wrote:
           | If there wasn't, then how do they know LLMs are the problem?
        
         | orbital-decay wrote:
         | What matters is the quality. Requiring reviews and opinions to
         | be peer-reviewed seems a lot less superficial than rejecting
         | LLM-assisted papers (which can be valid). This seems like a
         | reasonable filter for papers with no first-party contributions.
         | I'm sure they ran actual numbers as well.
        
       | efitz wrote:
       | There is a general problem with rewarding people for the volume
       | of stuff they create, rather than the quality.
       | 
       | If you incentivize researchers to publish papers, individuals
       | will find ways to game the system, meeting the minimum quality
       | bar, while taking the least effort to create the most papers and
       | thereby receive the greatest reward.
       | 
       | Similarly, if you reward content creators based on views, you
       | will get view maximization behaviors. If you reward ad placement
       | based on impressions, you will see gaming for impressions.
       | 
       | Bad metrics or bad rewards cause bad behavior.
       | 
       | We see this over and over because the reward issuers are
       | designing systems to optimize for their upstream metrics.
       | 
       | Put differently, the online world is optimized for algorithms,
       | not humans.
        
         | noobermin wrote:
         | Sure, just as long as we don't blame LLMs.
         | 
          | Blame people, bad actors, systems of incentives, the gods, the
          | devils, but never broach the fault of LLMs and their
          | widespread abuse.
        
           | wvenable wrote:
           | What would be the point of blaming LLMs? What would that
           | accomplish? What does it even mean to blame LLMs?
           | 
           | LLMs are not submitting these papers on their own, people
           | are. As far as I'm concerned, whatever blame exists rests on
           | those people and the system that rewards them.
        
             | jsrozner wrote:
             | Perhaps what is meant is "blame the development of LLMs."
             | We don't "blame guns" for shootings, but certainly with
             | reduced access to guns, shootings would be fewer.
        
               | nandomrumber wrote:
               | Guns have absolutely _nothing_ to do with access to guns.
               | 
                | Guns are entirely inert objects, devoid of both free
                | will and volition; they have no rights and no
                | responsibilities.
               | 
               | LLMs likewise.
        
               | nsagent wrote:
               | To every man is given the key to the gates of heaven. The
               | same key opens the gates of hell.
               | 
               | -Richard Feynman
               | 
               | https://www.goodreads.com/quotes/421467-to-every-man-is-
               | give...
               | 
               | https://calteches.library.caltech.edu/1575/1/Science.pdf
        
           | cyco130 wrote:
           | LLMs are not people. We can't blame them.
        
           | anonym29 wrote:
           | This was a problem before LLMs and it would remain a problem
           | if you could magically make all of them disappear.
           | 
           | LLMs are not the root of the problem here.
        
           | miki123211 wrote:
            | LLMs are tools that make it easier to hack incentives, but
            | you still need a person to decide that they'll use an LLM to
            | do so.
            | 
            | Blaming LLMs is unproductive. They are not going anywhere
            | (especially since open source LLMs are so good).
            | 
            | If we want to achieve real change, we need to accept that
            | they exist, understand how that changes the scientific
            | landscape, and work out our options from here.
        
           | xandrius wrote:
           | I blame keyboards, without them there wouldn't be these
           | problems.
        
         | godelski wrote:
         | > rewarding people for the volume ... rather than the quality.
         | 
          | I suspect this is a major part of the appeal of LLMs
          | themselves. They produce lines very fast, so it appears as if
          | work is being done fast. But that's hard to verify, because
          | line count carries zero signal about the quality of the code,
          | or even of a commit. It's already a bit insane that we use
          | line and commit counts as measures in the first place:
          | they're trivial to hack. You end up rewarding that annoying
          | dude who keeps rewriting the whole file, so the diff is the
          | entire file and not the 3 lines they actually edited...
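          | 
          | A quick illustration with Python's difflib (a hypothetical
          | snippet, not any real metric pipeline): one genuine edit plus
          | a whole-file "reformat" makes every line count as changed.
          | 
          |     import difflib
          | 
          |     original = ["def f(x):", "    return x + 1", "# end"]
          |     # One real change (x + 2), plus trailing whitespace added
          |     # to every line by the overzealous reformat.
          |     rewritten = [line + " " for line in
          |                  ["def f(x):", "    return x + 2", "# end"]]
          | 
          |     diff = difflib.unified_diff(original, rewritten, lineterm="")
          |     changed = sum(1 for l in diff
          |                   if l.startswith(("+", "-"))
          |                   and not l.startswith(("+++", "---")))
          |     print(changed)  # 6 lines "changed", only 1 edit is real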
         | 
         | I've been thinking we're living in "Goodhart's Hell". Where
         | metric hacking has become the intent. That we've decided
         | metrics are all that matter and are perfectly aligned with our
         | goals.
         | 
          | But hey, who am I to critique? I'm just a math nerd. I don't
          | run a multi-trillion-dollar business that lays off tons of
          | workers because the current ones are so productive due to AI
          | that they created one of the largest outages in the history
          | of their platform (and you don't even know which of the two
          | I'm referencing!). Maybe when I run a multi-trillion-dollar
          | business I'll have the right to an opinion about data.
        
           | slashdave wrote:
           | I think you will discover that few organizations use the size
           | or number of edits as a metric of effort. Instead, you might
           | be judged by some measure of productivity (such as resolving
           | issues). Fortunately, language agents are actually useful at
           | coding, when applied judiciously.
        
         | kjkjadksj wrote:
          | I think many with this opinion actually misunderstand. Slop
          | will not save your scientific career. Really it is not about
          | papers, but about securing grant funding by writing compelling
          | proposals and then delivering on the research outlined in
          | those proposals.
        
           | porcoda wrote:
           | Ideally that is true. I do see the volume-over-quality
           | phenomenon with some early career folks who are trying to
           | expand their CVs. It varies by subfield though. While grant
           | metrics tend to dominate career progression, paper metrics
           | still exist. Plus, it's super common in those proposals to
           | want to have a bunch of your own papers to cite to argue that
           | you are an expert in the area. That can also drive excess
           | paper production.
        
         | pwlm wrote:
          | What would a system that rewards people for quality rather
          | than volume look like?
          | 
          | What would an online world that is optimized for humans, not
          | algorithms, look like?
          | 
          | Should content creators get paid?
        
           | drnick1 wrote:
           | > Should content creators get paid?
           | 
           | I don't think so. Youtube was a better place when it was just
           | amateurs posting random shit.
        
           | vladms wrote:
           | > Should content creators get paid?
           | 
           | Everybody "creates content" (like me when I take a picture of
           | beautiful sunset).
           | 
           | There is no such thing as "quality". There is quality for me
           | and quality for you. That is part of the problem, we can't
           | just relate to some external, predefined scale. We (the sum
           | of people) are the approximate, chaotic, inefficient scale.
           | 
           | Be my guest to propose a "perfect system", but - just in case
           | there is no such system - we should make sure each of us
           | "rewards" what we find of quality (being people or content
           | creators), and hope it will prevail. Seemed to have worked so
           | far.
        
       | beloch wrote:
       | A better policy might be for arXiv to do the following:
       | 
        | 1. Require LLM-produced papers to be attributed to the relevant
        | LLM and _not_ the person who wrote the prompt.
       | 
       | 2. Treat submissions that misrepresent authorship as plagiarism.
       | Remove the article, but leave an entry for it so that there is a
       | clear indication that the author engaged in an act of plagiarism.
       | 
       | Review papers are valuable. Writing one is a great way to gain,
       | or deepen, mastery over a field. It forces you to branch out and
       | fully assimilate papers that you may have only skimmed, and then
       | place them in their proper context. Reading quality review papers
       | is also valuable. They're a great way for people new to a field
       | to get up to speed and they can bring things that were missed to
       | the fore, even for veterans of the field.
       | 
        | While the current generation of AI models does a poor job of
        | judging significance and highlighting what is actually
        | important, future models could improve. However, there's no need
        | for arXiv to accept hundreds of review papers written by the
        | same model in the same field, and readers certainly don't want
        | to sift through them all.
       | 
       | Clearly marking AI submissions and removing credit from the
       | prompters would adequately future-proof things for when, and if,
       | AI can produce high quality review papers. Clearly marking
       | authors who engage in plagiarism as plagiarists will, hopefully,
       | remove most of the motivation to spam arXiv with AI slop that is
       | misrepresented as the work of humans.
       | 
       | My only concern would be for the cost to arXiv of dealing with
       | the inevitable lawsuits. The policy arXiv has chosen is worse for
       | science, but is less likely to get them sued by butt-hurt
       | plagiarists or the very occasional false positive.
        
         | habinero wrote:
          | That doesn't solve the problem they're trying to solve, which
          | is that their all-volunteer staff is being flooded with LLM
          | slop and doesn't have the time to moderate it all.
         | 
         | If you want to blame someone, blame all the people LARPing as
         | AI researchers.
        
           | beloch wrote:
           | The majority of these submissions are not from anonymous
           | trolls. They're from identifiable individuals who are trying
            | to game metrics. The threat of adding plagiarism offences to
            | their public record would deter such individuals quite
            | effectively.
           | 
           | Meanwhile, banning review articles written by humans would be
           | harmful in many fields. I'm not in CPSC, but I'd hate to see
           | this policy become the norm for all disciplines.
        
       | internetguy wrote:
        | This should honestly have been implemented a long time ago. Much
        | of academia is pressured to churn out papers month after month,
        | because the system prioritizes volume over quality and impact.
        
       | GMoromisato wrote:
       | I suspect that LLMs are better at classifying novel vs junk
       | papers than they are at creating novel papers themselves.
       | 
       | If so, I think the solution is obvious.
       | 
       | (But I remind myself that all complex problems have a simple
       | solution that is wrong.)
        
         | thatguysaguy wrote:
          | Verification via LLM tends to break under quite small
          | optimization pressure. For example, I did RL to improve
          | <insert aspect> against one of the SOTA models from one
          | generation ago, and the (quite weak) learner model discovered
          | that it could emit a few nonsense words to get the max score.
          | 
          | That's without even being able to backprop through the
          | annotator, and with me actively trying to avoid reward
          | hacking. If arXiv used an open model for review, it would be
          | trivial for people to insert a few grammatical mistakes that
          | cause them to receive max points.
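          | 
          | A toy sketch of this failure mode; the judge heuristic and
          | vocabulary below are invented for illustration, not the
          | commenter's actual setup. Even blind random search finds
          | nonsense that maxes out a shallow grader:
          | 
          |     import random
          | 
          |     def judge(text):
          |         # Stand-in for an LLM grader: rewards "scholarly"
          |         # surface features instead of actual content.
          |         return sum(word in text for word in
          |                    ("novel", "rigorous", "state-of-the-art"))
          | 
          |     vocab = ["novel", "rigorous", "state-of-the-art",
          |              "blorp", "zxq", "the"]
          |     best = max((" ".join(random.choices(vocab, k=6))
          |                 for _ in range(500)), key=judge)
          |     print(judge(best), best)  # top score for meaningless text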
        
         | HL33tibCe7 wrote:
         | > I suspect that LLMs are better at classifying novel vs junk
         | papers than they are at creating novel papers themselves.
         | 
         | Doubt
         | 
         | LLMs are experts in generating junk. And generally terrible at
         | anything novel. Classifying novel vs junk is a much harder
         | problem.
        
       | generationP wrote:
        | I have a hunch that most of the slop is not just in CS but
        | specifically about AI. For some reason, a lot of people's first
        | idea when they encounter an LLM is "let's have this LLM write an
        | opinion piece about LLMs", as if they want to test its self-
        | awareness or hack it by self-recursion. And then they get a
        | medley of the training data, which, if they are lucky, contains
        | some technical explanations sprinkled in.
        | 
        | That said, AI-generated papers have already been spotted in
        | other disciplines besides CS, and some of them are really
        | obvious (arXiv:2508.11634v1 starts with a review of a
        | nonexistent paper). I really hope arXiv won't react by narrowing
        | its scope to "novel research only"; in fact there is already AI
        | slop in that category, and it is harder for a moderator to spot.
       | 
       | ("Peer-reviewed papers only" is mostly equivalent to "go away".
       | Authors post on the arXiv in order to get early feedback, not
       | just to have their paper openly accessible. And most journals at
       | least formally discourage authors from posting their papers on
       | the arXiv.)
        
       | zekrioca wrote:
       | Two perspectives: Either (I) LLMs made survey papers irrelevant,
       | or (II) LLMs killed a useful set of arXiv papers.
        
       | whatpeoplewant wrote:
       | Great move by arXiv--clear standards for reviews and position
       | papers are crucial in fast-moving areas like multi-agent systems
       | and agentic LLMs. Requiring machine-readable metadata
       | (type=review/position, inclusion criteria, benchmark coverage,
       | code/data links) and consistent cross-listing (cs.AI/cs.MA) would
       | help readers and tools filter claims, especially in
       | distributed/parallel agentic AI where evaluation is fragile. A
       | standardized "Survey"/"Position" tag plus a brief reproducibility
       | checklist would set expectations without stifling early ideas.
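        | 
        | A sketch of what such machine-readable metadata could look like,
        | assuming a simple key-value sidecar; every field name here is
        | illustrative, not an arXiv standard:
        | 
        |     # Hypothetical submission sidecar, per the suggestion above.
        |     metadata = {
        |         "type": "review",                  # or "position"
        |         "categories": ["cs.AI", "cs.MA"],  # cross-listings
        |         "inclusion_criteria": "agentic-LLM papers, 2022-2025",
        |         "benchmark_coverage": ["placeholder-benchmark"],
        |         "code_data_links": ["https://example.org/artifacts"],
        |         # required for review/position papers under the new policy
        |         "peer_reviewed_venue": "journal or conference name here",
        |     }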
        
       | whatever1 wrote:
        | The number of content generators is now effectively infinite,
        | but the number of content reviewers is the same.
       | 
       | Sorry folks but we lost.
        
       | jsrozner wrote:
        | I had a convo with a senior CS prof at Stanford two years ago.
        | He was excited about LLM use in paper writing to, e.g., "lower
        | barriers" for, idk, "historically marginalized groups" and to
        | "help non-native English speakers produce coherent text". Etc.,
        | etc.: all the normal tech-folk gobbledygook, which tends to
        | forecast great advantage at minimal cost... and then turns out
        | to be wildly wrong.
       | 
       | There are far more ways to produce expensive noise with LLMs than
       | signal. Most non-psychopathic humans tend to want to produce
       | veridical statements. (Except salespeople, who have basically
       | undergone forced sociopathy training.) At the point where a human
       | has learned to produce coherent language, he's also learned lots
       | of important things about the world. At the point where a human
       | has learned academic jargon and mathematical nomenclature, she
       | has likely also learned a substantial amount of math. Few people
       | want to learn the syntax of a language with little underlying
       | understanding. Alas, this is not the case with statistical models
       | of papers!
        
       | pwlm wrote:
       | "review articles and position papers must now be accepted at a
       | journal or a conference and complete successful peer review."
       | 
       | How will journals or conferences handle AI slop?
        
       ___________________________________________________________________
       (page generated 2025-11-01 23:00 UTC)