Peer-reviewed papers are getting increasingly boring

The number of researchers and peer-reviewed publications is growing exponentially: it has been estimated that the number of researchers in the world doubles every 16 years, and the number of research outputs is growing faster still.
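As a quick back-of-the-envelope check, a doubling time of 16 years corresponds to an annual growth rate of about 4.4%. A minimal sketch of the arithmetic, in Python:

    # If the number of researchers doubles every 16 years, the implied
    # annual growth rate r satisfies (1 + r)**16 == 2.
    doubling_time_years = 16
    annual_growth = 2 ** (1 / doubling_time_years) - 1
    print(f"{annual_growth:.1%}")  # prints 4.4%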
If you accept that published research papers are an accurate measure of our scientific output, then we should be quite happy. However, Cowen and Southwood take an opposing point of view and present this growth as a rising cost without associated gains:

(...) scientific inputs are being produced at a high and increasing rate, (...) It is a mistake, however, to infer that increases in these inputs are necessarily good news for progress in science (...) higher inputs are not valuable per se, but instead they are a measure of cost, namely how much is being invested in scientific activity. The higher the inputs, or the steeper the advance in investment, presumably we might expect to see progress in science be all the more impressive. If not, then perhaps we should be worried all the more.

So are these research papers that we are producing in greater numbers... the kind of research papers that represent real progress?

Bhattacharya and Packalen conclude that though we produce more papers, science itself is stagnating because of worsening incentives that focus research on low-risk/no-reward ventures rather than genuine progress:

This emphasis on citations in the measurement of scientific productivity shifted scientist rewards and behavior on the margin toward incremental science and away from exploratory projects that are more likely to fail, but which are the fuel for future breakthroughs. As attention given to new ideas decreased, science stagnated.

Thurner et al. concur, finding that "out-of-the-box" papers are getting harder to find:

over the past decades the fraction of mainstream papers increases, the fraction of out-of-the-box decreases

Surely the scientists themselves have incentives to course correct and to produce more important and exciting research papers? Collison and Nielsen challenge scientists and institutions to tackle this perceived diminishing scientific productivity:

Most scientists strongly favor more research funding. They like to portray science in a positive light, emphasizing benefits and minimizing negatives. While understandable, the evidence is that science has slowed enormously per dollar or hour spent. That evidence demands a large-scale institutional response. It should be a major subject in public policy, and at grant agencies and universities. Better understanding the cause of this phenomenon is important, and identifying ways to reverse it is one of the greatest opportunities to improve our future.

If we believe that research papers are becoming worse, that fewer of them convey important information, then the rational approach is to downplay them. Whenever a scientist tells you how many papers they have published, where they were published, or how many citations they got, you should not mock them, but you ought to bring the conversation to another level. What is the scientist working on and why is it important work? Dig below the surface.

Importantly, this does not mean that we should discourage people from publishing many papers, any more than we generally discourage programmers from writing many lines of code. Everything else being equal, people who love what they are doing, and who are good at it, will do more of it. But nobody would mistake a prolific writer for a good one if they are not.

We need to challenge the conventional peer-reviewed research paper, by which I refer to a publication that was reviewed by two to five peers before getting published. It is a relatively recent innovation that may not always be for the best. People like Einstein did not go through this process, at least not in their early years. Research used to be more like "blogging": you would write up your ideas and share them, and people could read and criticize them. This communication process can take different forms; some researchers broadcast their research meetings online, for example.

The peer-reviewed research paper allows you to "measure" productivity: how many papers in top-tier venues did researcher X produce? That is why it grew so strong. There is nothing wrong with people seeking recognition. Incentives are good. But we should reward people for the content of their research, not for the shallow metadata we can derive from their resume. If you have not read and used someone's work, you have no business telling us whether they are good or bad.
The other, related problem is the incestuous relationship between researchers and assessment. Is the work on theory X important? "Let us ask people who work on theory X." No. You need customers, users, people who have incentives to provide honest assessments. A customer is someone who uses your research in an objective way. If you design a mathematical theory or a machine-learning algorithm and an investment banker relies on it, they are your customer (whether they are paying you or not). If it fails, they will stop using it.

It may seem like peer-reviewed research papers establish this kind of customer-vendor relationship where you get a frank assessment. Unfortunately, it fails as you scale it up. The true customers of a research paper are its independent readers, but readers have their own motivations. You cannot easily "fake" customers. We do so sometimes, with movie critics, say. But movie critics have an incentive to give you recommendations you can trust. We could try to emulate the movie-critic model in science. I could start reviewing papers on my blog. I would have every incentive to be a good critic because, otherwise, my reputation might suffer. But it is an expensive process. Being a movie critic is a full-time job. Being a research paper critic would also be a full-time job.

What about citations? Well, citations are often granted by your nearest peers. If they are doing work that resembles yours, they have no incentive to take it down.

In conclusion, I do find it credible that science might be facing a sort of systemic stagnation brought forth by a set of poorly aligned incentives. The peer-reviewed paper accepted at a good venue, used as the ultimate metric, seems to be at the core of the problem. Further, the whole web of assessment in modern science often seems broken. It seems that, on an individual basis, researchers ought to adopt the following principles:

1. Seek objective feedback regarding the quality of your own work using "customers": people who will tell you frankly if your work is not good. Do not mistake citations or "peer review" for such an assessment.
2. When assessing someone else's research, try your best to behave as a customer who has some distance from the work. Do not count inputs and outputs as a quality metric. Nobody would describe Stephen King as a great writer merely because he has published many books. If you are telling me that Mr. Smith is a great researcher, then you should be able to tell me about the research and why it is important.

Further reading: Does academic research cause economic growth?

Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ). Posted on January 1, 2021.

16 thoughts on "Peer-reviewed papers are getting increasingly boring"

Frank Astier says:
January 1, 2021 at 8:09 pm
We have a similar problem in the field of machine learning: since the field is really hot these days, many researchers want to gain visibility, and many papers are just one more applicative paper with a tweak on an already existing algorithm that claims, e.g., some minor improvement in accuracy.

Daniel Lemire says:
January 1, 2021 at 8:19 pm
... and, to be clear, there is nothing wrong with writing yet one more paper about a given tweak. However, if the system is such that it is more rewarding to write ten such papers than one more fundamental paper, then the incentives are wrong.
Thankfully, the incentives can be changed.

Frank Astier says:
January 1, 2021 at 9:08 pm
Agreed.

Maxime Chevalier-Boisvert says:
January 1, 2021 at 9:55 pm
I can tell you that, at least in compiler research, conferences keep raising the bar to get published. I like to point to the papers on Smalltalk and Self that introduced inline caching. These are seminal papers with hundreds of citations; every modern JIT compiler depends on this technology. However, these papers would never be accepted as-is in today's conferences. They compare their compiler against itself, and they use a small set of benchmarks they wrote themselves. The papers have limitations in their methodology, but that does not take away from the validity of their findings. To publish the same paper today would require 5 to 10x more work... So people just aim less high. Have a smaller, simpler, more incremental contribution (2 to 4 weeks of work?), spend the bulk of the work on the paper. What's the point of spending 1 to 2 years building something new when your submission might just get repeatedly shot down? Taking risks gets punished. IMO, we need to move away from a reject/accept model and towards a model where every paper gets a score, and everything gets published online (including what the reviewers wrote, which gets de-anonymized). Maybe not everyone gets to give a talk at the conference, but all work is made available to the public, with its score and reviews.

Daniel Lemire says:
January 1, 2021 at 9:59 pm
We need to be concerned about this problem:

So people just aim less high. Have a smaller, simpler, more incremental contribution (2 to 4 weeks of work?), spend the bulk of the work on the paper. What's the point of spending 1 to 2 years building something new when your submission might just get repeatedly shot down? Taking risks gets punished.
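For readers who have not encountered inline caching before, here is a minimal sketch of the idea behind the Smalltalk and Self papers mentioned above. This is a didactic illustration only, assuming a monomorphic (single-entry) cache; real JIT compilers patch the machine code at the call site rather than using an object like this:

    class InlineCache:
        """A call site that remembers the method found for the last receiver class."""

        def __init__(self, method_name):
            self.method_name = method_name
            self.cached_class = None   # class seen on the previous call
            self.cached_method = None  # method resolved for that class

        def call(self, receiver, *args):
            cls = type(receiver)
            if cls is not self.cached_class:
                # Cache miss: perform the slow dynamic lookup and remember it.
                self.cached_method = getattr(cls, self.method_name)
                self.cached_class = cls
            # Cache hit: dispatch directly, skipping the lookup.
            return self.cached_method(receiver, *args)

    class Dog:
        def speak(self):
            return "woof"

    class Cat:
        def speak(self):
            return "meow"

    site = InlineCache("speak")
    print(site.call(Dog()))  # miss: full lookup, cache now holds Dog.speak
    print(site.call(Dog()))  # hit: no lookup needed
    print(site.call(Cat()))  # miss: the call site sees a new class

On a hit, the cache turns a dynamic method lookup into a single class check, which is why the technique matters so much to JIT compilers.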
Shaw says:
January 1, 2021 at 11:06 pm
I totally understand and agree with the general thesis of this post, but as a non-academic living in an academic world (computer vision and machine learning), the network effect of this explosion of papers, while maybe not so great for academia, has thoroughly changed the landscape in terms of source code and implementation availability. So while all of this may be bad for academia, we're hitting an information golden age on GitHub. The one-upmanship and steady progress mean that, if I can't find a specific implementation or understanding, I can usually find some research built upon it that I can easily make useful.

Daniel Lemire says:
January 1, 2021 at 11:23 pm
the network effect of this explosion of papers (...) has thoroughly changed the landscape in terms of source code and implementation availability

Easier data and code sharing should certainly "accelerate" scientific progress. It is, after all, why the Web was built in the first place! (Not for YouTube!)

we're hitting an information golden age on GitHub

That might well be true, but does it contradict the thesis of the blog post?

Robert O'Callahan says:
January 2, 2021 at 12:50 am
Seems to me the biggest problem is something you addressed tangentially: organizations employing researchers need a way to measure their output, and "papers published in 'top' fora" and "citations" allow for quantitative measurement. Yes, those impact measures are far from ideal, but alternative models for producing and disseminating research need to support impact measurement at least as well as paper and citation counts, from the point of view of those organizations.

Daniel Lemire says:
January 2, 2021 at 12:54 am
To be clear, I am not advocating the end of "paper counting". I am merely saying that it needs to be seriously downgraded.

Akshat Mahajan says:
January 2, 2021 at 12:59 am
The solution for these problems already exists, in my opinion: registered reports. See https://www.cos.io/initiatives/registered-reports

The concept is simple: introduce peer review of the experiment or paper design before the experiment is ever carried out, and accept the publication independently of what the results of the experiment turn out to be.

Why is this better? It removes incentives for reporting only positive findings; it lets peers suggest directions that will make the paper worthwhile; and it eliminates the risk associated with doing risky science by ensuring publication is not contingent on the outcome. The Center for Open Science has other initiatives to try to fix science's problems. I strongly recommend looking them over; I was blown away by Brian Nosek's seminar on the subject at Brown in March of last year.

Daniel Lemire says:
January 2, 2021 at 1:00 am
This does sound like a great innovation.

Jouni says:
January 2, 2021 at 4:07 am
There is another possible explanation: fields that produced breakthroughs in the past may have run out of low-hanging fruit. They are now producing incremental improvements, because the remaining fundamental questions may not have conveniently human-sized answers. Or the answers may require breakthroughs in other fields.

I like working as a computer scientist in bioinformatics. I get to work on a variety of topics, from theoretical CS to developing software for life science researchers. There is a constant feeling of progress, as very few people have tried attacking the problems with a similar toolkit. Yet this kind of research does not fit in academia outside well-funded research institutes. CS departments, biology departments, and medical schools all have their own standards for evaluating people, and I'm somewhere in the middle. Silo mentality is a common problem in large organizations, and academia is clearly suffering from it.

Daniel Lemire says:
January 2, 2021 at 5:14 pm
There is another possible explanation: fields that produced breakthroughs in the past may have run out of low-hanging fruit. They are now producing incremental improvements, because the remaining fundamental questions may not have conveniently human-sized answers.

It is a common explanation for scientific stagnation. The problem is... it does not have a good track record. Whenever someone states that all the low-hanging fruit is gone, someone soon finds a new patch with plenty of new low-hanging fruit.
Florian Habermacher says:
January 2, 2021 at 3:56 pm
I wonder whether a stackoverflow-style forum for contributions (i.e., for what used to be papers) and commenting could work, one way or another. Or whether instead it is doomed to mostly fail, as fruitful discussion and commenting break down more easily, and argumentation becomes less clear-cut, on 'general' topics (say, e.g., social science) than in stackoverflow's computer code discussions.

One problem with peer review that lacks a ready commenting option for the general public is that only 2-3 persons can check for an (often difficult to spot) flaw; if they do not detect it, the flaw may remain unnoticed by (most of) the community for a long time, or forever. And with re-submission to multiple journals, eager authors can, theoretically at least, have multiple goes until they find a pair of reviewers who do not spot the weakness.

Daniel Lemire says:
January 2, 2021 at 5:18 pm
There are already research-level questions on "stackoverflow-style forums". Go to mathoverflow. Some of the questions there are at the level of research papers (at least short ones).

Ben says:
January 2, 2021 at 8:45 pm
My experience, fwiw: I got my PhD in 2010. Research has been a fairly small part of my career since then. I don't know if I had whatever it takes for a more research-focused career, but by the end of grad school I didn't want it. I found most of the papers published in top conferences in my areas to be some combination of deadly dull and incremental, or wildly impractical (obviously so to anyone who has spent time outside academia). I like the idea of trying harder to make the (potential) usefulness of research a bigger part of researcher evaluation. I have latterly been exposed to bits of the math research world, where my impression is that staggering quantities of high-caliber intellect are spent on obscure puzzles that have a snowball's chance in hell of ever having substantial impact. I would hate to see more of science follow math's path.