https://lemire.me/blog/2021/01/01/peer-reviewed-papers-are-getting-increasingly-boring/

Daniel Lemire's blog. Daniel Lemire is a computer science professor at the University of Quebec (TELUQ) in Montreal. His research is focused on software performance and data engineering.
Peer-reviewed papers are getting increasingly boring

The number of researchers and peer-reviewed publications is growing exponentially. It has been estimated that the number of researchers in the world doubles every 16 years, and the number of research outputs is increasing even faster. If you accept that published research papers are an accurate measure of our scientific output, then we should be quite happy. However, Cowen and Southwood take an opposing point of view and present this growth as a growing cost without associated gains:

(...) scientific inputs are being produced at a high and increasing rate, (...) It is a mistake, however, to infer that increases in these inputs are necessarily good news for progress in science (...) higher inputs are not valuable per se, but instead they are a measure of cost, namely how much is being invested in scientific activity.
The higher the inputs, or the steeper the advance in investment, presumably we might expect to see progress in science be all the more impressive. If not, then perhaps we should be worried all the more.

So are these research papers that we are producing in greater numbers... the kind of research papers that represent real progress? Bhattacharya and Packalen conclude that though we produce more papers, science itself is stagnating because of bad incentives, which focus research on low-risk/no-reward ventures as opposed to genuine progress:

This emphasis on citations in the measurement of scientific productivity shifted scientist rewards and behavior on the margin toward incremental science and away from exploratory projects that are more likely to fail, but which are the fuel for future breakthroughs. As attention given to new ideas decreased, science stagnated.

Thurner et al. concur, in the sense that they find that "out-of-the-box" papers are getting harder to find:

over the past decades the fraction of mainstream papers increases, the fraction of out-of-the-box decreases

Surely the scientists themselves have incentives to course-correct and to produce more important and exciting research papers? Collison and Nielsen challenge scientists and institutions to tackle this perceived diminishing scientific productivity:

Most scientists strongly favor more research funding. They like to portray science in a positive light, emphasizing benefits and minimizing negatives. While understandable, the evidence is that science has slowed enormously per dollar or hour spent. That evidence demands a large-scale institutional response. It should be a major subject in public policy, and at grant agencies and universities. Better understanding the cause of this phenomenon is important, and identifying ways to reverse it is one of the greatest opportunities to improve our future.
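As an aside, the doubling-time figure quoted earlier pins down a specific annual growth rate. A minimal sketch of the arithmetic (the 16-year doubling time comes from the estimate above; everything else is standard exponential-growth algebra):

```python
# If a population doubles every 16 years, its annual growth factor is
# 2**(1/16), so the implied annual growth rate is 2**(1/16) - 1,
# roughly 4.4% per year.
doubling_years = 16
annual_rate = 2 ** (1 / doubling_years) - 1
print(f"implied annual growth: {annual_rate:.1%}")  # implied annual growth: 4.4%
```

At that rate, the researcher population grows by more than half every decade, which is why even a constant per-researcher output would produce a steep rise in the number of papers.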
If we believe that research papers are becoming worse, that fewer of them convey important information, then the rational approach is to downplay them. Whenever a scientist tells you how many papers they have published, where they were published, or how many citations they got... you should not mock the scientist in question, but you ought to bring the conversation to another level. What is the scientist working on and why is it important work? Dig below the surface.

Importantly, this does not mean that we should discourage people from publishing many papers, any more than we generally discourage programmers from writing many lines of code. Everything else being equal, people who love what they are doing, and who are good at it, will do more of it. But nobody would mistake someone who merely writes a lot for a good writer.

We need to challenge the conventional peer-reviewed research paper, by which I refer to a publication reviewed by 2 to 5 peers before getting published. It is a relatively recent innovation that may not always be for the best. People like Einstein did not go through this process, at least not in their early years. Research used to be more like "blogging": you would write up your ideas and share them; people could read them and criticize them. Peer-reviewed research papers allow you to "measure" productivity: how many papers in top-tier venues did researcher X produce? And that is why the model grew so strong. There is nothing wrong with people seeking recognition. Incentives are good. But we should reward people for the content of their research, not for the shallow metadata we can derive from their resume. If you have not read and used someone's work, you have no business telling us whether they are good or bad.

The other related problem is the incestuous relationship between researchers and assessment. Is the work on theory X important? "Let us ask people who work on theory X." No.
You have to have customers: users, people who have incentives to provide honest assessments. A customer is someone who uses your research in an objective way. If you design a mathematical theory or a machine-learning algorithm and an investment banker relies on it, they are your customer (whether they are paying you or not). If it fails, they will stop using it.

It might seem that peer-reviewed research papers establish this kind of customer-vendor relationship, where you get a frank assessment. Unfortunately, it fails as you scale it up. The true customers of a research paper are its independent readers, readers who have their own motivations. You cannot easily "fake" customers. We do so sometimes, with movie critics, say. But movie critics have an incentive to give you recommendations you can trust. We could try to emulate the movie-critic model in science. I could start reviewing papers on my blog, and I would have every incentive to be a good critic because, otherwise, my reputation might suffer. But it is an expensive process: being a movie critic is a full-time job, and being a research-paper critic would also be a full-time job.

What about citations? Well, citations are often granted by your nearest peers. If they are doing work that resembles yours, they have no incentive to take it down.

In conclusion, I do find it credible that science might be facing a sort of systemic stagnation brought forth by a set of poorly aligned incentives. The peer-reviewed paper accepted at a good venue as the ultimate metric seems to be at the core of the problem. Further, the whole web of assessment in modern science often seems broken. It seems that, on an individual basis, researchers ought to adopt the following principles:

1. Seek objective feedback regarding the quality of your own work from "customers": people who would tell you frankly if your work was not good. Do not mistake citations or "peer review" for such an assessment.
2.
When assessing another researcher's work, try your best to behave as a customer who has some distance from the research. Do not count inputs and outputs as a quality metric. Nobody would describe Stephen King as a great writer merely because he has published many books. If you are telling me that Mr. Smith is a great researcher, then you should be able to tell me about the research and why it is important.

Published by Daniel Lemire, a computer science professor at the University of Quebec (TELUQ). Posted on January 1, 2021.

5 thoughts on "Peer-reviewed papers are getting increasingly boring"

1. Frank Astier says (January 1, 2021 at 8:09 pm):

We have a similar problem in the field of machine learning: since the field is really hot these days, many researchers want to gain visibility, and many papers are just one more applicative paper with a tweak on an already existing algorithm that claims, e.g., some minor improvement in accuracy.

2. Daniel Lemire says (January 1, 2021 at 8:19 pm):

... and, to be clear, there is nothing wrong with writing yet one more paper about a given tweak. However, if the system is such that it is more rewarding to write ten such papers than one more fundamental paper, then the incentives are wrong. Thankfully, the incentives can be changed.

3. Frank Astier says (January 1, 2021 at 9:08 pm):

Agreed.

4. Maxime Chevalier-Boisvert says (January 1, 2021 at 9:55 pm):

I can tell you that at least in compiler research, conferences keep raising the bar to get published. I like to point to the papers on Smalltalk and Self that introduced inline caching. These are seminal papers with hundreds of citations; every modern JIT compiler depends on this technology. However, these papers would never be accepted as-is in today's conferences. They compare their compiler against itself, and they use a small set of benchmarks they wrote themselves.
The paper has limitations in its methodology, but that doesn't take away from the validity of its findings. To publish the same paper today would require 5 to 10x more work... So people just aim less high: have a smaller, simpler, more incremental contribution (2 to 4 weeks of work?), and spend the bulk of the effort on the paper itself. What's the point of spending 1 to 2 years building something new when your submission might just get repeatedly shot down? Taking risks gets punished. IMO, we need to move away from a reject/accept model and towards a model where every paper gets a score, and everything gets published online (including what the reviewers wrote, which gets de-anonymized). Maybe not everyone gets to give a talk at the conference, but all work is made available to the public, with its score and reviews.

5. Daniel Lemire says (January 1, 2021 at 9:59 pm):

We need to be concerned about this problem: "So people just aim less high. Have a smaller, simpler, more incremental contribution (2 to 4 weeks of work?), spend the bulk of the work on the paper. What's the point of spending 1 to 2 years building something new when your submission might just get repeatedly shot down? Taking risks gets punished."