[HN Gopher] Researchers value null results, but struggle to publ...
___________________________________________________________________
Researchers value null results, but struggle to publish them
Author : Bluestein
Score : 62 points
Date : 2025-07-23 09:16 UTC (2 days ago)
(HTM) web link (www.nature.com)
(TXT) w3m dump (www.nature.com)
| BrenBarn wrote:
| They avoid mentioning the elephant in the room: jobs and tenure.
| When you can get hired for a tenure-track job based on your null-
| result publications, and can get tenure for your null-result
| publications, then people will publish null results. Until then,
| they won't hit the mainstream.
| antithesizer wrote:
| It's fascinating how utterly dominated science is by economics.
| Even truth itself needs an angle.
| youainti wrote:
| Imagine being an economist... You can't get away from it.
| Bluestein wrote:
| > Even truth itself needs an angle.
|
| "When even truth itself needs an angle ...
|
| ... every lie looks like a viable alternative".-
| pixl97 wrote:
| I mean, science has always been this way, in one form or
| another. All the 'scientists' of olde were either wealthy or
| given some kind of grant by those who were. Science itself
| won't be exempt from the freeloader problem either.
|
| Not that I'm saying all science has to have economic purposes.
| autoexec wrote:
| The influence of money really does hold back scientific
| progress and is often specifically used to prevent some
| truths from being known or to reduce the confidence we have
| in those truths.
|
| Obviously it takes money to do pretty much anything in our
| society, but it does seem like it has far more influence than
| is necessary. Greed seems to corrupt everything, and even
| though we can identify areas where things could be improved,
| nobody seems to be willing or able to change course.
| MITSardine wrote:
| If people are so interested, they'd presumably read and cite
| null-result publications, and their authors would get the same
| boons as if they had published a positive result.
|
| There are some issues, though. Firstly, how do you enforce
| citing negative results? In the case of positive results,
| reviewers can ask that prior work be cited if it already
| introduced things present in the article. This is because a
| publication is a claim to originality.
|
| But how do you define originality in not following a given
| approach? Anyone can _not_ have the idea of doing something.
| You can't well cite all the paths not followed in your work,
| considering you might not even be aware of a negative result
| publication regarding these ideas you discarded or didn't have.
| Bibliography is time consuming enough as it is, without having
| to also cite all things irrelevant.
|
| Another issue is that the effort to write an article and get it
| published and, on the other side, to review it, makes it hard
| to justify publishing negative results. I'd say an issue is
| rather that many positive results are already not getting
| published... There's a lot of informal knowledge, as people
| don't have time to write 100 page papers with all the tricks
| and details regularly, nor reviewers to read them.
|
| Also, I could see a larger acceptance of negative result
| publications bringing perverse incentives. Currently, you have
| to get somewhere _eventually_. If negative results become
| legitimate publications, what would e.g. PhD theses become? Oh,
| we tried to reinvent everything but nothing worked, here's 200
| pages of negative results no-one would have reasonably tried
| anyways. While the current state of affairs favours incremental
| research, I think that is still better than no serious research
| at all.
| throwawaymaths wrote:
| This. And the incentives can be even more perverse: If you find
| a null result you might not want to let your competitors know,
| because they'll get stuck in the same sand trap.
| kurthr wrote:
| There are a few different realities here. First, it's not
| really whether you can get tenure with the publications,
| because almost none of the major respected journals accept
| simple null/negative results for publication. It's too
| "boring". Now, they do occasionally publish "surprising"
| null/negative results, but that's usually due to rivalry or
| scandal.
|
| The counter-example, to some extent, is medical/drug control
| trials, but those are pharma-driven and government-published,
| though an academic could be on the paper, and it might find
| its way onto a tenure review.
|
| Second, in the beginning there is funding. If you don't have a
| grant for it, you don't do the research. Most grants are for
| "discoveries" and those only come about from "positive"
| scientific results. So the first path to this is to pay people
| to run the experiments (that nobody wants to see "fail"). Then,
| you have to trust that the people running them don't screw up
| the actual experiment, because there are an almost infinite
| number of ways to do things wrong, and only experts can even
| make things work at all for difficult modern science. Then you
| hope that the statistics are done well and not skewed, and hope
| a major journal publishes.
|
| Third, could a Journal of Negative Results that only published
| well run experiments, by respected experts, with good
| statistics and minimal bias be profitable? I don't know, a few
| exist, but I think it would take government or charity to get
| it off the ground, and a few big names to get people reading it
| for prestige. Otherwise, we're just talking about something on
| par with arXiv.org. It can't just be a journal that publishes
| every negative result, or reviewers would somehow have to be
| experts in everything, since properly reviewing negative
| results from many fields is a HUGE challenge.
|
| My experience writing, and getting grants/research funded, is
| that there's a lot of bootstrapping where you use some initial
| funding to do research on some interesting topics and get some
| initial results, before you then propose to "do" that research
| (which you have high confidence will succeed) so that you can
| get funding to finish the next phase of research (and confirm
| the original work) to get the next grant. It's a cycle, and you
| don't dare break it, because if you "fail" to get "good"
| results from your research, and you don't get published, then
| your proposals for the next set of grants will be viewed very
| negatively!
| j7ake wrote:
| There is zero incentive for the researcher personally to publish
| null results.
|
| Null results are the foundations on which "glossy" results are
| produced. Researchers would be wasting time giving away their
| competitive advantage by publishing null results.
| zeroCalories wrote:
| What do you mean glossy results? Wouldn't it be to your
| advantage to take down another researcher? Or do you mean they
| use null results to construct a better theory for more credit?
| j7ake wrote:
| Glossy results are memeable stories that editors of journals
| would like to have in their next edition.
|
| There is very little incentive to publicly criticise a paper;
| there is incentive to show why others are wrong and why your
| new, technically superior method solves the problem and finds
| new insights for the field.
| bkanuka wrote:
| I studied physics in university, and found it challenging to find
| null-result publications to cite, which can be useful when
| proposing a new experiment or as background info for a non-null
| paper.
|
| I promised myself if I became ultra-wealthy I would start a
| "Journal of Null Science" to collect these publications. (this
| journal still doesn't exist)
| esafak wrote:
| https://journal.trialanderror.org/
|
| http://arjournals.com/
| jonah-archive wrote:
| https://www.jasnh.com , introduced in 2002:
| https://web.archive.org/web/20020601214717/https://www.apa.o...
| Bluestein wrote:
| This is really really so necessary ...
|
| If we're _really_ pro-science, some non-profit should fund
| this sort of research.-
|
| PS. Heck, if nothing else, it'd give synthetic intellection
| systems somewhere to _not go_ with their research, and their
| agency and such ...
| timr wrote:
| Before tackling that, a non-profit should fund well-
| designed randomized controlled trials in areas where none
| exist. Which is most of them. Commit to funding _and
| publishing_ the trial, regardless of outcome, once a cross-
| disciplinary panel of disinterested experts on trial
| statistics approves the pre-registered methodology. If there
| are too many qualified studies to fund, choose randomly.
|
| This alone would probably kill off a lot of fraudulent
| science in areas like nutrition and psychology. It's what
| the government _should_ be doing with NIH and NSF funding,
| but is not.
|
| If you manage to get a good RCT through execution &
| publication, that should make your career, regardless of
| outcome.
| Bluestein wrote:
| > should fund well-designed randomized controlled trials
| in areas where none exist.
|
| Indeed. _That_ is the "baseline"-setting science, you
| are much correct.-
| fleischhauf wrote:
| It could just be online for a start; then it's just the time
| and organization that you'd need. Sounds like a fun project,
| to be honest.
| andix wrote:
| Journals could fix that. They could create a "null results"
| category and dedicate a fixed share of pages to it (say 20%).
| Researchers want to be in journals; if this category doesn't
| have a lot of submissions, it would be much easier to get
| published.
| Bluestein wrote:
| This is a great idea.-
|
| Heck, "we tried, and did not get there _but_ ... " should be a
| category unto itself.-
| vouaobrasil wrote:
| Journals are mainly interested in profit, not fixing anything.
| pacbard wrote:
| Not all null results are created equal.
|
| There are interesting null results that get published and are
| well known. For example, Card & Krueger (1994) was a null-result
| paper showing that increasing the minimum wage had a null effect
| on employment rates. This went against the then-common
| assumption that increasing wages decreases employment.
|
| Other null results are either dirty (e.g., big standard errors)
| or due to process problems (e.g., experimental failure). These
| are more difficult to publish because it's difficult to learn
| anything new from these results.
|
| The challenge is that researchers do not know if they are going
| to get a "good" null or a "bad" one. Most of the time, you have
| to invest significant effort and time into a project, only to get
| a null result at the end. These results are difficult to publish
| in most cases; they can end a career if someone is pre-tenure,
| and cause funding problems for anyone.
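| (A minimal sketch in Python, with made-up numbers, of the
| distinction above: an informative "clean" null has a tight
| confidence interval around zero, ruling out meaningful effects,
| while a "dirty" null with big standard errors is consistent
| with almost any effect size, so little can be learned from it.)

```python
def ci95(estimate, se):
    """95% confidence interval for a normally distributed estimate."""
    half = 1.96 * se
    return (estimate - half, estimate + half)

# "Clean" null: small standard error, tight interval around zero.
# This rules out any practically meaningful effect.
clean = ci95(0.01, 0.02)

# "Dirty" null: same near-zero point estimate, but the huge standard
# error means the interval covers large effects in both directions,
# so the study tells us almost nothing.
dirty = ci95(0.01, 0.50)

print("clean null 95% CI:", clean)
print("dirty null 95% CI:", dirty)
```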
| kazinator wrote:
| What matters for publication is a surprising result, not
| whether it confirms the main hypothesis or the null one.
|
| The "psychological null hypothesis" is that which follows the
| common assumption, whether that assumption states that there is
| a relationship between the variables or that there is not.
| MITSardine wrote:
| Is that actually a null result though? That sounds like a
| standard positive result: "We managed to show that minimum wage
| has no effect on employment rate".
|
| A null result would have been: "We tried to apply Famous Theory
| to showing that minimum wage has no effect on employment rate
| but we failed because this and that".
| Analemma_ wrote:
| No, because in theory a minimum wage increase could
| _decrease_ the unemployment rate. If it does neither, that's
| a null result.
| simpaticoder wrote:
| You could publish them as a listicle: "10 falsehoods organic
| chemists believe!" Behind almost every null result was a
| hypothesis that sounded like it was probably true. Most likely
| it would sound probably true to most people in the field, so
| publishing the result is of real value to others.
|
| The problem is that null results are cheap and easy to "find"
| for things no-one thinks sound plausible, and are therefore a
| trivial way to game the publish-or-perish system. I suspect
| this alone explains the bias against publishing null results.
___________________________________________________________________
(page generated 2025-07-25 23:01 UTC)