[HN Gopher] You got a null result. Will anyone publish it?
       ___________________________________________________________________
        
       You got a null result. Will anyone publish it?
        
       Author : sohkamyung
       Score  : 202 points
       Date   : 2024-07-24 12:35 UTC (10 hours ago)
        
 (HTM) web link (www.nature.com)
 (TXT) w3m dump (www.nature.com)
        
       | transcriptase wrote:
       | The irony of this appearing on the Nature site... when their own
       | editors routinely reject even remarkable results that go on to
       | become highly cited seminal papers in "lesser" journals.
        
       | xpe wrote:
       | > found that 75% were willing to publish null results they had
       | produced, but only 12.5% were able to do so
       | 
       | What are the corresponding statistics for researchers that find
       | positive results? Closer to 100% are willing to publish? And how
       | many succeed?
        
       | glial wrote:
       | The article hints at this, but not publishing null results (at
       | least in a database - somewhere!) goes hand-in-hand with the
       | replication crisis. An experimental outcome is always a single
       | sample from a _distribution_ of outcomes that you would obtain if
       | you repeated the experiment many times.
       | 
        | Choosing to publish only the most extreme positive values means
        | that when the experiment is replicated, "regression to the mean"
        | makes it very likely that the measured effect will be weaker, and
        | possibly not statistically significant (a short simulation of
        | this selection effect is sketched below). This is not evidence of
        | scientific fraud -- rather, it is a predictable outcome of a
        | publishing incentive scheme that rewards hype and novelty over
        | robust science.
       | 
       | I've said it before but it bears repeating - replicating
       | published results, and adding the findings to a database, should
       | be a standard part of PhD training programs.
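        | 
        | A minimal sketch of that selection effect, assuming the true
        | effect is zero in every experiment and that only p < 0.05 runs
        | get written up (the sample sizes and thresholds here are
        | illustrative, not taken from the article):
        | 
        |     import numpy as np
        |     from scipy import stats
        | 
        |     rng = np.random.default_rng(0)
        |     n, trials = 25, 10_000
        |     published, replicated = [], []
        | 
        |     for _ in range(trials):
        |         a = rng.normal(size=n)  # group A, true effect 0
        |         b = rng.normal(size=n)  # group B, true effect 0
        |         _, p = stats.ttest_ind(a, b)
        |         if p < 0.05:  # only "significant" runs get written up
        |             published.append(abs(b.mean() - a.mean()))
        |             a2 = rng.normal(size=n)  # fresh replication
        |             b2 = rng.normal(size=n)
        |             replicated.append(abs(b2.mean() - a2.mean()))
        | 
        |     print("mean published effect: ", np.mean(published))
        |     print("mean replicated effect:", np.mean(replicated))
        | 
        | The "published" subset shows a sizeable average effect even
        | though nothing real is there, and independent replications of
        | those same findings regress back toward zero -- no fraud
        | required anywhere.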
        
         | setopt wrote:
         | Relevant XKCD: https://xkcd.com/882/
         | 
          | Do 20 experiments with a p<5% criterion, and it's likely that
          | at least one will be a false positive (if every null is true,
          | the chance is 1 - 0.95^20, roughly 64% -- see the sketch
          | below). Only publish positive results, and someone will
          | eventually publish a false positive result without fraud.
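          | 
          | A quick back-of-the-envelope check of that number, assuming 20
          | independent tests with a true null in every case (so each
          | p-value is uniform on [0, 1]):
          | 
          |     import numpy as np
          | 
          |     rng = np.random.default_rng(1)
          |     # 100k batches of 20 experiments, all under the null
          |     p = rng.uniform(size=(100_000, 20))
          |     print((p < 0.05).any(axis=1).mean())  # ~0.64
          |     print(1 - 0.95**20)                   # 0.6415...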
        
           | etrautmann wrote:
            | Very few studies report a single statistical test as the sole
            | conclusion. Most papers should assess an outcome in multiple
            | ways, using complementary data, multiple analyses, etc. Not
            | always, of course, but there are lots of ways of making sure
            | your conclusions are robust without relying on a single
            | analysis result.
        
             | kerkeslager wrote:
             | That doesn't fix the problem at all. No matter how many
             | statistical tests you run on a sample, you can't get around
             | the fact that the sample may not be representative of the
             | population or the underlying phenomenon.
             | 
             | You need different samples. There isn't a statistical trick
             | that gets around this.
             | 
             | For example: let's say there's a cancer with 20% survival
             | rate. You test a treatment with 25 experimental and 25
             | control patients, 40% in the experimental group survive[1].
             | 
              | You can analyze this with a bunch of statistical methods.
              | You can ask different questions about the patients,
              | focusing on well-being rather than simple cancer
              | remission. But ultimately, the thing that happened in this
              | study is that 40% got better, and no fiddling with numbers
              | or changing the questions you ask is going to change that
              | underlying phenomenon. You can check for blood markers of
              | cancer: you get 40% with no blood markers. You ask them
              | questions about how they feel: you get 40% feeling better.
              | You scan the area where the tumors were: you get 40% who
              | no longer have tumors.
              | 
              |  _You have only tested one phenomenon in one sample_ , and
              | that essentially amounts to 5 more people getting better
              | than the 20% baseline would predict (10 of 25 instead of
              | the expected 5; a small numerical sketch follows below).
             | 
             | [1] I know this is not how cancer treatment studies work
             | exactly, this is a simplified hypothetical.
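              | 
              | A tiny sketch of that point (the 10-vs-5 split and the
              | endpoint names are just this hypothetical, not real data):
              | however you relabel the endpoint, you are re-testing the
              | same 25-patients-per-arm sample, so the numbers cannot
              | move.
              | 
              |     from scipy.stats import fisher_exact
              | 
              |     # same 50 patients, same 10-vs-5 split
              |     treated_ok, control_ok, n = 10, 5, 25
              | 
              |     endpoints = ["survived", "tumor gone",
              |                  "blood markers clear"]
              |     for endpoint in endpoints:
              |         # each endpoint re-measures the same
              |         # people, so the table never changes
              |         table = [[treated_ok, n - treated_ok],
              |                  [control_ok, n - control_ok]]
              |         _, p = fisher_exact(table,
              |                             alternative="greater")
              |         print(endpoint, "p =", round(p, 3))
              | 
              | Every endpoint prints the same p-value, because it is the
              | same table.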
        
           | nathell wrote:
           | > Do 20 experiments with a p<5% criterion, and it's likely
           | that one will be a false positive.
           | 
            | That would be true if p were the probability of the null
            | hypothesis being true given the data observed, but that's not
           | what it is.
        
             | lucianbr wrote:
             | I really have no clue what p is, but I also really believe
             | Randall Munroe does. Not that he's above making mistakes,
             | but come on.
        
               | mtts wrote:
                | p is the chance that you would have gotten a result at
                | least as extreme as this one if the null hypothesis were
                | true.
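                | 
                | A concrete instance of that definition, with a coin-flip
                | null (the 60-heads-in-100-flips numbers are just an
                | illustration):
                | 
                |     from scipy.stats import binom
                | 
                |     # Observed: 60 heads in 100 flips.
                |     # Null: the coin is fair (p = 0.5).
                |     # One-sided p-value: the chance of a
                |     # result at least this extreme under
                |     # the null hypothesis.
                |     print(binom.sf(59, 100, 0.5))  # ~0.028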
        
             | stonemetal12 wrote:
             | >For example, if 1,000,000 tests are carried out, then 5%
             | of them (that is, 50,000 tests) are expected to lead to p <
             | 0.05 by chance when the null hypothesis is actually true
             | for all these tests.
             | 
             | https://eurradiolexp.springeropen.com/articles/10.1186/s417
             | 4....
        
             | kerkeslager wrote:
             | Even if you are right, how would this response be helpful?
             | You're not giving the right answer, you're just saying the
             | answer given is wrong. Nobody is coming out of this
             | interaction with corrected knowledge in their head.
             | 
              | This is the sort of thing I used to do when I was younger,
              | and looking back, I did it because I was basing my sense of
              | self-worth on being smarter than other people. Ironically,
              | this made me dumber, because I was less open to the
              | possibility of being wrong and therefore was slower to
              | learn. And I found doing this made people dislike me.
        
           | humansareok1 wrote:
           | >someone will eventually publish a false positive result
           | without fraud.
           | 
           | I think most people correctly intuit that this is actually a
           | type of very pernicious fraud.
        
             | whack wrote:
             | If 20 different people all conduct the same experiment, and
             | the 19 negative results are never published, there is no
             | fraud involved when the 20th person publishes his positive
             | result without realizing it is a 5% statistical anomaly.
             | That person probably has no idea that 19 other people tried
             | and failed at near-identical experiments.
             | 
             | This seems like such an obvious problem in the way science
             | is currently done. Are people so focused on their own
             | individual fields that they aren't thinking about and
             | fixing such glaring meta problems?
        
               | prerok wrote:
               | As others already pointed out, there is no incentive to
               | do so.
               | 
                | Consider: "Hey, look, I went to the top of the Tower of
                | Pisa and threw down two identically shaped balls, one
                | iron and one wooden. They dropped at the same time!"
                | 
                | That is the expected result, and it would only be
                | interesting if the result differed from expectation. Now,
                | if 1000 scientists did this and each published a
                | confirmation of what we already knew would happen, who
                | would read that? But if one scientist said "I tried it
                | and they dropped at different times!", that would be
                | different. The other 999 scientists would then try to
                | replicate it, and their papers would become interesting
                | again.
        
               | humansareok1 wrote:
               | This is completely different from one lab running the
               | same experiment 20 times and publishing the one positive
               | result.
        
           | some_random wrote:
           | It's a great xkcd but it's really wrong on one count, it's
           | not some outside media/popular force compelling scientists to
           | investigate something. Most of the time, researchers are
           | looking to prove something they "know" to be true. They truly
           | believe that jelly beans cause acne, they just need to prove
           | it. When they get a negative result, they simply don't
           | believe it. Something must have gone wrong, obviously,
           | because jelly beans obviously cause acne, so maybe it's a
           | color thing? Ah hah it's the green ones, now that we have our
           | results we can construct other metrics to support this
           | correct data!
           | 
           | Eventually if we (the public) are lucky someone else in the
           | field will disagree and run the trial again, which is how you
           | get the alt text.
        
             | aeternum wrote:
              | Yes, when reading a paper I ask myself "Do the authors
              | really want this to be true?", and if the answer is yes, as
              | it often is, I tighten my own acceptance threshold to
              | p<.03.
             | 
             | Particle physics still uses five sigma as the significance
             | threshold.
        
         | mistermann wrote:
          | > This is not evidence of scientific fraud...
         | 
          | In its scriptures/philosophy, science describes extremely
          | thorough and sound principles and guidelines...but in on-the-
          | ground practice (by scientists, _who are a part of
          | "science"_), they are often not achieved[1]. However, this
          | distinction is not only not advertised broadly and without
          | aversion, it is usually (in my experience) not mentioned at
          | all, if not outright denied with _persuasive_ rhetorical
          | language (for example, when an object-level instance of
          | falling short is pointed to in the wild, such as in forum
          | conversations). This may not be _fraud_ (that requires intent,
          | I think?), but it achieves the same end: misinforming people.
         | 
         | I absolutely agree with your database idea, and if science
         | would like me to take them seriously (something near how
         | seriously they take themselves) they'd also have to go much
         | further.
         | 
         | [1] Not unlike in religion, a competing metaphysical framework
         | (model of reality) to science.
        
           | 3np wrote:
           | > Not unlike in religion, a competing metaphysical framework
           | (model of reality) to science.
           | 
           | No. Correlation fallacy.
        
             | mistermann wrote:
             | Fallacy fallacy.
             | 
             | Naive Realism fallacy.
        
           | kerkeslager wrote:
           | This sort of comment is why I think a lot of philosophy is
           | just communicating poorly to make yourself sound smart.
           | 
           | In your footnote, for example, you translated your
           | philosophy-speak into English (metaphysical framework ->
           | model of reality). Why not just say that? Your entire comment
           | goes into "philosophy mode" and communicates a few very
           | simple ideas in overcomplicated language.
           | 
           | Science and religion are pretty poorly understood as
           | competing models of reality. Religion originates when people
           | make up answers to other people's questions to gain social
           | standing, and religion continues due to (among other things)
           | anchoring bias--the bias people have toward continuing to
           | believe what they already believe. While religion does result
           | in those people having a model of reality, there is no
           | attempt being made at any point to relate the model to
           | reality. When religious people and scientists disagree, it's
           | not because the religious person is trying to model reality
           | differently--the religious person isn't even trying to model
           | reality--it's because the religious person is biased in favor
           | of their existing belief.
           | 
           | You said:
           | 
           | > In its scriptures/philosophy, science describes extremely
           | thorough and sound principles and guidelines...but in on the
           | ground practice (by scientists, which are a part of
           | "science"), they are often not achieved[1].
           | 
           | This is presented as some sort of gotcha, but it's not: few
           | scientists will claim that science is being practiced
           | perfectly or even well. Outside of a few areas such as
           | particle physics, we're quite aware that our ability to
           | practice scientific ideals is hampered by funding,
           | publication incentives, availability of test subjects in
           | human studies, data privacy, etc. And we're aware that this
           | means that our conclusions need to be understood as
           | probabilities rather than 100%-confidence facts.
           | 
           | There are certainly some people who treat scientific
           | conclusions with religious absolute confidence, but doing
           | that is fundamentally against scientific principles. The
           | accusation you are leveling against _science_ would be better
           | targeted toward _people_ : generally science journalists and
           | the science-illiterate public rather than scientists
           | themselves. The entire reproducibility crisis is scientists
           | _using science_ to show that our practice of science is too
           | imperfect to result in high-confidence conclusions.
           | 
           | Religious people jumping on the replication crisis because
           | they think it disproves science is rich. The replication
           | crisis isn't a disproof of science, it's an application of
           | science. The reason we know that there's a replication crisis
           | is because scientists asked "How confident can we be in the
           | conclusions of existing studies?" and applied science to
           | answer that question. If you really think science is invalid,
           | then you can't use science to prove that.
           | 
           | And the fact remains that _any confidence in conclusions at
           | all_ is more than religion has to offer, because again,
            | religion isn't trying to model reality--the fact that
           | religion produces a model of reality is merely an unfortunate
           | side-effect.
        
             | mistermann wrote:
             | > While religion does result in those people having a model
             | of reality, there is no attempt being made at any point to
             | relate the model to reality. When religious people and
             | scientists disagree, it's not because the religious person
             | is trying to model reality differently--the religious
             | person isn't even trying to model reality--it's because the
             | religious person is biased in favor of their existing
             | belief.
             | 
             | A problem: you're talking to one right now, and you (your
             | mind's model of reality, technically - you do not have
             | access to the state of the things you claim to) _could
             | hardly be more wrong_.
             | 
             | From large quantities of experience, I am confident I would
             | have no success tackling your disagreements on a _careful_
             | , strict, point by point basis. Instead, I will simply
             | present two links (I have many others, but let's see what
             | happens with these) and ask: do you believe these have some
             | substantial relevance here, related to the truth value (
             | _and appearance of_ ) of our respective claims?
             | 
             | https://en.wikipedia.org/wiki/Theory_of_mind
             | 
             | https://en.wikipedia.org/wiki/Direct_and_indirect_realism
             | 
             | I can appreciate that this approach may seem unworthy of
             | anything more than a rhetorical response that dodges the
             | question (and the importance of the phenomena these links
             | discuss), so hopefully you can take the challenge
             | seriously. I am more than happy to offer a more substantive
             | reply later, but if you declare victory by fiat[1] it's a
             | bit tough to have a serious conversation.
             | 
             | [1] Roughly: declaring that one's opinion of _the
             | unknowable_ is necessarily correct, and that it is(!)
             | contrary to my (actual) stance.
        
               | kerkeslager wrote:
               | > A problem: you're talking to one right now, and you
               | (your mind's model of reality, technically - you do not
               | have access to the state of the things you claim to)
               | could hardly be more wrong.
               | 
               | A problem: you're talking to someone who used to be
               | religious, so I have as much access to the internal
               | thinking of a religious person as you do.
               | 
               | A second problem: people's self-perceptions of their own
               | internal processes are quite often measurably wrong.
               | 
               | > From large quantities of experience, I am confident I
               | would have no success tackling your disagreements on a
               | careful, strict, point by point basis. Instead, I will
               | simply present two links (I have many others, but let's
               | see what happens with these) and ask: do you believe
               | these have some substantial relevance here, related to
               | the truth value (and appearance of) of our respective
               | claims?
               | 
               | > https://en.wikipedia.org/wiki/Theory_of_mind
               | 
               | >
               | https://en.wikipedia.org/wiki/Direct_and_indirect_realism
               | 
               | Short answer: not in any interesting way.
               | 
               | Long answer:
               | 
                | From large quantities of experience, I would guess that
                | you're about to make a special pleading argument based on
                | convenient beliefs that you yourself don't believe in any
                | other context, as evidenced by the fact that you don't
                | practice them.
               | 
               | Those pages, particularly the latter, are another example
               | of poor communication being presented as intelligence. If
               | we translate to English instead of philosophy-speak, it
               | boils down to an argument about whether perception is
               | reality or not.
               | 
               | Let's cut to the chase with a relevant parable:
               | 
               | The Buddha and his disciple were walking down the road.
               | Suddenly, the disciple drew his sword and cut the Buddha
               | in half at the waist. The Buddha turned to his disciple
               | and said, "Now you're beginning to understand!"
               | 
               | Would you be willing to reproduce this parable
               | experimentally with you as the Buddha? After all,
               | perception is reality, so if you're the Buddha and you
               | perceive being cut in half with a sword as no big deal,
               | that will be just fine, right?
               | 
               | The thing is, science is perfectly capable of answering
               | this question--it's not unknowable. The experiment of
               | cutting someone in half with a sword has sadly already
               | been performed too many times in history: we don't need
               | to perform it again. The scientific answer, which we
               | already have, is that no amount of changing our
               | perception prevents the person cut in half with a sword
               | from dying in horrible agony. And when you're not
               | speaking philosophese, you already believe the scientific
               | answer just like every philosopher who believes
               | perception is reality until faced with the prospect of
               | being cut in half with a sword. So if you're about to
               | make an argument about direct and indirect realism, I'd
               | have to ask, why do you believe that _reality_ is reality
               | when it comes to swords (and everything else in your day-
                | to-day life), but suddenly you want me to believe
               | that perception is reality when it comes to your
               | invisible friend?
               | 
               | My only opinion of the unknowable relevant to this
               | conversation is that by definition, neither of us knows
               | it.
               | 
               | More parts of philosophy I think we can discard without
               | losing anything of worth:
               | 
               | 1. Arguing that perception=reality when it's convenient
               | and refusing to practice it in any other context.
               | 
               | 2. Talking about the unknowable as if we know it.
        
         | hcks wrote:
         | "We thought academia was not soul crushing enough so from now
         | on you will additionally spend 10 hours a week replicating dumb
         | papers from 1993"
        
           | bumby wrote:
           | Snark aside, replication is a cornerstone of science. If
           | someone doesn't want to be involved in science because they
           | think it's soul-crushing, perhaps academia isn't the right
           | place for them.
        
         | matthewdgreen wrote:
         | There is a major issue of limited resources to replicate
         | results, in terms of both time and funding. For example: I
         | would assume that most important results are replicated. As a
         | concrete example, if someone identifies a medication that (in
         | one small trial) shows a statistically significant effect in
         | curing some serious medical condition, then this will drive
         | further replication attempts. On the other hand, if someone
         | publishes a study showing that holding a pen in your mouth
         | makes you 1% likelier to do well on the PSATs, this study will
         | probably languish without replication for a decade -- because
         | honestly who cares? It's basically a curiosity. I can't help
         | but notice that many of the headline results that characterize
         | the "replication crisis" were small-effect-size social science
         | experiments that fundamentally weren't that important outside
         | of popular science news.
         | 
         | I'm not saying that our current allocation of resources is
         | optimal. I am pointing out that our resources are finite and
         | "replicate everything" is not even a remotely practical
         | allocation of those resources.
        
           | nostrademons wrote:
           | > I would assume that most important results are replicated.
           | 
           | GP is pointing out that the incentive structure makes this an
           | invalid assumption. If publications reward hype and novelty
            | when deciding what to publish, then there is no point
            | spending your limited resources replicating other people's
            | results, since they won't get published anyway. And
            | experiments that give a null result won't be published
            | either. What's
           | left are one-off results that showed something surprising
           | simply by chance and don't replicate...but then, we generally
           | will never know that they don't replicate, because the
           | replication experiment is not novel, has a low chance of
           | being published, and hence isn't worth spending limited
           | resources on.
           | 
           | Basically the publication process introduces selection bias
           | into the types of research that are even attempted, which
           | then filters down into the conclusions we take from it. A
           | cornerstone of the scientific method is _random sampling_ ,
           | but as long as the results that get disseminated are chosen
           | by a non-random process, it introduces bias.
        
           | freestyle24147 wrote:
           | > For example: I would assume that most important results are
           | replicated.
           | 
           | The example you provide is solely your assumption? Seems
           | pretty odd to provide a baseless assumption as an "example".
        
           | bumby wrote:
           | > _"replication crisis" were small-effect-size social science
           | experiments that fundamentally weren't that important outside
           | of popular science news._
           | 
           | I don't know that this is accurate. Some of these make their
           | way to large-scale public policy, or give bona fides to
            | people who craft far-reaching policy. This ranges from
            | changes to 401k allocations to car-insurance rates and other
            | mundane but consequential policies.
           | 
           | The truth is most of science is not important outside of
           | popular science news. So we shouldn't be surprised that the
            | bulk of replication failures are also in the same category.
            | Claiming this means the replication crisis is not really
            | impactful may be a case of base rate neglect.
           | 
           | It's also important to note that your example of medical
           | replication is a relatively highly regulated area, where most
           | other science is much less so.
        
           | bachmeier wrote:
           | > There is a major issue of limited resources to replicate
           | results, in terms of both time and funding.
           | 
           | Undergraduate students love being involved in research. It's
           | one of the selling points of many top universities. Grad
           | students replicate research all the time. Maybe funding is an
           | issue in grant fields (some research is extraordinarily
           | expensive) but that doesn't excuse the lack of replication
           | across the board.
        
         | YeGoblynQueenne wrote:
         | >> I've said it before but it bears repeating - replicating
         | published results, and adding the findings to a database,
         | should be a standard part of PhD training programs.
         | 
         | Wait, why should PhD students do that work? That just sounds
         | like pushing more grunt work to the lower rung of the academic
         | hierarchy.
         | 
         | Nope. If you want people to do that kind of work that is
         | important to everyone but is not directly conducive to
         | promoting one's research career then the solution is simple:
         | _pay them_.
        
           | bumby wrote:
           | > _why should PhD students do that work?_
           | 
           | I think there is some reasonable argument that replicating
           | research is the first step to learning how to do good
            | research on your own. In an ideal world, PhD students should
            | probably be trying to replicate similar work anyway and
            | applying existing approaches to their own pet problem. In
            | practice, many gloss over this because they are narrowly
            | focused on doing something "new" so it can get published.
        
           | glial wrote:
           | PhD students are in training, and replicating a published
           | result is a great training exercise. PhD students ARE paid.
           | But this work won't be prioritized by their PI unless it's
           | also a requirement of the program.
        
         | aeternum wrote:
          | It's kind of amazing that we discovered the scientific method
          | and used it to invent the transistor and bring about the
          | information revolution.
         | 
         | Yet we still pool scientific results using only the printing
         | press.
         | 
         | It's like we unlocked the tech tree but then got so caught up
         | in chasing citations and peer review that we forgot to use the
         | new tech we invented.
        
           | glial wrote:
           | Yes, so-called "social technology" sometimes doesn't feel
           | very advanced.
        
         | Gormo wrote:
         | Richard Feynman was complaining about exactly this phenomenon
         | fifty years ago:
         | https://sites.cs.ucsb.edu/~ravenben/cargocult.html
        
       | calibas wrote:
       | The process for publishing, how a study becomes "legitimate"
       | science, is not very scientific. Same with the process for
       | getting funding for studies.
        
         | nick238 wrote:
         | My thesis committee chair bristled when I said I hated the
         | marketing aspect of academic science and I implied that he was
         | a very good marketer because he played the game so well. After
         | I kept making comparison after comparison, he didn't have much
         | to say in response. I would say he begrudgingly accepted my
         | view, but I don't think accepted it at all, he just couldn't
         | refute any point I made.
        
       | cactusfrog wrote:
       | Just write a preprint or a blog post
        
         | zug_zug wrote:
         | A blog post won't be indexed on google scholar for other
         | academics to reference though. Maybe a preprint would be?
         | 
         | In my opinion the goal is to get a record of the information
         | and the dataset out there.
        
           | tokai wrote:
            | > A blog post won't be indexed on google scholar
            | 
            | It will if it's on an edu domain.
        
         | cydodon wrote:
         | A preprint will still give less value to the negative result.
         | No peer review (being a broken system or not) and not being
         | published in a "proper" journal will make it less likely that
          | the results will be recognised / accepted. The whole point is
         | that a negative result can have as much value as a positive
         | one...
        
           | smcin wrote:
            | Why was this comment flagged? It seems reasonable.
        
         | vsuperpower2021 wrote:
         | The whole point of publishing studies is to be able to brag
         | about how many impressions you have, which is good for your
         | career. Who is going to care that your blog got views?
        
       | parpfish wrote:
       | Publishing null results would be great, but I'm worried about how
       | that could also be gamed.
       | 
       | How do you distinguish a 'real' null result from one done in a
       | sloppy study?
       | 
       | Would people run shoddy experiments to get null results to
       | undermine their rivals?
       | 
       | Could somebody pump out dozens of null publications to pad their
       | CV and screw up h-indexes?
        
         | vharuck wrote:
         | Currently, the system can be gamed exactly as you say by
         | publishing sloppy studies that falsely find a "real" effect.
         | But editors and readers of the article will look at and judge
         | the methodology section. Sloppy experiments risk not being
         | printed or cited.
         | 
         | >Would people run shoddy experiments to get null results to
         | undermine their rivals?
         | 
         | In this case, the rival would be very much inclined to recreate
         | the "null" experiment.
         | 
         | >Could somebody pump out dozens of null publications to pad
         | their CV and screw up h-indexes?
         | 
         | Possibly, but would null publications be cited as often? Also,
         | who's going to keep funding a researcher that mostly publishes
         | null results?[0]
         | 
         | [0] Besides agenda-driven "think tanks". Which is worrying
         | itself.
        
           | parpfish wrote:
           | > Possibly, but would null publications be cited as often?
           | 
            | Don't underestimate an academic's ability to cite ALL of
            | their previous publications each time they publish.
        
         | tefkah wrote:
          | I run a journal where we publish both "real" null results and
          | experiments where something practical went wrong, so I have a
          | few thoughts:
          | 
          | 1. Ideally peer review would catch this. A badly set-up study
          | should be critiqued in peer review. Forcing scientists to first
          | publish their methods before doing the experiment also helps,
          | as it validates the experimental setup beforehand.
         | 
         | I also think it's worth publishing studies where a null result
         | was reached due to some error in experimental setup or other
         | factors, as long as it's presented as such and reflected upon.
         | This can still be valuable information for future experiments.
         | Offering scientists social capital for that (an "official"
         | publication, citations) might also incentivize scientists to
         | publish the results as is, rather than making it appear as a
         | "true" null result, or even as a non-null one (eg through p
         | hacking).
         | 
          | 2. While obviously possible, given the amount of effort
          | scientists have to go through to raise funding for an
          | experiment nowadays, I find it highly unlikely that people
          | would go to this much trouble.
          | 
          | 3. This is already possible and already a problem. It is a
          | problem of academic misconduct and has very little to do with
          | null results.
         | 
         | The current publishing system is of course already set up to be
         | gamed, so I understand your worries. But null results should be
         | published, as they are just science. Even if someone were to
         | "game" the system by publishing a ton of null results, those
         | publications should be held to the same level of scrutiny as
         | any other publication. If someone is extremely prolific in
         | replicating existing studies and comes up with a ton of null
         | results, that should be lauded and those papers should be
         | published, no?
         | 
          | I do believe the entire idea of a researcher's output only
          | being recognized when it is allowed to be published in a
          | journal is terrible and should be abolished, but baby steps I
          | guess.
        
       | mlhpdx wrote:
       | This seems like a tough problem to solve, but given the article
       | states 75% of researchers are _willing_ to publish the null
       | results at least that's something to build on. Making publishing
       | compulsory could lead to other, worse problems given humans are
       | involved.
       | 
       | I realize as I'm writing this that I don't really understand what
       | "publishing" means. It's more than just making the paper
       | available, right? Is there a formal definition, or just a
       | colloquial one in science?
        
         | michaelt wrote:
          |  _> I realize as I'm writing this that I don't really
          | understand what "publishing" means._
         | 
         | 1. Fully gather and analyse the data, no stopping early when
         | you realise it isn't working
         | 
         | 2. Write the paper, read those background papers you hadn't got
         | to yet so you can cite them, chase down references for things
         | you know from memory.
         | 
         | 3. Realise there's a gap in your table because you tested A, B
         | and D at three levels each but C you only tested at the low and
         | high level, not at the medium level. Go set up your test
         | equipment again to fill in the blank space.
         | 
         | 4. Run the paper by your collaborators and your boss, all of
         | whom will feel obliged to suggest at least some improvements,
         | which you'll make.
         | 
         | 5. Choose a journal, apply the journal's template and style,
         | send it in.
         | 
         | 6. Wait for as much as several months for peer review.
         | 
         | 7. The first peer reviewer suggests you retest with a slightly
         | different protocol for cleaning your equipment before the test.
         | You do so.
         | 
         | 8. The second peer reviewer replies suggesting you test
         | _combinations_ of A, B, C and D, not just one at a time....
        
         | eyeundersand wrote:
         | From my experience, there's no definition as such but having
         | your study "published" implies that it went through a peer-
         | review process featuring at least two qualified referees and an
         | editor. The implication being that the claims from the study
         | are valid as reference for future studies, to varying extent
         | depending on the quality of the journal etc.
        
         | elashri wrote:
          | There is no real standard anyway. Publishing practices differ
          | from field to field, and journals have different policies and
          | practices. So it is hard to get a representative definition
          | beyond "making the paper available" -- in which case arXiv
          | counts as a publishing mechanism, one that provides no editor
          | or peer review and costs no money.
        
         | setopt wrote:
         | "Publishing" in academia usually refers to being accepted in a
         | "peer-reviewed" journal or conference. So your paper is not
         | just uploaded somewhere - it is sent to 2-3 domain experts, who
         | criticize your work and force you to jump through lots hoops
         | before it's either "accepted for publication" or "rejected".
         | 
         | When this works well, it's a good filter to prevent spam,
         | fraud, methodological errors, etc. from being published, while
         | improving the quality of the accepted research papers via
         | feedback from other domain experts.
         | 
         | When it doesn't work well, the referees can take it upon
         | themselves to reject papers for subjective reasons, including
         | that the work is "not novel enough", that they don't like the
         | model you used, or that they are just not excited by the
          | research field you work in. It also happens that they require
          | you to extend your work in a way that takes an order of
          | magnitude more time before they'll accept it. For the authors,
          | it's often difficult to defend themselves from this kind of
          | attack, since the referees in many journals don't need to
          | justify their claims much, and often feel free to be extra
          | harsh since they tend to be anonymous.
         | 
         | Since going through the publication process can take months to
         | years of work depending on your field, some researchers would
         | not be willing to put in that effort for a negative result
         | (which is unlikely to be cited and thus doesn't help your
         | career).
         | 
         | It is however possible to just upload a paper (e.g. to arXiv).
         | These "manuscripts" are often useful and can be cited normally,
         | but researchers tend to be a bit more wary of citing them
         | unless the authors are well-respected due to the lack of peer
         | review.
        
       | proof_by_vibes wrote:
       | I recall reading someone who proposed the need for what they
       | dubbed "meta-science," and I think it's clear that this concept
       | is becoming more needed as time goes on. Our publishing process,
       | and the incentives therein, are obviously faulty and we are aware
       | of it. We can do the math: I believe it's time we do away with
       | playing speculative games with science.
        
         | shishy wrote:
         | "Science of Science" is a good book / overview of this field!
        
       | shishy wrote:
       | Research should require pre-registration like clinical trials so
       | others have visibility into failed outcomes
        
         | elashri wrote:
          | Who would track this registration, and would it require
          | approvals now? Then what if you changed your current research
          | for personal reasons, a change of plans, because it didn't seem
          | to fit, etc.? How would you handle these situations? And why
          | are you introducing MITMs?
        
           | _flux wrote:
            | At a basic level, maybe the publications that print these
            | papers would take the registrations, as a precondition to
            | publishing them?
           | 
           | It's not like it couldn't be gamed, but maybe it would
           | incentivize people to also publish null results.
        
             | elashri wrote:
             | >incentivize people to also publish null results
             | 
              | This will hardly achieve that goal, as you basically make
              | things harder and introduce more overhead. The main reason
              | people don't like publishing null results is that it hurts
              | them in funding applications. The current system works with
              | the mentality that we shouldn't fund someone who doesn't
              | get positive results; it is better to allocate the money
              | somewhere else. Most of the problems with research can be
              | traced back to funding issues and practices. But these are
              | political issues, so people try to argue about other things
              | because it is easy.
        
               | shishy wrote:
                | All good points - I was imagining that from a funder's
                | perspective, knowing null results is actually important
                | to guaranteeing future positive results / being strategic
                | about what is worth funding (versus what is destined to
                | fail because others failed and they just didn't know).
                | Not sure how it would work in practice; might be bumpy
                | but certainly seems worthwhile / not impossible.
        
         | setopt wrote:
         | As a researcher that does mostly numerics, I really hope not.
         | This would be a huge bureaucratization of scientific
         | exploration and would slow down progress. I understand why it's
         | necessary in some fields like medicine, but I don't think it's
         | worth the trade off in say theoretical physics.
         | 
         | Imagine the corresponding concept for programmers: you are not
         | allowed to sell or share any software you create unless you
         | pre-register a detailed plan for what code you will write and
         | how it will be used before you write the first line of code.
         | Pretty sure that would reduce the innovation going on in public
         | GitHub repos a lot :)
        
           | shishy wrote:
            | I think that not knowing the null results of others also
            | slows down innovation, because you never know if an area of
            | interest / whitespace in a field is open 1) because there's
            | something there or 2) because others tried, failed, and
            | didn't publish.
           | 
           | Maybe pre-registering isn't the right answer; I'm sure there
           | are practical hurdles, but the problem to be solved still
           | remains the same (visibility into the graveyard of failed
           | experiments to improve the rate of innovation).
        
       | hcks wrote:
        | The introductory example is quite illuminating. No theory behind
        | it, just a random hypothesis tested (by comparing the preferences
        | of 10 fish from populations separated by as little as 50 meters
        | -- what effect size did the authors expect?), with claims of
        | generalisation (climate change ok bad?? Or climate change bad
        | bad??)
        
       | jjmarr wrote:
       | Would recommend people interested in this start following
       | "metascience".
       | 
       | https://en.wikipedia.org/wiki/Metascience
       | 
       | And read "Why Most Published Research Findings Are False".
       | 
       | https://en.wikipedia.org/wiki/Why_Most_Published_Research_Fi...
        
         | sunshinesnacks wrote:
         | There's a bit of irony in the second reference, as the author
         | ended up with some controversial work related to COVID-19. He
         | co-authored a study that was widely cited to downplay severity
         | of the pandemic, but was also heavily criticized for poor
         | methodology (and later I think firmly found to be very wrong).
          | He also published a paper with personal attacks on a grad
          | student who had disagreed with him, which is probably not in
          | the spirit of encouraging constructive science.
        
       | tefkah wrote:
       | You could publish it in the Journal of Trial and Error
       | (https://journal.trialanderror.org), which I created with a
       | number of colleagues a couple years ago!
       | 
       | Our editor-in-chief was interviewed for this related Nature
       | article a couple months ago
       | (https://www.nature.com/articles/d41586-024-01389-7).
       | 
       | While it's easy pickings, it's still always worth pointing out
       | the hypocrisy of Nature publishing pieces like this, given that
       | they are key drivers of this phenomenon by rarely publishing null
        | results in their mainline journals. They have extremely
       | little incentive to change anything about the way scientific
       | publishing works, as they are currently profiting the most from
       | the existing structures, so them publishing something like this
       | always leaves a bit of a sour taste.
        
         | MostlyStable wrote:
         | Providing _places_ to publish the result is only part of the
         | problem. The other part is incentivizing scientists to do so.
         | And similarly, Nature itself is responding to incentives. The
          | core problem is that scientists themselves, the individuals,
          | mostly do not display much interest (in a revealed-preference
          | kind of way) in null results. If scientists were interested and
          | wanted to read articles about null results, then either
         | journals like Nature would do so, or the numerous examples of
         | journals like yours that have come and gone over the years
          | would have been more successful and widespread.
         | 
         | Because of this revealed lack of interest, high tier journals
          | don't tend to take them (correctly responding to the lack of
          | demand), and journals like yours that specifically target these
          | kinds of articles A) struggle to succeed and B) remain
          | relatively "low impact", which means that the professional
          | rewards of publishing in them are not very high, which means
          | that the return on effort of publishing such a work is lower.
         | 
          | Don't get me wrong, the scientific community could do a lot
          | more to combat this issue, but the core problem is that right
          | now, the "market" is just following the incentives, and the
          | incentives show that, despite the steady stream of articles
          | like this one over the past few decades, most scientists don't
          | seem to _actually_ have an interest in reading null-result
          | papers.
        
           | dleeftink wrote:
            | What if, with each publication of a non-null result,
            | academics were given the opportunity to publish their nulls
            | as well, if only as an appendix or, better, a
            | counterpublication to their main conclusions? I don't buy the
            | argument that papers need be of a fixed maximum length, now
            | that documents and journals can be easily stored and
            | distributed.
            | 
            | I would love something like Living Papers [0][1] to take off,
            | where the null and non-null results could be compared
            | interactively on a similar footing.
           | 
           | [0]: https://github.com/uwdata/living-papers
           | 
           | [1]: https://idl.uw.edu/living-papers-template/
        
             | bluGill wrote:
             | A null result may be a dead end and so there is no related
             | paper worth publishing it in.
             | 
             | A null result should be published right away in a
             | searchable place, but probably isn't worth a lot of effort
             | in general. I tried X, it didn't work, here is the raw
             | data.
        
               | dleeftink wrote:
               | That's my thought exactly--not a related paper but simply
               | providing additional room for discussing the less shiny
               | bits of the same experiment.
               | 
               | Even if the whole thing is a null, the setup,
               | instruments, dependencies and what methods worked/didn't
               | work is worth describing by itself.
        
               | acchow wrote:
               | All of that - the setup, the instruments, dependencies,
               | methods - should be pre-submitted to the journal before
               | the experimental results arrive. The journal should be
               | the one that uses the data from the experiment and runs
               | your pre-submitted program over the data to produce a
               | result.
               | 
               | Papers need to be published backwards.
        
               | bluGill wrote:
                | Right now you don't even know who will publish your paper
                | until all that is done. Your experiment might be to try
                | some promising molecule/drug in a petri dish and see what
                | happens; if the results are amazing you will get into a
                | different journal than if something happens but the
                | control molecule/drug does better.
        
               | bumby wrote:
               | I agree that in an idealized way, this would be much
               | better. But what do you do about going through all this
               | process and ending up with a bad reviewer?* In those
               | cases, how would you handle re-submitting to a different
               | journal without looking like you're creating those
               | artifacts after-the-fact to suit your outcome? Would the
               | pre-submittals need to be handled by some third party?
               | 
               | * the current process still has a lot of luck in terms of
               | getting assigned referees. Sometimes you just plain get a
               | bad reviewer who just can't be bothered to read the
               | submission carefully and is quick to reject it. I would
               | hate to see a system that only allows for a single shot
               | at publication
        
           | c-linkage wrote:
           | In the old days, Science Weekly[1] used to print 4-5
           | paragraph summaries of published research in a three-column
           | layout. The magazine was dense with information across a huge
           | number of topics.
           | 
           | And in the very old days, newspapers used to publish in
           | tabular form local election results and sports games.
           | 
            | I feel that Nature could dedicate one or two pages to one-
            | paragraph summaries of null results, with links to the
            | published papers.
            | 
            | It's amazingly easy to skim such pages to find interesting
            | things!
           | 
           | [1] I think that was the name; I canceled my subscription
           | when they changed to a Scientific American wannabe. I was
           | looking for breadth not depth! I could always get the
           | original paper if I wanted more information.
        
           | kkylin wrote:
           | I agree incentivization is definitely a big part of the
           | problem, but I think in general a bigger issue is that as a
           | society we tend to reward people who are the first to arrive
           | at a non-null result. This is as true in science as much as
           | in any other area of human endeavor.
        
           | borski wrote:
           | From the article: "A 2022 survey of scientists in France, for
           | instance, found that 75% were willing to publish null results
           | they had produced, but only 12.5% were able to do so."
        
             | EvgeniyZh wrote:
              | The question is how many of them are willing to review and
              | read these publications. Of course, as an individual
              | scientist (not me, but someone who does experiments), I'd
              | love to capitalize on my work, even if it is unsuccessful
              | (in the sense of a null result), by publishing it. But do
              | I, and the scientific community in general, care about null
              | results? I'd say mostly no. Null results, if universally
              | published, would overwhelm an already overwhelmed
              | publication system.
              | 
              | If you think it will be helpful for others to know about a
              | specific failure, put it in a blog post or even on arXiv.
              | Or talk about it at a conference (for CS, a workshop).
             | 
              | Also, if we use publications as a measure of scientists'
             | success, and we do, is a scientist with a lot of null
             | results really successful?
        
               | caddemon wrote:
               | Obviously most scientists are not going to be interested
               | in null results from adjacent subfields, but when it
               | comes to specific questions of interest it is absolutely
               | useful to know what has been tried before and how it was
               | done/what was observed. I know a lab that had
               | documentation not only on their own historical null
               | results but also various anecdotes from colleagues' labs
               | about specific papers that were difficult to replicate,
               | reagents that were often problematic, etc.
               | 
               | That is a non-ideal way for the scientific community at
               | large to maintain such info. Trying to go through
               | traditional peer review process is probably also non-
               | ideal for this type of work though, for reasons you
               | cited. We need to be willing to look at publication as
               | something more broadly defined in order to incentivize
               | the creation of and contribution to that sort of
               | knowledge base. It shouldn't be implemented as a normal
               | journal just meant for null results - there's really no
               | need for this sort of thing to be peer reviewed
               | specifically at the prepub stage. But it should still
               | count as a meaningful type of scientific contribution.
        
         | Waterluvian wrote:
         | Omg I love this. For like 20 years I've joked about "The
          | Journal of Null Results and Failed Experiments" and it looks
         | like you and your friends are actually doing it.
         | 
         | There's so much to learn from these cases.
        
         | greenavocado wrote:
         | If you publish null results you accelerate the development of
         | competing hypotheses by your competition. It's best to make
         | sure they waste as much time as possible so you can maintain an
         | edge and your reputation. /s
        
         | bumby wrote:
          | Years ago, I came across _SURE: Series of Unsurprising Results
          | in Economics_, a journal with the goal of publishing good, but
          | statistically insignificant, research.
         | 
         | https://blogs.canterbury.ac.nz/surejournal/
        
           | jraph wrote:
           | I thought "statistically insignificant" meant we couldn't
           | conclude anything. So I was surprised.
           | 
           | [1] says:
           | 
           | > In statistical hypothesis testing,[1][2] a result has
           | statistical significance when a result at least as "extreme"
           | would be very infrequent if the null hypothesis were true
           | 
           | So I understand this journal publishes results for which a
           | hypothesis was tested, found to give insignificant results,
           | which would rule out the hypothesis assuming the research was
           | correctly conducted, without biases in the methodology, with
            | a big enough sample, etc. That would be worth knowing, but
            | usually no journal takes this research because it doesn't
            | make the headlines (which, yes, I've always found a shame).
           | 
           | Do I get this right?
           | 
           | [1] https://en.wikipedia.org/wiki/Statistical_significance
        
             | bumby wrote:
             | Yes, statistical insignificance doesn't "prove" the null
              | hypothesis, it just fails to reject it. It's a subtle but
              | sometimes misunderstood distinction. Significance is about
              | how big the effect size is and how often you'd expect to
              | see a difference that big just by chance rather than due
              | to the variables you're measuring. If it's a really
              | extreme difference, we expect it to happen less often by
              | chance alone than if it's a really minuscule difference.
             | 
              | > _That would be worth knowing, but usually no journal
              | takes this research because it doesn't make the
              | headlines._
             | 
             | That's usually correct, which gives rise to all kinds of
             | issues like the article talks about. It can result in a lot
             | of wasted time (when you're conducting what you "think" is
             | a new experiment, but it's been done many times but
             | unpublished because it doesn't provide statistically
             | significant results). It provides little incentive for
             | replication, which can lead to stronger conclusions about
             | the results than may be warranted, etc.
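              | 
              | As a rough illustration (a minimal sketch, assuming Python
              | with numpy and scipy available), an underpowered test of a
              | real effect will often come back "not significant":
              | 
              |     import numpy as np
              |     from scipy import stats
              | 
              |     rng = np.random.default_rng(0)
              |     # A real but small effect: group means differ by 0.2 SD.
              |     a = rng.normal(0.0, 1.0, size=20)
              |     b = rng.normal(0.2, 1.0, size=20)
              |     res = stats.ttest_ind(a, b)
              |     # With only 20 per group, p is usually above 0.05 even
              |     # though the true effect is nonzero, so "not significant"
              |     # is not the same as "no effect".
              |     print(res.statistic, res.pvalue)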
        
               | foldr wrote:
               | The flip side of this is that there is almost always a
               | very small effect, even if you are testing a crazy
               | hypothesis (there are very weak correlations between all
               | sorts of things). So you can often get a 'significant'
               | result just by using a huge sample, even though the
               | effect size is too small to matter practically.
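                | 
                | A minimal sketch of that point (again assuming numpy
                | and scipy): with a large enough sample, a practically
                | negligible difference still comes out "significant".
                | 
                |     import numpy as np
                |     from scipy import stats
                | 
                |     rng = np.random.default_rng(1)
                |     # Tiny true effect: means differ by only 0.02 SD.
                |     a = rng.normal(0.00, 1.0, size=200_000)
                |     b = rng.normal(0.02, 1.0, size=200_000)
                |     res = stats.ttest_ind(a, b)
                |     # With 200k per group the p-value is typically far
                |     # below 0.05, yet a 0.02 SD difference is too small
                |     # to matter for most practical purposes.
                |     print(res.pvalue, b.mean() - a.mean())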
        
       | EricE wrote:
       | Remember all the cries of "the science is settled"!
       | 
       | Yeah, that's not science - it's the exact opposite of science.
       | This is the perfect example of why reasoned skepticism is more
       | necessary than ever. Blind trust in any institution is a recipe
       | for disaster.
        
       | __MatrixMan__ wrote:
       | I wish we would disentangle publication from endorsement. Making
       | the bits available and saying that they're useful are different
       | things. Your null result could contain data which is relevant to
       | some other inquiry.
       | 
       | All results should be published, some should be celebrated.
        
         | Brian_K_White wrote:
         | The example null correlation sure sounds as significant as any
         | correlation.
        
         | tmalsburg2 wrote:
         | Publication servers like arXiv + overlay journals. I'd love
         | that.
        
         | KeplerBoy wrote:
         | This goes both ways.
         | 
          | Some people publish fantastic papers with neither data nor
          | code. Sometimes that's annoying, other times it's a complete
          | waste of everyone's time.
        
           | __MatrixMan__ wrote:
           | Does it? I mean there's a lot of trash on the internet that
           | just doesn't get looked at. If you waste your time reading
           | it, that's on you.
        
       | SeanLuke wrote:
       | I think the fundamental problem is that for every claim yielding
       | a positive result there are many more, perhaps infinitely more,
       | related claims yielding negative results.
       | 
       | Positive result claim: the sun comes up in the morning.
       | 
       | Negative result claims: the sun moves sideways in the morning.
       | The sun was always there. The sun peeks up in the morning and
       | immediately goes back down. And so on.
       | 
       | Positive result claim: aspirin is an effective pain reliever.
       | 
       | Negative result claims: eating sawdust is an effective pain
       | reliever. Snorting water is an effective pain reliever. Crystal
       | Healing is an effective pain reliever. Etc.
       | 
       | Because there are so many negative results, it's trivial to
       | construct an experiment which produces one. So why should that be
       | published?
       | 
       | Negative results should be published when people in the community
       | are asking that question, or have a wrong belief in the answer
       | (hence the replication crisis). But if nobody cares about the
       | question, it's hard to argue for why a given negative result
       | would be preferred over any other negative result for purposes of
       | publication.
        
         | dfgtyu65r wrote:
         | None of these are negative results in the sense of being a
         | 'null' hypothesis?
         | 
         | In the language of hypothesis testing you have your null and
         | alternative hypotheses.
         | 
         | So for alternative hypothesis that the sun comes up in the
         | morning, the null hypothesis would simply be that the sun does
         | not come up in the morning.
         | 
         | Each of the negative results, reads to me like a separate
         | 'alternative' hypothesis.
        
           | SeanLuke wrote:
           | Sure they are.
           | 
           | So let's say I claim that the sun goes in a circle in the sky
           | in the morning. The null hypothesis is that it doesn't do
           | that. Perform experiment. Null hypothesis wins. Write up
           | paper! This is a negative result.
           | 
           | The point is that for every result where the alternative
           | hypothesis wins, there are a massive, if not infinite, number
           | of results where the null hypothesis will win. Are these
           | publishable?
        
             | nick238 wrote:
             | The idea is that some null hypotheses being true is
             | actually interesting because it challenges an assumed
             | belief. From the first paragraph of the article, the
             | immediate feedback from the postdoc's supervisor was 'you
             | did it wrong [because _everyone knows_ that fish do like
              | warmer water]'.
             | 
             | > It ain't what you don't know that gets you into trouble.
             | It's what you know for sure that just ain't so.
        
       | mettamage wrote:
        | Would it be an idea to publish null results in an appendix when
        | a sexy result is published? Kinda like a Thomas Edison thing:
        | how many ways are there to not make a lightbulb, included
        | alongside the way that does work?
        
       | bandrami wrote:
       | There is a psychology journal specifically for null hypothesis
       | results: https://www.jasnh.com/
        
       | bowsamic wrote:
       | Researchers are expected to publish papers at such a frequency
       | today that spending time writing a paper for a null result would
        | be considered a bad career move.
        
       | yzydserd wrote:
       | It's a shame to see no mention of https://opentrials.net/ in the
       | article.
        
       | ketanmaheshwari wrote:
       | Relevant, I run a workshop for negative results: https://error-
       | workshop.org/
        
       | fharding wrote:
       | In cryptology there's something called CFail, which is a bit like
       | this. https://www.cfail.org/call-for-papers
        
       | gtmitchell wrote:
       | As someone whose early scientific career was destroyed by null
       | results, no. No one will publish your negative results. Unless
       | you win the lottery and stumble across a once-in-a-generation
       | negative result (e.g. the Michelson-Morley experiment), any time
       | you spend working on research that yields negative results is
       | essentially wasted.
       | 
       | This article completely glosses over the fact that to publish a
       | typical negative result, you need to have progressed your
       | scientific career to the point where you are able to do so. To
       | get there, you need piles of publications, and since publishing
       | positive results is vastly easier than publishing negative ones,
       | everyone is incentivized to not waste time on the negative ones.
       | You either publish or you perish, after all.
       | 
       | Simply put, within the current framework of how people actually
       | become scientists and do research, there is no way to solve the
       | 'file drawer' problem. You might see an occasional graduate
       | student find something unusual enough to publish, or an already-
       | tenured professor with enough freedom to spend the time
       | submitting their manuscript to 20 different journals, but the
       | vast majority of scientists are going to drop any research avenue
       | that doesn't immediately yield positive results.
        
       | gzer0 wrote:
       | The main issue with publishing research is the cost. Some time
       | ago, I worked in a research lab studying Lupus. Our results were
       | negative, and my initial inclination was not to publish them.
       | However, my Principal Investigator (PI) emphasized that all
       | results, whether positive or negative, should be published.
       | Fortunately, we had the funds to do so. At that time, publishing
       | in a reputable journal cost $2,300.
       | 
       | Not everyone is so fortunate. This lesson has stuck with me, as I
       | have seen or heard from different labs where, unfortunately, they
       | couldn't afford to publish their findings.
        
       | nick238 wrote:
       | What's the last null result Nature, Science, and Cell published?
        
       | SubiculumCode wrote:
       | Perhaps my most prominent paper was a null result. If the
        | question is important enough, and the work to test it non-
        | trivial, it will find an audience and likely a reputable
        | journal. What is the value in reporting a null result other
        | than reducing file drawer effects? Well, for one, even though
        | the null does not tell us whether the effect is absent, it does
        | help suggest bounds on how large an effect could be if it did
        | exist. In my case, particularly large effects were observed in
        | cross-sectional
       | comparisons (different people at different ages), but our
       | research showed that longitudinal changes within individuals were
       | generally negligible, suggesting systematic bias in the cross-
       | sectional sampling.
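        | 
        | To make the "bounds" point concrete, a minimal sketch (an
        | illustration assuming numpy, not the commenter's actual
        | analysis): a null result's confidence interval straddles zero,
        | but its width still caps how large the effect could plausibly
        | be.
        | 
        |     import numpy as np
        | 
        |     rng = np.random.default_rng(2)
        |     a = rng.normal(0.0, 1.0, size=500)  # e.g. measure at time 1
        |     b = rng.normal(0.0, 1.0, size=500)  # e.g. measure at time 2
        |     diff = b.mean() - a.mean()
        |     se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        |     lo, hi = diff - 1.96 * se, diff + 1.96 * se
        |     # The 95% CI typically includes zero (a "null" result), but
        |     # it also rules out effects much larger than ~0.15 SD, which
        |     # is informative in itself.
        |     print(f"diff = {diff:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")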
        
       | ok123456 wrote:
        | Why not self-publish it on arXiv?
        
       | bachmeier wrote:
       | The problem is that peer review is primarily focused on results.
       | Peer review should be done up to but not including the results.
       | Provide motivation, explain your methodology, explain how it will
       | resolve issues in the literature, but don't say anything about
       | your results. Papers should be conditionally accepted, subject to
       | confirmation that the results you report are the results of the
       | proposal that went through peer review.
        
       | joemazerino wrote:
       | Crowdstrike will.
        
       | analogwzrd wrote:
       | Some others have mentioned this in their comments and I agree
       | that once you succeed in getting a non-null result, publishing
       | the null results (all the things you tried that didn't work)
       | could be included as appendices or something.
       | 
       | Also, just because you get a null result doesn't mean that
       | nothing was learned, that something new (and unexpected) wasn't
       | stumbled on, or that some innovation didn't happen.
       | 
       | There are tiers of publications and journals. Even if you get a
       | null result and you're not going to get it accepted in Nature,
       | it's very possible that you can get a conference paper (sometimes
       | peer reviewed) out of something that was learned.
        
       | lokimedes wrote:
       | We published a load of null results in particle physics. Simply
        | go to arxiv.org and look for papers beginning with "search
        | for..."; those would be null results. Well, technically sigma<3
        | results.
        
       | diffxx wrote:
       | True story: I wrote my ph.d. thesis on a special case of a
       | general problem. After about one year of work, I realized that
       | the approach would never work for the general problem for
       | intractable reasons. But I also really wanted to finish my ph.d.
       | within 5 years, so I spent the next two years refining the work
       | enough to be able to write a dissertation on it and ignored the
       | fact that it would never really work for what it was intended. I
       | did do some interesting work and learned a lot, but I couldn't
       | really bring myself to try and publish the results (beyond my
       | thesis) because I very clearly had not made an advance in the
       | field. Of course, I do think it would have been useful to publish
       | why I thought that essentially the entire field of inquiry was a
       | dead end, but that would not have made me very popular with my
       | collaborators or others in the field and it wouldn't likely have
       | ingratiated me with anyone else.
        
       | jimmar wrote:
       | Not all null results are created equal. To get your null results
        | published, the null result must shed light on some phenomenon.
        | And of course, the study must be sound. E.g., everybody takes
        | for granted that X causes Y, but in a well-designed study, X
        | did not in fact cause Y, which reveals an error in what we
        | assumed was true.
        
       ___________________________________________________________________
       (page generated 2024-07-24 23:06 UTC)