[HN Gopher] Slowed canonical progress in large fields of science
       ___________________________________________________________________
        
       Slowed canonical progress in large fields of science
        
       Author : phreeza
       Score  : 61 points
       Date   : 2021-10-10 20:01 UTC (2 hours ago)
        
 (HTM) web link (www.pnas.org)
 (TXT) w3m dump (www.pnas.org)
        
       | tomlockwood wrote:
       | Maybe Sokal Squared shouldn't be our primary concern.
        
       | amelius wrote:
       | Low hanging fruit has been picked.
       | 
       | And scientists now work within the walls of corporations.
        
         | chadcmulligan wrote:
         | And administrators have taken over universities
        
       | kkoncevicius wrote:
       | The article is basically saying that the quantity of papers
       | published per year has a negative influence on the quality.
       | 
       | Quite off-topic, but it reminded me of an interesting book I read
       | some time ago titled "The Reign of Quantity and the Signs of the
       | Times". It is a bit obscure and metaphysical, but it develops the
        | same idea across all aspects of life: economics, society,
        | politics, religion, etc.
       | 
       | Of science it basically proposes that science itself is a product
       | of "quantitative thinking":
       | 
       | > The founding of a science more or less on the notion of
       | repetition brings in its train yet another delusion of a
       | quantitative kind, the delusion that consists in thinking that
       | the accumulation of a large number of facts can be of use by
       | itself as 'proof' of a theory; nevertheless, even a little
       | reflection will make it evident that facts of the same kind are
       | always indefinite in multitude, so that they can never all be
       | taken into account, quite apart from the consideration that the
       | same facts usually fit several different theories equally well.
       | It will be said that the establishment of a greater number of
       | facts does at least give more 'probability' to a theory; but to
       | say so is to admit that no certitude can be arrived at in that
       | way, and that therefore the conclusions promulgated have nothing
       | 'exact' about them;
       | 
       | Anyway, quite an interesting book.
        
       | cycomanic wrote:
        | While there are many things wrong in science, and the large
        | number of publications is probably among them, the premise of
        | the argument here is weak at best. The authors essentially say
        | that because people in established fields cite established
        | papers, no progress is made. That is quite a leap; just because
        | people continue to cite Newton does not mean that no progress
        | is being made.
       | 
       | The argument seems to be that new "disruptive" science needs to
       | replace the old, but that is hardly ever the case. Instead of
        | replacing it, it often extends it (see my Newton example, which
        | was extended by e.g. quantum mechanics).
        
       | shoo wrote:
       | Smaldino & McElreath (2016) -- The natural selection of bad
       | science
       | 
       | https://royalsocietypublishing.org/doi/pdf/10.1098/rsos.1603...
       | 
       | > Poor research design and data analysis encourage false-positive
       | findings. Such poor methods persist despite perennial calls for
       | improvement, suggesting that they result from something more than
       | just misunderstanding. The persistence of poor methods results
       | partly from incentives that favour them, leading to the natural
       | selection of bad science. [...] Some normative methods of
       | analysis have almost certainly been selected to further
       | publication instead of discovery. In order to improve the culture
       | of science, a shift must be made away from correcting
       | misunderstandings and towards rewarding understanding. [...] To
       | demonstrate the logical consequences of structural incentives, we
       | then present a dynamic model of scientific communities in which
       | competing laboratories investigate novel or previously published
       | hypotheses using culturally transmitted research methods. As in
       | the real world, successful labs produce more 'progeny,' such that
       | their methods are more often copied and their students are more
       | likely to start labs of their own. Selection for high output
       | leads to poorer methods and increasingly high false discovery
       | rates. We additionally show that replication slows but does not
       | stop the process of methodological deterioration. Improving the
       | quality of research requires change at the institutional level.
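The dynamic described in the abstract above can be sketched as a toy simulation. This is a minimal, stdlib-only illustration, not the paper's actual model: the lab parameters, the `output` and `fdr` functions, and all numeric values are invented for illustration. The point it demonstrates is the selection mechanism: if low-effort labs publish more and high-output labs are copied more often, mean effort decays and the false discovery rate climbs.

```python
import random

random.seed(1)

# Toy model (illustrative only, not the paper's implementation).
# Each lab has an "effort" trait: low effort -> more papers, but a
# higher false discovery rate. High-output labs are imitated more.
N_LABS = 50
GENERATIONS = 100

labs = [{"effort": random.uniform(0.1, 1.0)} for _ in range(N_LABS)]

def output(lab):
    # Papers per generation: low effort means high output.
    return 10 * (1.1 - lab["effort"])

def fdr(lab):
    # False discovery rate rises as effort falls (invented mapping).
    return 0.05 + 0.5 * (1 - lab["effort"])

for _ in range(GENERATIONS):
    # Selection: labs are copied in proportion to their output,
    # with a little mutation on the copied effort value.
    weights = [output(lab) for lab in labs]
    labs = [
        {"effort": max(0.05, random.choices(labs, weights)[0]["effort"]
                       + random.gauss(0, 0.02))}
        for _ in range(N_LABS)
    ]

mean_effort = sum(lab["effort"] for lab in labs) / N_LABS
mean_fdr = sum(fdr(lab) for lab in labs) / N_LABS
print(f"mean effort after selection: {mean_effort:.2f}")
print(f"mean false discovery rate:  {mean_fdr:.2f}")
```

Running this, selection alone drives effort toward its floor even though no individual lab ever chooses to do worse science, which is the paper's "natural selection" framing in miniature.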
        
       | seoaeu wrote:
       | A lot of the comments are about the rate of scientific progress
       | being unnecessarily low in general, but I understood the article
       | to be about the _relative_ rate of progress in different fields.
        | The authors' findings suggest that not only is the marginal impact
       | of individual publications greater in fields with fewer papers,
       | but even the field as a whole moves forward faster with a slower
       | publication rate!
        
       | lettergram wrote:
       | I recommend the book:
       | 
       | Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes
       | Hope, and Wastes Billions
       | 
       | Generally, all scientific fields have immense issues related to
       | the walled gardens and limitations created by their peers.
       | 
       | I also recommend this discussion between Eric and Brett Weinstein
       | which highlights just one example of the major issues:
       | 
       | https://m.youtube.com/watch?v=JLb5hZLw44s
        
       | wombatmobile wrote:
       | > Scholars in fields where many papers are published annually
       | face difficulty getting published, read, and cited unless their
       | work references already widely cited articles. New papers
       | containing potentially important contributions cannot garner
       | field-wide attention through gradual processes of diffusion.
       | 
       | That's why change in the sciences, and similarly in commerce,
       | industry, the arts and most human endeavours, often has to take
        | the form of _generational change_, even when the raw facts would
       | suggest otherwise to the naive observer.
       | 
       | It is difficult to get a man to understand something when his
       | salary depends on him not understanding it.
        
         | AussieWog93 wrote:
         | >It is difficult to get a man to understand something when his
         | salary depends on him not understanding it.
         | 
         | Wow, that basically sums up why I quit my PhD in one pithy
         | quote. The entire literature was made up of folks being
         | deliberately obtuse in order to secure grant money.
        
           | civilized wrote:
           | In what field may I ask?
        
           | wombatmobile wrote:
           | > one pithy quote.
           | 
           | from Upton Sinclair.
           | 
           | https://en.wikipedia.org/wiki/Upton_Sinclair
        
       | hn_throwaway_99 wrote:
       | I've also found very similar issues with extremely "data driven"
       | organizations that live and die by A/B test performance. It's not
       | that there is anything fundamentally wrong with the science
       | behind A/B testing, it's just that individuals are incentivized
       | to run tests on things that are easily measured. Things where
       | results will take a long time to show, or things that may be
       | initially disruptive but then beneficial, are discounted.
       | 
       | I see this theme in many, many areas: business, sports, politics,
       | academia, etc. When we have tons of data and a desire to make
       | things as "objective" as possible, it's easy to get stuck in
       | homogeneous "local maxima" because we just grade by the things
       | that are easiest to measure.
        
         | bpodgursky wrote:
          | A good rule of thumb I use is -- if you need a P-test to tell
          | you your result is significant, there's basically no chance
          | you've found something revolutionary.
         | 
         | Real breakthroughs are... obvious, both qualitatively and
         | quantitatively.
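The rule of thumb above can be made concrete: with a large enough sample, a practically meaningless effect still clears p < 0.05, while a genuinely large effect is obvious without any test. A minimal stdlib-only sketch, assuming a hand-rolled two-sample z-test (the `z_test` helper and all numbers here are invented for illustration):

```python
import math
import random

random.seed(0)

def z_test(a, b):
    # Two-sample z-test on means, computed by hand with the stdlib.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (mb - ma) / math.sqrt(va / na + vb / nb)
    # Two-sided p-value from the normal CDF via erf.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return mb - ma, p

control = [random.gauss(100.0, 10.0) for _ in range(100_000)]
tiny    = [random.gauss(100.3, 10.0) for _ in range(100_000)]  # +0.3% shift
huge    = [random.gauss(130.0, 10.0) for _ in range(100)]      # +30% shift

diff_tiny, p_tiny = z_test(control, tiny)
diff_huge, p_huge = z_test(control, huge)
print(f"tiny effect: diff={diff_tiny:.2f}, p={p_tiny:.4f}")
print(f"huge effect: diff={diff_huge:.2f}, p={p_huge:.2e}")
```

The 0.3% shift is "statistically significant" only because n is enormous; the 30% shift is unmistakable even at n = 100. Significance and importance are different questions.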
        
         | dnautics wrote:
         | I think this is categorically different. What you are talking
          | about is a bias to work on things that are measurable, followed
          | by Goodhart's law; the linked paper is more like "if you throw
         | money at a problem and increase participation in the field it
         | becomes more difficult to separate wheat from chaff due to
         | sheer volume".
        
       ___________________________________________________________________
       (page generated 2021-10-10 23:00 UTC)