[HN Gopher] Peer-reviewed papers are getting increasingly boring
___________________________________________________________________
Peer-reviewed papers are getting increasingly boring
Author : ingve
Score : 347 points
Date : 2021-01-01 18:05 UTC (1 day ago)
(HTM) web link (lemire.me)
(TXT) w3m dump (lemire.me)
| samch93 wrote:
| Doug Altman actually wrote a paper about this issue back in
| 1994 [1]. A great quote: "We need less research, better
| research, and research done for the right reasons."
|
| [1]
| https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2539276/pdf/bmj...
| jabl wrote:
| One big issue is that grants are short-term (meaning researchers
| spend a stupid fraction of their time writing grant
| applications) and incredibly competitive. Trying a slightly more
| ambitious project and failing means the end of your research
| career. So researchers do the logical thing, which is the kind
| of boring, incremental research the article complains about.
|
| Combine this with the focus on bibliometrics in evaluating
| research output, and we have the mess we have today.
| s0rce wrote:
| I read a good study that suggested that so much time and effort
| (and hence money) is wasted on writing proposals for unfunded
| grants that we should just assign a sizable fraction of the
| money by lottery instead, without needing lengthy proposals.
| Everyone knows you use the money for other stuff not specific
| to the grant regardless. The opportunity cost of writing all
| the rejected proposals would be saved and could be spent on
| actual science, even if some of it isn't the top-notch work
| that would otherwise have been funded (and that's assuming the
| grant proposal system actually selects the work most likely to
| be groundbreaking).
| rpedela wrote:
| Yeah, I agree; I would like to see some percentage be lottery-
| based. I think you should still need to write a proposal, but
| if the proposal isn't selected then it goes into the lottery
| pool. There are two reasons to still write the proposal: it
| helps organize the researcher's thoughts, and it shows the
| researcher is serious.
| jabl wrote:
| You'd still want some screening for your proposal to be
| admitted into the lottery pool to weed out the crackpots,
| but yes.
| pasttense01 wrote:
| One thing to keep in mind is that some of the papers which
| reviewers consider very marginal turn out to be breakthroughs.
| For example Jim Allison, who later won the Nobel Prize:
|
| "Allison was hoping to be published by one of the leading peer-
| reviewed research journals. But nobody at Cell or Nature or any
| of the A-list, peer-reviewed journals was willing to publish the
| findings of this junior academic from Smithville, Texas.
| "Finally, I ended up publishing the results in a new journal
| called The Journal of Immunology." It wasn't Science or the New
| England Journal of Medicine, but it was in print, and in the
| world.
|
| "At the end of the paper, I said, 'This might be the cell antigen
| receptor, and here are the reasons why I think that it is the
| T-cell antigen receptor,' and I just listed it out, all the
| reasons." It was a bold announcement regarding the biggest topic
| in immunology. "And nobody noticed it," Allison says. "Except in
| one lab.""
|
| https://www.wired.com/story/meet-jim-allison-the-texan-who-j...
|
| So you really need to publish the marginal stuff to make sure the
| breakthrough stuff gets published as well.
| inglor_cz wrote:
| Katalin Kariko, the Hungarian scientist that pioneered the mRNA
| research, was academically demoted at UPenn because the
| university considered her research as "impractical" and "waste
| of time" [0]. That was in 1995.
|
| In 2021, she is a hot candidate for the Nobel Prize for the
| very same work, which led to mRNA vaccines.
|
| [0] https://www.wired.co.uk/article/mrna-coronavirus-vaccine-
| pfi...
| mattkrause wrote:
| And in a similar vein...
|
| Douglas Prasher, who did a lot of the foundational work on GFP
| (2008 Nobel Prize), was driving a courtesy shuttle at a Toyota
| dealership in 2008 because he ran out of funding.
|
| Barry Marshall and Robin Warren had a really tough time
| convincing people that H. pylori caused stomach ulcers (but
| won the 2005 Nobel Prize).
|
| Number theory was famously nothing more than an intellectual
| curiosity, but makes stuff like this website possible.
| Microbial opsins were also a weird intellectual backwater
| before becoming vital to neuroscience. Paul Lauterbur
| famously quipped that "you could write the entire history of
| science in the last 50 years in terms of papers rejected by
| Science or Nature."
|
| We are _really bad_ at predicting the future impact of
| projects and people, and it's important to recognize that as
| a counterweight to schemes that try to select for
| "excellence." Things that seem laughable at the time can also
| be the basis for future breakthroughs, so take stupid things
| like Rand Paul's ranting about silly-sounding projects with a
| mountain of salt.
| vorhemus wrote:
| Isn't it a sign of system failure when a researcher has to beg
| and plead to get his results published, and no one is
| interested in his publication until someone who understands
| the breakthrough happens to stumble across it?
| ip26 wrote:
| Maybe? Sometimes a breakthrough isn't obvious, or can only be
| understood by a few people on the planet. The story of polar
| codes comes to mind. How is a journal committee to
| distinguish such papers from all the other junk?
| derbOac wrote:
| I think the parent post is suggesting almost the opposite:
| that the few people who should understand the significance
| are dismissing it.
| remus wrote:
| > How is a journal committee to distinguish such papers
| from all the other junk?
|
| Isn't this the job of the journal, its editors, and the
| peer reviewers?
| kome wrote:
| I just got my first paper accepted, after 8 months and 2 rounds
| of R&R (please clap): the reviewers and editors try very, very
| hard to kill any joy, ambition, and big ideas. Papers are
| expected to be written with a lot of academic mannerism. The
| goal should be to make the paper clear, but that's not what
| happens. It's just a style used to decide whether you are part
| of the clan or not.
| tgv wrote:
| TBH, big ideas should not be in a paper, unless you can really
| prove them. I've read too many papers with grand new theories
| that hinged on an experiment testing the tiniest of tiny
| facets. All of these have been forgotten now, even by their
| creators.
|
| Once you have enough experience, know the field inside out, and
| have gotten a bit of a name, you can write a book or edit a
| special edition of a journal in which you may set out your
| ideas.
| burlesona wrote:
| Congrats! What is your research about? What drew you to this
| field, and why do you think it's important?
| kome wrote:
| Thank you! Your comment and interest are quite heartwarming!
|
| My research is about household debt: I try to understand why
| some countries have more private indebtedness than others.
| Growing up during the American crisis of 2008, and the
| subsequent global crisis, made me realize that debt can be
| dangerous, and I wanted to understand what pushes people to
| borrow money.
|
| For a website, I wrote a short introduction to my research:
| https://progressive.international/blueprint/3596cc12-0128-4f...
| - soon the full research will be published in a peer-reviewed
| journal. :)
| ArnoVW wrote:
| Interesting intro. It's even translated into French I
| noticed. So northern Europe is borrowing three times more
| (per capita) than the US? That goes against everything I
| know.
|
| One question: did you separate out consumer credit from
| mortgages? The amount is so high that it seems to me that
| it includes mortgages.
|
| Because in that case, is it not simply because northern
| Europe is wealthy and has more expensive houses and more
| home ownership?
|
| Those mortgages are backed by houses, and are a sign of
| good financial practices, allowing gradual increase of
| wealth, whereas consumer credit is backed by nothing and
| is generally a sign of bad financial practices.
| skybrian wrote:
| Well, it's unsecured debt, but a credit card is
| informally backed by your future income. You might say
| that all personal debt is backed primarily by your future
| income.
|
| Even mortgages are backed primarily by future income and
| secondarily by the house, and this is why the bank wants
| to know how much money you make.
|
| Whether or not credit card debt results in future wealth
| depends on what you buy.
| ArnoVW wrote:
| I'm not an expert in finance, so don't hold it against me
| if I'm wrong, but does "backed by" not rather mean that
| the bank can have your house if you can't make payments?
|
| This is in opposition to credit card debt, where they have
| no recourse other than the courts and collection agencies.
|
| I was under the impression that this was the reason that
| consumer loans are generally capped far, far lower than
| mortgages, and have triple their interest rates.
|
| While it is true that I can put my university tuition on my
| credit card, that is very expensive and very rare. I would
| wager that the majority of these loans are for cars.
| Which, I agree, can create wealth, but it would have been
| better practice to buy a cheaper car and switch once that
| anticipated wealth has been created.
|
| Moreover, those cars explain only part of the loans. The
| rest is 'bigger TV' or the like. And those do not even
| create wealth.
| skybrian wrote:
| It's ambiguous. I'm pointing out some grey areas. When
| speaking loosely, we can talk about what a loan is
| "backed by" meaning how people expect to pay it back. Or
| you could talk about what property secures the loan,
| legally.
|
| Regarding investment, there are startups that got off the
| ground by running up credit card debt. It's not what I
| would do, but it's more like investment than consumption.
|
| Or you could pay for your car to be repaired using a
| credit card, and if you need the car to get to work,
| well...
| kome wrote:
| Thank you for taking the time to read!
|
| Yes, in the full paper I separated consumer credit from
| mortgages. In the intro I linked, indeed they are
| aggregated.
|
| Long story short: in northern and continental Europe
| consumer credit is almost non-existent. Very tiny
| numbers. Notable exception: the UK. But more mortgages
| don't necessarily mean more home ownership eh.
|
| "Those mortgages are backed by houses, and are a sign of
| good financial practices, allowing gradual increase of
| wealth. Where consumptive credit is backed by nothing and
| is generally a sign of bad financial practices."
|
| That's almost my conclusion as well, but I put more
| emphasis on what social policies encourage you to do
| than on good or bad practices. In the US they really
| encourage going for consumer credit, and their
| bankruptcy legislation is more lenient, encouraging a
| fresh start, while in Europe we are much more draconian
| about debt.
| fancyfredbot wrote:
| The style of academic papers is intended to encourage
| objectivity, I think. All the same, I would agree it can
| indeed obfuscate at times, and a more informal style could be
| both more enjoyable and easier to read.
| rleigh wrote:
| And this was how papers were written historically. If you go
| back and read papers from well known researchers from the
| early 20th century, they can be a delight to read. We have
| gone down a road of increased formalism and a removal of any
| character in the name of "objectivity" and "dispassionate
| impartiality". This is, of course, utter bunk. Researchers
| are often not objective or impartial about their work, but
| it's all carefully phrased to be as dry as possible.
|
| If you read some older papers, you have the author
| interjecting with their thoughts and opinions directly in the
| first person, and it can make for much more compelling
| reading. So long as the data presented is objective and
| accurate, I can't say I have much problem with it. Today we
| use the "discussion" section for this, but it's just not the
| same.
|
| Put it this way, I've fallen asleep reading more papers than
| I care to admit. But some of those older papers were so
| enjoyable and interesting to read I'd devour them and go
| looking for more.
|
| Objectivity is of course important, but I think modern
| publishing has lost something which is also important: the
| excitement and interest of the authors in their own work.
| wott wrote:
| I occasionally read Geography articles from the first half
| of the XXth century, they are a joy to read, until at least
| the 50s or 60s. It reads like a book: the vocabulary is
| very accessible (the few words/concepts which do not belong
| to the common language are introduced and defined) and
| while the writing flows pleasantly like a fiction or
| documentary, it conveys much useful information.
|
| The recent ones? They're horrible piles of pretentious
| jargon, jargon which oscillates between hyper-technical and
| utter-bullshit. They read like those technical marketing
| fluff pieces. Philosophico-technocratic verbiage all along;
| when you're done with it (assuming you didn't give up in
| despair or anger), you haven't learned anything, basically.
| Anyway, they usually don't bother really describing
| concrete things any more, they rather talk about fancy
| social constructs. The synergy of the Promethean dynamic of
| the actants of transformative innovation of mountainity.
| Right...
|
| (I made up the example, but from real pieces from my last 2
| attempts. I unfortunately can't remember or find the
| previous one; it was 10 times worse. I wouldn't even have
| needed to make up this example, I could just have copy-
| pasted its abstract.)
|
| The difference is like between reading a page-turner, and
| trying to read the latest ISO standard about development
| methodology in safety-related domains (perfect for falling
| asleep before the second page).
| rleigh wrote:
| I've had some limited experience with academic writing,
| and overall I've not been happy with it.
|
| As a PhD student, my supervisor would want to take my
| writing and reword it to sound "more impressive". That
| was basically adding in all the pretentious jargon you
| are referring to. I don't think it aids at all in
| conveying facts in a clear and simple manner. It's
| completely unnecessary. More often than not it is
| deliberately aimed to be non-committal and ambiguous so
| it ends up saying nothing of any substance. That's
| completely intentional. There seems to be a school of
| thought that you should never say you aren't sure or
| don't understand something, and so you couch that in soft
| language rather than being direct. I utterly despise it,
| and regard it as a form of intellectual dishonesty.
|
| Later, as a scientific software developer, I tried to
| write and submit a technical paper for the software I was
| developing at the time. I spent weeks writing in detail
| about what it did, along with lots of figures
| demonstrating its performance and behaviour. But again,
| after my supervisor was done "revising" it, it became a
| lot of pretentious waffle that said almost nothing--all
| of the detail and simple description was reworded
| ambiguously or removed entirely. If you wanted to use the
| software, reading the paper would tell you almost nothing
| of importance. It seems to me that the primary purpose of
| papers--to be read and to inform others' work--has not
| been served for several decades now.
|
| In the end the technical paper mentioned above was
| rejected, and the reason was utterly ridiculous. The
| software used a 5D data model inherited as part of its
| fundamental design from earlier software. Deliberately
| done for interoperability. The reviewer was from some
| competitor group who had a thing about the software being
| unusable unless it supported arbitrary numbers of
| dimensions. The paper was rejected as being
| "controversial" as a result, despite the fact that it was
| an evolution of something that had been in production use
| for over 15 years in institutions the world over, and
| worked perfectly as intended. That ultimately led to my
| losing funding and being made redundant. Such is the
| fickle state of academia, where you aren't judged on the
| quality of your work, but subject to arbitrary and
| capricious action like this. And peer-reviewed academic
| publishing being used as an assessment metric for
| promotion and funding has led to what it is today: a
| bizarre meta-game played by academics which has little to
| do with high-quality research or high-quality
| publications.
|
| I'm afraid I became sorely disillusioned with the whole
| thing.
| guicho271828 wrote:
| Congratulations on the paper acceptance, but joy is very
| optional in research. Joy is emotional and it clobbers logical
| insight. Peer review is an attempt to guarantee steady, firm
| progress. It is a matter of taste --- do you want progress or
| just want to be comfortable? If you want joy, better to be an
| SF writer... Still, strong results with good execution are
| allowed to enjoy some freedom.
| adwn wrote:
| > _[...] but joy is very optional in research. Joy is
| emotional and it clobbers logical insight._
|
| I'm pretty sure OP meant "joy" as in "motivation/enjoyment of
| one's work", not as in "hedonistic pleasure".
| awillen wrote:
| But on the other hand, with a severe replication crisis in
| science (https://www.vox.com/future-perfect/21504366/science-
| replicat...), isn't it a good thing that we have a bunch of
| boring papers that stay close to what has already been done?
| It's not people replicating others' findings exactly, but it
| at least gives points of comparison that can help weed out
| some of the problematic work out there.
| lisper wrote:
| Boring is often a good thing, not just in science. There's a
| reason that "May you live in interesting times" is considered a
| curse.
|
| Think about the year we just had. Whatever else you might say
| about it, it wasn't boring.
|
| [Clarification] Individual local experiences may have been
| boring at times, but the year as a whole, from a historical
| point of view, was not. And the things that made it non-boring
| also made it suck big fat honking weenies. IMHO of course, but
| I'm pretty sure most people on earth would agree with that
| assessment.
| hutzlibu wrote:
| "Think about the year we just had. Whatever else you might
| say about it, it wasn't boring. "
|
| Well, from a global perspective, maybe not. But subjectively
| there were lots of times this year that were just freaking
| boring, because I wanted to go traveling and to festivals and
| just see something else, not be locked down (even though in
| my area the lockdown was quite light).
| the-dude wrote:
| I can assure you: lots of people found last year extremely
| boring (not me).
| amelius wrote:
| > Think about the year we just had. Whatever else you might
| say about it, it wasn't boring.
|
| Being confined to the house, watching the same news over
| and over again, not being able to see friends, no evenings
| out, parties, restaurant visits, etc., and you call this "not
| boring"?
| TeMPOraL wrote:
| ...because of an unexpected, persistent, global threat to
| the lives of everyone, whose appearance led to economies
| worldwide shutting down, got everyone glued to the news
| you mention, and put a lot of people in a precarious
| economic situation? Yeah, I wouldn't consider that
| boring. Tiring, for sure, but not boring.
| amelius wrote:
| You may call the problems "interesting", but the
| political solutions I've seen so far are far from
| interesting...
|
| Yes, we're in economically difficult times (well, most
| people who are not in IT are), but from an intellectual
| viewpoint this is not a much more intriguing problem
| than, say, global poverty or climate change. So it's not
| as if there is a shortage of intellectual challenges;
| this crisis doesn't add much in that sense. Yet it does
| take away a lot of the freedoms that make life
| interesting.
| awillen wrote:
| I disagree - boring is critical in science. Good science
| means detailed record keeping and precise experimentation.
| It's good if the results are exciting, but the process of
| doing good science is typically quite boring when it's done
| right.
|
| Edit: This comment makes no sense, because I misread the
| comment I was responding to as saying "just not in science"
| not "not just in science"
| [deleted]
| bobthepanda wrote:
| This doesn't sound like disagreement.
| awillen wrote:
| I was really confused for a bit there, and I finally
| realized the comment I was responding to said "not just
| in science," not "just not in science" as I had
| originally read it. Whoops.
| techer wrote:
| Despite being widely attributed to the Chinese, there is
| no known equivalent expression in Chinese.
|
| The nearest related Chinese expression translates as "Better
| to be a dog in times of tranquility than a human in times of
| chaos."
|
| https://en.wikipedia.org/wiki/May_you_live_in_interesting_ti.
| ..
|
| Not saying you claimed it was Chinese but FYI.
| mdiesel wrote:
| I've only come across it as part of Pratchett's work, in the
| book Interesting Times, and had just assumed it was another
| of his witticisms.
| Bakary wrote:
| There is a Chinese saying that every saying is eventually
| attributed to the Chinese given enough time
| NeutronStar wrote:
| Is this saying itself attributed to the Chinese?
| systemvoltage wrote:
| I think we need to take Vox documentaries with a grain of salt.
| They're not as thorough as the Economist, etc. and have their
| own agenda mixed with YouTube content engagement metrics. You
| can argue that's the case with any publication but IMO YouTube
| forces content makers to do crazy shit like exciting facial
| expressions in thumbnails for better engagement.
| apsec112 wrote:
| I personally know the author of that piece very well and can
| vouch for their credibility - feel free to ask if you have
| any questions about their work or about the replication
| crisis in science.
| awillen wrote:
| That's just one article - and it's not a point of view
| unique to Vox.
|
| https://www.nature.com/news/1-500-scientists-lift-the-lid-
| on... https://ecrcommunity.plos.org/2019/11/18/living-in-the-
| repro...
| patcon wrote:
| This sounds like the age-old balance between maintenance/repair
| and breaking-new-ground/innovation. My personal sense is that
| there is and always has been a gendered dimension to striking
| this balance, and varying levels of predisposition that
| bellcurve along gender-specific neurotypes.
|
| For me, once I was pointed to this axis, I started recognizing
| it everywhere -- legal institutions, the way companies are run,
| etc etc
|
| Hofstede's cultural-dimensions studies, based on surveys of
| IBM employees in the late 1960s and early 70s, have things
| to say about this too.
| mickallen wrote:
| You make a good point.
| xenocyon wrote:
| Yes, boring in the conventional sense is desirable from a
| scientific point of view (because the preference for surprising
| results leads to the biggest problem science has today: most
| published findings are incorrect). The linked article doesn't
| really do a very good job of explaining what "boring" means to
| the author, but my hope is he doesn't have a problem with
| boring in the sense of negative findings or unsurprising
| results.
| bjornsing wrote:
| Feynman made some interesting observations in his 1974 Cargo
| Cult Science speech...
| Areading314 wrote:
| Irritating to see this Atlantic "study" being accepted as valid
| and written about. The same happened with the "bullshit jobs"
| meme, where it stirred up enough controversy that it became an
| accepted fact for many people, even though the original
| research was highly questionable and didn't actually prove
| much of anything.
| burlesona wrote:
| This is a reasonable and well-explained argument, which I've
| heard many times before: the incentives are currently producing a
| large quantity of mediocre research, rather than groundbreaking
| science.
|
| So let me ask the research community on HN something I see
| discussed less often. What alternative incentives exist today, or
| could be created, that would push more scientists to try higher
| risk / higher reward research?
| derbOac wrote:
| Only speaking for the US, but
|
| Federal grant funding needs to be increased.
|
| Indirect funds need to be eliminated.
|
| The funding structures need to be reorganized. Some ideas I
| think have merit are basing grant award on a lottery system, or
| funding individual researchers based on previous research for a
| limited nonrenewable period of time.
|
| Grant review panels need to be mixed up dramatically. Currently
| they consist largely of people who have received grants in the
| past, and one of the biggest predictors of proposal rating is
| being a co-author with people on the grant panel. I think grant
| panels need to be based on some lottery system too: everyone
| with an ORCID ID gets thrown into a pool, like jury duty.
|
| Universities need to be funded in a way that's separated from
| grant receipts.
| cbsmith wrote:
| Honestly: more research $'s from government instead of
| industry. Industry wants conservative investments that reliably
| pay off.
| TobyBrull wrote:
| Give people tenure after their PhD and pay them a crappy salary
| that is guaranteed for the rest of their lives.
| bjornsing wrote:
| Interesting idea. Like a Universal Basic Income for PhDs...
| :)
| austincheney wrote:
| Percentage ownership of resultant intellectual property.
|
| A good example is the Minneola, a wonderful citrus fruit. The
| plant patent is owned by three parties: the University of
| Minnesota and the two inventors.
| zucker42 wrote:
| Of all the possible alternative measurement or incentive
| systems for scientific progress, I can hardly think of a more
| meaningless and damaging one than patents. There's a huge
| problem of companies submitting vague, overbroad, or
| superfluous patents. Plus, it does nothing to address the
| major problem indicated in the original article: we don't
| incentivize people to produce important research.
| varjag wrote:
| Largely inapplicable beyond applied sciences.
| whatshisface wrote:
| Imagine how much the Maxwell Trust would own today... The
| biggest problem with a viral IP system is that there would
| be nothing left for applied scientists.
| varjag wrote:
| And science would likely devolve into a patent-trolling
| system.
| logifail wrote:
| > Percentage ownership of resultant intellectual property.
|
| Pfft. I've been through this, my ex-supervisor attempted to
| file a patent which included a bunch of figures and
| illustrations I drew single-handedly while writing up my PhD
| and which were lifted from my thesis without my consent.
|
| My PhD was funded by a government research council.
|
| My ex-supervisor was filing a patent on behalf of a private
| company he set up to run in parallel out of his university
| laboratory.
|
| There are some people who just don't care about rules and
| figure they can just ask for forgiveness later. I'm still
| cross about this, over 20 years later.
| kome wrote:
| I would say: emphasize bibliometric measures of productivity
| (how much you publish) less, and emphasize measures of social
| impact more. But it's easy to say...
| asdfasgasdgasdg wrote:
| The elephant in the room, IMO, is that there is only so much
| ground to be broken. The physical world is fundamentally
| limited and limiting. There's a limit to how much there is to
| know about it, and as we come to know more of the big things,
| it becomes more and more difficult to learn the smaller and
| smaller things. This outcome -- that the rate of groundbreaking
| research is slowing -- is what we should expect, even under a
| consistent policy.
| bonoboTP wrote:
| My biggest transformative learning experience regarding this
| has been seeing how much of even technical (in particular CS,
| AI, ML) research is about recombination of existing ideas, and
| that the most successful researchers aren't making strictly
| technical contributions, "breaking ground", or "discovering
| new terrain", but are selling new stories and narratives
| involving known concepts and shifting the emphasis from one
| known aspect to another.
|
| Relatedly, in this lecture [0] the speaker frames it by
| contrasting the "positivist" model, where knowledge piles up
| and expands linearly, with the model of discourse, where
| research is a conversation whose participants must know what
| the others take for granted, assume, doubt, etc., and where
| new contributions argue that there are better ways to do or
| conceptualize things. Works are forgotten, left behind,
| ignored, etc. A work can be valuable in one context at one
| stage in the discourse, and worthless at another time. It's
| not like a neutral map of some terrain that we can just file
| and store forever.
|
| He's in the social sciences, where this may be more obvious,
| but it's also true of sufficiently developed technical
| sciences. When the low-hanging fruit has been picked, the
| game becomes closer to zero-sum. I mean, new value can still
| be created, but not mainly through bringing "new characters
| into the story"; rather, by letting the story unfold using
| the same characters (main ideas). Only a true paradigm
| shift may break this equilibrium.
|
| [0] https://youtu.be/vtIzMaLkCaM
| HPsquared wrote:
| Kind of like how sales and marketing people search for and
| find new customers and applications for an existing product
| or service.
| shishy wrote:
| I'm not sure that's actually something we want within the
| institution. IMO the most important thing is improving the
| overall _reliability_ of research.
|
| On this note, and admittedly a bit of a (relevant) plug: I work
| at a startup called scite that's trying to improve this --
| https://scite.ai
|
| Citations are the primary mechanism by which scientific papers
| "talk" to one another, and one of the systemic issues we see in
| the status quo is a sort of numerical reductionism where a lot
| of emphasis is placed on "how many times someone is cited"
| without any indication of whether those citation counts are
| from papers that support or dispute someone's findings.
|
| One of the things we do at scite is let you see, for a paper,
| how it has been cited (i.e. not just a count, but the
| surrounding textual context around the citation, and a
| classification from our model as to whether the citing work
| provides supporting or disputing evidence for the cited paper,
| or just mentions it).
|
| That information is also aggregated to the author level, or
| journal, and so on.
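|
| To make that concrete, here is a toy sketch (names made up for
| illustration -- this is not our actual API or model) of the
| kind of record and aggregation involved:
|
|     from dataclasses import dataclass
|     from collections import Counter
|
|     @dataclass
|     class Citation:
|         citing_doi: str
|         cited_doi: str
|         context: str  # sentence(s) around the in-text citation
|         label: str    # "supporting" | "disputing" | "mentioning"
|
|     def smart_tally(citations, doi):
|         # aggregate *how* a paper is cited, not just how often
|         return Counter(c.label for c in citations
|                        if c.cited_doi == doi)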
|
| The hope is that by providing (and improving) this service so
| that researchers can see how papers are cited, we're able to
| promote more reliable science (in addition to letting someone
| explore new subject areas / doing lit reviews much faster, and
| other things).
|
| I just wanted to share that because a lot of the other comments
| were more abstract and it might help to include something more
| tangible that's being actively worked on now. There's a lot of
| interesting work in general in this space.
|
| If you have any thoughts / feedback / questions, feel free to
| reply here (I check occasionally) or write to me at the email
| in my profile.
| efavdb wrote:
| A related point is that we could improve how good work gets
| bubbled up. Another poster pointed out Allison's work that led
| to the Nobel Prize, noting it was appreciated by just one
| group, and that helped it get broader notice. How many great
| works go unnoticed now just because there is no famous author
| on the paper with the bully pulpit to command attention?
| 13415 wrote:
| The only alternatives I see:
|
| 1. National research funding agencies and universities need to
| stop doing their assessments based on publication counting. Of
| course, publications need to be taken into account somehow, but
| at the very least these indicators should be weighted by
| curated rankings of journals.
|
| 2. Existing publications can be used as an indicator for how
| much output is to be expected in the future, but the goals for
| research output should be kept reasonably low. You cannot
| control the output of researchers, you can only control which
| ones you hire and (to some extent) the type of research they
| conduct.
|
| 3. Hiring in academia needs to be based more often on screening
| candidates by committees of outside experts in the field
| instead of local staff and people who don't know the area well.
| Projects also always need to be evaluated by experts on the
| project's topic. (As strange as this may sound, this is often
| not the case!) It might even help to prune away irrelevant
| indicators, e.g. ask candidates to submit only 3-5 of their
| best publications and completely ignore the rest. And by
| "completely" I really mean completely. The reason is that if
| you have two researchers and one of them has 20 publications
| with five really good ones, and the other has 40 mediocre
| publications and not a single good one, then it is very hard
| for a normal assessment committee to justify taking the first
| one, but it's always easy to converge on the second one, even
| if that is exactly the wrong choice.
|
| 4. Assessment committees need to be told to evaluate the
| quality and originality of the research only. If you really
| want or need a certain output quantity, then make it part of
| the formal hiring criteria, not part of the scientific
| evaluation.
|
| 5. Increase funding for risky projects and risky individual
| grants, and eliminate criteria that exclude unusual CVs (e.g.
| allow a long time after the PhD, people from other areas,
| people with time spent in business, unusual research
| suggestions). Originality should be one of the highest-ranking
| criteria. There should also be a focus on versatility and
| _hard skills_. Even in the humanities, never hire anyone who
| says anything disparaging about mathematical methods and
| statistics, and never leverage people who don't know the tools
| of their trade into positions of power.
|
| Basically, you cannot push scientists into anything. Once
| someone is hired at the postdoc level, you cannot steer much;
| micromanagement and constant evaluation are highly counter-
| productive. You need to hire scientists who do interesting
| research. Treat them like an investment in a startup: Most of
| them will fail but some of them will succeed. Give them a
| second chance, maybe even a third one, but not indefinitely
| many. The hiring policies and processes at universities are
| often bad. Candidates are not evaluated by experts, there is
| plenty of favoritism, boring high producers are favored over
| interesting researchers who want to build something up, because
| the local staff doesn't like scientists who "shake things up",
| and so on. There is a lot of inertia to overcome in academia;
| being a good scientist can sometimes even be harmful to your
| career. Funding agencies need to steer against this.
|
| _Edit: Much of what I mention can also be achieved by getting
| an absolute top researcher in an area, giving him or her an
| institute or research unit and a ton of money, and letting
| them do their thing. They know what to do and whom to hire.
| But it's expensive._
| asciident wrote:
| From my experience, most of your ideas have been implemented
| in different ways in academic procedures, as best as they
| can be.
|
| 1. Funding agencies and universities don't make decisions.
| Only individuals in those institutions do. And I don't know
| any individuals who admit to doing assessments based on
| publication counts. They'd be openly mocked for such an
| archaic way of thinking. The only people who seem to openly
| obsess over publication counts are Ph.D. students.
|
| 2. Agreed, but output can be measured in different ways
| besides traditional academic publications.
|
| 3. It seems like this is already done, through recommendation
| letters when people apply, and in external letters during the
| tenure process. I see the opposite outcome: whenever I've
| seen someone with fewer but better publications compared
| against someone with more but weaker publications, the first
| is always preferred, because it's easy to make the case
| "Person 1 has fewer papers, but they're more impactful and
| they choose riskier problems, an excellent trait" versus
| "Person 2 has more papers, so they are more productive."
| Unless of course it's not clear that person 1's papers are
| actually higher quality, in which case the original premise
| doesn't hold.
|
| 4. This is already the primary factor from what I've seen.
| That's the whole point of job talks, publications listed on
| CVs, etc. I've never seen a tenure-track hire without people
| reading some of the papers listed on the CV, and discussing
| the presented work in the job talk.
|
| 5. Half of the funding agencies now say they're looking for
| risky ideas; the word "transformative" has been a common part
| of NSF vocabulary for decades. Originality is one of the NIH
| core review criteria, as are criteria about the Investigator
| themselves. How panelists interpret the criteria is more a
| social and political matter than a technical one.
|
| Anyway, maybe we just live in different academic worlds, but
| I don't see as much blatant expression of the biases you are
| describing, and I feel like the major goals of your ideas
| have already been implemented, which has led us to the
| current situation. Maybe not everyone buys into them, but as
| you say there isn't a way to get a bunch of individuals to
| follow orders, so the only change that's possible is change
| to process, which has already happened. I don't agree that we
| need to add additional bureaucracy like rating applicants on
| "did they disparage mathematics", which I find extremely
| problematic.
|
| Even the last idea in your edit is already done in every way
| possible. The MacArthur award, Turing Award, Nobel Prize,
| various prizes, etc. all give an absolutely top researcher
| money and prestige, so they can do their thing.
|
| I think the best way forward are social solutions, rather
| than procedural. Convince minds, rather than add bureaucracy.
| 13415 wrote:
| We're definitely living in different parts of the world,
| work in different areas, and are at different universities.
| Obviously, I wouldn't have written the points if they had
| been implemented at my university. (I don't want to write
| in which country and which discipline, since I don't want
| to be identified. I'm one of very few foreigners with long-
| term funding in my area; we are a small country with only a
| few universities, plagued by all of the problems I've
| mentioned.)
|
| I have to insist on what I wrote about candidates who
| disparage mathematical methods in the humanities. These
| people can destroy whole departments if you let them roam
| freely.
| mensetmanusman wrote:
| I left academia for industry, so my opinion might be colored by
| the transition.
|
| The way I see it, the easy problems that can be solved by one
| or two individuals are mostly gone. You need large diverse
| teams to tackle interesting problems.
|
| To translate this into policy, you need opportunities for many
| more first authors, and you need to try to incentivize
| completely different departments to work together.
| virtuallynathan wrote:
| I'm not sure this is true in all areas; there are very simple
| questions left unanswered in health/medicine/nutrition that
| could be at least partially answered by very small studies.
| bra-ket wrote:
| > You need large diverse teams to tackle interesting problems
|
| This 'corporate' view of science is just not true, kind of
| like how adding "man-hours" to software projects doesn't
| necessarily produce better software, or speed up or enable
| those projects.
|
| Large diverse teams just produce more mediocre research, if
| recent experience with the massive "deep learning" industry
| is any indication; or they fail completely, like that "blue
| brain" project with multi-billion funding and thousands of
| participants.
|
| Bigger is not always better, and it has never been necessary
| for good science.
| mattkrause wrote:
| The Human Brain Project is an extreme case.
|
| On the other hand, there are pretty hard limits to the
| sorts of projects that our standard model of a single full-
| time person[0] can do, even with some part-time
| collaborators (i.e., middle authors). This is especially
| true for projects that involve a combination of very
| different approaches: bioinformatics, HTS, in vivo
| validation, etc.
|
| I've noticed more and more papers with multiple (sometimes
| 5!) co-first authors, but that's a kludge around the fact
| that we can't figure out how to allocate credit to teams.
|
| [0] And, almost invariably, a trainee
| s0rce wrote:
| I'd say this is true in the sense that you "stand on the
| shoulders of giants": you rely on a lot of technology and
| science that has been developed by others. But small, non-
| diverse teams can accomplish important (if likely small)
| breakthroughs in their field. You don't necessarily need a
| large diverse team to do important science. My anecdote is my
| PhD work.
| tracyhenry wrote:
| Not sure if it applies to other fields, but as I observe in CS,
| many professors start tackling more interesting & risky
| projects after getting tenure. Even though their
| students/postdocs still value paper count a lot, they care more
| about producing groundbreaking work.
| SubiculumCode wrote:
| I reject the notion that a dominant proportion of research
| publications are mediocre. Instead, the problems have gotten
| harder. Harder research questions force careful, yet
| incremental, research programs.
| Bakary wrote:
| I think it might be a mistake to frame the issue as scientists
| not doing enough groundbreaking research. Those scientists do
| exist in greater numbers than before and have the right
| incentives guiding them. What is happening instead is that the
| superfluous group is expanding at an ever greater rate.
|
| These are two parallel worlds being discussed as if they were
| one.
|
| To answer your question, I believe we need to look outside of
| academia to relieve the pressures on most people so that fewer
| are tempted to add to the science glut. I'd venture to mention
| UBI but that's a whole can of worms
| aqsalose wrote:
| In my limited experience as a disillusioned PhD student
| dropout, the issues are, in approximate order:
|
| 1. Publications in prestigious journals have become a measure,
| not a way to communicate. This leads to perverse behavior.
|
| Any kind of project that would involve deep work is very scary
| unless it has early low-hanging fruit along the way: easily
| publishable papers as intermediate steps. I personally felt I
| was encouraged to find such fruit without much coherence in
| what I was doing. Preferably one does not have to learn or
| study anything challenging, because that makes the fruit
| potentially much more difficult to pick.
|
| My proposed solution: do not judge researchers by the length
| of their publication list when deciding whom to hire or fund.
| Judge them by a selection of their recent work instead, and
| maybe by a presentation of what they are doing now.
|
| 2. This is connected to another problem: corrective feedback
| comes into play quite late, when most of the work has been
| done and it is very costly to change anything. I feel better
| results could be had if reviewers entered the stage when the
| study was being drafted.
|
| Currently, you submit a manuscript to a journal and get it
| rejected after a round of substantive, critical but correct
| reviewer comments. The optimal move is not to revise the work
| as suggested in light of the new ideas, but to find a less
| prestigious journal likely to accept it with minimal changes.
|
| 3. This has effects on conferences. While their primary purpose
| has always been presenting one's research and hearing about
| others', they have turned too much into a platform for
| advertising your research _so that you get cited_. Thus the
| general feeling can be quite one-way, and there is less
| knowledge-enhancing scientific discussion. Some conferences
| are better than others (the amount of genuine curiosity and
| interest helps), but sometimes I felt people would come to
| deliver their own little advert, maybe as a chore between
| meetings with their personal friends, instead of coming to
| talk, to both receive and offer meaningful updates.
|
| 4. PI-centered networks and the death of the university
| department. I believe university departments were originally
| formed because it was thought beneficial and reasonable to
| have scholars with similar interests nearby. Social networks
| are probably quite an important part of science, and they are
| easier to build with people who are often in the same
| building. Research-group culture is a fascinating way to have
| groups of people nearby who don't communicate. It appears
| that, to a PI, each unsanctioned contact from their underlings
| to outside the group is a distraction at best (if it leads to
| an underling working on publishing something the PI is not
| involved in) or an unwelcome threat at worst, because it risks
| changing their plans, whether those concern the projects
| underlings should be working on, the author order on planned
| manuscripts, or something else. Every presentation to group
| outsiders is about signalling importance, prestige and
| coolness, but there is no research communication within the
| department to help with unfinished work.
|
| A solution would involve ceasing to fund faux institutions
| within institutions, and starting to pay researchers in a way
| that encourages them to actually collaborate within their
| physical departments when it is useful. (Salary paid by the
| university/dept., no project grants.)
|
| 5. I suppose science moving to very large projects involving
| lots of contributors cannot be helped, but it is incompatible
| with publication authorship as the currency of value in
| academia. It leads to authorship being traded as a currency
| within projects. One is "paid" with official authorship on
| some papers; sometimes a person can show their magnanimity by
| bestowing it upon some people who participated. Sometimes
| technical or statistical help does not get acknowledged.
|
| I don't know what to do about that. Moving from evaluating
| publication lists to evaluating personal contribution and
| capability might help. In addition to the big publications
| that are meant to communicate the big findings publicly but
| leave outsiders to interpret author order like tea leaves,
| perhaps publicly record some documentation of what exactly
| each individual did, in their own writing, and make that
| documentation what contributors refer to in their CVs?
| CJefferson wrote:
| I think the problem is requiring excessive polish in academic
| papers.
|
| This causes two problems:
|
| 1) It's easier to polish something which is incremental.
|
| 2) It's easier to achieve the look of polish by purposefully
| hiding issues and smoothing over edges.
|
| I'd personally like every paper about a technique or algorithm
| to include as many problems where it does terribly as where it
| does well -- this is annoyingly uncommon. Also, people
| shouldn't be afraid to discuss all the nasty unfinished bits,
| but in practice if you mention them reviewers say "well, go
| fix that before acceptance".
| chalst wrote:
| I think this is close to the truth but misses an important
| factor: journal editors are, as a result of increasing
| submissions, asking for more refereeing. As a result, an
| increasingly high proportion of academics don't referee at
| all and those that do tend to referee more papers in less
| depth.
|
| With less time spent per referee report, easily appreciated
| evidence of polish matters more, whereas papers that pay
| careful attention to methodological issues, tackle deep
| problems, etc., require time that the referee is unwilling
| to spend.
| Cd00d wrote:
| I don't know if it's the cause, but the "polish" requirement
| is horrendous.
|
| When I left academia one of my most significant thoughts was,
| "at least I won't have to argue over the 21st draft of a 3
| page paper ever again".
| bjornsing wrote:
| Yes. I'm into ML/AI and would love to read more papers on
| cool ideas studied in isolation, but most papers out there
| are more "we combined these fiftytwelve techniques and look
| it's SOTA on this dataset!" (It should also say "we have no
| idea why" but they tend to leave that out.)
| garden_hermit wrote:
| Many novel funding approaches have been proposed in an attempt
| to free scientists from the endless grant -> publication ->
| grant cycle.
|
| One is to accept only mini grant proposals, say 10 pages at
| most, and screen these to meet some minimum quality threshold,
| say, the top 20 percent. Then a random subset of these gets
| funded. This helps diversify the funding somewhat, and
| hopefully catches more "risky, but interesting" projects.
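|
| As a rough illustration, a minimal sketch of that screen-then-
| lottery mechanism (the scoring function and award count are my
| own assumptions, not any agency's actual process):
|
|     import random
|
|     def screen_then_lottery(proposals, score, n_awards,
|                             quantile=0.80):
|         # 1) screening: keep proposals in the top 20% by score
|         ranked = sorted(proposals, key=score, reverse=True)
|         pool = ranked[:max(1, int(len(ranked) * (1 - quantile)))]
|         # 2) lottery: fund a uniform random subset of the pool
|         return random.sample(pool, min(n_awards, len(pool)))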
|
| Another approach has been universal funding. The most basic is
| "everyone gets X research dollars", and any additional funding
| would require submitting grants on top of that. More complex
| schemes propose "everyone gets 2X dollars, but they must donate
| X to other researchers", which allows some people to accumulate
| the funds necessary to conduct big, expensive work. All of
| these proposals aim to minimize grant writing and review, and
| hopefully to diversify the kinds of work being done.
|
| Longer windows of evaluation could also be useful. Big,
| interesting ideas take time to develop and support, but the
| competitive academic environment demands quick evaluations,
| which make it difficult to sustain long projects. Evaluating a
| research project after 5 years, instead of 1 or 2, could help
| alleviate some of the pressure on researchers so they can
| pursue longer agendas.
|
| Every researcher can come up with their pet theory and
| solution. Ultimately, I think that all of these approaches, and
| many others, have merit. But we need to experiment with
| different policies to see what works.
| vosper wrote:
| This is something Patrick Collison (Stripe CEO) has been
| talking about these past few years. Here's an Atlantic article
| from Collison and Tyler Cowen called "We Need a New Science of
| Progress"
|
| https://www.theatlantic.com/science/archive/2019/07/we-need-...
|
| And an EconTalk episode on the same subject (which goes into
| more detail than the article)
|
| https://www.econtalk.org/patrick-collison-on-innovation-and-...
| 2038AD wrote:
| Surprised to see no mention of Thomas Kuhn or _The Structure of
| Scientific Revolutions_ in the blog post or in the discussion
| here. Briefly looking at the linked papers, though, he's cited
| by Thurner et al. and directly discussed by Bhattacharya and
| Packalen. His ideas on incremental science seem pretty
| appropriate.
| NotPavlovsDog wrote:
| I am currently pursuing a second master's degree (in
| management). Some of my personal experience in academia:
|
| 1) Publish or perish is a real thing. Citation ratings and
| measurement affect many of the academics I have encountered.
| This contributes to boring papers.
|
| 2) Mentioning the replication crisis in social science gets a lot
| of the social science academics observably upset and defensive.
| They have to take care of 1)!
|
| Compare this to the exemplary stance in CS, as demonstrated by
| researchers criticizing Google for not backing magic AI claims. [
| https://www.nature.com/articles/s41586-020-2766-y ] This vigorous
| scientific stance is almost impossible to imagine in social
| science outside of Critical Management Studies.
|
| 3) I have enjoyed my time spent with CS academics a lot more,
| luckily have some interaction even now - the ones I have spoken
| to seem to lack the self-criticism blind spots the social
| scientists exhibited. The CS group seems to have a lot more fun
| with their research.
|
| It was also interesting to observe that for term papers, many
| CS professors wanted to see a report with _working software_
| and a well-thought-out story of what challenges one
| encountered and how they were overcome. Not one paper was
| returned because of improper styling or citing. They appeared
| to enjoy the battle stories of troubleshooting and finding
| solutions to assignment challenges. They even read the sources
| I linked to, commenting on what they thought was a good find.
|
| From that experience (and others), CS sure feels like more of a
| meritocracy.
|
| By contrast, some social science professors were obsessed with
| proper APA or Harvard styling and with citing "the right
| thinkers". Circular citation is a real thing. Perhaps, since
| they can't prove anything, establishing credibility via
| cliques, mutual citation and signaling is their academic
| survival strategy.
|
| It does feel like management and CS are orthogonal, and this
| has motivated me to direct my research towards a hard-realism
| approach to facilitating communication between developers and
| value-driven management.
|
| I.e. I have observed that even when a manager or leader wants
| to hire developers, social-science-damaged managers and HR
| will actively sabotage these efforts because they are still,
| fundamentally, driven by Taylorist, control-fetish concepts
| (which they refuse to accept). The SS victims seem to wish to
| push their personal struggle onto developers: "I suffered
| through school, you must show a diploma as well". Some even
| demand transcripts!
|
| When I consulted for a company that was complaining about how
| hard it was to recruit developers, I looked at their
| application process. They demanded college transcripts even
| before the review of applicants began. I hope you are laughing
| with me at the entitled absurdity of this. Oh, you want a
| personal letter as well?
|
| Bitter HR person, do you fail to understand that you are
| competing in your recruitment against multiple actors actively
| scraping through code repositories and blogs to find and contact
| devs? Or does accepting that a developer is incomparably more
| competitive on the job market than you and has multiple offers of
| employment at any time just hurt too much?
|
| One of my favorite quotes from a (well-cited) researcher on
| managing developers includes a rant on how creative, industry-
| competitive (able to get another job) and financially stable
| developers are "hard to manage".
|
| This may be how many incompetent managers feel, deeply.
|
| If I can facilitate getting these emotionally insecure
| individuals out of recruitment and management, helping
| developers and managers who want to produce _value_ find each
| other, it would be in opposition to many well peer-reviewed
| publications. Boring publications.
| garden_hermit wrote:
| My doctoral degree has required training in both Computer
| Science and Social Science, and I regularly interact with both
| communities. Drawing on this, I feel that this comment is
| unnecessarily partisan and divisive.
|
| > 2) Mentioning the replication crisis in social science gets a
| lot of the social science academics observably upset and
| defensive.
|
| The replication crisis is widely discussed in the social
| sciences. Nearly every major social science journal will have
| at least one, likely many, editorials and articles on the
| crisis tailored to its field. Maybe this is explicitly a
| Management Studies thing? The field is relatively small and
| insular, and so cannot be generalized to wider social science.
| And are your experiences drawn from the community as a whole,
| or from the faculty in your department?
|
| > 3) I have enjoyed my time spent with CS academics a lot more,
| luckily have some interaction even now - the ones I have spoken
| to seem to lack the ego-driven blind spots the social
| scientists exhibit. The CS group seems to have a lot more fun
| with their research.
|
| Maybe this is just the local community of wherever you studied?
| I've met equal parts ego-driven and chill people in both CS and
| Social Science. A major concept in most social science fields
| is sampling--I'd urge you to consider how representative the
| people you are talking to actually are, and how your sample
| might be biased relative to the global population. My prior is
| that there would be little difference in personality, in
| aggregate, between the fields.
|
| > E.g., I have observed that even when a manager or leader wants
| to hire developers, social-science-damaged managers and HR will
| actively sabotage these efforts because they are still,
| fundamentally, driven by Taylorist, control-fetish concepts
| (which they refuse to accept)
|
| I mean, that's one possibility. The other is that hiring people
| is difficult and expensive, and that credentials are a quick
| and useful, if flawed, method of sorting through the pile.
|
| And are most HR people really trained in Management? I honestly
| don't know, but I doubt that the typical Psychology or
| Sociology major, for instance, is going to know of, remember,
| or consider using Taylorist management theory.
|
| It definitely seems that you enjoyed your time in CS more than
| your time in Management. That's great! CS is an important field
| with lots of cool people. But generalizing this preference into
| a wider philosophy that privileges CS at the expense of (all?)
| social sciences is silly.
| hndudette2 wrote:
| Regarding the replication crisis, can I ask why we don't yet
| have a central body to which scientists report their intended
| experimental design, sample size and research hypothesis
| before the experiment begins? Then we could eliminate the
| publication bias that arises from null results not being
| published, which is a big part of the replication crisis.
| Meta-analyses would survey only those studies that reported
| this ahead of time, and the scope of possible publication
| bias could be quantified.
| garden_hermit wrote:
| These kinds of things exist, though the rate that they are
| adopted varies widely.
|
| In Psychology, for instance, there was discussion back in
| 2015 about pre-registration in the field [1]. Since then,
| tools and repositories have been created to help facilitate
| pre-registration [2].
|
| But again, they are not universally adopted. Some of this
| is probably generational--academia is quite conservative,
| and doesn't change quickly; a new generation will likely
| use these tools more. Some fields are also more insular
| than others, and there such tools will take more time to
| diffuse into the community. In the meantime though, things
| have improved, even if only somewhat, across the sciences.
|
| [1] https://www.apa.org/science/about/psa/2015/08/pre-
| registrati... [2] https://www.cos.io/initiatives/prereg
| pmyteh wrote:
| There are registration databases, but there are issues.
|
| Who trawls the (now huge) database to find the registered
| studies which never reported? In any case, that could
| indicate a file-drawer publication bias, or simply that the
| funding disappeared or a researcher decided not to proceed.
|
| How do you move from very spotty preregistration to
| compulsory preregistration? It would probably take a
| coordinated push from all the major funders. And they
| haven't successfully moved to fully open access publication
| in over a decade of trying, despite the obvious financial
| gain to the non-publisher world of winning at _that_
| coordination game.
|
| How do you handle work on the margins of registerability?
| Not all scientific work is testing hypotheses; some is
| exploratory. Should a quantitative description be ruled out
| if it wasn't registered? If so, how are reasonable
| hypotheses for future work established? If not, you have to
| be extra vigilant to stop people smuggling phrases that
| imply confirmation into nominally exploratory work.
|
| And the other problem with the replication crisis is that
| even decently planned, pre-registered, straightforward
| studies can fail to replicate. Sometimes for understandable
| reasons (the significance was actually by chance, or some
| unanticipated confounder interfered) and sometimes for
| thoroughly blameworthy ones (rather than p-hacking you can
| just fudge the data, or make 'accidental' programming
| errors).
|
| So I'm all in favour of pre-registration, but it's not a
| magic bullet.
| hndudette2 wrote:
| We both probably agree that parapsychology hasn't
| demonstrated that psychic phenomena are real, and yet
| its various meta-analyses purport to show that they are.
| The experimental design of the underlying studies is
| quite good, so it must be publication bias. I can't
| think of a better demonstration of why preregistration
| is a critical necessity, even if it is difficult to get
| going.
|
| All of your questions are very good ones however, and I
| agree with your conclusion that it isn't going to be a
| silver bullet.
|
| _Who trawls the (now huge) database_
|
| Only the authors of the meta-analyses, as part of their
| lit review. It's simply a criterion for inclusion in a
| meta-analysis.
|
| _How do you move from very spotty preregistration to
| compulsory preregistration?_
|
| Eventually, if meta-analyses only look at studies that
| are preregistered, this would boost citation counts for
| preregistered studies and therefore incentivize
| scientists to preregister, since their careers are tied
| to citation count. How to arrive at this end state is a
| bit of a chicken-and-egg problem and requires a cultural
| shift, so it'll be very difficult. But once the end
| state is reached, it should be sticky without needing to
| be compulsory, since career incentives are aligned.
|
| _How do you handle work on the margins of
| registerability?_
|
| Preregistration is mostly helpful for research questions
| that are amenable to statistical meta-analysis, i.e.
| those questions where the hypotheses tested across papers
| are sufficiently similar that the quantitative results can
| be statistically combined and statistical significance &
| clinical significance can be evaluated. I think
| preregistration is mostly needed for these questions,
| although it's probably helpful elsewhere too.
| pmyteh wrote:
| Attitudes towards the replication crisis seem to me to vary
| between researchers in any given social science discipline.
| Those (careless?) empirical researchers whose careers are being
| retrospectively ripped apart are inevitably horrified (and come
| up with a huge range of arguments, both reasonable and
| unreasonable, as to why they shouldn't be), while theorists are
| mostly happy and younger empiricists are taking up the
| reproducibility/better-science challenge with some enthusiasm.
| Likewise, attitudes to perfect citation vary wildly. I don't
| care as long as it's vaguely consistent, which is a common (but
| not universal) position in my sub-discipline.
|
| I'm not sure how much of your experiences are specifically a
| feature of management research rather than the social sciences
| in general. I spent a bit of time studying management (and
| being a manager, in pre-academic times), though now I'm a
| computational political communication academic. Management
| research is seen by the rest of us as a bit weird - a bit of an
| outsider discipline, with different attitudes and surprisingly
| little scholarly overlap. I wouldn't expect many political
| scientists or sociologists to be Taylorists, for example.
|
| Publish or perish, boring papers, and problematic academic
| management on the other hand - that is very familiar.
| NotPavlovsDog wrote:
| Interesting, thank you for commenting!
|
| Popular management research does seem like an extra easy
| target... I have had exposure to several social science
| departments across continents, not just management, and those
| members who could be described as "mainstream" are somewhat
| lacking in reflexivity, with some exceptions. (When they are
| critical, they stand out extra bright, like Alvesson,
| Willmott, Rowlinson.)
|
| But about management research. When a leading critical figure
| in the field summarizes it as follows:
|
| _" Whilst there are plenty of theories in management, there
| are no laws"_ [P.Griseri,as quoted by P. Morris in
| "Reconstructing project management" ],
|
| one could wonder if that state of affairs could perhaps be
| explained by deeply ingrained subservience of management
| research to industry [ as described by Alvesson and other
| Critical Management Studies (CMS) practitioners].
|
| As for Taylorism, there may be no explicit self-identification
| in the field per se, but whether it is Taylorism, Puritanism
| or any other -ism, the underlying desire for subservience,
| control and performance measurement as sound management
| practice runs deep. (A gem of a paper is "The impact of
| Puritan ideology on aspects of project management" by Whitty &
| Schultz.)
|
| I have been doing some preliminary experiments with applying
| CMS to recruitment - a short summary could be "treating
| developers with respect by providing necessary information
| and establishing transparency in the job announcement
| increases responses dramatically. Who knew!"
|
| Another interest of mine lies in sociophysics, which I have a
| huge reading list to catch up on. Did you have any
| sociophysics topics that attracted your interest in relation
| to political communication?
| pmyteh wrote:
| I had to look up the word 'sociophysics', which isn't used
| in my part of the discipline, so probably not! There are a
| lot of physics-derived tools that are coming into
| increasing use, though. Half of social network analysis
| seems to come from sociology, and the other half (the
| highly computational part, in the main) from physics. I've
| done bits of network analysis (one of my better-received
| recent papers was applying network partitioning to the
| problem of identifying the edges of news 'stories' in sets
| of articles) but I do most of my stuff in the text analysis
| space where NLP/topic modelling/matrix factorisation stuff
| is making most of the running, and that's mostly drawing
| from CS rather than physics.
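|
| For the curious, here is a minimal sketch of that kind of
| partitioning - not the method from the paper, just an
| illustration using networkx's modularity-based communities,
| with invented article names and similarity scores:
|
|     import networkx as nx
|     from networkx.algorithms import community
|
|     # Nodes are articles; weighted edges link articles whose
|     # texts are similar. All scores below are made up.
|     G = nx.Graph()
|     G.add_weighted_edges_from([
|         ("article_1", "article_2", 0.9),  # same story
|         ("article_2", "article_3", 0.8),
|         ("article_4", "article_5", 0.7),  # a different story
|         ("article_3", "article_4", 0.1),  # weak cross-story link
|     ])
|
|     # Modularity-based partitioning groups densely connected
|     # articles; each community approximates one news "story".
|     stories = community.greedy_modularity_communities(
|         G, weight="weight")
|     for i, story in enumerate(stories):
|         print(f"story {i}: {sorted(story)}")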
| NotPavlovsDog wrote:
| Sociophysics has not been met with a wide welcome in
| management studies, perhaps due to its approach to proof:
| it requires a lot more work than the popular hand-waving.
| It will be interesting to see whether physics-derived tools
| and approaches will lead to improvements in social studies
| with regard to current critical concerns.
|
| It is unfortunately (or fortunately, depending on how you
| look at it) a far-off area for me, as just applying
| viewpoints from outside the mainstream to experimental
| research in management has a lot of low-hanging fruit with
| potentially significant impact.
|
| That could be one of the benefits of the humanities
| aspect of management studies: it is well justified
| (though not popular) to take a humanitarian (and thus
| biased) stance towards improving the well-being of
| employees.
| amirkdv wrote:
| I can't help but think a universal basic income, at least in
| societies that can afford it, would dramatically change the
| entrenched misalignment that causes this.
|
| The signal/noise issue is real but, as others have noted, not
| the most concerning. The main structural problem IMO is that the
| structures that support scientific research (academic evaluation,
| university admin, granting agencies, publishing industry, etc.)
| are all reinforcing untenable illusions. A few benefit immensely
| from this, but there are many who have the incentive to change,
| yet are cornered by their economic circumstances into playing
| along.
| pelasaco wrote:
| I have the impression that yes, papers are getting boring, and
| peer review is one of the problems, though I see it as a side
| effect. The main problem, IMO, is research as a profession,
| where the researcher is just interested in finishing his/her
| paper and getting it published, to be able to get the next
| grant and keep researching a topic that he/she is not really
| passionate about. The comparison with Einstein is therefore
| unfair. For a big part of researchers, science is just a job.
| bjornsing wrote:
| Yes. And when "it's just a job" people dominate, they set the
| culture, and make Einstein's life very difficult / improbable.
| bonoboTP wrote:
| Stepping back a bit, I feel the argument is analogous to how
| music is becoming more mainstream and generic, how casual gamers
| "corrupt" the art form of gaming, how clickbait and stupid stuff
| is overshadowing lovingly curated independent websites, eternal
| September, etc.
|
| One answer to all these propositions is that the valuable stuff
| is still there, you just have an _additional_ flood of mediocre
| boring predictable stuff. You can still find enough indie music
| and games and movies to spend all your free time on.
|
| Similarly, if you know where to look and whose papers to watch
| out for, you can read more interesting stuff. The existence of
| more bullshit makes the filtering take somewhat more effort, but
| it's bearable.
|
| Most papers are never deeply read anyway, outside of the
| reviewers. Even citations don't mean someone really read the
| paper in depth.
|
| So, most mediocre, boring papers get ignored.
|
| It's an illusion to expect that every researcher can pump out
| multiple really interesting, non-boring papers every year. So we
| pretend. We write, we cite, we present, overinflate, overpromise,
| etc. It must look like there is steady, hard, noble work and
| toil. It's like a ritual. The scientists are writing all these
| papers so taxpayers can sleep well knowing that their money pays
| for hard work.
| timkam wrote:
| I largely agree, but we do not write all these papers "so
| taxpayers can sleep". We write these papers because the
| administration incentivizes us to get the highest-profile
| publications with the lowest effort possible. In relatively
| creative fields (i.e., much of computer science), this can even
| mean making friends in the community and co-authoring papers
| with them without being the one who does any of the hard work.
| (In other sciences, it means being the one who pulls in the
| money, as far as I understand.)
|
| A related problem I see is that we treat all papers the same:
| position (opinion) papers, reviews & surveys, and "hard"
| results (relevant proofs, strong empirical results, software
| artifacts that are deployed at scale in practice etc.). As the
| blog post somewhat suggests, the administration should
| primarily ask us to summarize these results when assessing
| whether we are worthy of funding and faculty positions; sure,
| it takes more effort, but I think it is possible.
| nextos wrote:
| Exactly, the big issue is that paper counts have become the
| dominant metric. And like with all structures dominated by a
| single metric, it's easy to game the system.
|
| Most professors I have worked with in experimental sciences
| are completely disconnected from actual research. They just
| optimize for hiring hordes of people to churn out tons of
| mediocre papers, which they don't care about. I came from
| theoretical CS & math, so this was really shocking.
|
| It's really depressing and it needs to change. Academia is
| now in a phase similar to that of an innovative startup which
| has been filled with many middle managers and has turned
| into a corporate monstrosity.
| cbozeman wrote:
| It's TPS reports. That's what it is. And until academia gets
| its own Bobs to ask academia's Lumbergh, "Yeah, Academia,
| lemme ask yah... real quick question here... how much time
| wouldja say yah spend each week dealing with these papers?"
| retrac wrote:
| "Paper count" is about as reliable a metric as "lines of
| code". Publish or die is a toxic mentality in academia and
| you are certainly not the first to have observed this,
| sadly.
| xmprt wrote:
| I'm not sure those two fields are comparable. In
| software engineering, at least, there's usually a dollar-
| value impact for the code you write. In research, the
| impact isn't as tangible and probably won't be known for
| years.
| throwawaygh wrote:
| _> In research, the impact isn 't as tangible and
| probably won't be known for years._
|
| This still describes a lot of software engineering.
| unishark wrote:
| It's pretty outdated. In research schools you need to win
| funding or die nowadays.
|
| And while publications do help you, say in getting hired
| and promoted, your publications are dug into pretty
| deeply. They don't want to be tricked into hiring someone
| useless who can crank out fluff but can't win a grant.
| [deleted]
| derbOac wrote:
| This is completely true -- something that many outside of
| academia don't understand -- but I'm not sure it really
| makes the situation any better. In a lot of ways it just
| kicks the problems down the road.
|
| By whatever metric you use, it's really about the
| checklist rather than substance. And the checklists are
| based on stereotypes, and get gamed.
| drran wrote:
| Maybe, software solutions can be applied to academia.
|
| In software, we have "source code distributions", such as
| Debian, Fedora, Gentoo, Arch, Red Hat, SUSE, etc. They
| are collections of mostly a same free software, with
| minor differences, but with different policies and
| purpose. Some are conservative collections of proven
| software, others are bleeding edge, etc.
|
| IMHO, someone should start to collect papers, which are
| worth reading, and _keep this collection up to date_ ,
| with fixes to papers (AKA as patches in software), with
| one paper per topic, with ready to use math models and
| simulations, with in software tests for models, etc.
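|
| A minimal sketch of what one entry in such a curated collection
| might look like (purely illustrative; every field and file name
| below is hypothetical):
|
|     # One vetted paper per topic, kept up to date, with errata
|     # ("patches"), a runnable model, and in-software tests.
|     entry = {
|         "topic": "stochastic-gradient-descent",
|         "paper": "doi:10.0000/example",      # placeholder id
|         "patches": ["erratum-2020-01.md"],   # post-publication fixes
|         "model": "models/sgd_reference.py",  # ready-to-use model
|         "tests": "tests/test_sgd.py",        # checks for the model
|         "channel": "stable",                 # vs "bleeding-edge"
|     }
|     print(entry["topic"], entry["channel"])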
| andi999 wrote:
| Maybe you didn't mean it, but "gaming the system" sounds
| bad. While I agree the outcome is bad, I think that when your
| boss tells you to publish 10 papers per year and you do
| that, it cannot be called "gaming the system".
| jhoechtl wrote:
| > It's really depressing and it needs to change.
|
| It has been like that for ten years and nothing has
| changed. People keep finding out just about the same as
| you did, but nothing changes. It's such a convenient metric
| that it will stay for much longer.
| anonymousDan wrote:
| Regarding your second paragraph, I don't think this is the
| case in the UK at least, where academics are judged according
| to the REF (Research Excellence Framework) criteria.
| unishark wrote:
| Reviews are especially bad. You can tell which people work at
| schools where citations count towards their pay (and reviews
| are allowed in citation counts) by the ratio of reviews to
| original research papers they write. Some schools/governments
| really need to wise up.
|
| I think of it as a kind of predatory or scavenger behavior.
| They basically steal the citations from the papers they are
| reviewing in return for a small effort in helping us with
| information retrieval. We might as well just cite Google and
| Elsevier. Ideally, citations should pass through them somehow
| and get applied to the original sources who did the hard
| work.
| gradstudent wrote:
| It's true that some review papers are shallow but it's
| pretty cynical to paint all contributions of this type with
| the same brush. In my experience, good review papers are
| more than summaries. They can connect separate lines of
| research for example, compare and contrast different ideas
| and they can re-contextualise a body of research to give a
| broader picture and reveal new and interesting directions.
| These papers are hard to write well and take a long time.
| Even more so in cases where empirical work has been
| undertaken so as to make direct comparisons and provide
| reference implementations.
|
| Even the summary papers can be useful. On particularly
| active problems there can be dozens of papers per year.
| Sorting the wheat from the chaff by highlighting notable works,
| and pointing out trends in published research, is again a
| genuine contribution, and helpful for the scientific
| community.
| unishark wrote:
| Of course they are useful, as is google scholar and the
| search field in various journals. Or some blog that tells
| you whats was new in the recent conference. That doesn't
| mean they deserve to be cited over the real sources. They
| are cited as a crutch: "go here for a more detailed list
| of sources". The little meta-analysis they do doesn't
| explain most of the citations they get.
| bonoboTP wrote:
| Hard disagree. We need to incentivize _more_ curation,
| distillation, summarization, comparing/contrasting,
| systematizing, categorizing approaches along various
| axes, meta-analyses etc., not less.
|
| These are often very valuable, more so than yet another
| 1% improvement paper that never gets reproduced.
|
| Review papers don't steal citations. They are cited from
| more distant literature so readers can familiarize
| themselves with the topic; those authors wouldn't cite
| all the papers mentioned in the review individually.
|
| But as a broader point, yeah, maybe something like
| PageRank could be used to "pass on" the citation in a
| sense.
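|
| A minimal sketch of that idea, assuming a toy citation
| graph and networkx's pagerank (all paper names are
| invented):
|
|     import networkx as nx
|
|     # Edges point from the citing paper to the cited paper.
|     G = nx.DiGraph()
|     G.add_edges_from([
|         ("new_paper_1", "review"),  # readers cite the review...
|         ("new_paper_2", "review"),
|         ("review", "original_a"),   # ...which cites the originals
|         ("review", "original_b"),
|     ])
|
|     # PageRank lets the credit the review collects flow on
|     # to the originals it cites, instead of stopping at the
|     # review itself.
|     scores = nx.pagerank(G, alpha=0.85)
|     for paper, score in sorted(scores.items(),
|                                key=lambda kv: -kv[1]):
|         print(paper, round(score, 3))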
| unishark wrote:
| > These are often very valuable, more so than yet another
| 1% improvement paper that never gets reproduced.
|
| It's not an either-or.
|
| And you should always read and accurately describe the
| contents of what you cite, meaning of course you wouldn't
| cite everything in a review paper directly - only that
| which is relevant.
|
| Suppose we counted both separately: research citations and
| review citations, the latter for your great review papers,
| or even for people who just want to reference the lit-
| review section of your research paper. What would happen
| to the incentive? I suggest that reviews would be less
| incentivized, as people would primarily be concerned with
| rating researchers according to their research citations.
| Just like no one cares about your textbook sales, popular
| though the book may be for introductory use. This would
| imply reviews were not "honest" citations but are done to
| hack the metric.
| bonoboTP wrote:
| People _should_ care about textbooks. I think it's
| misguided to think that real scientific value lies only
| in original research.
| unishark wrote:
| Well you guys are pulling me to an extreme position here.
| Of course I love good textbooks too. And review journals
| like Signal Processing Magazine are my favorites. I'm
| sure many people do them for great reasons too. Let's not
| forget that academics teach too, and there are academics
| and colleges that are entirely devoted to teaching, not
| research. People are free to prefer them more if they
| like. However, when it comes to research metrics, the
| person who did the research deserves the credit for it.
| Even if we subtract the citations for obvious review
| papers (which is commonly done), there are still the
| citations lost by the original researchers.
|
| Reviews also cause other devious problems, like journals
| hacking their own impact factors. And don't forget the
| Matthew effect (aka the rich get richer), whereby only
| big-shot researchers get invited to have their reviews
| published in high-impact journals, warping the network
| effects in their favor even more.
| Ericson2314 wrote:
| Inevitable or not, this has real costs, in that the government
| will eventually turn off the gravy train and the "good" and
| "bad" research alike will be negatively impacted.
|
| In other words, academia is sowing the seeds of its own demise,
| and with it, the official institutional pinnacle of our culture
| [please don't wince too hard when reading that!].
|
| ----
|
| The solution, I think, is less research but more library work.
| See https://blog.khinsen.net/posts/2020/07/08/the-landscapes-
| of-... and https://hapgood.us/2015/10/17/the-garden-and-the-
| stream-a-te... which it cites.
|
| The amount of library work we are not doing is just
| staggering. Consider these claims:
|
| - There should be an accredited-author-only "kernel wikipedia"
| which the main one can choose to incorporate, almost like
| release vs staging branches.
|
| - Different subfields may wish to keep their own intentionally
| biased corpuses, like https://ncatlab.org/nlab/
|
| - After 6 months, virtually no one should have to read the
| original research article, because the librarian corps will have
| incorporated its result claims (however controversial) into
| the appropriate articles
|
| - Textbooks / "knowledge bootstrap plans" should be
| continuously updated from the encyclopedia, not unlike how
| bootstrapping is managed in a package repo
|
| - Librarians do the work, but also adjudicate disputes, as
| researchers will be naturally incentivized to contest how their
| work is incorporated as that is the primary way it is consumed.
| bonoboTP wrote:
| Scholarpedia is similar.
|
| I agree there is not enough library work and there is too
| much focus on the 4-15 page publication format. This amount
| is often not enough to fully flesh out an idea or to present
| something non-incremental. Rather, people tend to chunk up
| work - so-called salami publishing.
|
| Now, sure, doctoral theses also exist, but the vast majority
| of research is presented in about 4-15 pages.
|
| But I disagree that after 6 months everything should be
| integrated into other summary work.
|
| Most published research is not worthy of being integrated
| into other work, and 6 months is not enough time to decide.
| Most papers are forgotten, and rightfully so. Publication
| doesn't mean it's correct or worthy of eternal remembrance.
| It just means that a scientist wants to share a finding with
| the research community. It's not established knowledge yet.
| Only when it is actually adopted and used successfully by the
| community will it become part of established shared
| knowledge. After some years, such key ideas do get written
| into textbooks.
| MaxBarraclough wrote:
| > There should be an accredited-author-ornly "kernel
| wikipedia"
|
| This is roughly what the _Citizendium_ project was aiming
| for. Sadly, it seems it's not doing well these days.
|
| https://en.wikipedia.org/wiki/Citizendium
|
| https://en.citizendium.org/wiki/Citizendium
| Ericson2314 wrote:
| Exactly! The fact of the matter is that accreditation and
| the State go hand in hand. You can go full anarchism like
| Wikipedia (and that's great!) but if you want something a
| bit more curated, a bit more "official", you need officers
| (by the etymology, even!) and you need funding, i.e. state
| support.
| craftinator wrote:
| > In other words, academia is sowing the seeds of its own
| demise
|
| This is first-order thinking. Yes, if an increasingly large
| share of academic research is becoming bullshit,
| eventually the ratio of good to bad research will reach an
| inflection point where policy will be set that reduces
| research funding.
|
| This reduction in funding will have many effects, but the
| largest will be that as funding becomes a scarce resource,
| stronger curation of both researchers and topics will come
| into effect. There will be less research and fewer papers
| overall, but the largest decrease will be in research that
| is bullshit.
|
| Finally, as more and more research becomes fruitful and
| useful, and fewer and fewer bullshit papers are published, an
| inflection point will be reached. There will be more funding
| allocated to these exciting new areas of study, and more
| research will happen. Of course, with funding no longer a
| scarce resource, less curation will occur, and a few more
| bullshit papers will be published... And the cycle will have
| advanced one wavelength.
|
| It's been happening for years, centuries, as a basic economic
| cycle, and looks something like this: _/T\_/T\_/T\
| Ericson2314 wrote:
| I'm thinking higher-order too, but I subscribe to the idea
| at the start of
| https://pedestrianobservations.com/2020/08/29/recession-
| and-...
|
| > Question. In what ways can a recession be useful for
| forcing inefficient public-sector agencies to lay off
| redundant workers and reduce bloat?
|
| > Answer. None.
|
| And keep in mind that academia might as well be considered
| public sector, and the idea extends to most large-
| scale, loosely-planned cost-cutting-for-efficiency-gain
| schemes.
|
| Your starting point leans a lot on individual virtue, and on
| bad apples crowding out the good. I don't disagree that some
| researchers are better and some are worse, but I think that
| any "cure" is going to be gamed and be worse than the disease.
|
| > but the largest will be that as funding becomes a scarce
| resource, a higher curation of both researchers and topics
| will come into effect
|
| Do you have any evidence of this? I disagree in the
| strongest of terms. By my reckoning, curation,
| prevention, and other "foresight-driven" work is
| persistently undervalued by our society and economy. The
| nastiness that will accompany shrinking funding, as research
| groups fight for _short-term_ survival, will only make that
| worse.
|
| > but the largest decrease in research will be those that
| are bullshit
|
| Why would those who are the best at gaming incentives now
| lose that skill?
|
| > It's been happening for years, centuries, as a basic
| economic cycle, and looks something like this:
| _/T\_/T\_/T\
|
| So after centuries of Byzantine decline, is there a new group
| of lean and mean Greek Constantinopolitan bureaucrats?
|
| After Spain grew rich on New World bullion and then
| poor with a dearth of industrialization, is there a new
| generation of hyper-efficient factories putting Germany to
| shame?
|
| I dunno what history you are reading, but in mine the good
| actors never outlive the broken system to get the last
| laugh.
| amelius wrote:
| "Science advances one funeral at a time"
|
| -- paraphrased from Max Planck
| agumonkey wrote:
| > So we pretend. We write, we cite, we present, overinflate,
| overpromise etc.
|
| > so taxpayers can sleep well knowing that their money pays
| for hard work.
|
| It's a bit contradictory.
|
| I won't go into a scandalized rant because it's useless, and
| there's no point shouting at the universe from a bedroom
| chair. Still, it's a bit disheartening. I wish the scientific
| world could reinvigorate itself and pump some spark back into
| the field.
| dmix wrote:
| Or, even more reductionist: there is only a finite number of
| talented and smart people in history doing truly new and
| interesting stuff.
|
| It's just the nature of the game. You can't just turn a dial
| and add more talent.
|
| Although a bit off-topic, some authoritarian nation states
| have tried to do this by social engineering. The Soviet Union
| made being an engineer a commonplace job title, and China is
| trying to produce tons of super-smart STEM kids. But we all
| know there is more to being smart than forcing kids down a
| certain academic route to get top-end scores (you know, the
| whole creative side and the capacity to produce original
| work).
|
| The democratic approach still seems to be the winner (and not
| just on brainpower). As I mentioned, it's always going to be a
| small minority who do great things. That doesn't scale up
| artificially.
| vosper wrote:
| > Or, even more reductionist: there is only a finite number
| of talented and smart people in history doing truly new
| and interesting stuff. It's just the nature of the game. You
| can't just turn a dial and add more talent.
|
| It's true that we can't turn a dial, but there must be a vast
| amount of human potential that goes unused every day, because
| people are born in impoverished countries, suffer
| discrimination due to gender or race or something else, get
| sick with a preventable disease, or face whatever other thing
| denies them the opportunity to make the most of themselves.
|
| This is probably the norm in the world, not the
| exception.
|
| So I think there's rather a lot we could do as a species (or
| within individual countries) to give people opportunities and
| make sure potential for progress is realised.
| LamaOfRuin wrote:
| Granting your premise (which I don't personally think is
| accurate): currently every country in the world squanders a
| massive amount of talent by not providing opportunity, even
| starting at the most basic level of universal food security
| and healthcare. You may not be able to infinitely turn a dial
| and add more talent (or productive research/innovation), but
| that doesn't mean our many dials are anywhere near their
| maximum levels either.
| sigotirandolas wrote:
| Another closely related possibility is that a lot of
| processes in science are inherently serial, i.e. they can't
| scale with the number of researchers, because they require
| slowly integrating information and making consecutive steps
| of progress.
|
| Maybe after, say, quantum computers are available, it takes
| around 25 years for a team of researchers to slowly make
| progress and integrate information until they find a killer
| application. Funding 25 teams of researchers may make them
| pump out papers 25x faster, but they aren't going to find the
| killer application in 1 year.
| mattkrause wrote:
| > You can't just turn a dial and add more talent.
|
| I'm not sure I agree there. It's notoriously hard to get/keep
| a research position, especially a) in academia and b) outside
| CS. There are a decent number of graduate student slots, some
| postdocs, and then you're thrown into the thunder dome.
|
| The competition itself--and the resulting uncertainty--chases
| a lot of people into other careers. The people in my grad
| school cohort who "left" research were just as smart and
| often just as successful (in terms of papers, etc) as those
| that stayed.
|
| It's not hard to imagine some policy tweaks that could have
| kept some of them in research.
| derbOac wrote:
| >The people in my grad school cohort who "left" research
| were just as smart and often just as successful (in terms
| of papers, etc) as those that stayed.
|
| There's research to support that actually:
|
| https://www.insidehighered.com/news/2018/12/11/new-study-
| say...
|
| Buried in that paper they basically show that the hazard
| rate for exiting academia is sort of uncorrelated with
| metrics like citation rate etc.
| abathur wrote:
| I think I agree with a large chunk of this: there are
| inevitably triage/discovery problems as the number of
| things/options goes up.
|
| The cost of evaluating each item remains fairly constant, and
| at some point production outstrips the ability of even
| extremely interested, dedicated individuals to keep a good
| handle on it all. Beyond this point, you either need organized
| collective effort (and trust) to efficiently distribute the
| work of keeping up (and synthesize the results), or the larger
| community will waste ever-increasing slices of potential
| energy to extract less and less knowledge of the scope of
| activity.
|
| But there's a difference between simple volume growth for good
| reasons (i.e., people chasing incentives that are aligned at
| multiple levels of society), and volume growth caused by people
| chasing incentives that are misaligned at one or more points up
| the stack.
|
| If researchers are spending their time on things even they
| find boring _because Goodhart's law_, I think it is important
| to recognize it and try to undo the misaligned incentives.
| They're not only externalizing a cost on their entire field of
| knowledge--we're also collectively suffering some ill-
| defined opportunity cost of whatever they would've spent
| their time on if they were following curiosity.
| smitty1e wrote:
| Rick Beato rails against gridding and autotune in music over on
| YouTube.
|
| My wife, a health outcomes researcher, points out that the
| expectation is that every little publishable bit be kicked out
| the door for a pharma project.
|
| Historically, they would have dropped a single summary result
| at the end of the effort.
|
| Now, she's trying to "autotune" the neutral or even negative
| results. Nominally in the name of "transparency".
|
| If this is true, what is needed is a research Adele to come
| along and shame the riff-raff with sheer natural brilliance.
| Sophistifunk wrote:
| We all know autotune, but what's "gridding"?
| 1986 wrote:
| I'm guessing this means quantization of a rhythm.
| redis_mlc wrote:
| Correct, it's aligning played musical notes to a bar
| (measure) on a computer screen.
|
| Then almost always, a clone tool is used to repeat that
| one bar identically dozens of times.
|
| So you lose most of the organic nature of music in a
| mechanical fashion.
| smitty1e wrote:
| Who needs humanity when you can have mathematical
| perfection?
|
| Beside humans?
| riedel wrote:
| In line with your comment about taxpayers' money: one of the
| biggest issues IMHO is the dysfunctional "market" here, because
| most of the work is readily paid for by the taxpayer.
| In particular, publishers mostly do not pay for reviews but are
| paid for journals and conferences largely on quantity (with few
| noble exceptions). The effect for a long time now has been an
| ever-increasing number of conferences (peer-reviewed
| proceedings still make up a large share in CS) and journals.
| What is actually worse IMHO, review requests get more and more
| numerous and are largely boring. So you really have to attract
| the attention of the reviewer to avoid triggering the reflex of
| looking for reasons to reject (or worse, the opposite, which
| may amount to the same thing because there are always other
| reviewers, so it's a matter of luck who gets assigned).
|
| I think better structures are needed that actually incentivise
| good reviews. This would lead to better papers, too. There are
| multiple options for this. Maybe it is simply time to see
| reviews as publications themselves and say goodbye to the
| flawed blinded review process. Another thing would be to take
| the publishers out of the game. Even IEEE and ACM have become
| largely a playing field for power politics between the US,
| China and Europe. I think we could do better and be more
| inclusive here. Publishing reviews and rejected work would
| further make meta-studies much more reliable, also removing
| the bias towards positive results (who actually wants to hear
| about easily replicable negative results at a conference?).
| azhenley wrote:
| Are papers getting more boring, or does the reader just find
| fewer novel things to read as they read more and more? Maybe
| the reader is bored of the field in general. The 500th time you
| do anything isn't as exciting as the 5th time.
|
| On another note, why does everything have to be exciting? Small
| incremental improvements over the years are what I aim for.
| patrec wrote:
| > One answer to all these propositions is that the valuable
| stuff is still there
|
| I don't understand the popularity of this argument. Whilst
| someone might (theoretically) have written the Great American
| Novel in his or her basement unbeknownst to pretty much anyone,
| a lot of art is fairly resource-intensive, and thus you can say
| with extremely high confidence that it's not being produced,
| and the fault lies not with people not looking hard enough.
| Same with research and journalism.
| Siira wrote:
| In games at least, I feel that more good games are being
| produced.
| Bakary wrote:
| >It's an illusion to expect that every researcher can pump out
| multiple really interesting, non-boring papers every year. So
| we pretend. We write, we cite, we present, overinflate,
| overpromise etc. It must look like there is steady, hard,
| noble work and toil. It's like a ritual. The scientists are
| writing all these papers so taxpayers can sleep well knowing
| that their money pays for hard work.
|
| The flip-side to this is that there is a glut of moderately
| motivated scientists and PhD candidates. As a result, the
| demand for conferences and citations is self-sustaining to meet
| the expectations of millions of people wanting to enter the
| academic world without a clear plan beyond joining the world
| itself.
|
| It's as though the English-major-to-professor cycle has also
| become a reality in the hard sciences.
| danielheath wrote:
| Perhaps it varies by specialty/region, but I have a good
| friend in the sciences here in Australia; funding has dried
| right up.
|
| This year, less than 10% of NHMRC grant applications were
| approved, and even the successful applicants spent nearly as
| much time on grant applications as they did on the real work.
| Anyone who is only moderately motivated has retrained,
| because working for low pay half the year is not appealing
| to anyone who can pass a doctorate program.
|
| Even if you took the position that only the good ones are
| getting funded, spending 50% of your "good" researchers' time
| this way is a tragic waste of human ability.
| semi-extrinsic wrote:
| In 1916, J.J. Thomson (as in Thomson scattering; discoverer
| of the electron) said it perfectly [1]:
|
| "If you pay a man a salary for doing research, he and you
| will want to have something to point to at the end of the
| year to show that the money has not been wasted. In promising
| work of the highest class, however, results do not come in
| this regular fashion, in fact years may pass without any
| tangible result being obtained, and the position of the paid
| worker would be very embarrassing and he would naturally take
| to work on a lower, or at any rate a different plane where he
| could be sure of getting year by year tangible results which
| would justify his salary. The position is this: You want one
| kind of research, but, if you pay a man to do it, it will
| drive him to research of a different kind. The only thing to
| do is to pay him for doing something else and give him enough
| leisure to do research for the love of it."
|
| [1] https://archive.org/details/b29932208 - see pages
| 198-200, ref HN comment:
| https://news.ycombinator.com/item?id=21388715
| Bakary wrote:
| I was thinking more along the lines of people who are not
| suited to do research in the first place. These will either
| be those who join the academic world out of inertia,
| following their educational path or class/cultural
| pressure, or those who dislike the general labor market and
| want to enter the academic world due to its
| particularities.
|
| On a speculative note: I believe it's yet another argument
| in favor of UBI
| derbOac wrote:
| I kinda would suggest the problem is in some ways the
| opposite: even people who are well-suited to what academia
| should be are being driven out.
|
| It's such a mess at the moment that it's not really
| possible to continue out of inertia. Even if you're well-
| suited to research and discovery at some level, and
| motivated, it's a soul-crushing experience because at
| every step you're incentivized toward something else.
| pdfernhout wrote:
| Here is a 1994 essay (also given as testimony to the US
| Congress) on part of why it is a mess, from Dr. David
| Goodstein (then vice-provost at Caltech):
| http://www.its.caltech.edu/~dg/crunch_art.html
|
| In that essay, Prof. Goodstein explains how academia had
| been growing exponentially for decades since WWII, with
| each PhD giving birth to another litter of fifteen PhDs
| (my phrasing...), until finally that exponential process
| began to hit limits. The end of exponential growth in
| academia, beginning around 1970, led to a breakdown of
| peer review and lots of other issues.
|
| That's also part of why those of us (like me) who got
| honest well-meant advice from extremely successful
| academics in the 1980s or later about the value of a PhD
| and a career in academia were misled, even though that
| advice had worked for the advice giver. Freeman Dyson, by
| contrast, tried to steer bright people (including me)
| away from PhDs even in the 1980s -- having an intuition
| for this sort of thing. Wish I had listened to him more.
| :-)
|
| Also related by Philip Greenspun from 2006 (later
| updated), on "Women in Science" but which also applies
| more broadly and is a good description of the
| consequences of the situation Goodstein described in
| 1994: http://philip.greenspun.com/careers/women-in-
| science "This is how things are likely to go for the
| smartest kid you sat next to in college. He got into
| Stanford for graduate school. He got a postdoc at MIT.
| His experiment worked out and he was therefore fortunate
| to land a job at University of California, Irvine. But at
| the end of the day, his research wasn't quite interesting
| or topical enough that the university wanted to commit to
| paying him a salary for the rest of his life. He is now
| 44 years old, with a family to feed, and looking for job
| with a "second rate has-been" label on his forehead. Why
| then, does anyone think that science is a sufficiently
| good career that people should debate who is privileged
| enough to work at it? Sample bias. ... A good career is
| one that pays well, in which you have a broad choice of
| full-time and part-time jobs, in which there is some sort
| of barrier to entry so that you won't have to compete
| with a lot of other applicants, in which there are good
| jobs in every part of the country and internationally,
| and in which you can enjoy job security in middle age and
| not be driven out by young people willing to work 100
| hours per week. How closely does academic science match
| these criteria? I took a 17-year-old Argentine girl on a
| tour of the M.I.T. campus. She had no idea what she
| wanted to do with her life, so maybe this was a good time
| to show her the possibilities in female nerddom. While
| walking around, we ran into a woman who recently
| completed a Ph.D. in Aero/Astro, probably the most
| rigorous engineering department at MIT. What did the
| woman engineer say to the 17-year-old? "I'm not sure if
| I'll be able to get any job at all. There are only about
| 10 universities that hire people in my area and the last
| one to have a job opening had more than 800 applicants."
| And that's engineering, which, thanks to its reputation
| for dullness and the demand from industrial employers,
| has a lot less competition for jobs than in science. What
| about personal experience? The women that I know who have
| the IQ, education, and drive to make it as professors at
| top schools are, by and large, working as professionals
| and making 2.5-5X what a university professor makes and
| they do not subject themselves to the risk of being
| fired. With their extra income, they invest in child care
| resources and help around the house so that they are able
| to have kids while continuing to ascend in their careers.
| The women I know who are university professors, by and
| large, are unmarried and childless. By the time they get
| tenure, they are on the verge of infertility. ... I've
| taught a fair number of women students in electrical
| engineering and computer science classes over the years.
| I can give you a list of the ones who had the best heads
| on their shoulders and were the most thoughtful about
| planning out the rest of their lives. Their names are on
| files in my "medical school recommendations" directory.
| ..."
|
| Anyway, that's all part of why I support a UBI. Give
| everyone enough money to live as a perpetual graduate
| student (without publishing and without becoming one of
| the "Disciplined Minds" of Jeff Schmidt's book on
| academic perils) and some of them will, which will
| eventually remake academia in a healthier way (including
| taking more chances on small-scale but far-reaching basic
| research, whether towards cold fusion, quantum
| teleportation, new computer languages, new materials, new
| batteries, new ways of thinking, new communication
| patterns, new ways of conflict resolution, or whatever).
| For that reason _alone_, the USA should invest in a
| Universal Basic Income.
| frongpik wrote:
| I'd assign scientists a "pension" every time they make a
| worthy discovery. That pension would be a monthly payment
| of a fixed dollar amount; it would be inalienable and
| paid for the rest of the scientist's life regardless of
| other achievements. Publishing a minor, but worthy, paper
| would give a 100 USD/month grant. Getting a Nobel prize
| would give, say, 1 million a year. However, inflation
| would gradually erode the value of the grants.
| seotut2 wrote:
| How would that fix the actual problem? And how do you
| judge what is a worthy discovery and what isn't?
|
| As parent said, UBI would probably fix the incentive
| scheme, your solution wouldn't. You need to decouple the
| reward from the result so that research is done out of
| sheer curiosity and love of science.
| andi999 wrote:
| You can't decouple this (fully) with UBI. Only a little
| research is done at a desk; a lot needs equipment,
| sometimes very expensive equipment. With UBI you just get
| the time but not the stuff.
| andi999 wrote:
| This is a great quote. Thinking of Einstein working at the
| patent office...
| roenxi wrote:
| I basically mirror Thomson's opinion, but there are other
| frames on the thinking. E.g., paying someone to
| demonstrate a deep understanding of a topic and articulate
| the uncertainties. That is different from traditional
| research because they aren't being asked to do anything
| exactly new. But in practice that is what I think the
| better researchers actually do.
|
| Maybe there is a clue in the name "re-search".
| Epistemologically, I don't think it is possible to set out
| to discover something that isn't already known to exist, so
| the justification for paying people would best rest on
| evidence that they were searching rather than on whether
| they find something.
| barrkel wrote:
| This is a bit off-topic to papers, but perhaps there's an
| angle.
|
| _One answer to all these propositions is that the valuable
| stuff is still there, you just have an additional flood of
| mediocre boring predictable stuff._
|
| I don't think this is true for most things with a mass market.
| I think products which have efficiencies of scale - or low
| marginal costs of reproduction - suck up more capital, which
| raises certain basic levels of standards and polish and
| marketing, but also greatly increases capital risk and thus
| reduces investor risk appetite for product novelty. These mass
| market products starve the mid-market of attention, which makes
| the mid-market less capital efficient, which means the mid-
| market needs to get cheaper and cheaper, lose polish, until
| it's less attractive to the mass market of consumers, and so on
| in a spiral until a certain spartan rawness becomes its own
| aesthetic - indie games, indie films, indie music.
|
| I think there's a concrete example which demonstrates the
| actual absence of that long tail of interesting content, and
| not just that it's harder to find: blogs.
|
| Blogs were great in the mid 2000s. Then the market did two
| things: it matured - the best blogs got more mindshare and
| turned into something closer to media organizations, much more
| capital-intensive with full-time paid writers - while Facebook
| and Twitter sucked up the mass market of newsfeed consumers,
| both on the producer and consumer side, making it easier to
| produce low-value inanities and easier to consume tidbits from
| celebrities. This sucked the air of attention from blogs;
| being long-form, they were too hard to write and too long to
| read; and being unpaid, they didn't have slick editors or
| snazzy design. So a lot of them shrivelled up. There are still
| niche blogs, plenty of them, but a _lot_ fewer - and we can
| track this loss, because we have (or had) them in our
| newsreaders.
| unishark wrote:
| The problem with research is similar but even deeper.
| Researchers are supposed to show the feasibility or existence
| of some result; then applied scientists and engineers take
| this off into industry or elsewhere to build on it.
|
| But there's no direct trading going on at that stage to
| ensure they deliver what is claimed; the research "product"
| is just dumped into papers. And those papers serve as an
| impediment to subsequent researchers, since you can't publish
| or get funded for the same thing that's already been done and
| supposedly proven by someone else. Of course, you wouldn't
| have to if they had really solved the problem. But they just
| threw together some minimal effort to stick their name on it
| first before moving on to the next low-hanging fruit
| somewhere else. Doing a literature review can be really
| frustrating.
| jononor wrote:
| To explain blogs you also have to factor in the move toward
| more image- and video-centric content, I think.
| TeMPOraL wrote:
| Video content could in itself be another argument in favor
| of GP's thesis. There seems to be a dearth of high-quality
| video material. You have big-budget high-quality videos,
| big-budget garbage, and then lots and lots of low-budget
| garbage: i.e. all the stuff "influencers" and "YouTube
| personas" create to live off ad revenue.
|
| There is a small and somewhat obscure but relatively stable
| middle - low-budget, high-quality videos. I attribute its
| survival to two things: some creators live off Patreon
| donations and occasional sponsorship deals; others survive
| because they have an actual dayjob and do YouTube as a
| hobby. This is similar to blogs - the best ones are done on
| the side, with no expectation of revenue.
| jononor wrote:
| I see more of your latter paragraph. In several of my
| interest areas there is a good amount of useful content
| in videos. The best example is probably electronics, where
| a lot of tribal knowledge is now more accessible than ever
| before. Practical mechanics, both old-fashioned (metal-
| and woodworking) and new (CAD design, 3D printing), also
| has a lot of useful video content. While there are a lot
| of pop-commercial "personas" making noise with their
| videos too, it does not seem to have hurt the output of
| "in it for the passion" people.
| skizm wrote:
| Music is becoming more generic because musicians can rapidly
| reach a wide audience and iterate quickly, so they see what
| works and what doesn't and continue down the path of sound that
| produces the most money.
| bumby wrote:
| Isn't the premise that this analogy extends to academia by
| "seeing what works and what doesn't" to produce the most
| publications, to the point where the majority of papers are
| derivative? While the iteration of the individual may not be
| quicker, the large increase in the number of publications
| suggests the field as a whole is iterating more quickly.
|
| E.g., an artist takes a big risk producing a sound that
| diverges from "what works", just as a scientist takes a big
| risk diverging from well-worn theory that has produced past
| publication fodder.
| jhoechtl wrote:
| All you say is true, and I appreciate your deep thoughts.
|
| There is another dimension to this, though: the long tail.
| The internet has tremendously facilitated remixing existing
| content. That's creative too, but not in the original sense of
| creating something new.
|
| Before the internet age, much less content was produced, and
| I would assume the ratio of novel to remixed content was very
| much in favour of new content.
|
| Internet technology brought us search engines, but their
| search algorithms favour those who pay the most instead of
| providing a service to discriminate remixed content from new
| content.
| alexhutcheson wrote:
| The problem isn't the existence of the papers, it's the years
| of effort that go into producing the papers.
|
| It's the failure to harness the hard work and brainpower of so
| many brilliant people towards efforts that would meaningfully
| improve the lives of others.
| reilly3000 wrote:
| The people who fund research look too hard for ROI. The funds
| that get invested into research buy hours of researchers' time
| and maybe gear. They cannot buy discoveries. Until this
| mismatch is addressed broadly, we will continue to have both a
| reproducibility crisis and "boring" papers.
| garden_hermit wrote:
| There is a lot of caution in research funding. Some research is
| considered "safe", whereas other research is risky. In a
| properly balanced portfolio, perhaps 80% of funding should go
| to safe work and the remaining 20% to riskier, but potentially
| high-reward, studies. However, this isn't how funding agencies
| usually operate.
|
| At least in the context of the U.S. government, funding
| agencies are terrified of being seen as "wasting" funding,
| because that is a sure way to have funding cut by Congress.
| The "Golden Fleece" awards, a senator's attack on perceived
| frivolous spending, have routinely targeted specific research
| funded by the NSF and have helped make the agency afraid of
| being attacked by Congress.
| bsf_ wrote:
| I believe this is close to the truth. Moving one step further
| upstream, the emphasis on high ROI stems from an overall lack
| of funding. Since there isn't enough money to go around, it
| becomes necessary to choose some more selective metric to
| pare down the pool of grant applicants.
| virtuallynathan wrote:
| Do we just have too many people doing "science"? Clearly
| people are getting funding to crank out nonsense, boring
| papers, studies that can't be replicated, etc.
| bsf_ wrote:
| Only if you believe that as a society we have solved all of
| the interesting problems. Since this obviously is not true,
| I would instead focus on improving the way we evaluate
| science.
| virtuallynathan wrote:
| I'm mostly suggesting the money isn't well distributed,
| and I'm pretty sure that's due to the misaligned
| incentives. If we incentivized outcomes we care about, I
| suspect more money would come naturally from the results.
| colincooke wrote:
| I think peer review is an answer, perhaps not a great one, to
| the question of how we can trust specialist science. In a
| world where the renaissance-man role is no longer feasible,
| to move science forward we must narrow our focus to
| particular sub-fields and to problems within those fields.
| One consequence is that it becomes much harder for the
| general science community to verify your results, which is
| where peer review attempts to help: by forcing you to get
| other experts in your sub-field to review your work before it
| can be stamped as trusted.
|
| Does it work? Kind of. I've personally seen papers in
| reputable journals that, while not fraudulent, are pretty
| misleading. At the same time, however, I have yet to see a
| workable alternative that fixes the trust issue.
|
| The question of boringness is, I think, pretty field
| dependent. In the ML community I've often seen papers almost
| be rejected by the peer review system because they're NOT
| exciting enough, despite being pretty influential (a great
| example is AdamW [1]).
|
| Honestly, my assessment of peer review is that not enough
| trainees (read: grad students/postdocs) are doing reviewing.
| Trainees are often better acquainted with the details of
| methods, but also have a more open mind and will accept a
| finding that goes a little against the grain. Additionally,
| they're often faster and do a better job, since they have
| more time on their hands than senior researchers. There have
| been some efforts to fix this, but so far they are mostly
| isolated to specific fields.
|
| [1] https://openreview.net/forum?id=Bkg6RiCqY7
| Dig1t wrote:
| Sturgeon's law: 90% of everything is shit.
|
| More everything means more shit.
| jpmattia wrote:
| > _The peer-reviewed research papers allows you to "measure"
| productivity. How many papers in top-tier venues did research X
| produce? And that is why it grew so strong._
|
| If I were king, my rule would be to look at the number of
| citations per paper and years of citation, rather than total
| publication count. I think that would solve a number of issues,
| although I'm sure that it would eventually be gamed as well.
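|
| For concreteness, a minimal sketch of what I mean (the
| numbers are invented, and this is just an illustration, not
| a worked-out metric):
|
|     # Rank by citations per paper and by how long the work
|     # keeps being cited, rather than by raw paper count.
|     papers = [
|         {"citations": 120, "first_cited": 1995, "last_cited": 2009},
|         {"citations": 3, "first_cited": 2001, "last_cited": 2002},
|     ]
|     cites_per_paper = sum(p["citations"] for p in papers) / len(papers)
|     longevity = max(p["last_cited"] - p["first_cited"] for p in papers)
|     print(cites_per_paper, longevity)  # 61.5 14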
|
| I should mention though: It turns out that people have been
| bitching about this for decades. For example: I was at Bell Labs
| in the 90s, and there was regular lunchtime discussion about the
| Least Publishable Increment (LPI). Everyone had some example
| about how the LPI had decreased to near zero, and then a few wags
| went on to show how the LPI had in fact gone negative in certain
| subfields, not least because reviewers were overloaded and
| couldn't keep up.
|
| Hopefully, some of this will be self-correcting: Publications are
| no longer the money-making activity they used to be, so resources
| will begin to dry up. Eventually, pubs are going to start
| rejecting valid papers for being too incremental.
| garden_hermit wrote:
| This is what's done in science evaluation--the h-index, for
| instance, is built from a combination of publication and
| citation counts. Similarly, journal prestige is quantified
| using the Journal Impact Factor, which reflects the average
| number of citations a paper in the journal receives.
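|
| For reference, the h-index is the largest h such that the
| author has h papers with at least h citations each. A
| minimal sketch (citation counts invented):
|
|     # h-index: largest h such that h papers have >= h citations
|     def h_index(citations):
|         ranked = sorted(citations, reverse=True)
|         return sum(1 for i, c in enumerate(ranked, 1) if c >= i)
|
|     print(h_index([10, 8, 5, 4, 3]))  # -> 4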
|
| As you speculate, these can also be gamed. Authors will cite
| themselves more to inflate their citation counts, or form
| citation cartels with other authors to cite each other's
| work. Similarly, journals have been known to coerce authors
| into citing other works in the venue in order to inflate the
| impact factor. Beyond these, there are issues with comparing
| citations across disciplines, article types, and other
| contexts.
|
| > Eventually, pubs are going to start rejecting valid papers
| for being too incremental.
|
| Many of the most prestigious journals, such as Science,
| Nature, and Cell, already do this. However, newer
| mega-journals, like PLoS, have explicitly had the opposite
| policy, stating that they accept anything that is "sound
| science", no matter the size of its contribution; they have,
| however, become more selective over time.
| colincooke wrote:
| Most people do actually judge by citations rather than raw
| publication counts. It's an open secret that with enough
| effort you can shove out papers (acceptance at any given
| venue is probabilistic, regardless of the quality of the
| work). Getting cited is harder (other than self-cites, which
| are a separate issue).
|
| When I look up a researcher, the first thing I look at is
| their citation count and i10-index, then I look through their
| top papers (the ones that have been cited a lot). As far as I
| know, hiring committees also care about these things.
|
| Of course it isn't the only thing that matters, or the most
| important, but it's much more useful than the raw number of
| papers published.
| mattkrause wrote:
| Even those are garbage measures though.
|
| They can be gamed (albeit with a bit more effort), they vary
| a ton between fields and subfields, and citation rates often
| reflect the "prestige" of the lab/authors and the trendiness
| of the topic, rather than the intrinsic "merit" of a paper.
|
| I don't think there's any good automatic proxy for research
| quality.
| t_serpico wrote:
| Garbage is a strong word. I think there's a moderate to
| strong correlation between h-index and the quality of a
| researcher. The downside of the h-index is that it also
| measures your ability to play the 'game' (i.e., network,
| cite your peers, etc.), but I really see no way around
| that. You can also argue that this is an important skill to
| have. If you produce the most amazing research in the world
| and people don't cite you, that could point toward a number
| of red flags (e.g., you're an asshole, your work isn't easy
| to understand, you don't appropriately cite your peers,
| etc.).
| frongpik wrote:
| I think it's a real thing, and it has the same nature as the
| force that keeps a water droplet together: surface tension
| prevents individual molecules from sticking out and pulls
| them into line with the others.
| tracyhenry wrote:
| By database folks: We are Drowning in a Sea of Least Publishable
| Units (LPUs)
|
| https://researchsetup.github.io/files/lpu.pdf
| paulpauper wrote:
| How are there so many papers if papers are so long and
| complicated these days and acceptance rates are so low? Econ
| papers, for example, have tons of data and statistical
| analysis, often run 50+ pages, and have multiple authors
| because there is so much data to crunch and so much
| statistics involved. The era of the 5-10 page paper, written
| by a single person, that makes an interesting observation or
| offers a novel insight is over.
| zwieback wrote:
| The good papers are still out there, but it's hard to slog
| through the mediocre or downright worthless stuff if you're
| not an expert in the field. That's a real problem, I think;
| even venturing into an adjacent field takes a lot of reading
| and sorting through the chaff unless you have someone to
| guide you. And the number of citations is not a great metric
| for figuring out what's worth reading.
|
| By its nature, though, science and R&D are very incremental.
| I have some sympathy for the PhD student wanting to get
| something published (or the prof wanting to get the PhD
| student to publish), and 5 or even 10 years is short for
| truly groundbreaking results.
| antpls wrote:
| Imagine if research were like GitHub, but deduplicated: a
| giant graph database of unique facts, hypotheses,
| assumptions, proofs, and experimental results, where anyone
| can contribute, from fixing a typo to publishing an
| experiment's dataset.
|
| You would reference previous knowledge without having to
| think about citing authors, because citations would be
| generated automatically from the graph.
|
| Then we would have hundreds of "knowledge verifiers", like
| the bots scanning GitHub, and ultimately bots deriving new
| knowledge and contributing it to the graph.
|
| The whole world would be building a shared database of
| knowledge, without any duplicated effort.
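|
| As a toy sketch of what a node in such a graph might look
| like (the Claim class and its fields are entirely
| hypothetical, just to make the idea concrete):
|
|     # Each claim is a node; "supports" edges record what it
|     # builds on, so citations fall out of graph traversal.
|     from dataclasses import dataclass, field
|
|     @dataclass
|     class Claim:
|         statement: str
|         kind: str  # "fact", "hypothesis", "proof", ...
|         supports: list = field(default_factory=list)
|
|     water = Claim("Water boils at 100 C at 1 atm", "fact")
|     guess = Claim("Boiling point drops with altitude",
|                   "hypothesis", supports=[water])
|     # Auto-generated "references" are just the ancestors:
|     print([c.statement for c in guess.supports])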
|
| Most of the individual facts in this database would probably
| be boring (like the trivial facts of logic or mathematics),
| but once in a while an interesting and important fact would
| be found.
| refulgentis wrote:
| So, Wikipedia, except with some sort of syntactical structure
| that allowed for formal evaluation and analysis of
| arguments...we could call it a "compiler" for "compiling" sets
| of rules, or "libraries"
| bongoman37 wrote:
| This is an interesting idea and would work for something like
| math or maybe theoretical physics, but once you get to
| messier fields like biology it would be mired in hopeless
| confusion. Every experiment has so many varied parameters and
| restrictions, due to things like ethics committees, that
| standardizing it is nigh impossible. And that's before even
| getting to fields like psychology.
| coliveira wrote:
| The same can be said about software development: nearly all
| of the software ever developed is boring, unnecessary, and
| sometimes wrong. That said, we still know that developing
| software is an important activity.
| dekhn wrote:
| Nearly every "exciting" paper I've encountered in my career
| (spanning multiple areas of biology and computer science) has
| turned out to be much less exciting when subjected to critical
| analysis by a team of grad students (journal club). I've found
| that many boring papers stand up better.
| xakahnx wrote:
| I find the same failed incentive scheme in industry. When a
| company gets big enough (tens of thousands of employees), taking
| risks to achieve something new and interesting doesn't pay off.
| It's more reliable to just be present and focus on writing about
| yourself during performance review time.
| ivoras wrote:
| a) 90% of everything is junk
|
| b) that's what you get when you start optimising for paper
| acceptance
___________________________________________________________________
(page generated 2021-01-02 23:03 UTC)