[HN Gopher] What's wrong with social science and how to fix it (2020)
___________________________________________________________________
What's wrong with social science and how to fix it (2020)
Author : pkkm
Score : 78 points
Date : 2022-12-11 18:12 UTC (4 hours ago)
(HTM) web link (fantasticanachronism.com)
(TXT) w3m dump (fantasticanachronism.com)
| Der_Einzige wrote:
| In some communities, notably AI, there are attempts to fight this
| which are quite successful:
|
| e.g. https://paperswithcode.com/ and the recent phenomenon of
| people hosting their system demonstrations on huggingface.
| patientplatypus wrote:
| This has always been the part of academia I have not understood.
| Academics are supposed to "know" more than the public. They are
| paid to understand reality in a way that the lay public does
| not, and in exchange they get tenure - that is, they can't be
| fired (or only with great difficulty) for researching
| controversial subjects whose payoffs are hard for someone
| outside the field to define.
|
| Everyone else in academia is working towards this end result or
| drops out somewhere along the path and goes into industry. The
| exceptions are quasi-academic specialist degrees such as law
| and medicine, which have more applied (and therefore
| measurable) results.
|
| And yet, we don't hold these people to the standards they set for
| themselves as a cohort? I wouldn't expect every paper to be
| replicable, or every researcher to always be right. But I see
| things like the https://en.wikipedia.org/wiki/EmDrive and wonder
| how many billions of dollars were put into this thing. It's
| essentially a state-backed way of defrauding the public out of
| tax dollars to pay people with doctorates who made friends with
| other people with doctorates. What if the US had spent the
| EmDrive money on housing or social services instead? Or on
| building libraries or bridges?
|
| All it would take is for a few academic journals to require
| statistical replication of submitted papers, but that would
| mean the editors being guided by principles other than
| maximizing revenue.
| ThrowawayTestr wrote:
| If I had a bunch of money, I'd set up a grant specifically for
| funding replication studies. Real science can be replicated,
| and the only way to replicate is to do it.
| amelius wrote:
| Also, replication studies could be a great way for young
| scientists to get started in their career.
| FeepingCreature wrote:
| I mean, the incentives do seem to blow the other way. "We
| have invalidated the well-paid and instrumentally useful
| papers of a dozen other scientists! Please hire us!"
| PartiallyTyped wrote:
| > Hey we found those issues in so and so papers, we
| identified mistakes of a dozen other scientists and we do
| our due diligence. Please hire us!
|
| I think it's a lot about the phrasing.
| amelius wrote:
| It may seem that way, but there will always be a PI
| involved, with different incentives.
| zinclozenge wrote:
| When I was doing my physics master's (at a pretty good school,
| i.e. one with a couple of Nobel Prize winners) there was at
| least a handful of students whose PhD projects involved
| "testing" theories of gravitation and pushing them to their
| limits.
|
| Not quite the same as experimental replication, but along the
| same lines I'd say.
| kansface wrote:
| Given that they are testing theory, yes. What theory
| underlies psychology? How does one relate the findings?
| texaslonghorn5 wrote:
| Yes, and this is already how many PhD students spend their
| first year or two.
| akira2501 wrote:
| If the incentive is to get the grant and not to actually
| produce replicable studies, then what impact would you expect
| this to have? If the inputs are mostly garbage, then all this
| might do is confirm what we already know.
|
| I wonder if you'd be better off having a two-phase grant
| mechanism. One smaller payment for the research, and a second
| larger payment if it is replicable. Actually incentivize the
| "market" to produce good work. As it is, this incentive doesn't
| actually exist.
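|
| To put toy numbers on that two-phase idea (a sketch in Python;
| the amounts and probabilities are purely illustrative, not
| from any real grant program):
|
|     # Hypothetical two-phase grant: a small upfront payment,
|     # plus a larger bonus paid only if the study replicates.
|     UPFRONT = 50_000
|     BONUS = 150_000
|
|     def expected_payout(p_replicates: float) -> float:
|         """Expected grant income given the probability that
|         the study replicates."""
|         return UPFRONT + p_replicates * BONUS
|
|     # A careful study (p = 0.8) now out-earns a flashy weak
|     # one (p = 0.2):
|     print(expected_payout(0.8))  # 170000.0
|     print(expected_payout(0.2))  # 80000.0
|
| Under the current single-payment system the bonus term is
| effectively zero, so both studies pay the same.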
| yucky wrote:
| There needs to be a prestigious award with a large monetary
| reward for the scientists who definitively disprove the most
| previously published studies each year. Then track the
| scientists who have their names attached to the most garbage
| studies and publicly drag them.
|
| Best way to clean house on an untrustworthy institution that's
| been politicized. Do science for the science, not to fit some
| shit narrative.
| 10g1k wrote:
| 1. Social science is not science.
|
| 2. It's entirely populated by ideological activists rather than
| scientists. (It will never be populated by scientists, because
| of 1.)
| marcosdumay wrote:
| "Science" is the name of how you work with something, not of a
| subject.
| 10g1k wrote:
| And there is no science in social science.
| yucky wrote:
| Yes, which I believe is why people rightly dismiss social
| science as not being scientific at all.
| orwin wrote:
| What part of the scientific process do social scientists not
| follow?
|
| Hypothesis refutability?
|
| Replication? (probably their weakest point, btw)
|
| I mean, in this thread I've read about successes in
| sociology and economics, probably the most math-based
| social sciences, but I'd say linguistics is probably the
| second most successful science of the last 40 years, and
| that's considered a social science. History has also made
| a lot of new discoveries recently (by criticizing older
| works and getting past them). Archaeology has had slower
| successes, but still, we understand Neanderthals way, way
| better than we did ten years ago.
| dash2 wrote:
| Sigh. This over-the-top reaction is very common among people
| who don't know better. I'll just link my answer from the last
| time (here, about economics, but it works fine for soc sci in
| general): https://news.ycombinator.com/item?id=31411593
| pkkm wrote:
| This middlebrow dismissal is way too broad and makes me suspect
| that your only interaction with social science is via popular
| media. You'd have a good point if you said "poorly done
| science" rather than "not science" and restricted your
| dismissal to the kind of n = 30 social psychology studies that
| get paraded around by journalists when they confirm their
| biases. But it's false as stated; even in psychology alone,
| there are many findings which can withstand scrutiny and
| replication, e.g.:
|
| - Spaced repetition (you remember things longer if you space
| out your learning over time than if you cram).
|
| - Primacy/recency (you usually remember the first and last
| items in a sequence better than items in the middle).
|
| - Stroop effect (you respond more slowly when incongruent
| stimuli are distracting you).
|
| - Fitts's law (a model of human movement that can be used to
| design better UIs; see the sketch at the end of this comment).
|
| - The strongly negative influence of sleep deprivation on
| mental performance.
|
| And that's not to mention other social sciences such as
| economics.
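|
| Since Fitts's law is the most directly applicable of these,
| here's a minimal sketch of its Shannon formulation (the a and
| b coefficients below are made-up placeholders; real values
| are fitted per device and user):
|
|     import math
|
|     def fitts_time(distance: float, width: float,
|                    a: float = 0.1, b: float = 0.15) -> float:
|         """Predicted movement time (seconds) to hit a target
|         of the given width at the given distance:
|         T = a + b * log2(distance / width + 1)."""
|         return a + b * math.log2(distance / width + 1)
|
|     # Doubling a button's size lowers the predicted time to
|     # reach it, which is why UI guidelines favor big targets.
|     print(fitts_time(distance=400, width=20))  # ~0.76 s
|     print(fitts_time(distance=400, width=40))  # ~0.62 s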
| huitzitziltzin wrote:
| 1. What's science? This is a surprisingly hard question. I'm
| going to guess physics is your model and you're going to cite
| Popper about falsifiability to me. A lot of activities which
| don't look at all like physics are science.
|
| 2. Citation needed??? The professional incentives around
| publishing are bad - I don't dispute the author's claim there.
| But to say we are ideological activists suggests you haven't
| met very many social scientists.
| matthewdgreen wrote:
| Reading through this quickly I see several problems that make me
| wonder if the author has worked in academia.
|
| 1. _Publication venues aren't created equal._ People outside of
| academia don't understand that anyone can and will start a
| journal/conference. If I want to launch the Proceedings of
| the Confabulatory Results in Computer Science Conference, all I
| need is a cheap hosting account and I'm ready to go. In countries
| like the US this is explicitly protected speech, and the big for-
| profit sites run by Elsevier et al. will often go ahead and add
| my publications to their database as long as I make enough of an
| effort to make things look legitimate.
|
| In any given field there are probably a handful of "top" venues
| that everyone in the field knows, and a vast long tail ranging
| from mid-tier to absolute fantasist for-profit crap. If you focus
| on the known conferences you might see bad results, but if you
| focus on the long tail you're doing the equivalent of searching
| the 50th+ page of Google results for some term: that is, you're
| deliberately selecting for SEO crap. And given the relatively
| high cost of peer-reviewing the good vs _not peer-reviewing_ the
| bad, unfiltered searches will always be dominated by crap
| (surely this is a named "law"?). Within TFA I cannot tell if
| the author is
| filtering properly for decent venues (as any self-respecting
| expert would do) or if they're just complaining about spam. Some
| mention is made of a DARPA project, so I'm hopeful it's not _too_
| bad. However, even small filtering errors will instantly make your
| results meaningless.
|
| 2. _Citations aren't intended as a goddamn endorsement (or
| metric)._ In science we cite papers liberally. We do not do this
| because we want people to get a cookie. We don't do it because
| we're endorsing every result. We do it because we've learned (or
| been told) about a _possibly relevant_ result and we want the
| reader to know about it too. When it comes to citations, more is
| _usually_ better. Just as it is much better to let many guilty
| people go free than to imprison one innocent one, it is vastly
| better to cite a hundred mediocre or incorrect papers than to
| miss one important citation. Readers should not see a citation
| and infer correctness or importance unless _the author
| specifically states this in the text_, at which point, sure,
| that's an error. But most citation-counting metrics don't
| actually read the citing papers; they just do text matching.
| Since most bulk citations are just reading lists, this also
| filters for citations that don't mean much about the quality of a
| work.
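|
| A toy illustration of the text-matching point (hypothetical
| code, not any real bibliometric tool):
|
|     import re
|
|     def count_citations(corpus: list[str], key: str) -> int:
|         """Naive citation count by string matching: "[12] is
|         wrong" and "[12] proved this" both increment the same
|         counter."""
|         return sum(len(re.findall(re.escape(key), paper))
|                    for paper in corpus)
|
|     papers = ["As [12] showed, the effect is robust.",
|               "We failed to replicate [12]; see our data."]
|     print(count_citations(papers, "[12]"))  # 2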
|
| The idea of using citation counts as a metric for research
| quality is a bad one that administrators (and the type of
| researchers who solve administrative problems) came up with. It
| is one of those "worst idea except for all the others" solutions:
| probably better than throwing darts at a board and certainly more
| scalable than reading every paper for quality. But the idea is
| artificial at best, and complaints like "why are people citing
| incorrect results" fundamentally ask citations to be something
| they're not.
|
| Overall there are many legitimate complaints about academia and
| replicability to be had out there. But salting them with
| potential nonsense does nobody any favors, and just makes the
| process of fixing these issues much less likely to succeed.
| jcampbell1 wrote:
| You are only making the author seem more correct. You have a
| system where citations act as cookies and endorsements (the
| administrators' fault), but that is not the researchers' intent.
|
| > Whatever the explanation might be, the fact is that the
| academic system does not allocate citations to true claims.
| This is bad not only for the direct effect of basing further
| research on false results, but also because it distorts the
| incentives scientists face. If nobody cited weak studies, we
| wouldn't have so many of them. Rewarding impact without regard
| for the truth inevitably leads to disaster.
|
| You also argue that high-quality journals are good at filtering
| for quality. The author presents evidence that this is specious.
|
| You furthermore question whether the author worked in academia,
| which is answered.
|
| Given your reading comprehension skills, I wonder if you are in
| the correct job.
| WalterBright wrote:
| > for-profit crap
|
| And of course we all know that making something for profit
| means it's bad.
| kazinator wrote:
| > _Citations aren't intended as a goddamn endorsement (or
| metric)._
|
| Works for GNU Parallel. :)
| darawk wrote:
| 1. He addresses this repeatedly throughout the piece. Journal
| impact factor is (largely) uncorrelated with replication
| probability.
|
| 2. Yes, but this hardly seems like a defense of citing
| something false (without comment), or something that has
| literally been retracted years ago, which is a large part of
| his complaint.
|
| He is not suggesting the use of citation count as a metric for
| quality. I have no idea how you could have possibly gotten that
| from reading this article. A bullet point in his "what to do"
| section is literally "ignore citation counts".
| matthewdgreen wrote:
| > He is not suggesting the use of citation count as a metric
| for quality. I have no idea how you could have possibly
| gotten that from reading this article.
|
| TFA is extremely clear that the presence of citations (in the
| aggregate, as a count) on "weak" papers is something the
| author considers a problem and perhaps a moral failure on
| the part of citing authors. The author also believes that
| citations should be "allocated" to true claims.
|
| * "Yes, you're reading that right: studies that replicate are
| cited at the same rate as studies that do not. Publishing
| your own weak papers is one thing, but citing other people's
| weak papers?" Here citations are clearly treated as a bulk
| metric, and "weak" is a quality metric.
|
| * " As in all affairs of man, it once again comes down to
| Hanlon's Razor. Either Malice: [the citing authors] know
| which results are likely false but cite them anyway; or
| Stupidity: they can't tell which papers will replicate even
| though it's quite easy." Aside from being gross and insulting
| -- here the author claims that the decision to cite a result
| can have only two explanations, malice and stupidity. Not,
| for example, the much more straightforward explanation that I
| mention above (and that the author even admits is likely.)
|
| * " Whatever the explanation might be, the fact is that the
| academic system does not allocate citations to true claims."
| The use of "allocate citations" clearly recognizes that
| citation counts are treated as a metric, and indicate that
| that the author wishes this allocation to be done
| differently.
|
| * "This is bad not only for the direct effect of basing
| further research on false results, but also because it
| distorts the incentives scientists face. If nobody cited weak
| studies, we wouldn't have so many of them." Here the author
| makes it clear that they see citation count (correctly) as a
| metric that encourages researchers, and believes the optimal
| solution is to remove all (but perhaps explicitly negative?)
| citations to those papers.
|
| > A bullet point in his "what to do" section is literally
| "ignore citation counts".
|
| Yes: after extensively complaining that authors don't use
| citations in a manner that reflects the way they're used as
| a metric, and repeatedly urging authors to change the way
| citations are used, the author finally admits that this use
| of the metric is problematic and should be ended.
|
| We agree! The only problem here was that the author took a
| detour to a totally absurd place to get there.
| boxed wrote:
| A few of the suggestions are about making new financial
| incentives. But the problem in the first place is the financial
| incentive structures! It's much better to let people do their
| jobs and not have a bunch of incentives. We know this is true
| for programmers; why do people expect it's not true of everyone
| else?
| robertlagrant wrote:
| Generally people do work for money.
| poszlem wrote:
| Because if you don't add money as an incentive, the only other
| strong incentive is ideological: people join the social
| sciences because they want to push some kind of policy that
| they feel strongly about (in this case, their personal beliefs
| and values serve as their incentive). There is even a joke
| among psychologists that people who join psychology departments
| are mostly those who themselves need psychotherapy (and choose
| to study psychology instead).
| davidgay wrote:
| The actual title is "What's Wrong with Social Science and How to
| Fix It". Not exactly a neutral title edit...
|
| Especially as this comes from "a part of DARPA's SCORE program,
| whose goal is to evaluate the reliability of social science
| research"
| class4behavior wrote:
| The original title is too long, and I assume OP thought the
| findings could be extended to all science anyway.
|
| On the other hand, removing the latter part, as the previous
| submission did, makes the title sound overly presumptuous, imo.
| pkkm wrote:
| > The actual title is "What's Wrong with Social Science and How
| to Fix It".
|
| Yeah. I couldn't fit the whole thing into the hard 80-character
| limit so I decided to drop the word "social" because I thought
| the subtitle was important to keep. I guess it's moot now that
| the title has been edited, I assume by a mod.
| christkv wrote:
| The worst part is when these studies lead to policy changes.
| themitigating wrote:
| Science is not an organization, a business, or any single
| entity, and discussing it as such is ridiculous.
|
| How would you implement changes?
| nordsieck wrote:
| > How would you implement changes?
|
| It's pretty easy: you attach requirements to accepting federal
| grant money.
| madsbuch wrote:
| It is an institution though, with very strict rules regarding
| the scientific method.
| ivalm wrote:
| No, there are absolutely no strict rules regarding the
| scientific method. At most there are conventions about the
| structure of papers in some journals.
| madsbuch wrote:
| That regards form, which indeed is not part of the
| institution.
| orhmeh09 wrote:
| These rules are not as strict as one might think, especially
| when they're tied up with people's careers--consider the
| reproducibility crisis for instance.
| madsbuch wrote:
| They ought to be, though not all people have the integrity.
| jmaygarden wrote:
| It doesn't appear that you read the article. Social science
| journals definitely are organizations, businesses and/or single
| entities that could change.
| [deleted]
| ajkjk wrote:
| Whether or not that's the case, it's obviously possible for
| loose-knit organizations to change. That's basically the role
| of philosophy, or more specifically, of convincing people to
| do something different through argument.
| yarg wrote:
| Replication is a huge issue, but I've been wondering lately about
| refusal to publish.
|
| Even if all of the published papers were peer reviewed and
| replicated, there's a lot of science that never sees the light of
| day.
|
| Even null results are important - if not glamorous - and (even
| worse) publishing is disincentivised if the results go against
| what the funders wanted or expected.
| jimmaswell wrote:
| > refusal to publish
|
| My impression is that in social science, this happens when the
| results are "wrong". Maybe compounding the replication crisis
| if only the "right" results get published but "right" was
| actually wrong.
| trompetenaccoun wrote:
| The replication crisis cannot be overstated, though, because
| "science" that cannot be replicated isn't science at all. The
| very definition of science is that it's a method that produces
| testable, predictive propositions about our world. If a theory
| or paper does not deliver that, it's guesswork, superstition,
| or, in extreme cases, religious belief. But it's certainly not
| science, no matter how many people or institutions refer to it
| as such. Just as the Democratic People's Republic of Korea
| isn't actually democratic.
|
| Do not trust the science. Verify the science.
| jltsiren wrote:
| Science is about the process, not the results. Somebody
| discovers a phenomenon, studies it, fails to take something
| relevant into account, and publishes a faulty result, and
| that's science. Somebody else tries to replicate the study,
| fails, and can't figure out what went wrong, and that's
| science. Then somebody discovers the flaw, gets a different
| result, and publishes it, and that's science.
| trompetenaccoun wrote:
| When there is no reliable review mechanism, the process is
| inherently flawed. It may be science in your book, but by
| that definition, everything anyone studies and publishes is
| science. Homeopathy is science by that definition. Flat-earth
| studies too.
|
| There needs to be proper scrutiny to ensure that minimal
| quality standards are at least followed, else the whole thing
| is bunk. A study that cannot be replicated by anyone is not
| research; it's monkeys hitting keys without understanding
| what they're doing. Sadly, monkeys with PhDs in some cases,
| but that doesn't change anything.
|
| The distinguishing feature of "real science" in the eye of
| the believer seems to be someone holding an academic title,
| i.e. an argument from authority, rather than verifiable
| standards that exclude randomness. There is nothing
| scientific about belief.
| hirundo wrote:
| "Yes, you're reading that right: studies that replicate are cited
| at the same rate as studies that do not ... Astonishingly, even
| after retraction the vast majority of citations are positive, and
| those positive citations continue for decades after retraction."
|
| The academic influence of social science papers is uncorrelated
| with their scientific quality.
| Semaphor wrote:
| Needs (2020)
|
| 100 comments at the time:
| https://news.ycombinator.com/item?id=24447724
| poszlem wrote:
| We absolutely need to change the way we do social sciences. My
| favourite pet peeve in social sciences is idea laundering.
|
| "It's analogous to money laundering. Here's how it works: First,
| various academics have strong moral impulses about something. For
| example, they perceive negative attitudes about obesity in
| society, and they want to stop people from making the obese feel
| bad about their condition. In other words, they convince
| themselves that the clinical concept of obesity (a medical term)
| is merely a story we tell ourselves about fat (a descriptive
| term); it's not true or false--in this particular case, it's a
| story that exists within a social power dynamic that unjustly
| ascribes authority to medical knowledge.
|
| Second, academics who share these sentiments start a peer-
| reviewed periodical such as Fat Studies--an actual academic
| journal. They organize Fat Studies like every other academic
| journal, with a board of directors, a codified submission
| process, special editions with guest editors, a pool of
| credentialed "experts" to vet submissions, and so on. The
| journal's founders, allies and collaborators then publish
| articles in Fat Studies and "grow" their journal. Soon, other
| academics with similar beliefs submit papers, which are accepted
| or rejected. Ideas and moral impulses go in, knowledge comes out.
| Voila!
|
| Eventually, after activist scholars petition university libraries
| to carry the journal, making it financially viable for a large
| publisher like Taylor & Francis, Fat Studies becomes established.
| Before long, there's an extensive canon of academic work--ideas,
| prejudice, opinion and moral impulses--that has been laundered
| into "knowledge." (source: https://www.wsj.com/articles/idea-
| laundering-in-academia-115...)
|
| I was one of the extreme "trust the science" people, until I
| joined a startup that worked with academia. The amount of
| pettiness, vindictiveness, and cutthroat power games I saw
| surpassed even the most hardcore of startups.
| geraldwhen wrote:
| I struggle to see what part of social science isn't morals or
| quasi-religious beliefs masquerading as science.
|
| There is no cure.
| poszlem wrote:
| At the very least we need to start demanding two things:
| falsifiability and reproducibility. We should stop calling
| "science" things that are unfalsifiable.
___________________________________________________________________
(page generated 2022-12-11 23:00 UTC)