[HN Gopher] A flawed paper in management science has been cited ...
___________________________________________________________________
A flawed paper in management science has been cited more than 6k
times
Author : timr
Score : 687 points
Date : 2026-01-25 09:04 UTC (1 days ago)
(HTM) web link (statmodeling.stat.columbia.edu)
(TXT) w3m dump (statmodeling.stat.columbia.edu)
| dgxyz wrote:
| Not even surprised. My daughter tried to reproduce a well-cited
| paper a couple of years back as part of her research project. It
| was not possible. They pushed for a retraction but the university
| didn't want to do it because it would cause political issues, as
| one of the peer-reviewers is tenured at another closely
| associated university. She almost immediately fucked off and went
| to work in the private sector.
| jruohonen wrote:
| > They pushed for a retraction ...
|
| That's not right; retractions should only be for research
| misconduct cases. This is a problem with the article's
| recommendations too. Even if a correction is published noting that
| the results may not hold, the article should stay where it is.
|
| But I agree with the point about replications, which are much
| needed. That was also the best part of the article, i.e. "stop
| citing single studies as definitive".
| dgxyz wrote:
| I will add it's a little more complicated than I wanted to
| let on here, as I don't want to identify it in the process. But it
| definitely was misconduct on this one.
|
| I read the paper as well. My background is mathematics and
| statistics and the data was quite frankly synthesised.
| jruohonen wrote:
| Okay, but to return to replications, publishers could
| incentivize replications by linking replication studies
| directly on a paper's landing page. In fact, you could
| even have a collection of DOIs for these purposes,
| including for datasets. With this point in mind, what I
| find depressing is that the journal declined a follow-up
| comment.
|
| But the article is generally weird or even harmful too.
| Going to social media with these things and all; we have
| enough of that "pretty" stuff already.
| dgxyz wrote:
| Agree completely on all points.
|
| However, there are two problems with it. Firstly, it's a
| step towards gamification, and having tried that model at
| a fintech for reputation scoring, it was a bit of a
| disaster. Secondly, very few studies are replicated in
| the first place unless there is a demand for linked
| research to replicate them before building on them.
|
| There are also entire fields which are mostly populated
| by bullshit generators. And they actively avoid
| replication studies. Certain branches of psychology are
| rather interesting in that space.
| jruohonen wrote:
| > Certain branches of psychology are rather interesting
| in that space.
|
| Maybe, I cannot say, but what I can say is that CS is in
| the midst of a huge replication crisis because LLM
| research cannot be replicated by definition. So I'd
| perhaps tone down the claims about other fields.
| dgxyz wrote:
| Another good example that for sure. You won't find me
| having any positive comments about LLMs.
| kelipso wrote:
| It's much, much more likely that she did something wrong trying
| to replicate it than that the paper was wrong. Did she try to
| contact the authors, discuss with her advisor?
|
| Pushing for retraction just like that and going off to private
| sector is...idk it's a decision.
| dgxyz wrote:
| It went on for a few months. The source data for the paper
| was synthesised and it was like trying to get blood out of a
| stone trying to get hold of it, clearly because they knew
| they were in trouble. Lots of research money was wasted
| trying to reproduce it.
|
| She was just done with it then and a pharma company said "hey
| you fed up with this shit and like money?" and she was and
| does.
|
| edit: as per the other comment, my background is mathematics
| and statistics after engineering. I went into software but
| still have connections back to academia which I left many
| years ago because it was a political mess more than anything.
| Oh and I also like money.
| dekhn wrote:
| A single failure to reproduce a well-cited paper does not
| constitute grounds for a retraction unless the failure somehow
| demonstrates the paper is provably incorrect.
| nairboon wrote:
| Nowadays, high citation counts don't mean what they used
| to. I've seen too many highly cited papers with issues that keep
| getting referenced, probably because people don't really read the
| sources anymore and just copy-paste the citations.
|
| On my side-project todo list, I have an idea for a scientific
| service that overlays a "trust" network over the citation graph.
| Papers that uncritically cite other work that contains well-known
| issues should get tagged as "potentially tainted". Authors and
| institutions that accumulate too many such sketchy works
| should be labeled the same way. Over time this would provide an
| additional useful signal vs. just raw citation numbers. You could
| also look for citation rings and tag them. I think that could be
| quite useful but requires a bit of work.
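|
| Roughly what I have in mind, as a toy Python sketch (the paper IDs
| are made up, and this naive version treats every citation as an
| endorsement; judging how the work is actually cited would come
| later):
|
|     from collections import defaultdict
|
|     citations = {            # paper -> papers it cites
|         "P3": ["P1", "P2"],
|         "P4": ["P3"],
|     }
|     known_tainted = {"P1"}   # papers with well-documented issues
|
|     def potentially_tainted(citations, known_tainted):
|         """Flag papers that (transitively) build on tainted work."""
|         cited_by = defaultdict(set)   # reverse edges: paper -> citers
|         for paper, refs in citations.items():
|             for ref in refs:
|                 cited_by[ref].add(paper)
|         flagged = set(known_tainted)
|         frontier = list(known_tainted)
|         while frontier:
|             for citer in cited_by[frontier.pop()]:
|                 if citer not in flagged:
|                     flagged.add(citer)
|                     frontier.append(citer)
|         return flagged - known_tainted
|
|     print(potentially_tainted(citations, known_tainted))
|     # e.g. {'P3', 'P4'}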
| boelboel wrote:
| Going to conferences and seeing researchers who've built a career
| doing subpar (sometimes blatantly 'fake') work has made me grow
| increasingly wary of experts. The worst part is that lots of people
| just seem to go along with it.
|
| Still I'm skeptical about any sort of system trying to figure
| out 'trust'. There's too much on the line for
| researchers/students/... to the point where anything will
| eventually be gamed. Just too many people trying to get into
| the system (and getting in is the most important part).
| mezyt wrote:
| The current system is already getting gamed. There's already
| too much on the line for researchers/students, so they don't
| admit any wrongdoing or retract anything. What's the worst
| that could happen by adding a layer of trust to the h-index?
| boelboel wrote:
| I think it could end up helping a bit in the short term.
| But in the end an even more complicated system (even if in
| principle better) will reward those spending time gaming it
| even more.
|
| The system ends up promoting an even more conservative
| culture. What might start great will end up with groups and
| institutions being even more protective of 'their truths'
| to avoid getting tainted.
|
| I don't think there's any system which can avoid these sorts
| of things; people were talking about this before WW1, and
| globalisation just put it in overdrive.
| elzbardico wrote:
| Those citation rings are becoming rampant in my country, along
| with the author count inflation.
| raddan wrote:
| Interesting idea. How do you distinguish between critical and
| uncritical citation? It's also a little thorny--if your related
| work section is just describing published work (which is a
| common form of reviewer-proofing), is that a critical or
| uncritical citation? It seems a little harsh to ding a paper
| for that.
| wasabi991011 wrote:
| "Uncritically" might be the wrong criteria, but you should
| definitely understand the related work you are citing to a
| decent extent.
| nairboon wrote:
| That's one of the issues that makes it a bit of work. Citations
| would need to be judged in context. Let's say paper X is
| nowadays known to be tainted. If the tainted work is cited just
| for completeness, it's not an issue, e.g. "the method has
| been used in [a,b,c,d,x]". If the tainted work is cited
| critically, even better: e.g. "X claimed to show that..., but
| y and z could not replicate the results". But if it is just
| taken at face value, then the taint label should
| propagate: e.g. ".. has been previously proved by x and thus
| our results are very important...".
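|
| A toy sketch of that rule (the context labels are made up; in
| practice they'd have to come from reviewers or a classifier):
|
|     from enum import Enum
|
|     class CitationContext(Enum):
|         COMPLETENESS = "listed among related work"
|         CRITICAL = "discussed critically / replication failure noted"
|         FACE_VALUE = "result taken as established and built upon"
|
|     def taint_propagates(cited_is_tainted: bool,
|                          context: CitationContext) -> bool:
|         """Taint spreads only when a tainted result is relied on."""
|         return (cited_is_tainted
|                 and context is CitationContext.FACE_VALUE)
|
|     # "has been previously proved by x, thus our results ..."
|     print(taint_propagates(True, CitationContext.FACE_VALUE))  # True
|     # "X claimed ..., but y and z could not replicate the results"
|     print(taint_propagates(True, CitationContext.CRITICAL))    # False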
| mike_hearn wrote:
| I explored this question a bit a few years ago when GPT-3 was
| brand new. It's tempting to look for technological solutions to
| social problems. It was during COVID, so public health papers
| were the focus.
|
| The idea failed a simple sanity check: just going to Google
| Scholar, doing a generic search and reading randomly selected
| papers from within the past 15 years or so. It turned out most
| of them were bogus in some obvious way. A lot of ideas for
| science reform take as axiomatic that the bad stuff is rare and
| just needs to be filtered out. Once you engage with some
| field's literature in a systematic way, it becomes clear that
| it's more like searching for diamonds in the rough than
| filtering out occasional corruption.
|
| But at that point you wonder, why bother? There is no
| alchemical algorithm that can convert intellectual lead into
| gold. If a field is 90% bogus then it just shouldn't be engaged
| with at all.
| MarkusQ wrote:
| There is in fact a method, and it got us quite far until we
| abandoned it for the peer review plus publish or perish death
| spiral in the mid 1900s. It's quite simple:
|
| 1) Anyone publishes anything they want, whenever they want,
| as much or as little as they want. Publishing does not say
| anything about your quality as a researcher, since anyone can
| do it.
|
| 2) Being published doesn't mean it's right, or even credible.
| No one is filtering the stream, so there's no cachet to being
| published.
|
| We then let memetic evolution run its course. This is the
| system that got us Newton, Einstein, Darwin, Mendeleev,
| Euler, etc. It works, but it's slow, sometimes ugly to watch,
| and hard to game so some people would much rather use the
| "Approved by A Council of Peers" nonsense we're presently
| mired in.
| seec wrote:
| Yeah, the gatekeepers just want their political power, and
| that's it. Also, education/academia is a big industry
| nowadays; it feeds many people who have a big incentive to
| perpetuate the broken system.
|
| We are just back to the universities under the religious
| control system that we had before the Enlightenment. Any
| change would require separating academia from political
| government power.
|
| Academia is just the propaganda machine for the government,
| just like the church was the tool for justifying god-gifted
| powers to kings.
| lo0dot0 wrote:
| I think that the solution is very simple: remove the citation
| metric. Citations don't mean correctness. What we want is
| correctness.
| portly wrote:
| Maybe there should be a different way to calculate the h-index,
| where for an h-index of n, you also need n replications.
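|
| A toy sketch of that variant, as I understand it (the numbers and
| the replication flags below are made up):
|
|     # Replication-aware h-index: a researcher has index n if n of
|     # their papers each have >= n citations and at least n of their
|     # papers have been independently replicated.
|     def replication_h_index(papers):
|         """papers: list of (citation_count, was_replicated) pairs."""
|         counts = sorted((c for c, _ in papers), reverse=True)
|         replicated = sum(1 for _, r in papers if r)
|         n = 0
|         while n < len(counts) and counts[n] >= n + 1:
|             n += 1
|         return min(n, replicated)
|
|     # Classic h-index here is 3, but only 2 papers are replicated,
|     # so the adjusted index is 2.
|     print(replication_h_index([(10, True), (8, False),
|                                (5, True), (1, False)]))  # 2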
| pseudohadamard wrote:
| >people don't really read the sources anymore and just copy-
| paste the citations.
|
| That's reference-stealing: "some other paper I read cited this,
| so it should be OK, I'll steal their reference." I always make
| sure I read the cited paper before citing it myself; it's scary
| how often it says something rather different from what the
| citation implies. That's not necessarily bad research, more
| that the author of the citing paper was looking for effect A in
| the cited reference and I'm looking for effect B, so their
| reason for citing differs from mine, and it's a valid reference
| in their paper but wouldn't be in mine.
| renewiltord wrote:
| Family member tried to do work relying on previous results from a
| biotech lab. Couldn't do it. Tried to reproduce. Doesn't work.
| Checked work carefully. Faked. Switched labs and research
| subject. Risky career move, but. Now has a career. Old lab is in
| mental black box. Never to be touched again.
|
| Talked about it years ago
| https://news.ycombinator.com/item?id=26125867
|
| Others said they'd never seen it. So maybe it's rare. But no one
| will tell you even if they encounter it. Guaranteed career
| blackball.
| rcxdude wrote:
| I haven't identified an outright fake one but in my experience
| (mainly in sensor development) most papers are at the very
| least optimistic or are glossing over some major limitations in
| the approach. They should be treated as a source of ideas to
| try instead of counted on.
|
| I've also seen the resistance that results from trying to
| investigate or even correct an issue in a key result of a
| paper. Even before it's published the barrier can be quite high
| (and I must admit that since it's not my primary focus and my
| name was not on it, I did not push as hard as I could have on
| it)
| dekhn wrote:
| When I was a postdoc, I wrote up the results for a paper
| based on my advisor's theories. The paper wasn't very
| good - all the results were bad. Overnight, my advisor rewrote
| all the results of the paper, partly juicing the results, and
| partly obscuring the problems, all while glossing over the
| limitations. She then submitted it to a (very low prestige)
| journal.
|
| I read the submitted version and told her it wasn't OK. She
| withdrew the paper and I left her lab shortly after. I simply
| could not stand the tendency to juice up papers, and I didn't
| want to have my reputation tainted by a paper that was false
| (I'm OK with my reputation being tainted by a paper that was
| just not very good).
|
| What really bothers me is when authors intentionally leave
| out details of their method. There was a hot paper (this was
| ~20 years ago) about a computational biology technique
| ("evolutionary trace") and when we did the journal club, we
| tried to reproduce their results - which started with writing
| an implementation from their description. About halfway
| through, we realized that the paper left out several key
| steps, and we were able to infer roughly what they did, but
| as far as we could tell, it was an intentional omission made
| to keep the competition from catching up quickly.
| MaxBarraclough wrote:
| I've read of a few cases like this on Hacker News. There's
| often that assumption, sometimes unstated: if a junior
| scientist discovers clear evidence of academic misconduct by a
| senior scientist, it would be career suicide for the junior
| scientist to make their discovery public.
|
| The _replication crisis_ is largely particular to psychology,
| but I wonder about the scope of the _don't rock the boat_
| issue.
| mike_hearn wrote:
| It's not particular to psychology, the modern discussion of
| it just happened to start there. It affects all fields and is
| more like a validity crisis than a replication crisis.
|
| https://blog.plan99.net/replication-studies-cant-fix-science...
| renewiltord wrote:
| He's not saying it's Psychology the field. He's saying
| replication crisis may be because junior scientist (most
| often involved in replication) is afraid of retribution:
| it's psychological reason for fraud persistence.
|
| I think perhaps blackball is guaranteed. No one likes a
| snitch. "We're all just here to do work and get paid. He's
| just doing what they make us do". Scientist is just job.
| Most people are just "I put thing in tube. Make money by
| telling government about tube thing. No need to be
| religious about Science".
| MaxBarraclough wrote:
| I see my phrasing was ambiguous; for what it's worth, I'm
| afraid mike_hearn had it right: I was saying the
| replication crisis largely just affects research in
| psychology. I see this was too narrow, but I think it's
| fair to say psychology is likely the most affected field.
|
| In terms of solutions, the practice of 'preregistration'
| seems like a move in the right direction.
| projektfu wrote:
| For original research, a researcher is supposed to replicate
| studies that form the building blocks of their research. For
| example, if a drug is reported to increase expression of some
| mRNA in a cell, and your research derives from that, you will
| start by replicating that step, but it will just be a note in
| your introduction and not published as a finding on its own.
|
| When a junior researcher, e.g. a grad student, fails to
| replicate a study, they assume it's their technique. If they can't
| get it after many tries, they just move on, and try some other
| research approach. If they claim it's because the original
| study is flawed, people will just assume they don't have the
| skills to replicate it.
|
| One of the problems is that science doesn't have great
| collaborative infrastructure. The only way to learn that nobody
| can reproduce a finding is to go to conferences and have
| informal chats with people about the paper. Or maybe if you're
| lucky there's an email list for people in your field where they
| routinely troubleshoot each other's technique. But most of the
| time there's just not enough time to waste chasing these things
| down.
|
| I can't speak to whether people get blackballed. There are a lot
| of strong personalities in science, but mostly people are
| direct and efficient. You can ask pretty pointed questions in a
| session and get pretty direct answers. But accusing someone of
| fraud is a serious accusation and you probably don't want to
| get a reputation for being an accuser, FWIW.
| thewanderer1983 wrote:
| did "not impact the main text, analyses, or findings."
|
| That made me think of the black spoon error, which was off by a
| factor of 10, where the author also said it didn't impact the main
| findings.
|
| https://statmodeling.stat.columbia.edu/2024/12/13/how-a-simp...
| flowerthoughts wrote:
| > This doesn't mean that the authors of that paper are bad
| people!
|
| > We should distinguish the person from the deed. We all know
| good people who do bad things
|
| > They were just in situations where it was easier to do the bad
| thing than the good thing
|
| I can't believe I just read that. What's the bar for a bad person
| if you haven't passed it at "it was simply easier to do the bad
| thing?"
|
| In this case, it seems not owning up to the issues is the bad
| part. That's a choice they made. Actually, multiple choices at
| different times, it seems. If you keep choosing the easy path
| instead of the path that is right for those that depend on you,
| it's easier for me to just label you a bad person.
| psychoslave wrote:
| Seems fair within the frame of what it's responding to.
|
| But there is a concern that goes beyond the "they" here.
| Actually, "they" could just as well not exist, and the whole
| narrative in the article be some LLM hallucination; we are
| still training ourselves in how we respond to this or that
| behavior we observe, and that influences how we will act in
| the future.
|
| If we go with the easy path of labeling people as the root
| cause, that's the habit we are forging for ourselves. We are
| missing the opportunity to hone our sense of nuance and
| critical thought about the wider context, which might be a
| better starting point for tackling the underlying issue.
|
| Of course, name-and-shame is still there in the rhetorical
| toolbox, and everyone and their dog is able to use it, even
| when rage and despair are all that remain in control of one's
| mouth. Using it with appropriate parsimony, however, is not
| going to happen from mere reactive habits.
| macleginn wrote:
| I guess he means that the authors can still be decent people in
| their private and even professional lives and not general
| scoundrels who wouldn't stop at actively harming other people
| to gain something.
| bell-cot wrote:
| https://tvtropes.org/pmwiki/pmwiki.php/Main/PragmaticVillain...
| chrisjj wrote:
| Hmm. I wonder how he knows these bad-doers are good people.
| trvz wrote:
| Most people aren't evil, just lazy.
| mikkupikku wrote:
| In real life, not Disney movies made for simple-minded
| children, lazy apathy is what most real evil looks like.
| Please see _"the banality of evil."_
| luckylion wrote:
| At which point do you cross the line? Somebody who
| murders to take someone else's money is ultimately just
| too lazy to provide value in return for money, so they're
| not evil?
| kibwen wrote:
| When apathy results in harm to others and benefits to
| oneself, those others are allowed to appropriately label
| that apathy as evil.
| trvz wrote:
| You can call them bad or shitty or something else.
|
| True evil is different.
| Panoramix wrote:
| I'd rather the article stuck to the facts.
| CoastalCoder wrote:
| > I can't believe I just read that. What's the bar for a bad
| person if you haven't passed it at "it was simply easier to do
| the bad thing?"
|
| This actually doesn't surprise me much. I've seen a lot of variety
| in the ethical standards that people will publicly espouse.
| knallfrosch wrote:
| "It was easier for me to just follow orders than do the right
| thing." - Fictional SS officer, 1945. Not a bad person.
|
| /s
| readthenotes1 wrote:
| But he shoveled the neighbors' sidewalks when it snowed.
|
| I have a relative who lives in Memphis, Tennessee. A few
| years ago some guy got out of prison, went to a fellow's home
| to buy a car, shot the car owner dead, stole the car and
| drove it around until he got killed by the police.
|
| One of the neighbors said, I kid you not, "he's a good kid"
| dilawar wrote:
| There are extremely competent coworkers I wouldn't want as
| neighbours. Some of my great neighbours would make very
| sloppy and annoying coworkers.
|
| These people are terrible at their job, perhaps a bit malicious
| too. They may be great people as friends and colleagues.
| abanana wrote:
| People are afraid to sound too critical. It's very noticeable
| how every article that points out a mistake anywhere in a
| subject that's even slightly politically charged has to
| emphasize "of course I believe X, I absolutely agree that Y is
| a bad thing", before they make their point. Criticising an
| unreplicable paper is the same thing. Clearly these people are
| afraid that if they sound too harsh, they'll be ignored
| altogether as a crank.
| 1dom wrote:
| > Clearly these people are afraid that if they sound too
| harsh, they'll be ignored altogether as a crank.
|
| This is true though, and one of those awkward times where
| good ideals like science and critical feedback brush up
| against potentially ugly human things like pride and ego.
|
| I read a quote recently, and I don't like it, but it's stuck
| with me because it feels like it's dancing around the same
| awkward truth:
|
| "tact is the art of make a point without making an enemy"
|
| I guess part of being human is accepting that we're all human
| and will occasionally fail to be a perfect human.
|
| Sometimes we'll make mistakes in conducting research.
| Sometimes we'll make mistakes in handling mistakes we or
| others made. Sometimes these mistakes will chain together to
| create situations like the post describes.
|
| Making mistakes is easy - it's such a part of being human we
| often don't even notice we do it. Learning you've made a
| mistake is the hard part, and correcting that mistake is
| often even harder. Providing critical feedback, as necessary
| as it might be, typically involves putting someone else
| through hardship. I think we should all be at least slightly
| afraid and apprehensive of doing that, even if it's for a
| greater good.
| anal_reactor wrote:
| American culture has this weird thing about avoiding blame and
| direct feedback. It's never appropriate to say "yo, you did
| shit job, can you not fuck it up next time?". For example,
| I have a guy in my team who takes 10 minutes every standup
| - if everyone did this, standup would turn into an hour-
| long meeting - but telling him "bro what the fuck, get your
| shit together" is highly inappropriate so we all just sit
| and suffer. Soon I'll have my yearly review and I have no
| clue what to expect because my manager only gives me
| feedback when strictly and explicitly required so the
| entire cycle "I do something wrong" -> "I get reprimanded"
| -> "I get better" can take literal years. Unless I
| accidentally offend someone, then I get 1:1 within an hour.
| One time I was upset about the office not having enough
| monitors and posted this on slack and my manager told me
| not to do that because calling out someone's shit job makes
| them lose face and that's a very bad thing to do.
|
| Whatever happens, avoid direct confrontation at all costs.
| fn-mote wrote:
| On one hand, I totally agree - soliciting and giving
| feedback is a weakness.
|
| On the other hand, it sounds like this workplace has weak
| leadership - have you considered leaving for some place
| better? If the manager can't do their job enough to give
| you decent feedback and stop a guy giving 10-minute
| standups, LEAVE.
|
| Reasons for not leaving? Ok, then don't be a victim. Tell
| yourself you're staying despite the management and focus
| on the positive.
| whstl wrote:
| I agree. If the company culture is not even helping or
| encouraging people to give pragmatic feedback, the war is
| already lost. Even the CEO and the board are in for a few
| years of stress.
| anal_reactor wrote:
| The biggest reason for not leaving is that I understand
| that perfect things don't exist and everything is about
| tradeoffs. My current work is complete dogshit -
| borderline retarded coworkers, hilariously incompetent
| management. But on the other hand they pay me an okay salary
| while having very few expectations, which means that
| if I spend the entire day watching porn instead of working,
| nobody cares. That's a huge perk, because it makes the de
| facto salary per hour insanely huge. Moreover, I found a
| few people from other teams I enjoy talking to, which
| means it's a rare opportunity for me to build a social
| life. Once they start requiring me to actually put in the
| effort, I'll bounce.
| mikkupikku wrote:
| What you're describing is mostly a convergence on the
| methods of "nonviolent communication".
| 0xDEAFBEAD wrote:
| I'll be direct with you, this sounds like an issue
| specific to your workplace. Get a better job with a
| manager who can find the middle ground between cursing in
| frustration and staying silent.
| lo_zamoyski wrote:
| While I agree there's a childish softness in our culture
| in many respects, you don't need to go to extremes and
| adopt thuggish or boorish behavior (which is also a
| problem, one that is actually concomitant with softness,
| because soft people are unable to bear discomfort or
| things not going their way). Proportionality and charity
| should inform your actions. Loutish behavior makes a
| person look like an ill-mannered toddler.
| bethekidyouwant wrote:
| "Lose face" is not western
| anal_reactor wrote:
| The phrase no, the concept yes.
| dullcrisp wrote:
| "For the sake of time, is it okay if we move on to the
| next update? We can go into further details offline."
|
| Also if that doesn't work, "Hey Bro I notice you like to
| give a lot of detail in standup. That's great, but we
| want to keep it a short meeting so we try to focus on
| just the highlights and surfacing any key blockers. I
| don't want to interrupt you, so if you like I can help
| you distill what you've worked on before the meeting
| starts."
| lo_zamoyski wrote:
| The fountain is charity. This is no mere matter of
| sentiment. Charity is willing the objective good of the
| other. This is what should inform our actions. But charity
| does not erase the need for justice.
| mgfist wrote:
| In general Western society has effectively outlawed "shame"
| as an effective social tool for shaping behavior. We used to
| shame people for bad behavior, which was quite effective in
| incentivizing people to be good people (this is overly
| reductive but you get the point). Nowadays no one is ever at
| fault for doing anything because "don't hate the player hate
| the game".
|
| A blameless organization can work, so long as people within
| it police themselves. As a society this does not happen, thus
| making people more steadfast in their anti-social behavior
| mike_hearn wrote:
| That's a legitimate fear though - it's exactly what happened
| in this case. _"The reviewers did not address the substance
| of my comment; they objected to my tone."_
| layer8 wrote:
| Labeling people as villains (as opposed to condemning acts), in
| particular those you don't know personally, is almost always an
| unhelpful oversimplification of reality. It obscures the root
| causes of why the bad things are happening, and stands in the
| way of effective remedy.
| hexbin010 wrote:
| I'm not a bad person, I just continuously do bad things, none
| of which is my fault - there is always a deeper root cause
| \o/
| Ygg2 wrote:
| On the flip side, even if you punish the villain, garbage
| papers still get printed. Almost like there is a root
| cause.
|
| Both views are maximalistic.
| bavell wrote:
| On the flop side, maybe there wouldn't be as many garbage
| papers printed if there were any actual negative
| consequences. It's not so simple as you make it out to
| be.
| hermannj314 wrote:
| A national "War on Data", a Data enforcement agency
| (DEA), and a Data Abuse Resistence Education (DARE)
| program and we should have this problem wrapped up in no
| time.
|
| Negative consequences and money always work!
| its_ethan wrote:
| They may not always work, but it's also not the case that
| they never work - which is what it seems like you're
| suggesting.
| Ygg2 wrote:
| There have been negative consequences for individuals
| before; it didn't really change anything big.
| josfredo wrote:
| The person is inseparable from the root cause.
| saikia81 wrote:
| I'm guessing you believe that a person is always completely
| responsible for their actions. If you are doing root cause
| analysis you will get nowhere with that attitude.
| stogot wrote:
| That works in the case of a software RCA, but if a crime is
| committed then many times there is a victim. There could be
| some root cause, but ignoring the crime creates a new problem
| for the victim (justice).
|
| Both can be pursued without immediately jumping to
| defending the crime.
| Retric wrote:
| There are many ways that people can fail where they aren't
| the root cause.
|
| These failures aren't on that list because they require
| active intent.
| jcattle wrote:
| In that case let's just shut down the FAA and any accident
| investigations.
|
| It's not processes that can be fixed, it's just humans
| being stupid.
| squibonpig wrote:
| Then "root cause" means basically nothing
| subscribed wrote:
| I hope you don't work in technology. If you do, I hope I
| never work with you.
|
| Blameless post-mortems are critical for fixing errors that
| allowed the incident to happen.
| circus1540 wrote:
| What if the root cause is that, because we stopped labeling
| villains, they no longer fear being labeled as such? The
| consequences for the average lying academic have never been
| lower (in fact they usually don't get caught and benefit from
| their lie).
| tomtomtom777 wrote:
| Are we living on the same planet?
|
| Surely the public discourse over the past decades has been
| steadily moving from substantive towards labeling each
| other villains, not the other way around.
| Levitz wrote:
| But that kind of labeling happens because of having the
| wrong political stances, not because of the moral
| character of the person.
| fc417fc802 wrote:
| Most people seem to think that holding the "wrong"
| political stance is a failure of moral character so I'm
| having difficulty making sense of your point.
| Levitz wrote:
| They truly don't. That's just part of the alienation.
|
| When the opposition is called evil it's not because logic
| dictates it must be evil, it's called evil for the same
| reason it's called ugly, unintelligent, weak, cowardly
| and every other sort of derogatory adjective under the
| sun.
|
| These accusations have little to do with how often people
| consider others things such as "ugly" or "weak", it's
| just signaling.
| AnimalMuppet wrote:
| I disagree. There's an awful lot of "my position is
| _obviously_ based on the data, so if you disagree it
| _must_ be because you want to be evil". (In my opinion,
| the left does this more than the right, for whatever
| that's worth.)
| fc417fc802 wrote:
| If we expand "based on the data" to also include "based
| on my obviously correct ethical framework dictated by my
| obviously correct religion" then I figure the score is
| probably pretty close to even. The weird thing to me is
| how the far left has adopted behaviors that appear to be
| fundamentally religious in nature (imo) while fervently
| denying any such parallel.
| bethekidyouwant wrote:
| For activists, politicians, scientists, civilians? Be
| specific.
| fc417fc802 wrote:
| Actually the risks for academic misconduct have never been
| higher. For quite a while now there's been borderline
| activism to go out and search the literature for it -
| various custom software solutions have been written
| specifically to that end. We're also rapidly approaching a
| reality in which automated cross checking of the literature
| for contradictions will be possible.
|
| Unfortunately academia as a pursuit has never had a larger
| headcount and the incentives to engage in misconduct have
| likely never been higher (and appear to be steadily
| increasing).
| mjburgess wrote:
| I'm not sure the problems we have at the moment are a lack of
| accountability. I mean, I think we should go a little overboard
| on holding people to account first, then wind it back if we
| overshoot. The crisis at the moment is managerialism across
| all of our institutions, which serves to displace
| accountability.
| jbreckmckye wrote:
| > Labeling people as villains is almost always an unhelpful
| oversimplification of reality
|
| This is effectively denying the existence of bad actors.
|
| We can introspect into the exact motives behind bad behaviour
| once the paper is retracted. Until then, there is _ongoing
| harm to public science_.
| smt88 wrote:
| I think they're actually just saying bad actors are
| inevitable, inconsistent, and hard to identify ahead of
| time, so it's useless to be a scold when instead you can
| think of how to build systems that are more resilient to
| bad acts
| jbreckmckye wrote:
| To which my reply would be, we can engage in the analysis
| _after we have taken down the paper_.
|
| It's still up! Maybe the answer to building a resilient
| system lies in _why_ it is still up.
| mike_hearn wrote:
| You have to do both. Offense and defense are closely
| related. You can make it hard to engage in bad acts, but
| if there are no penalties for doing so or trying to do
| so, then that means there are no penalties for someone
| just trying over and over until they find a way around
| the systems.
|
| Academics that refuse to reply to people trying to
| replicate their work need to be instantly and publicly
| fired, tenure or no. This isn't going to happen, so the
| right thing to do is for the vast majority of
| practitioners to just ignore academia whilst politically
| campaigning for the zeroing of government research
| grants. The system is unsaveable.
| michaelmrose wrote:
| Perhaps start by defunding any projects by institutions
| that insist on protecting fraudsters, especially in the
| soft sciences. There is a lot of valuable hard science
| that IS real and has better standards.
| mike_hearn wrote:
| But that would defund all of them. Plenty of fraud at
| 'top' institutions like Harvard, Stanford, Oxford etc...
| michaelmrose wrote:
| If funding depended on firing former fraudsters and
| incompetents they would find the will to fire them
| mike_hearn wrote:
| I don't think they would. They'd rather stage riots and
| try to unseat the government than change.
| egeozcan wrote:
| IMHO, you should deal with actual events, when not ideas,
| instead of people. No two people share the exact same
| values.
|
| For example, you assume that guy trying to cut the line is
| a horrible person and a megalomaniac because you've seen
| this like a thousand times. He really may be that, or maybe
| he's having an extraordinarily stressful day, or maybe he's
| just not integrated with the values of your society
| ("cutting the line is bad, no matter what") or anything
| else BUT _none_ of all that really helps you think clearly.
| You just get angry and maybe raise your voice when you 're
| warning him, because "you know" he won't understand
| otherwise. So you left your values now too because you are
| busy fighting a stereotype.
|
| IMHO, the correct course of action is assuming good faith even
| with bad actions, and even with persistent bad actions, and
| thinking about the productive things you can do to change
| the outcome, or decide that you cannot do anything.
|
| You can perhaps warn the guy, and then if he ignores you,
| you can even go to security or pick another hill to die on.
|
| I'm not saying that I can do this myself. I fail a lot,
| especially when driving. It doesn't mean I'm not working on
| it.
| jbreckmckye wrote:
| I honestly think this would qualify as "ruinous empathy"
|
| It's fine and even good to assume good faith, extend your
| understanding, and listen to the reasons someone has done
| harm - in a context where the problem was already
| redressed and the wrongdoer is labelled.
|
| This is not that. This is someone publishing a false
| paper, deceiving multiple rounds of reviewers,
| manipulating evidence, knowingly and for personal gain.
| And they still haven't faced any consequences for it.
|
| I don't really know how to bridge the moral gap with this
| sort of viewpoint, honestly. It's like you're telling me
| to sympathise with the arsonist whilst he's still running
| around with gasoline
| egeozcan wrote:
| I thought assuming good faith does not mean you have to
| sympathize. English is not my native language and
| probably that's not the right concept.
|
| I mean, do not put the others into any stereotype. Assume
| nothing? Maybe that sounds better. Just look at the hand
| you are dealt and objectively think what to do.
|
| If there is an arsonist, do you deal with that a-hole
| yourself, call the police, or try to take your
| loved ones to safety first?
|
| Getting mad at the arsonist doesn't help.
| fc417fc802 wrote:
| > I don't really know how to bridge the moral gap with
| this sort of viewpoint, honestly. It's like you're
| telling me to sympathise with the arsonist whilst he's
| still running around with gasoline
|
| That wasn't how I read it. Neither sympathize nor sit
| around doing nothing. Figure out what you can do that's
| productive. Yelling at the arsonist while he continues to
| burn more things down isn't going to be useful.
|
| Assuming good faith tends to be an important thing to
| start with if the goal is an objective assessment. Of
| course you should be open to an eventual determination of
| bad faith. But if you start from an assumption of bad
| faith your judgment will almost certainly be clouded and
| thus there is a very real possibility that you will miss
| useful courses of action.
|
| The above is on an individual level. From an
| organizational perspective if participants know that a
| process could result in a bad faith determination against
| them they are much more likely to actively resist the
| process. So it can be useful to provide a guarantee that
| won't happen (at least to some extent) in order to ensure
| that you can reliably get to the bottom of things. This
| is what we see in the aviation world and it seems to work
| extremely well.
| Levitz wrote:
| I used to think like this, and it does seem morally sound
| at first glance, but it has the big underlying problem of
| creating an excellent context in which to be a selfish
| asshole.
|
| Turns out that calling someone on their bullshit can be a
| perfectly productive thing to do, it not only deals with
| that specific incident, but also promotes a culture in
| which it's fine to keep each other accountable.
| egeozcan wrote:
| You cannot call out all the bullshit. You need to call out
| what's important to you. That defines your values.
|
| It's also important to base your actions on what's at
| hand, not on teaching a lesson to "those people".
| fc417fc802 wrote:
| I think they're both good points. An unwillingness to
| call out bullshit itself leads to a systemic dysfunction
| but on the flip side a culture where everyone just rages
| at everything simply isn't productive. Pragmatically,
| it's important to optimize for the desired end result. I
| think that's generally going to be fixing the system
| first and foremost.
|
| It's also important to recognize that there are a lot of
| situations where calling someone out isn't going to have
| any (useful) effect. In such cases any impulsive behavior
| that disrupts the environment becomes a net negative.
| jason_oster wrote:
| When bad behavior has been identified, reported, and
| repeated - as described in the article - it is no longer
| eligible for a good faith assumption.
| rolymath wrote:
| I would argue that villainy and "bad people" is an
| overcomplication of ignorance.
|
| If we equate being bad to being ignorant, then those people
| are ignorant/bad (with the implication that if people knew
| better, they wouldn't do bad things)
|
| I'm sure I'm oversimplifying something; looking forward to
| reading responses.
| lo_zamoyski wrote:
| It's possible to take two opposing and flawed views here, of
| course.
|
| On the one hand, it is possible to become judgmental,
| habitually jumping to unwarranted and even unfair conclusions
| about the moral character of another person. On the other, we
| can habitually externalize the "root causes" instead of
| recognizing the vice and bad choices of the other.
|
| The latter (externalization) is obvious when people
| habitually blame "systems" to rationalize misbehavior. This
| is the same logic that underpins the fantastically silly and
| flawed belief that under the "right system", misbehavior
| would simply evaporate and utopia would be achieved. Sure,
| pathological systems can create perverse incentives, even
| ones that put extraordinary pressure on people, but moral
| character is not just some deterministic mechanical response
| to incentive. Murder doesn't become okay because you had a
| "hard life", for example. And even under "perfect
| conditions", people would misbehave. In fact, they may even
| misbehave more in certain ways (think of the pathologies
| characteristic of the materially prosperous first world).
|
| So, yes, we ought to condemn acts, we ought to be charitable,
| but we should also recognize human vice and the need for
| justice. Justly determined responsibility should affect
| someone's reputation. In some cases, it would even be harmful
| to society not to harm the reputations of certain people.
| michaelmrose wrote:
| What specific pathologies are characteristic of the materially
| prosperous first world? People almost universally behave
| better in a functional system with enough housing, food,
| education, and so forth. Morality is and will always remain
| important but systems matter a LOT. For instance we've
| experienced less murder since we stopped mass lead
| poisoning our entire population.
|
| It's a paradox. We know for an absolute fact that changing
| the underlying system matters massively but we must
| continue to acknowledge the individual choice because the
| system of consequences and as importantly the system of
| shame keeps those who wouldn't act morally in check. So we
| punish the person who was probably lead poisoned the same
| as any other, despite knowing that we are partially at fault
| for the system that led to their misbehavior.
| andy99 wrote:
| Just to add on, armchair quarterbacking is a thing, it's easy
| in hindsight to label decisions as the result of bad
| intentions. This is completely different than whatever might
| have been at play in the moment and retrospective judgement
| is often unrealistic.
| michaelmrose wrote:
| Every single comment on every thread on this entire website
| is armchair quarterbacking. It's completely obvious that
| this is dishonest bad work.
| regenschutz wrote:
| As with anything, it's just highly subjective. What some call
| a heinous act is another person's heroic act. Likewise,
| where I draw the line between an unlucky person and a villain
| is going to be different from someone else.
|
| Personally, I do believe that there are benefits to labelling
| others as villains if a certain threshold is met. It
| reduces cognitive strain by allowing us to blanket-label
| all of their acts as evil [0] (although with the drawback of
| occasionally accidentally labelling acts of good as evil),
| allowing us to prioritise more important things in life than
| the actions of what we call villains.
|
| [0]: https://en.wikipedia.org/wiki/Halo_effect#The_reverse_halo_e...
| katzgrau wrote:
| It's not really subjective if you don't believe it's your
| place to judge the human to begin with.
|
| If you were in their exact life circumstance and
| environment you would do the same thing. You aren't going
| to magically sidestep cause and effect.
|
| The act itself is bad.
|
| The human performing the act was misguided.
|
| I view people as inherently perfect, with their views of life,
| themselves, and their current situations being potentially
| misguided.
|
| Eg, like a diamond covered in shit.
|
| Just like it's possible for a diamond to be uncovered and
| polished, the human is capable of acquiring a truer
| perspective and more aligned set of behaviors - redemption.
| Everyone is capable of redemption so nobody is inherently
| bad. Thinking otherwise may be convenient but is ultimately
| misguided too.
|
| So the act and the person are separate.
|
| Granted, we need to protect society from such
| misguidedness, so we have laws, punishments, etc.
|
| But it's about protecting us from bad behavior, not
| labeling the individual as bad.
| WalterBright wrote:
| > If you were in their exact life circumstance and
| environment you would do the same thing.
|
| I don't buy that for a moment. It presumes people do not
| have choices.
|
| The difference between a man and an animal is a man has
| honor. Each of us gets to choose if we are a man or an
| animal.
| katzgrau wrote:
| > It presumes people do not have choices.
|
| No, there are choices. It states that given the exact
| same starting parameters and sequence of events, you
| would make the same choice.
| WalterBright wrote:
| You're denying free will.
| katzgrau wrote:
| I didn't say anything about free will. What I did say is
| irrefutable.
| Dylan16807 wrote:
| If everyone would make the same choice, then free will
| doesn't exist. It's only one step away from what you
| said.
|
| And sure what you said is irrefutable in the sense that
| it's impossible to collect evidence about it. That's
| generally a bad sign for theories.
| katzgrau wrote:
| The role of cause and effect is unshakeable.
|
| > If everyone would make the same choice, then free will
| doesn't exist. It's only one step away from what you
| said.
|
| I didn't say anything about free will. "One step away" is
| where you went, not me.
|
| If you believe free will and determinism are logically
| incompatible, that's your own theory to prove.
|
| I'm simply saying that everyone would make the same
| choice given the exact same circumstances and starting
| conditions.
|
| To believe anything otherwise is magical thinking, and
| basically implies a moral superiority to someone else.
| philipallstar wrote:
| > The human performing the act was misguided.
|
| What does this mean? If someone rapes someone else, they
| were inherently perfect but misguided, in your view?
| Dylan16807 wrote:
| If free will doesn't exist then you "shouldn't" judge
| people for their choices but also you can't stop yourself
| from doing so.
|
| If free will does exist then yes you can judge people for
| their choices.
|
| Everyone is capable of redemption but saying they _need_
| redemption is judging them.
| katzgrau wrote:
| A few things:
|
| 1. You can't judge the person, you can judge the behavior
|
| 2. To judge the person requires the ability to quantify
| the unquantifiable (circumstance, sequence of events
| leading to the outcome, going back to the literal
| beginning of time).
|
| 3. To judge the person implies a superiority to that
| person
|
| Sure, one can take/justify simplistic shortcuts for
| practical reasons. But some forget that's what they are -
| shortcuts that bypass the nuances/reality of the
| situation.
| the_arun wrote:
| Questions:
|
| 1. Who is responsible for adding guardrails to ensure all
| papers coming in are thoroughly checked & reviewed?
|
| 2. Who reviews these papers? Shouldn't they own responsibility
| for accuracy?
|
| 3. How are we going to ensure this is not repeated by others?
| convolvatron wrote:
| Reviewers are unpaid. It's also quite common to farm out the
| actual review work to grad students, postdocs and the like.
| If you're suggesting adding liability, then you're just
| undermining the small amount of review that already takes
| place.
| nobodyandproud wrote:
| There needs to be prestige for tearing down heavily flawed
| work.
| nickpsecurity wrote:
| That comment sounds like the environment causes bad behavior.
| That's a liberal theory refuted consistently by all the
| people in bad environments who choose to not join in on the
| bad behavior, even at a personal loss.
|
| God gave us free will to choose good or evil in various
| circumstances. We need to recognize that in our assessments.
| We must reward good choices and address bad ones (eg the
| study authors'). We should also change environments to
| promote good and oppose evil so the pressures are pushing in
| the right direction.
| direwolf20 wrote:
| Labeling people as villains used to be effective deterrence
| against doing villainous things. When did that change?
| jrjeksjd8d wrote:
| Ah yes, the mythical past when nobody did bad things
| because we punished them correctly.
| WalterBright wrote:
| The crime rate does change dramatically over time. For
| example, the homicide rate during the pandemic was about
| double what it is today.
| fn-mote wrote:
| Sure, but are you implying that is because of our
| stricter enforcement of the laws? Or other systemic /
| environmental causes (eg systemic poor mental health)?
|
| I am unfamiliar with the reasons to which the varying
| murder rate is ascribed. If I had to guess, I would guess
| economics is #1.
| tikhonj wrote:
| It's also pretty clearly a deterrence against people
| admitting and fixing their own mistakes, both individually
| and as institutions. Which is exactly what we're seeing
| here...
| jason_oster wrote:
| Correlation is not causation.
| direwolf20 wrote:
| You wouldn't be a villain for doing one bad thing, but for a
| pattern.
| WalterBright wrote:
| When we began blaming society instead.
|
| I've read multiple times that a large percentage of the
| crime comes from a small group of people. Jail them, and
| the overall crime rate drops by that percentage.
| direwolf20 wrote:
| Which group is that?
| WalterBright wrote:
| The group of people who have long arrest records.
| direwolf20 wrote:
| So when someone is arrested, that makes them more likely
| to do crime in the future, so they should be preemptively
| jailed even if they didn't do a crime this time?
| WalterBright wrote:
| > that makes them more likely to do crime in the future
|
| Yes. One's past behavior is a strong predictor of future
| behavior.
|
| > so they should be preemptively jailed even if they
| didn't do a crime this time?
|
| No, it means that each successive conviction should
| result in a longer prison sentence.
| mminer237 wrote:
| Criminals? I'm not sure what you're looking for.
| direwolf20 wrote:
| "Criminals do the most crime" is a tautology
| nathan_compton wrote:
| Was it ever, though? This is an easy thing to say, but how
| would we demonstrate that it worked?
| Aurornis wrote:
| In this case they hadn't labeled anyone as villains, though.
| They could have omitted that section entirely.
|
| I happen to agree that labeling them as villains wouldn't
| have been helpful to this story, but they didn't do that.
|
| > It obscures the root causes of why the bad things are
| happening, and stands in the way of effective remedy.
|
| There's a toxic idea built into this statement: It implies
| that the real root cause is external to the people and
| therefore the solution must be a systemic change.
|
| This hits a nerve for me because I've seen this specific
| mindset used to avoid removing obviously problematic people,
| instead always searching for a "root cause" that required us
| all to ignore the obvious human choices at the center of the
| problem.
|
| Like blameless postmortems taken to a comical extreme where
| one person is always doing something careless that causes problems
| and we all have to brainstorm a way to pretend that the
| system failed, not the person who continues to cause us
| problems.
| fc417fc802 wrote:
| > There's a toxic idea built into this statement: It
| implies that the real root cause is external to the people
| and therefore the solution must be a systemic change.
|
| Not necessarily, although certainly people sometimes fall
| into that trap. When dealing with a system you need to fix
| the system. Ejecting a single problematic person doesn't
| fix the underlying problem - how did that person get in the
| door in the first place? If they weren't problematic when
| they arrived, does that mean there were corrosive elements
| in the environment that led to the change?
|
| When a person who is a cog within a larger machine fails
| that is more or less by definition also an instance of the
| system failing.
|
| Of course individual intent is also important. If Joe
| dropped the production database _intentionally_ then in
| addition to asking "how the hell did someone like him end
| up in this role in the first place" you will also want to
| eject him from the organization (or at least from that
| role). But focusing on individual intent is going to cloud
| the process and the systemic fix is much more important
| than any individual one.
|
| There's also a (meta) systemic angle to the above. Not
| everyone involved in carrying out the process will be
| equally mature, objective, and deliberate (by which I mean
| that unfortunately any organization is likely to contain at
| least a few fairly toxic people). If people jump to
| conclusions or go on a witch hunt that can constitute a
| serious systemic dysfunction in and of itself. Rigidly
| adhering to a blameless procedure is a way to guard against
| that while still working towards the necessary systemic
| changes.
| wholinator2 wrote:
| I agree with most of what you said, but I'd like to raise
| 2 points:
|
| 1) The immediate action _is more important immediately_
| than the systemic change. We should focus on maximizing
| our "fixing", and letting a toxic element continue to
| poison you while you waste time wondering how you got
| there is counterproductive. It is important to focus on
| the systemic change, but only once you have removed the
| person that will destroy the organization/kill us all.
|
| 2) I forgot. Sorry
| fc417fc802 wrote:
| I suppose that depends on context. I think it's important
| to be pragmatic regarding urgency. Of course the most
| urgent thing is to stop the bleeding; removing the bullet
| can probably wait until things have calmed down a bit.
|
| If Joe dropped the production database and you're
| uncertain about his intentions then perhaps it would be a
| good idea to do the bare minimum by reducing his access
| privileges for the time being. No more than that though.
|
| Whereas if you're reasonably certain that there was no
| intentional foul play involved then focusing on the
| individual from the outset isn't likely to improve the
| eventual outcome (rather it seems to me quite likely to
| be detrimental).
| WalterBright wrote:
| > how did that person get in the door in the first place?
|
| is answered by:
|
| > any organization is likely to contain at least a few
| fairly toxic people
| fc417fc802 wrote:
| Of course. I actually think that "we did everything we
| reasonably could have" or "doing more would be
| financially disadvantageous for us" are acceptable
| conclusions for an RCA. But it's important that such a
| conclusion is arrived at only after rigorously following
| the process and making a genuine high effort attempt to
| identify ways in which the system could be improved. You
| wouldn't be performing an RCA if the incident didn't have
| fairly serious consequences, right?
|
| It could also well be that Joe did the same thing at his
| last employer, someone in hiring happened to catch wind
| of it, a disorganized or understaffed process resulted in
| the ball somehow getting dropped, and here you are.
| Aurornis wrote:
| Exactly. The above comment is an example of the kind of
| toxic blameless culture I was talking about: Deflecting
| every problem with a person into a problem with the
| organization.
|
| It's a good thing to take a look at where the process
| went wrong, but that's literally just a postmortem. Going
| fully into _blameless_ postmortems adds the precondition
| that you can't blame people; you are obligated to
| transform the obvious into a problem with some process or
| policy.
|
| Anyone who has hired at scale will eventually encounter
| an employee who seems lovely in interviews but turns out
| to be toxic and problematic in the job. The most toxic
| person I ever worked with, whose behavior culminated in
| dozens of peers quitting the company before he was caught
| red-handed sabotaging company work, was actually one of the
| nicest and most compassionate people during interviews
| and when you initially met him. He, of course, was a big
| proponent of blameless postmortems and his toxicity
| thrived under blameless culture for longer than it should
| have.
| Aurornis wrote:
| > Ejecting a single problematic person doesn't fix the
| underlying problem - how did that person get in the door
| in the first place? If they weren't problematic when they
| arrived, does that mean there were corrosive elements in
| the environment that led to the change?
|
| This is exactly the toxicity I've experienced with
| blameless postmortem culture:
|
| Hiring is never perfect. It's impossible to identify
| every problematic person at the interview stage.
|
| Sometimes, it really is the person's own fault. Doing
| mental gymnastics to assume the system caused the person
| to become toxic is just a coping mechanism to avoid
| acknowledging that some people really are problematic and
| it's nobody's fault but their own.
| fc417fc802 wrote:
| On the contrary. It's all too easy to dismiss as being
| the fault of a fatally flawed individual. In fact that's
| likely to be the bias of those involved - our system is
| good, our management is competent. Behead the sacrificial
| lamb and be done with it. Phrases such as "hiring is
| never perfect" can themselves at times be an extremely
| tempting coping mechanism to avoid acknowledging
| inconvenient truths.
|
| I'm not saying you shouldn't eventually arrive at the
| conclusion you're suggesting. I'm saying that it's
| extremely important not to start there and not to use the
| possibility of arriving there as an excuse to shirk
| asking difficult questions about the inner workings and
| performance of the broader organization.
|
| > Doing mental gymnastics to assume the system caused the
| person to become toxic
|
| No, don't assume. Ask if it did. "No that does not appear
| to be the case" can sometimes be a perfectly reasonable
| conclusion to arrive at but it should never be an excuse
| to avoid confronting uncomfortable realities.
| zugi wrote:
| Often institutions develop fundamental problems _because_
| individuals gradually adjust their behaviors away from
| the official norms. If it goes uncorrected, the new
| behavior becomes the unofficial norm.
|
| One strategy for correcting the institution _is_ to start
| holding individuals accountable. The military does this
| often. They'll "make an example" of someone violating
| the norms and step up enforcement to steer the
| institutional norms back.
|
| Sure it can feel unfair, and "everyone else is doing it"
| is a common refrain, but holding individuals accountable
| is one way to fix the institution.
| arkh wrote:
| > Like blameless postmortems taken to a comical extreme
| where one person is always doing something careless that causes
| problems and we all have to brainstorm a way to pretend
| that the system failed, not the person who continues to
| cause us problems.
|
| Well, I'd argue the system failed in that the bad person is
| not removed. The root cause is then a bad hiring decision and
| bad management of problematic people. You can do a blameless
| postmortem guiding a change in policy which ends in some
| people getting fired.
| Aurornis wrote:
| > You can do a blameless postmortem guiding a change in
| policy which ends in some people getting fired.
|
| In theory maybe, but in my experience the blameless
| postmortem culture gets taken to such an extreme that
| even when one person is consistently, undeniably to blame
| for causing problems we have to spend years pretending
| it's a system failure instead. I think engineers like the
| idea that you can engineer enough rules, policies, and
| guardrails that it's impossible to do anything but the
| right thing.
|
| This can create a feedback loop where the bad players
| realize they can get away with a lot because if they get
| caught they just blame the system for letting them do the
| bad thing. It can also foster an environment where it's
| expected that anything that is allowed to happen is
| implicitly okay to do, because the blameless postmortem
| culture assigns blame on the faceless system rather than
| the individuals doing the actions.
| bad_haircut72 wrote:
| agreed, the concept of a 'blameless' post mortem came
| from airplane crash investigation - but if one pilot
| crashes 6 commercial jets, we wouldn't say "must be a
| problem with the design of the controls"
| vladms wrote:
| So what do they actually say in aviation? There was a
| pilot suicide that took down a whole plane (Germanwings
| Flight 9525); I find it more important that the aviation
| industry made regulatory changes than the fact that
| (probably) "they blamed the pilot".
|
| I think there are too many people that actually like
| "blaming someone else" and that causes issues besides
| software development.
| throw310822 wrote:
| I hope that the pilot responsible was fired and got his
| license revoked!
| philipallstar wrote:
| > Well, I'd argue the system failed in that the bad
| person is not removed.
|
| This is just a proxy for "the person is bad" then.
| There's no need to invoke a system. Who can possibly
| trace back all the things that could or couldn't have
| been spotted at interview stage or in probation? Who
| cares, when the end result is "fire the person" or,
| probably, "promote the person".
| vladms wrote:
| I think as an employer you would prefer not to hire
| another person that is not productive.
|
| Your customers would prefer to have the enterprise doing
| stuff rather than hiring and firing.
| philipallstar wrote:
| Of course everyone would prefer that, but hiring is by
| far the most random thing an org does, even when it
| spends a huge amount on hiring.
| efitz wrote:
| Blameless postmortems are for processes where everyone is
| acting in good faith and a mistake was made and everyone
| wants to fix it.
|
| If one party decides that they don't want to address a
| material error, then they're not acting in good faith. At
| that point we don't use blameless procedures anymore, we
| use accountability procedures, and we usually exclude the
| recalcitrant people from the remediation process, because
| they've shown bad faith.
| stego-tech wrote:
| This hits the nail on the head. I liken it to a scale or
| ladder, each rung representing a new level of
| understanding:
|
| 1) Basic morality (good vs evil) with total agency ascribed
| to the individual
|
| 2) Basic systems (good vs bad), with total agency ascribed
| to the system and people treated as perfectly rational
| machines (where most of the comments here seem to sit)
|
| 3) Blended system and morality, or "Systemic Morality":
| agency can be system-based or individual-based, and
| morality can be good or bad. This is the single largest
| rung, because there's a _lot_ to digest here, and it's
| where a lot of folks get stuck on one ("you can't blame
| people for making rational decisions in a bad system") or
| the other ("you can't fault systems designed by fallible
| humans"). It's why there's a lot of "that's just the way
| things are" useless attitudes at present, because folks
| don't want to climb higher than this rung lest they risk
| becoming accountable for their decisions to themselves and
| others.
|
| 4) "Comprehensive Morality": an action is net good or bad
| _because_ of the system _and_ the human. A good human in a
| bad system is more likely to make bad choices via adherence
| to systemic rules, just as a bad human in a good system is
| likely to find and exploit weaknesses in said system for
| personal gain. You cannot ascribe blame to one or the
| other, but rather acknowledge both separately and together.
| Think "Good Place" logic, with all of its caveats (good
| people in bad systems overwhelmingly make things worse by
| acting in good faith towards bad outcomes) and strengths
| (predictability of the masses at scale).
|
| 5) "Historical Morality": a system or person is net good or
| bad because of repeated patterns of behaviors within the
| limitations (incentives/disincentives) of the environment.
| A person who routinely exploits the good faith of others
| and the existing incentive structure of a system purely for
| personal enrichment is a _bad person_; a system that
| repeatedly and deliberately incentivizes the exploitation
| of its members to drive negative outcomes is a _bad
| system_. Individual acts or outcomes are less important
| than patterns of behavior and results. Humans struggle with
| this one because we live moment-to-moment, and we
| ultimately dread being held to account for past actions we
| can no longer change or undo. Yet it's because of that
| degree of accountability - that you can and will be held to
| account for past harms, even in problematic systems - that
| we have the rule of law, and civilization as a result.
|
| Like a lot of the commenters here, I sat squarely in the
| third rung for _years_ before realizing that I wasn't
| actually smart, but instead incredibly ignorant and
| entitled by refusing to truly evaluate root causes of
| systemic or personal issues _and address them accordingly_.
| It's not enough to merely identify a given cause and call
| it a day, you have to do something to change or address it
| to reduce the future likelihood of negative behaviors and
| outcomes; it's how I can rationalize not necessarily
| faulting a homeless person in a system that fails to
| address underlying causes of homelessness and people
| incentivized not to show empathy or compassion towards
| them, but also rationalize vilifying the wealthy classes
| who, despite having infinite access to wealth and
| knowledge, _willfully and repeatedly choose to harm others_
| instead of improving things.
|
| Villainy and Heroism can be useful labels that don't
| necessarily simplify or ignorantly abstract the greater
| picture, and I'd like to think any critically-thinking
| human can understand when someone is using those terms from
| the first rung of the ladder versus the top rung.
| RajT88 wrote:
| > There's a toxic idea built into this statement: It
| implies that the real root cause is external to the people
| and therefore the solution must be a systemic change.
|
| It's both, obviously. To address the human cause, you have
| to call out the issues and put the person's career at risk
| by damaging their reputation. That's what this article is
| doing. You can't fix a person, but you can address their
| bad behavior in this way by creating consequences for the
| bad things.
|
| Part of the root cause definitely is the friction aspect.
| The system is designed to make the bad thing easier, and
| when designing a system you need the good outcomes to be
| lower friction.
|
| > This hits a nerve for me because I've seen this specific
| mindset used to avoid removing obviously problematic
| people, instead always searching for a "root cause" that
| required us all to ignore the obvious human choices at the
| center of the problem.
|
| The real conversations like that take place in places where
| there are no recordings, or anything left in writing. Don't
| assume they aren't taking place, or that they go how you
| think they go.
| Spooky23 wrote:
| People don't really understand what this stuff means and
| create fucked up processes.
|
| In a blame focused postmortem you say "Johnny fucked up"
| and close it.
|
| When you are about accountability, the responsible parties
| are known (or discovered, if unknown) and are responsible
| for prevention/response/repair/etc. The corrective action
| can incorporate any number of things, including getting rid
| of Johnny.
| zdragnar wrote:
| > Like blameless postmortems taken to a comical extreme
| where one person is always doing something careless that
| causes problems
|
| Post-mortems are a terrible place for handling HR issues.
| I'd much rather they be kept focused on processes and
| technical details, and human problems be kept private.
|
| Dogpiling in public is an absolutely awful thing to
| encourage, especially as it turns from removing a
| problematic individual to looking for whoever the scapegoat
| is this time.
| noitpmeder wrote:
| I agree, but in this hypothetical situation the HR part
| needs to happen, despite the fact that most people don't
| want to be the squeaky wheel that explicitly starts
| pointing fingers.
|
| It's way too easy to pretend the system is the problem
| while sticking your head in the sand because you don't
| want to solve the actual human problem.
|
| Sure, use the post mortem to brainstorm how to
| prevent/detect/excise the systematic problem ("How do we
| make sure no one else can make the same mistake again"),
| but eventually you just need to deal with the repeat
| offender.
| b112 wrote:
| The prior comment is describing an extreme case, e.g. a
| "comical extreme".
|
| One problem is that if you behave as if a person isn't
| the cause, you end up with all sorts of silly rules and
| processes, which are just in place to counter
| "problematic individual".
|
| You end up using "process" as the scapegoat.
| michaelmrose wrote:
| One thing that stands in the way of other people choosing the
| wrong path is the perception of consequences. Minimal
| consequences by milquetoast critics who just want to
| understand are a bug, not a feature.
|
| People are on average both bad and stupid when they function
| without a framework of consequences and expectations where
| they expect to suffer and feel shame. They didn't make a
| mistake; they stood in front of all their professional
| colleagues and effectively published what they knew were
| lies. The fact that they can publish lies and others are
| happy to build on lies indicates the whole community is a
| cancer. The fact that the community rejects calls for
| correction indicates it's metastasized, and at least as far
| as that particular community goes, the patient is dead and
| there is nothing left to save.
|
| They ought to be properly ridiculed and anyone who has
| published obvious trash should have any public funds yanked
| and become ineligible for life. People should watch their
| public ruin and consider their own future action.
|
| If you consider the sheer amount of science that has turned
| out to be outright fraud in the last decade this is a crisis.
| rdiddly wrote:
| You presumably read the piece. There was no remedy. In fact
| the lavishly generous appreciation of all those complexities
| arguably is part of the reason there was no remedy. (Or vice
| versa, i.e. each person's foregone conclusion that there will
| be no remedy for whatever reason, might've later been
| justified/rationalized via an appeal to those complexities.)
|
| The act itself, of saying something other than the truth, is
| always more complex than saying the truth. - It took more
| words to describe the act in that very sentence. Because
| there are two ideas, the truth and not the truth. If the two
| things match, you have a single idea. Simple.
|
| Speaking personally, if someone's very first contact with me
| is a lie, they are to be avoided and disregarded. I don't
| even care what "kind of person" they are. In my world,
| they're instantly declared worthless. It works pretty well. I
| could of course be wrong, but I don't think I'm missing out
| on any rich life experiences by avoiding obvious liars. And
| getting to the root cause of their stuff or rehabilitating
| them is not a priority for me; that's _their own_ job. They
| might amaze me tomorrow, who knows. But it's called judgment
| for a reason. Such is life in the high-pressure world of
| impressing rdiddly.
| shermantanktop wrote:
| Bad acts are in the past, and may be situational or isolated.
|
| Labelling a person as bad has predictive power - you should
| expect them to do bad acts again.
|
| It might be preferable to instead label them as "a person
| with a consistent history of bad acts, draw your own
| conclusion, but we are all capable of both sin and redemption
| and who knows what the future holds". I'd just call them a
| bad person.
|
| That said, I do think we are often too quick to label people
| as bad based on one bad act.
| praxulus wrote:
| It is possible that the root cause is an individual person
| being bad. This hasn't been as common recently because people
| were told not to be villains and to dislike villains, so root
| causes of the remaining problems were often found buried in
| the machinery of complex social systems.
|
| However if we stop teaching people that villains are bad and
| they shouldn't be villains, we'll end up with a whole lot
| more problems of the "yeah that guy is just bad" variety.
| deadbabe wrote:
| If you defend a bad person, you are a bad person.
| perching_aix wrote:
| > What's the bar for a bad person if you haven't passed it at
| "it was simply easier to do the bad thing?"
|
| When the good thing is easier to do and they still knowingly
| pick the bad one for the love of the game?
| dullcrisp wrote:
| It feels good to be bad.
| perching_aix wrote:
| Not sure if this is in jest, referring to the inherently
| sanctimonious nature of the framing, but this is actually
| exactly what I was gesturing towards. If it didn't feel
| good, then it would be either an unintentional action
| (random or coerced), or an irrational one (going against their
| perceived self-interest).
|
| The whole "bad vs good person" framing is probably not a
| very robust framework, never thought about it much, so if
| that's your position you might well be right. But it's not
| a consideration that escaped me; I reasoned under the same
| lens as the person above did on intention.
| jojomodding wrote:
| To me, it usually does not.
| pdpi wrote:
| It's 2026, and social media brigading and harassment is a well-
| known phenomenon. In light of that, trying to preemptively de-
| escalate seems like a Good Thing.
| shrubby wrote:
| "I was just following orders" comes to mind.
|
| Yes, the complicity is normal. No the complicity isn't right.
|
| The banality of evil.
| boelboel wrote:
| It's interesting to talk about 'banality of evil' in the
| comment section about flawed papers. Arendt's portrayal of
| Eichmann was very wrong; she had an idea in her head of
| how he should be and didn't care too much about the facts and
| the process. Not that I totally disagree with the idea.
| tdb7893 wrote:
| I think calling someone a "bad person" (which is itself a
| horribly vague term) for one situation where you don't have all
| the context is something most people should be loath to do.
| People are complicated and in general normal people do a lot of
| bad things for petty reasons.
|
| Other than just the label being difficult to apply, these
| factors also make the argument over who is a "bad person" not
| really productive and I will put those sorts of caveats into my
| writings because I just don't want to waste my time arguing the
| point. Like what does "bad person" even mean and is it even
| consistent across people? I think it makes a lot more sense to
| label them clearer labels which we have a lot more evidence
| for, like "untrustworthy scientist" (which you might think is a
| bad person inherently or not).
| mekoka wrote:
| Connecting people's characters to their deeds is a double-edged
| sword. It's not that it's necessarily mistaken, but you have to
| choose your victories. Maybe today you get some satisfaction
| from condemning the culprits, but you also pay for it by making
| it even more difficult to get cooperation from the system in
| the future. We all have friends, family and colleagues that we
| believe to be good. They're all still capable of questionable
| actions. If we systematically tie bad deeds to bad people, then
| surely those people we love and know to be good are incapable
| of what they're being accused of. That's part of how closing ranks
| works. I think King recognizes this too, which is why he
| recommends that _Penalties should reflect the severity of the
| violation, not be all-or-nothing._
| ambicapter wrote:
| The entire point of recognizing bad people is to make it
| harder for them to work with or affect you in the future.
|
| > If we systematically tie bad deeds to bad people, then
| surely those people we love and know to be good are incapable
| of what they're being accused.
|
| A strong claim that needs to be supported, and actually the
| question whose nuances are being discussed in this thread.
| mekoka wrote:
| It doesn't need to be made into something other than logic.
|
| Anyone can do a bad deed.
|
| Anyone can also be a good person to someone else.
|
| If a bad deed automatically makes a bad person, those who
| recognize the person as good have a harder time reconciling
| the two realities. Simple.
|
| Also, is the point recognizing bad people or getting rid of
| bad science? Like I said, choose your victories.
| criddell wrote:
| I think the writer might enjoy Vonnegut's Mother Night.
|
| > Vonnegut is not, I believe, talking about mere
| inauthenticity. He is talking about engaging in activities
| which do not agree with what we ourselves feel are our own core
| morals while telling ourselves, "This is not who I really am. I
| am just going along with this on the outside to get by."
| Vonnegut's message is that the separation I just described
| between how we act externally and who we really are is
| imaginary.
|
| https://thewisdomdaily.com/mother-night-we-are-what-we-prete...
| Propelloni wrote:
| It is like in organisational error management (a.k.a. error
| culture); there are three levels here:
|
| 1) errors happen, basically accidents.
|
| 2) errors are made, wrong or unexpected result for different
| intention.
|
| 3) errors are caused, the error case is the intended outcome.
| This is where "bad people" dwell.
|
| Knowing and keeping silent about 1) and 2) makes any error 3).
| I think we are at 2) in TFA. This needs to be addressed, most
| obviously through system change, esp. if actors seem to act
| rationally in the system (as the authors do) with broken
| outcomes.
| locknitpicker wrote:
| > I can't believe I just read that. What's the bar for a bad
| person if you haven't passed it at "it was simply easier to do
| the bad thing?"
|
| For starters, the bar should be way higher than accusations
| from a random person.
|
| For me, there's a red flag in the story: posting reviews and
| criticism of other papers is very mundane in academia. Some
| Nobel laureates even authored papers rejecting established
| theories. The very nature of peer review involves challenging
| claims.
|
| So where is the author's paper featuring commentaries and
| letters, subjecting the author's own criticism to peer review?
| nathan_compton wrote:
| I guess there isn't much utility in categorizing people as
| "good" and "bad," arguably. Better to think about the
| incentives/punishments in the system and adjust them until
| people behave well.
| pfortuny wrote:
| Never qualify the person, only the deed. Because we are all
| capable of the same actions, some of us have just not done
| them. But we all have the same capacity.
|
| And yes, I am saying that I have the same capacity for wrong as
| the person you are thinking about, mon semblable, mon frere.
| irl_zebra wrote:
| > Because we are all capable of the same actions, some of us
| have just not done them
|
| > And yes, I am saying that I have the same capacity for
| wrong as the person you are thinking about...
|
| No one is disputing any of this. The person who is capable,
| and who has chosen to do, the bad deed is morally blameworthy
| (subject to mitigating circumstances).
| pfortuny wrote:
| Yes, blameworthy, but not "bad". Not the same thing. At
| all.
| irl_zebra wrote:
| They are very related concepts. Lack of remorse?
| Malicious act? Particularly heinous act? Both morally
| blameworthy and bad person! Isolated incident? Not a
| pattern? Morally blameworthy but not bad person.
|
| This is pretty standard virtue ethics we all learned in
| school. Your statements that moral blameworthiness and
| badness are "[n]ot the same thing...[a]t all" and that we
| should "[n]ever qualify the person, only the deed" make
| me think your moral framework is likely not linked to
| millennia of thought in this area from Socrates on down,
| so it's unlikely we will get anywhere and should "agree
| to disagree."
| necovek wrote:
| Being practical, and understanding the gamification of citation
| counts and research metrics today, instead of going for a
| replication study and trying to prove a negative, I'd go
| for contrarian research which shows a different result (or
| possibly excludes the original result; or possibly doesn't,
| even if it does not confirm it).
|
| These probably have a bigger chance of being published, as you are
| providing a "novel" result, instead of fighting the get-along
| culture (which is, honestly, present in the workplace as well).
| But ultimately, they are (research-wise! but not politically)
| harder to do because they possibly mean you have figured out an
| actual thing.
|
| Not saying this is the "right" approach, but it might be a
| cheaper, more _practical_ way to get a paper turned around.
|
| Whether we can work this out in research in a proper way is
| linked to whether we can work this out everywhere else. How many
| times have you seen people pat each other on the back despite
| lousy performance and no results? It's just easier to switch
| positions in the private sector than in research, so you'll have
| more people not afraid to highlight a bad job, and, well, there's
| this profit that needs to pay your salary too.
| em500 wrote:
| Most of these studies get published based on elaborate
| constructions of essentially t-tests for differences in means
| between groups. Showing the opposite means showing no
| statistical difference, which is almost impossible to get
| published, for very human reasons.
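|
| (A minimal sketch, not from any of the studies discussed, of
| the kind of two-sample t-test on group means being described,
| using made-up data and assuming numpy/scipy are available:)
|
|   import numpy as np
|   from scipy import stats
|
|   rng = np.random.default_rng(0)
|   # illustrative outcomes for two hypothetical groups of firms
|   group_a = rng.normal(loc=0.52, scale=1.0, size=200)
|   group_b = rng.normal(loc=0.50, scale=1.0, size=200)
|
|   t_stat, p_value = stats.ttest_ind(group_a, group_b)
|   print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
|   # A large p-value only means "failed to reject the null";
|   # it is not positive evidence that the means are equal,
|   # which is part of why null results are hard to publish.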
| necovek wrote:
| My point was exactly not to do that (which is really an
| unsuccessful replication), but instead to find an actual,
| live correlation between the same inputs, rigorously
| documented and justified, and a new "positive" conclusion.
|
| As I said, harder from a research perspective, but if you can
| show, for instance, that sustainable companies are less
| profitable with a better study, you have basically
| contradicted the original one.
| psychoslave wrote:
| Social fame is fundamentally unscalable, as it operates in the
| limited room on the stage, and even less in the few spotlights.
|
| Benefits we can get from collective works, including scientific
| endeavors, are indefinitely large, as in far more important than
| what can be held in the head of any individual.
|
| Incentives are just irrelevant as far as global social good is
| concerned.
| fnord123 wrote:
| > Stop citing single studies as definitive. They are not. Check
| if the ones you are reading or citing have been replicated.
|
| And from the comments:
|
| > From my experience in social science, including some experience
| in management studies specifically, researchers regularly believe
| things - and will even give policy advice based on those beliefs
| - that have not even been seriously tested, or have straight up
| been refuted.
|
| Sometimes people use fewer than one non-replicable study.
| They invent studies and use that! An example is the "Harvard Goal
| Study" that is often trotted out at self-review time at
| companies. The supposed study suggests that people who write down
| their goals are more likely to achieve them than people who do
| not. However, Harvard itself cannot find such a study existing:
|
| https://ask.library.harvard.edu/faq/82314
| KingMob wrote:
| Definitely ignore single studies, no matter how prestigious the
| journal or numerous the citations.
|
| Straight-up replications are rare, but if a finding is real,
| other PIs will partially replicate and build upon it, typically
| as a smaller step in a related study. (E.g., a new finding
| about memory comes out, my field is emotion, I might do a new
| study looking at how emotion and your memory finding interact.)
|
| If the effect is replicable, it will end up used in other
| studies (subject to randomness and the file drawer effect,
| anyway). But if an effect is rarely mentioned in the literature
| afterwards...run far, FAR away, and don't base your research
| off it.
|
| A good advisor will be able to warn you off lost causes like
| this.
| ChrisMarshallNY wrote:
| Check out the "Jick Study," mentioned in _Dopesick_.
|
| https://en.wikipedia.org/wiki/Addiction_Rare_in_Patients_Tre...
| shiandow wrote:
| I appreciate the convenience of having the original text on hand,
| as opposed to having to download it off Dropbox of all places.
|
| But if you're going to quote the whole thing, it seems easier to
| just _say_ so rather than quoting it bit by bit, interspersed with
| "King continues" and annotating each "I" with "[King]".
| zahirbmirza wrote:
| Could you also provide your critical appraisal of the article so
| this can be more of a journal club for discussion vs just a paper
| link? I have no expertise in this field so would be good for some
| insights.
| indubioprorubik wrote:
| And thus all who cite it have a fatally flawed paper, if it's
| central to their thesis; thus, whoever proves the root is rotten
| should gain their funding from this point forward.
| indubioprorubik wrote:
| I see this approach as a win-win for science. Debunking bad
| science becomes a for-profit enterprise, rigorous science
| becomes the only sustainable kind, and the paper churn gets
| reduced, as even producing a good one becomes a financial risk
| if it becomes foundational and gets debunked later.
| motbus3 wrote:
| I will not go into the details of the topic, but the "What to do"
| is the most obvious thing. If an impactful paper cannot be
| backed by other works, that should be a smell.
| ChrisMarshallNY wrote:
| Sounds like the Watergate Scandal. The crime was one thing, but
| it was the cover-up that caused the most damage.
|
| Once something enters The Canon, it becomes "untouchable," and no
| one wants to question it. Fairly classic human nature.
|
| _> "The most erroneous stories are those we think we know best
| -and therefore never scrutinize or question."
|
| -Stephen Jay Gould_
| steve-atx-7600 wrote:
| The title alone is sus. I guess there are a lot of low quality
| papers out there in sciencey sounding fields.
| rwmj wrote:
| The journal name ("Management Science") is a bit of a giveaway
| too.
| ykonstant wrote:
| Join me in my new business endeavor where we found the
| Journal for Journal Science.
| pbhjpbhj wrote:
| Isn't at least part of the problem with replication that journals
| are businesses? They're selling in part based on limited human
| focus, and on the desire to see something novel, to see progress
| in one's chosen field. Replications don't fit a commercial
| publication's goals.
|
| Institutions could do something, surely. Require one-in-n papers
| be a replication. Only give prizes to replicated studies. Award
| prize monies split between the first two or three independent
| groups demonstrating a result.
|
| The 6k citations though ... I suspect most of those instances
| would just assert the result if a citation wasn't available.
| arter45 wrote:
| Not in academia myself, but I suspect the basic issue is simply
| that academics are judged by the number of papers they publish.
|
| They are pushed to publish a lot, which means journals have to
| review a lot of stuff (and they cannot replicate findings on
| their own). Once a paper is published in a decent journal,
| other researchers may not "waste time" replicating all
| findings, because they also want to publish a lot. The result
| is papers getting popular even if no one has actually bothered
| to replicate the results, especially if those papers are quoted
| by a lot of people and/or are written by otherwise reputable
| people or universities.
| mike_hearn wrote:
| Journals aren't really businesses in the conventional sense.
| They're extensions of the universities: their primary customers
| and often only customers are university libraries, their
| primary service is creating a reputation economy for academics
| to decide promotions.
|
| If the flow of tax, student debt and philanthropic money were
| cut off, the journals would all be wiped out because there's no
| organic demand for what they're doing.
| Havoc wrote:
| Maybe that's why it gets cited? People starting with an answer
| and backfilling?
| loxodrome wrote:
| Do people actually take papers in "management science" seriously?
| abanana wrote:
| Yes, that's the problem, many do, and they swear by these
| oversimplified ideas and one-liners that litter the field of
| popular management books, fully believing it's all "scientific"
| and they'll laugh at you for questioning it. It's nuts.
| graemep wrote:
| There is a difference between popular management books and
| academic publications.
|
| For example there is a long history of studies of the
| relationship between working hours and productivity which is
| one of the few things that challenges the idea that longer
| hours means more output.
| abanana wrote:
| Yes, but the books generally take their ideas from the
| academic publications. And the replication problems, and
| general incentives around academic publishing, show that
| all too often, the academic publications in the social
| sciences are unfortunately no more rigorous than the
| populist books.
| graemep wrote:
| That is true, but the popular books both simplify and
| cherry-pick, which makes it a whole lot worse.
| malshe wrote:
| They do and there is nothing wrong with that. The papers
| published in this journal are peer-reviewed and go through
| multiple rounds of review. Also, note that Andrew King could
| carry out the replication because the data is publicly
| available.
| jokoon wrote:
| It's harder to do social/human science because it's just easier
| to make mistakes that lead to bias. It's harder to make those
| mistakes in maths, physics, biology, medicine, astronomy, etc.
|
| I often say that "hard sciences" have progressed much more
| than social/human sciences.
| marginalia_nu wrote:
| Funny you say that, as medicine is one of the epicenters of the
| replication crisis[1].
|
| [1]
| https://en.wikipedia.org/wiki/Replication_crisis#In_medicine
| QuadmasterXLII wrote:
| You get a replication crisis on the bleeding edge between
| replication being possible and impossible. There's never
| going to be a replication crisis in linear algebra; there's
| never going to be a replication crisis in theology; there
| definitely was a replication crisis in psych; and a
| replication crisis in nutrition science is distinctly
| plausible and would be extremely good news for the field as
| it moves through the edge.
| nickpsecurity wrote:
| Leslie Lamport came up with a structured method to find
| errors in proofs. Testing it on a batch, he found most of
| them had errors. Peter Guttman's paper on formal
| verification likewise showed many "proven" or "verified"
| works had errors that were spotted quickly upon informal
| review or testing. We've also seen important theories in
| math and physics change over time with new information.
|
| With the above, I think we've empirically proven that we
| can't trust mathematicians more than any other humans. We
| should still rigorously verify their work with diverse,
| logical, and empirical methods. Also, build from the ground up on
| solid ideas that are highly vetted. (Which linear algebra
| actually does.)
|
| The other approach people are taking is foundational,
| machine-checked, proof assistants. These use a vetted logic
| whose assistant produces a series of steps that can be
| checked by a tiny, highly-verified checker. They'll also
| often use a reliable formalism to check other formalisms.
| The people doing this have been making everything from
| proof checkers to compilers to assembly languages to code
| extraction in those tools so they are highly trustworthy.
|
| But, we still need people to look at the specs of all that
| to see if there are spec errors. There are fewer people who
| can vet the specs than can check the original English and
| code combos. So, are they more trustworthy? (Who knows
| except when tested empirically on many programs or proofs,
| like CompCert was.)
| uriegas wrote:
| I agree. Most of the time people think STEM is harder but it is
| not. Yes, it is harder to understand some concepts, but in
| social sciences we don't even know what the correct concepts
| are. There hasn't been as much progress in the social sciences in
| the last few centuries as there has been in STEM.
| diamondage wrote:
| I'm not sure if you're correct. In fact there has been a
| revolution in some areas of social science in the last two
| decades due to the availability of online behavioural data.
| tgv wrote:
| The root of the problem is referred to implicitly: publish or
| perish. To get tenure, you need publications, preferably highly
| cited, and money, which comes from grants that your peers (mostly
| from other institutions) decide on. So the mutual back scratching
| begins, and the publication mill keeps churning out papers whose
| main value is the career of the author and --through citation--
| influential peers, truth be damned.
| jbreckmckye wrote:
| something something Goodhart's Law
| te7447 wrote:
| Something "systems that are attacked by entities that adapt
| often need to be defended by entities that adapt".
| bicepjai wrote:
| The same dynamics from school carry over into adulthood: early
| on it's about grades and whether you get into a "good" school;
| later it becomes the adult version of that treadmill: publish
| or perish.
| strangattractor wrote:
| Citations being the only metric is one problem. Maybe an
| improved rating/ranking system would be helpful.
|
| Ranking 1 to 3, with 1 being the best and 3 the bare minimum
| for publication.
|
| 3. Citations only
|
| 2. Citations + full disclosure of data.
|
| 1. Citations + full disclosure of data + replicated
| nick486 wrote:
| this will arguably be worse.
|
| you'll just get replication rings in addition to citation
| rings.
|
| People who cheat in their papers will have no issues cheating
| in their replication studies too. All this does, is give them
| a new tool to attack papers they don't like by faking a
| failed replication.
| dist-epoch wrote:
| In the past the elite would rule the plebs by saying "God says
| so, so you must do this".
|
| Today the elites rule the plebs by saying "Science says so, so
| you must do this".
|
| The author doesn't seem to understand this: the purpose of research
| papers is to be gospel, something to be believed, not
| scrutinized.
| abanana wrote:
| That's a very good point. Some of what's called "science"
| today, in popular media and coming from governments, is
| religion. "We know all, do not question us." It's the common
| problem of headlines along the lines of "scientists say" or
| "The Science says", which should always be a red flag - but the
| majority of people believe it.
| graemep wrote:
| In fact, religious ideas (at least in Europe) were often in
| opposition to the ruling elite (and still are) and even
| inspired rebellion:
| https://en.wikipedia.org/wiki/John_Ball_(priest)
|
| There is a reason scriptures were kept away from the oppressed,
| or only made available to them in a heavily censored form (e.g.
| the Slave Bible).
| Throaway1982 wrote:
| A little more complicated than that.
|
| In the past, the elites said "don't read the religious texts,
| WE will tell you what's in them."
| mike_hearn wrote:
| Scientists say that today too, it's a standard response if
| people outside of academia critique their work. "That person
| is not an expert" - totally normal response, it's taken to be
| a killer rebuttal by journalists and politicians.
| Throaway1982 wrote:
| Not exactly...in the past the Bible was literally not
| allowed to be translated from Latin into local languages.
| Ordinary people were 100% reliant on the elites to tell
| them what was in it.
| mike_hearn wrote:
| Yes it's less harsh now, but it's a matter of degree and
| has improved in recent times. Even today many papers
| aren't open access.
| jltsiren wrote:
| That's a misunderstanding. There were plenty of ancient and
| medieval translations of the Bible, but the Bible itself
| wasn't as central as it is today.
|
| Catholic and Orthodox Christianity do not focus as much on
| the Bible as Protestant Christianity. They are based on the
| tradition, of which the Bible is only a part, while the
| Protestant Reformation elevated the Bible above the
| tradition. (By a tortured analogy, you could say that
| Catholicism and Orthodoxy are common law Christianity, while
| Protestantism is civil law Christianity.)
|
| From a Catholic or Orthodox perspective, there is a living
| tradition from the days of Jesus and the Apostles to present
| day. Some parts of it were written down and became the New
| Testament, but the parts that were left out were equally
| important. You cannot therefore understand the Bible without
| understanding the tradition, because it's only a partial
| account.
| poemxo wrote:
| > They intended to type "not significant" but omitted the word
| "not."
|
| This one is pretty egregious.
| B1FIDO wrote:
| Once, back around 2011 or 2012, I was using Google Translate
| for a speech I was to deliver in church. It was shorter than
| one page printed out.
|
| I only needed the Spanish translation. Now I am proficient in
| spoken and written Spanish, and I can perfectly understand what
| is said, and yet I still ran the English through Google
| Translate and printed it out without really checking through
| it.
|
| I got to the podium and there was a line where I said
| "electricity is in the air" (a metaphor, obviously) and the
| Spanish translation said "electricidad no esta en el aire" and
| I was able to correct that on-the-fly, but I was _pissed_ at
| Translate, and I badmouthed it for months. And sure, it was my
| fault for not proofing and vetting the entire output, but come
| on!
| dev_l1x_be wrote:
| There is a surprisingly large amount of bad science out there.
| And we know it. One of my favourite writeups on the subject: John
| P. A. Ioannidis: Why Most Published Research Findings Are False
|
| https://pmc.ncbi.nlm.nih.gov/articles/PMC1182327/pdf/pmed.00...
| FabHK wrote:
| John Ioannidis is a weird case. His work on the replication
| crisis across many domains was seminal and important. His
| contrarian, even conspiratorial take on COVID-19 not so much.
| raddan wrote:
| Ugh, wow, somehow I missed all this. I guess he joins the
| ranks of the scientists who made important contributions and
| then leveraged that recognition into a platform for unhinged
| diatribes.
| kelipso wrote:
| What's happening here?
|
| "Most Published Research Findings Are False" --> "Most
| Published COVID-19 Research Findings Are False" -> "Uh oh,
| I did a wrongthink, let's backtrack at bit".
|
| Is that it?
| mike_hearn wrote:
| Yes, sort of. Ioannidis published a serosurvey during
| COVID that computed a lower fatality rate than the prior
| estimates. Serosurveys are a better way to compute this
| value because they capture a lot of cases which were so
| mild people didn't know they were infected, or thought it
| wasn't COVID. The public health establishment wanted to
| use an IFR as high as possible e.g. the ridiculous Verity
| et al estimates from Jan 2020 of a 1% IFR were still in
| use more than a year later despite there being almost no
| data in Jan 2020, because high IFR = COVID is more
| important = more power for public health.
|
| If IFR is low then a lot of the assumptions that
| justified lockdowns are invalidated (the models and
| assumptions were wrong anyway for other reasons, but IFR
| is just another). So Ioannidis was a bit of a class
| traitor in that regard and got hammered a lot.
|
| The claim he's a conspiracy theorist isn't supported,
| it's just the usual ad hominem nonsense (not that there's
| anything wrong with pointing out genuine conspiracies
| against the public! That's usually called journalism!).
| Wikipedia gives four citations for this claim and none of
| them show him proposing a conspiracy, just arguing that
| when used properly data showed COVID was less serious
| than others were claiming. One of the citations is
| actually of an article written by Ioannidis himself. So
| Wikipedia is corrupt as per usual. Grokipedia's article
| is significantly less biased and more accurate.
| tripletao wrote:
| He published a serosurvey that claimed to have found a
| signal in a positivity rate that was within the 95% CI of
| the false-positive rate of the test (and thus
| indistinguishable from zero to within the usual p < 5%).
| He wasn't necessarily wrong in all his conclusions, but
| neither were the other researchers that he rightly
| criticized for their own statistical gymnastics earlier.
|
| https://statmodeling.stat.columbia.edu/2020/04/19/fatal-
| flaw...
|
| That said, I'd put both his serosurvey and the conduct he
| criticized in "Most Published Research Findings Are
| False" in a different category from the management
| science paper discussed here. Those seem mostly
| explainable by good-faith wishful thinking and motivated
| reasoning to me, while that paper seems hard to explain
| except as a knowing fraud.
| mike_hearn wrote:
| Yeah I remember reading that article at the time. Agree
| they're in different categories. I think Gelman's
| summary wasn't really supportable. It's far too harsh -
| he's demanding an apology because the data set used for
| measuring test accuracy wasn't large enough to rule out
| the possibility that there were no COVID cases in the
| entire sample, and he doesn't personally think some
| explanations were clear enough. But this argument relies
| heavily on a worst case assumption about the FP rate of
| the test, one which is ruled out by prior evidence (we
| know there were indeed people infected with SARS-CoV-2 in
| that region in that time).
|
| There's the other angle of selective outrage. The case
| for lockdowns was being promoted based on, amongst other
| things, the idea that PCR tests have a false positive
| rate of exactly zero, always, under all conditions. This
| belief is nonsense although I've encountered wet lab
| researchers who believe it - apparently this is how they
| are trained. In one case I argued with the researcher for
| a bit and discovered he didn't know what Ct threshold
| COVID labs were using; after I told him he went white and
| admitted that it was far too high, and that he hadn't
| known they were doing that.
|
| Gelman's demands for an apology seem very different in
| this light. Ioannidis et al not only took test FP rates
| into account in their calculations but directly measured
| them to cross-check the manufacturer's claims. Nearly
| every other COVID paper I read simply assumed FPs don't
| exist at all, or used bizarre circular reasoning like "we
| know this test has an FP rate of zero because it detects
| every case perfectly when we define a case as a positive
| test result". I wrote about it at the time because this
| problem was so prevalent:
|
| https://medium.com/mike-hearn/pseudo-epidemics-part-
| ii-61cb0...
|
| I think Gelman realized after the fact that he was being
| over the top in his assessment because the article has
| been amended since with numerous "P.S." paragraphs which
| walk back some of his own rhetoric. He's not a bad writer
| but in this case I think the overwhelming peer pressure
| inside academia to conform to the public health
| narratives got to even him. If the cost of pointing out
| problems in your field is that every paper you write has
| to be considered perfect by every possible critic from
| that point on, it's just another way to stop people
| flagging problems.
| tripletao wrote:
| Ioannidis corrected for false positives with a point
| estimate rather than the confidence interval. That's
| better than not correcting, but not defensible when
| that's the biggest source of statistical uncertainty in
| the whole calculation. Obviously true zero can be
| excluded by other information (people had already tested
| positive by PCR), but if we want p < 5% in any meaningful
| sense then his serosurvey provided no new information. I
| think it was still an interesting and publishable result,
| but the correct interpretation is something like Figure 1
| from Gelman's
|
| https://sites.stat.columbia.edu/gelman/research/unpublish
| ed/...
|
| I don't think Gelman walked anything back in his P.S.
| paragraphs. The only part I see that could be mistaken
| for that is his statement that "'not statistically
| significant' is not the same thing as 'no effect'", but
| that's trivially obvious to anyone with training in
| statistics. I read that as a clarification for people
| without that background.
|
| We'd already discussed PCR specificity ad nauseam, at
|
| https://news.ycombinator.com/item?id=36714034
|
| These test accuracies mattered a lot while trying to
| forecast the pandemic, but in retrospect one can simply
| look at the excess mortality, no tests required. So it's
| odd to still be arguing about that after all the overrun
| hospitals, morgues, etc.
| mike_hearn wrote:
| By walked back, what I meant is his conclusion starts by
| demanding an apology, saying reading the paper was a
| waste of time and that Ioannidis "screwed up", that he
| didn't "look too carefully", that Stanford has "paid a
| price" for being associated with him, etc.
|
| But then in the P.P.P.S sections he's saying things like
| "I'm not saying that the claims in the above-linked paper
| are wrong." (then he has to repeat that twice because in
| fact that's exactly what it sounds like he's saying), and
| "When I wrote that the authors of the article owe us all
| an apology, I didn't mean they owed us an apology for
| doing the study" but given he wrote extensively about how
| he would _not_ have published the study, I think he did
| mean that.
|
| Also bear in mind there was a followup where Ioannidis's
| team went the extra mile to satisfy people like Gelman
| and:
|
| _They added more tests of known samples. Before, their
| reported specificity was 399/401; now it's 3308/3324. If
| you're willing to treat these as independent samples with
| a common probability, then this is good evidence that the
| specificity is more than 99.2%. I can do the full
| Bayesian analysis to be sure, but, roughly, under the
| assumption of independent sampling, we can now say with
| confidence that the true infection rate was more than
| 0.5%._
|
| After taking into account the revised paper, which raised
| the standard from high to very high, there's not much of
| Gelman's critique left tbh. I would respect this kind of
| critique more if he had mentioned the garbage-tier
| quality of the rest of the literature. Ioannidis'
| standards were still much higher than everyone else's at
| that time.
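|
| (As a rough sanity check, not from the paper itself: the lower
| end of an exact Clopper-Pearson interval for the two specificity
| counts quoted above can be computed in a few lines, assuming
| scipy is available:)
|
|   from scipy.stats import beta
|
|   def spec_lower_bound(correct, total, alpha=0.05):
|       # lower end of the exact two-sided (1 - alpha) interval
|       return beta.ppf(alpha / 2, correct, total - correct + 1)
|
|   print(spec_lower_bound(399, 401))    # ~0.982 (original data)
|   print(spec_lower_bound(3308, 3324))  # ~0.992 (expanded data)
|   # With 399/401 alone, a false-positive rate near 1.8% could
|   # not be excluded; the expanded 3308/3324 data pushes the
|   # bound to roughly 99.2%, in line with the quote above.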
| zahlman wrote:
| > He wasn't necessarily wrong in all his conclusions, but
| neither were the other researchers that he rightly
| criticized for their own statistical gymnastics earlier.
|
| In hindsight, I can't see any plausible argument for an
| IFR actually anywhere near 1%. So how were the other
| researchers "not necessarily wrong"? Perhaps their
| results were justified by the evidence available at the
| time, but that still doesn't validate the conclusion.
| tripletao wrote:
| I mean that in the context of "Most Published Research
| Findings Are False", he criticized work (unrelated to
| COVID, since that didn't exist yet) that used incorrect
| statistical methods even if its final conclusions
| happened to be correct. He was right to do so, just as
| Gelman was right to criticize his serosurvey--it's nice
| when you get the right answer by luck, but that doesn't
| help you or anyone else get the right answer next time.
|
| It's also hard to determine whether that serosurvey (or
| any other study) got the right answer. The IFR is
| typically observed to decrease over the course of a
| pandemic. For example, the IFR for COVID is much lower
| now than in 2020 even among unvaccinated patients, since
| they almost certainly acquired natural immunity in prior
| infections. So high-quality later surveys showing lower
| IFR don't say much about the IFR back in 2020.
| mike_hearn wrote:
| There were people saying right at the time in 2020 that
| the 1% IFR was nonsense and far too high. It wasn't
| something that only became visible in hindsight.
|
| Epidemiology tends to conflate IFR and CFR; that's one of
| the issues Ioannidis was highlighting in his work. IFR
| estimates do decline over time but they decline even in
| the absence of natural immunity buildup, because doctors
| start becoming aware of more mild cases where the patient
| recovered without being detected. That leads to a higher
| number of infections with the same number of fatalities,
| hence lower IFR computed even retroactively, but there's
| no biological change happening. It's just a case of data
| collection limits.
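|
| (A toy illustration with made-up numbers, not from any
| study, of how the computed IFR falls as milder infections
| get counted while the deaths stay fixed:)
|
|   deaths = 100                    # hypothetical figures
|   detected_infections = 10_000
|   total_infections = 50_000       # e.g. found later by serology
|   print(deaths / detected_infections)  # 0.01  -> "1% IFR"
|   print(deaths / total_infections)     # 0.002 -> "0.2% IFR"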
|
| That problem is what motivated the serosurvey. A
| theoretically perfect serosurvey doesn't have such
| issues. So, one would expect it to calculate a lower IFR
| and be a valuable type of study to do well. Part of the
| background of that work and why it was controversial is
| large parts of the public health community didn't
| actually want to know the true IFR because they knew it
| would be much lower than their initial back-of-the-
| envelope calculations based on e.g. news reports from
| China. Surveys like that _should_ have been commissioned
| by governments at scale, with enough data to resolve any
| possible complaint, but weren't because public health
| bodies are just not incentivized that way. Ioannidis
| didn't play ball and the pro lockdown camp gave him a
| public beating. I think he was much closer to reality
| than they were, though. The whole saga spoke to the very
| warped incentives that come into play the moment you put
| the word "public" in front of something.
| doctorpangloss wrote:
| Does the IFR matter? The public thinks lives are
| infinitely valuable. Lives that the public pays attention
| to. 0.1% or 1%, it doesn't really matter, right, it gets
| multiplied by infinity in an ROI calculation. Or whatever
| so called "objective" criteria people try to concoct for
| policymaking. I like Ioannidis's work, and his results
| about serotypes (or whatever) were good, but it was being
| co-opted to make a mostly political policy (some
| Republicans: compulsory public interaction during a
| pandemic and uncharitably, compulsory transmission of a
| disease) look "objective."
|
| I don't think the general idea of co-opting is hard to
| understand, it's quite easy to understand. But there is a
| certain personality type, common among people who earn a
| living by telling Claude what to do, out there with a
| defect to have to "prove" people on the Internet "wrong,"
| and these people are constantly, blithely mobilized to
| further someone's political cause who truly doesn't give
| a fuck about them. Ioannidis is such a personality type,
| and as you can see, a victim.
| zahlman wrote:
| > The public thinks lives are infinitely valuable.
|
| In rhetoric, yes. (At least, except when people are given
| the opportunity to appear virtuous by claiming that they
| would sacrifice themselves for others.)
|
| In actions and revealed preferences, not so much.
|
| It would be rather difficult to be a functional human
| being if one took that principle completely seriously, to
| its logical conclusion.
|
| I can't recall ever hearing any calls for _compulsory
| public interaction_ , only calls to _stop forbidding_
| various forms of public interaction.
| doctorpangloss wrote:
| The SHOW UP act was congressional republicans forcing the
| end of telework for federal workers, without any rational
| basis. Teachers and staff in Texas and Florida, where
| Republicans run things, were faced with showing up in
| person (no remote learning) or quitting.
| Nezteb wrote:
| > So Wikipedia is corrupt as per usual. Grokipedia's
| article is significantly less biased and more accurate.
|
| I hope this was sarcasm.
| throw310822 wrote:
| I would hope the same. But knowing Wikipedia I'm afraid
| it isn't.
| timr wrote:
| Please don't lazily conclude that he's gone crazy because
| it doesn't align with your prior beliefs. His work on Covid
| was just as rigorous as anything else he's done, but it's
| been unfairly villainized by the political left in the USA.
| If you disagree with his conclusions on a topic, you'd do
| well to have better reasoning than "the experts said the
| opposite".
|
| Ioannidis' work during Covid _raised_ him in my esteem.
| It's rare to see someone in academia who is willing to
| set their own reputation on fire in search of truth.
| giardini wrote:
| Yeah, and lucky you! You gain all this insight b/c you
| logged into Hacker News on the very day someone posted the
| truth! What a coincidence!
| sampo wrote:
| He made a famous career, rising to professor and director
| at Stanford University, out of meta-research on the
| quality of other people's research and critiques of the
| methodology of other people's studies. Then during Covid
| he tried to do a bit of original empirical research of
| his own, and his own methods and statistical data
| analysis were even worse than what he had critiqued in
| other people's work.
| Cornbilly wrote:
| This is a great paper but, in my experience, most people in
| tech love this paper because it allows them to say "To hell
| with pursuing reality. Here is MY reality".
| raphman wrote:
| FWIW, Ioannidis never demonstrated that a certain number of
| findings (or most) in a specific discipline are actually false
| - he calculated estimates based on assumptions. While
| Ioannidis's work is important, and his claims may be true
| for many disciplines, a more nuanced view is helpful.
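|
| (If I recall the 2005 paper correctly, the core pre-bias
| calculation is just Bayes' rule on an assumed prior odds R
| that a tested relationship is true, power 1-beta, and
| significance alpha; the numbers below are illustrative
| assumptions, not measurements:)
|
|     # post-study probability that a "significant" finding is true
|     def ppv(R, alpha=0.05, beta=0.2):
|         return (1 - beta) * R / ((1 - beta) * R + alpha)
|
|     print(ppv(R=1.0))    # ~0.94 when half of hypotheses are true
|     print(ppv(R=0.1))    # ~0.62 for longer-shot hypotheses
|     print(ppv(R=0.01))   # ~0.14 when true effects are rare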
|
| For example, here's an article that argues (with data) that
| there is actually little publication bias in medical studies in
| the Cochrane database:
|
| https://replicationindex.com/2020/12/24/ioannidis-is-wrong/
| gus_massa wrote:
| The journal's webpage [1] reports only 109 citations of
| the original article; this counts only "_indexed_"
| journals, which are not guaranteed to be ultra high
| quality but at least filter out the worst "_pay us to
| publish crap_" journals.
|
| ResearchGate [2] says 3936 citations. I'm not sure what
| they are counting, probably all the PDFs uploaded to
| ResearchGate.
|
| I'm not sure how they count 6000 citations, but I guess
| they are counting everything, including quotes by the
| vice president. Probably 6001 after my comment.
|
| Quoted in the article:
|
| >> _1. Journals should disclose comments, complaints,
| corrections, and retraction requests. Universities should report
| research integrity complaints and outcomes._
|
| All comments, complaints, corrections, and retraction
| requests? Unmoderated? Einstein's articles will be full of
| comments explaining why he is wrong, from racists to
| people who can't spell Minkowski to save their lives. In
| /newest there is like one post per week from someone who
| discovers a new physics theory with the help of ChatGPT.
| Sometimes it's the same guy, sometimes it's a new one.
|
| [1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1964011
|
| [2]
| https://www.researchgate.net/publication/279944386_The_Impac...
| Calavar wrote:
| > All comments, complaints, corrections, and retraction
| requests? Unmoderated? Einstein's articles will be full of
| comments explaining why he is wrong, from racists to
| people who can't spell Minkowski to save their lives. In
| /newest there is like one post per week from someone who
| discovers a new physics theory with the help of ChatGPT.
| Sometimes it's the same guy, sometimes it's a new one.
|
| Judging from PubPeer, which allows people to post all of the
| above anonymously and with minimal moderation, this is not an
| issue in practice.
| bee_rider wrote:
| They mentioned a famous work, which will naturally attract
| cranks to comment on it. I'd also expect to get weird
| comments on works with high political relevance.
| gus_massa wrote:
| Link to PubPeer https://pubpeer.com/publications/F9538AA8AC2E
| CC7511800234CC4...
|
| It has 0 comments, for an article that forgot a "not" in
| "the result is *** statistically significant".
| Calavar wrote:
| Isn't a lack of comments the opposite of the problem you
| were previously claiming?
| optionalsquid wrote:
| > I'm not sure how they count 6000 citations, but I guess they
| are counting everything, including quotes by the vicepresident.
| Probably 6001 after my comment.
|
| The number appears to be from Google Scholar, which currently
| reports 6269 citations for the paper
| chmod775 wrote:
| Pretty much all fields have shit papers, but if you ever feel the
| need to develop a superiority complex, take a vacation from your
| STEM field and have a look at what your university offers under
| the "business"-anything label. If anyone in those fields manages
| to produce anything of quality, they're defying the odds and
| should be considered one of the greats along the lines of Euclid,
| Galileo Galilei, or Isaac Newton - because they surely didn't
| have many shoulders to stand on either.
| HPsquared wrote:
| I suppose it's to be expected, the business department is built
| around the art of generating profit from cheap inputs. It's
| business thinking in action!
| lordnacho wrote:
| This is exactly how I felt when studying management as part of
| what was ostensibly an Engineering / Econ / Management degree.
|
| When you added it up, most of the hard parts were Engineering,
| and a bit of Econ. You would really struggle to work through tough
| questions in engineering, spend a lot of time on economic
| theory, and then read the management stuff like you were
| reading a newspaper.
|
| Management you could spot a mile away as being soft. There are
| certainly some interesting ideas, but even as students we could
| smell it was lacking something. It's just a bit too much like a
| History Channel documentary. Entertaining, certainly, but it
| felt like false enlightenment.
| seec wrote:
| Econ is the only social science that isn't completely bogus.
| The replication rate isn't too bad, even though it is still
| worse than STEM of course. Everything else is basically like
| rolling a die or even worse. Special mention to "pedagogy,"
| which manages to be systematically worse than random; in
| other words, they only produce bullshit and not much else.
| Dumblydorr wrote:
| Does it bug anyone else when your article has so many quotes it's
| practically all italics? Change the formatting style so we don't
| have to read pages of italic quotes
| shusaku wrote:
| This drove me nuts, but also the authors should like get to the
| point about what was wrong instead of dancing around it for
| page after page.
| bronlund wrote:
| This likely represents only a fragment of a larger pattern.
| Research contradicting prevailing political narratives faces
| significant professional obstacles, and as this article
| shows, so do critiques of research that doesn't.
| Biologist123 wrote:
| Not enough is understood about the replication crisis in the
| social sciences. Or indeed in the hard sciences. I do wonder
| whether this is something that AI will rectify.
| moolcool wrote:
| How would AI do anything to rectify it?
| Levitz wrote:
| The same way it would correct typos in a text. It's just a
| tool, you tell it to find inconsistencies, see what results
| that yields, and optimize it for verification of claims.
| buckle8017 wrote:
| It will not; AI reads and "believes" the heavily cited but
| incorrect papers.
| gdevenyi wrote:
| Welcome to ideological science, published to support the regime.
| There's a lot more where this came from.
| bluecalm wrote:
| The problem with academia is that it's often more about
| politics and reputation than seeking the truth. There are
| multiple examples of researchers making a career out of
| flawed papers and never retracting or even admitting a
| mistake.
|
| All the talks they were invited to give, all the followers
| they had, all the courses they sold and impact factor they
| have built. They are not going to come forward and say "I
| misinterpreted the data and made far-reaching conclusions
| that are nonsense, sorry for misleading you and thousands
| of others".
|
| The process protects them as well. Someone can publish
| another paper and reach different conclusions. There is
| zero effort to get to the truth, to tell people what is
| and isn't current consensus and what is reasonable to
| believe. Even if it's clear to anyone who digs a bit
| deeper, it will not be communicated to the audience
| academia is supposed to serve. The consensus will just
| quietly shift while the heavily quoted paper is still
| there. The talks are still out there, the false
| information is still propagated, while the author enjoys
| all the benefits and suffers none of the negative
| consequences.
|
| If it functions like that, I don't think it's fair that
| the taxpayer funds it. It's there to serve the population,
| not to exist in its own world and play its own politics
| and power games.
| throwaway150 wrote:
| > I've been in the car with some drunk drivers, some dangerous
| drivers, who could easily have killed people: that's a bad thing
| to do, but I wouldn't say these were bad people.
|
| If this isn't bad people, then who can ever be called bad people?
| The word "bad" loses its meaning if you explain away every bad
| deed by such people as something else. Putting other people's
| lives at risk by deciding to drive when you are drunk sounds like
| very bad people to me.
|
| > They're living in a world in which doing the bad thing-covering
| up error, refusing to admit they don't have the evidence to back
| up their conclusions-is easy, whereas doing the good thing is
| hard.
|
| I don't understand this line of reasoning. So if people do bad
| things because they know they can get away with it, they aren't
| bad people? How does this make sense?
|
| > As researchers they've been trained to never back down, to
| dodge all criticism.
|
| Exactly the opposite is taught. These people are deciding not to
| back down and admit wrongdoing of their own accord, not
| because of some "training".
| brabel wrote:
| When everyone else does it, it's extremely hard to be
| righteous. I did it long ago... everyone did it back then. We
| knew the danger and thought we were different, we thought we
| could drive safely no matter our state. Lots of tragedies
| happen because people disastrously misjudge their own
| abilities, and when alcohol is involved doubly so. They are not
| bad people, they're people who live in a flawed culture where
| alcohol is seen as acceptable and who cannot avoid falling for
| the many human fallacies... in this case caused by the
| Dunning-Kruger effect. If you think people who fall for fallacies are
| bad, then being human is inherently bad in your opinion.
| throwaway150 wrote:
| I don't think being human is inherently bad. But you have to
| draw the line to consider someone as "bad" somewhere, right?
| If you don't draw a line, then nobody in the world is a bad
| person. So my question is where exactly is that line?
|
| You guys are saying that drink driving does not make someone
| a bad person. Ok. Let's say I grant you that. Where do you
| draw the line for someone being a bad person?
|
| I mean with this line of reasoning you can "explain away"
| every bad deed and then nobody is a bad person. So do you
| guys consider someone to be actually a bad person and what
| did they have to do to cross that line where you can't
| explain away their bad deed anymore and you really consider
| them to be bad?
| ordu wrote:
| _> If you don 't draw a line, then nobody in the world is a
| bad person. So my question is where exactly is that line?_
|
| I don't think that that line can be drawn exactly. There
| are many factors to consider and I'm not sure that even
| considering them will allow you to draw this line and not
| come to claims like '99% of people are bad' or '99% of
| people are not bad'.
|
| 'Bad' is not an innate property of a person. 'Bad' is a
| label that exists only in an observer's model of the world.
| A spherical person in vacuum cannot be 'bad', but if we add
| an observer of the person, then they may become bad.
|
| To my mind, labeling or not labeling a person as bad is a
| decision that reflects how the labeler intends to treat
| the one on the receiving side. So, it goes like this:
| first you decide what to do about someone's bad behavior,
| and if you decide to go about it with punishment, then you
| call them 'bad'; if you decide to help them somehow to
| stop their bad behavior, then you don't call them bad.
|
| In other words: when observing some bad behavior, I decide
| what to do about it. If I decide to punish the person, I
| declare them bad. If I decide to help them stop their
| behavior, I declare them not bad, but 'confused' or
| circumstantially forced, or whatever. You see, you cannot
| change the personal traits of others, so if you declare
| that the reason for the bad behavior is a personal trait
| called 'bad', then you cannot do anything about it. If you
| want to change things, you need to find a cause of the bad
| behavior that can be controlled.
| fjsocjdjdcisj wrote:
| As writers often say: there's no such thing as a synonym.
|
| "That's a bad thing to do..."
|
| Maybe should be: "That's a stupid thing to do..."
|
| Or: reckless, irresponsible, selfish, etc.
|
| In other words, maybe it has nothing to do with morals and
| ethics. Bad is kind of a lame word with limited impact.
| Jach wrote:
| It's a broad and simple word but it's also a useful word
| because of its generality. It's nice to have such a word that
| can apply to so many kinds and degrees of actions, and saves
| so many pointless arguments about whether something is more
| narrowly evil, for example. Applied empirically to people, it
| has predictive power and can eliminate surprise because the
| actions of bad people are correlated with bad actions in many
| different ways. A bad person does something very stupid
| today, very irresponsible tomorrow, and will unsurprisingly
| continue to do bad things of all sorts of kinds even if they
| stay clear of some kinds.
| spongebobstoes wrote:
| labelling a person as "bad" is usually black and white
| thinking. it's too reductive, most people are both good and bad
|
| > because they know they can get away with it
|
| the point is that the paved paths lead to bad behavior
|
| well designed systems make it easy to do good
|
| > Exactly the opposite is taught.
|
| "trained" doesn't mean "taught". most things are learned but
| not taught
| striking wrote:
| This link may be blogspam of https://www.linkedin.com/pulse/how-
| institutional-failures-un...
| learingsci wrote:
| I was young once too.
|
| "Your email is too long."
|
| This whole thing is filled with "yeah, no s**" and lmao.
|
| More seriously, pretty sure the whole ESG thing has been debunked
| already, and those who care to know the truth already know it.
|
| A good rule of thumb is to be skeptical of results that make you
| feel good because they "prove" what you want them to.
| SeanLuke wrote:
| I developed and maintain a large and very widely used open source
| agent-based modeling toolkit. It's designed to be very highly
| efficient: that's its calling card. But it's old: I released its
| first version around 2003 and have been updating it ever since.
|
| Recently I was made aware by colleagues of a publication by
| authors of a new agent-based modeling toolkit in a different,
| hipper programming language. They compared their system to
| others, including mine, and made kind of a big checklist of who's
| better in what, and no surprise, theirs came out on top. But
| digging deeper, it quickly became clear that they didn't
| understand how to run my software correctly; and in many other
| places they bent over backwards to cherry-pick, and made a lot of
| bold and completely wrong claims. Correcting the record would
| place their software far below mine.
|
| Mind you, I'm VERY happy to see newer toolkits which are
| better than mine -- I wrote this thing over 20 years ago
| after all, and have since moved on. But several colleagues
| demanded I push for a correction. After a lot of
| back-and-forth, however, it became clear that the
| journal's editor was too embarrassed and didn't want to require a
| retraction or revision. And the authors kept coming up with
| excuses for their errors. So the journal quietly dropped the
| complaint.
|
| I'm afraid that this is very common.
| bargle0 wrote:
| If you're the same Sean Luke I'm thinking of:
|
| I was an undergraduate at the University of Maryland when you
| were a graduate student there in the mid nineties. A lot of
| what you had to say shaped the way I think about computer
| science. Thank you.
| domoregood wrote:
| Comments like this are the best part of HN.
| sizzle wrote:
| Imagine if you did a bootcamp instead
| mnw21cam wrote:
| A while back I wrote a piece of (academic) software. A couple
| of years ago I was asked to review a paper prior to
| publication, and it was about a piece of software that did the
| same-ish thing as mine, where they had benchmarked against a
| set of older software, including mine, and of course they found
| that theirs was the best. However, their testing methodology
| was fundamentally flawed, not least because there is no "true"
| answer that the software's output can be compared to. So they
| had used a different process to produce a "truth", then trained
| their software (machine learning, of course) to produce results
| that match this (very flawed) "truth", and then of course their
| software was the best because it was the one that produced
| results closest to the "truth", whereas the other software
| might have been closer to the _actual_ truth.
|
| I recommended that the journal not publish the paper, and gave
| them a good list of improvements to give to the authors that
| should be made before re-submitting. The journal agreed with
| me, and rejected the paper.
|
| A couple of months later, I saw it had been published unchanged
| in a different journal. It wasn't even a lower-quality journal,
| if I recall the impact factor was actually higher than the
| original one.
|
| I despair of the scientific process.
| timr wrote:
| If it makes you feel any better, the problem you're
| describing is as old as peer review. The authors of a paper
| only have to get accepted once, and they have a lot more
| incentive to do so than you do to reject their work as an
| editor or reviewer.
|
| This is one of the reasons you should _never_ accept a single
| publication at face value. But this isn't a bug -- it's part
| of the algorithm. It's just that most muggles don't know how
| science actually works. Once you read enough papers in an
| area, you have a good sense of what's in the norm of the
| distribution of knowledge, and if some flashy new result
| comes over the transom, you might be _curious_, but you're
| not going to accept it without a lot more evidence.
|
| This situation is different, because it's a case where an
| extremely _popular_ bit of accepted wisdom is both wrong, and
| the system itself appears to be unwilling to acknowledge the
| error.
| FeloniousHam wrote:
| Back when I listened to NPR, I shook my fist at the radio
| every time Shankar Vedantam came on to explain the latest
| scientific paper. Whatever was being celebrated, it was
| surely brand new. Its presentation on Morning Edition gave
| it the imprimatur of "Proofed Science", and I imagined it
| getting repeated at every office lunch and cocktail party.
| I never heard a retraction.
| a123b456c wrote:
| Many people do not know that Impact Factor is gameable.
| Unethical publications have gamed it. Therefore a higher IF
| may or may not indicate higher prominence. Use Scimago
| journal rankings for non-gameable scores.
| PaulHoule wrote:
| _Science_ and _Nature_ are mol-bio journals that publish
| the occasional physics paper with a title you 'd expect on
| the front page of _The Weekly World News._
| BLKNSLVR wrote:
| It seems that the failure of the scientific process is
| 'profit'.
|
| Schools should be using these kinds of examples in order to
| teach critical thinking. Unfortunately the other side of the
| lesson is how easy it is to push an agenda when you've got a
| little bit of private backing.
| trogdor wrote:
| > it became clear that the journal's editor was too embarrassed
|
| How sad. Admitting and correcting a mistake may feel difficult,
| but it makes you credible.
|
| As a reader, I would have much greater trust in a journal that
| solicited criticism and readily published corrections and
| retractions when warranted.
| steveklabnik wrote:
| Unfortunately, academia is subject to the same sorts of
| social things that anything else is. I regularly see people
| still bring up a hoax article sent to a journal in 1996 as a
| reason to dismiss the entire field that one journal publishes
| in.
|
| Personally, I would agree with you. That's how these things
| are supposed to work. In practice, people are still people.
| oawiejrlij wrote:
| When I was a grad student I contacted a journal to tell them my
| PI had falsified their data. The journal never responded. I
| also contacted my university's legal department. They invited
| me in for an hour, said they would talk to me again soon, and
| never spoke to me or responded to my calls again after that.
| This was in a Top-10-in-the-USA CS program. I have close to
| zero trust in academia. This is why we have a "reproducibility
| crisis".
| bflesch wrote:
| Name and shame these frauds. Let me guess, was it Stanford?
| neilv wrote:
| PSA for any grad student in this situation: get a lawyer,
| ASAP, to protect your own career.
|
| Universities care about money and reputation. Individuals at
| universities care about their careers.
|
| With exceptions of some saintly individual faculty members, a
| university is like a big for-profit corporation, only with
| less accountability.
|
| Faculty bring in money, are strongly linked to reputation
| (scandal news articles may even say the university name in
| headlines rather than the person's name), and faculty are
| hard to get rid of.
|
| Students are completely disposable, there will always be
| undamaged replacements standing by, and turnover means that
| soon hardly anyone at the university will even have heard of
| the student or internal scandal.
|
| Unless you're really lucky, the university's position will be
| to suppress the messenger.
|
| But if you go in with a lawyer, the lawyer may help your
| whistleblowing to be taken more seriously, and may also help
| you negotiate a deal to save your career. (As an example of
| such help, you need the university's/department's help in
| switching advisors gracefully, with funding, even as the
| uni/dept is trying to minimize the number of people who know
| about the scandal.)
| lancewiggs wrote:
| I found mistakes in the spreadsheet backing up 2 published
| articles (corporate governance). The (tenured Ivy)
| professor responded by paying me (after I'd graduated) to
| write a comprehensive working paper that relied on a fixed
| spreadsheet and rebutted the articles.
|
| Integrity is hard, but reputations are lifelong.
| lotsofpulp wrote:
| >PSA for any grad student in this situation: get a lawyer,
| ASAP, to protect your own career.
|
| Back in my day, grad students generally couldn't afford
| lawyers.
| sizzle wrote:
| Name and shame?
| cannonpalms wrote:
| Is this the kind of thing that retractions are typically issued
| for, or would it simply be your responsibility to submit a new
| paper correcting the record? I don't know how these things
| work. Thanks.
| orochimaaru wrote:
| I think the publish-or-perish academic culture makes
| academia extremely susceptible to glossing over things like
| this, especially for statistical analysis. Sharing data, algorithms,
| code and methods for scientific publications will help. For
| papers above a certain citation count, which makes them seem
| "significant", I'm hoping google scholar can provide an
| annotation of whether the paper is reproducible and to what
| degree. While it won't avoid situations like what the author is
| talking about, it may force journal editors to take rebuttals
| and revisions more seriously.
|
| From the perspective of the academic community, there will be
| lower incentive to publish incorrect results if data and code
| are shared.
| consp wrote:
| This reminds me of a former colleague who asked me to check
| some code from a study, which I did not know had been
| published, and I told him I hoped he had not written it,
| since it likely produced the wrong results. They claimed
| some process was too complicated to run because it was
| supposedly at least O(2^n) in complexity, so they made a
| major simplification of the problem and took that as the
| truth in their answer. In reality the original algorithm
| was just quadratic, not worse; the data set was easily
| doable in minutes (not days, as claimed), and the full
| result did not support their conclusions one tiny bit.
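|
| To put rough numbers on it (mine, not theirs): a quadratic
| algorithm over a hypothetical data set of 20,000 items is
| on the order of 4e8 basic operations, i.e. seconds, while
| anything genuinely exponential in n is hopeless:
|
|     n = 20_000            # hypothetical data set size
|     ops_per_sec = 1e8     # conservative for a simple inner loop
|     print(n**2 / ops_per_sec, "seconds for O(n^2)")     # ~4 s
|     print(len(str(2**n)), "digits just to write 2**n")  # ~6000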
|
| Our conclusion was to never trust psychology majors with
| computer code. As with any other field of expertise, they
| should have shown their idea and/or code to some CS majors at
| the very least before publishing.
| ameligrana wrote:
| I'll take the occasion to say that I helped make/rewrite a
| comparison between various agent-based modelling software at
| https://github.com/JuliaDynamics/ABMFrameworksComparison. I'm
| not sure it represents all of them fairly enough, but if
| anyone wants to chime in to improve the code of any of the
| frameworks involved, I would be really happy to accept any
| improvements.
| ameligrana wrote:
| SeanLuke, I tried to fix an issue about Mason that I opened
| when I was looking into this a while back, about two years
| ago, and tried to notify people about it
| (https://github.com/JuliaDynamics/ABMFrameworksComparison/iss...)
| via https://github.com/JuliaDynamics/ABMFrameworksComparison/pul....
| Hopefully the methodology is correct; I know very little
| about Java... In general, I don't think there is any very
| good performance comparison in this field at the moment,
| but if someone is interested in trying to make a correct
| one, I will be happy to contribute.
| contrarian1234 wrote:
| Maybe naive, but isn't this what "comments" in journals are for?
|
| They're usually published with a response by the authors.
| achillean wrote:
| I had a similar experience where a competitor released an
| academic paper rife with mistakes and misunderstandings of how
| my software worked. Instead of reaching out and trying to
| understand how their system was different than mine they used
| their incorrect data to draw their conclusions. I became rather
| disillusioned with academic papers as a result of how they were
| able to get away with publishing verifiably wrong data.
| pseudohadamard wrote:
| I reviewed for Management Science years ago, once. Once. They
| had a ridiculously baroque review process with multiple layers
| of reviewing and looping within them where a paper gets re-
| reviewed over and over. I couldn't see any indication that it
| improved the quality over the standard
| three-people-review-then-vote process. The papers I was given
| equations involving a dozen or more terms multiplied out where
| changing any one of them would throw the results in a
| completely different direction. And the weightings in some of
| the equations seemed pretty arbitrary, "we'll put a 0.4 in here
| because it makes the result look about right". It really didn't
| inspire confidence in the quality of the stuff they were
| publishing.
|
| Now I'm not saying that everything in M-S is junk, but the
| small subset I was exposed to was.
| ungreased0675 wrote:
| The paper publishing industry has a tragedy of the commons
| problem. Individual authors benefit from fake or misrepresented
| research. Over time more and more people roll their eyes when
| they hear "a study found..." Over a long period it depreciates
| science and elevates superstition.
|
| For example, look at how people interact with LLMs. Lots of
| superstition (take a deep breath) not much reading about the
| underlying architecture.
| nickpsecurity wrote:
| I think what these papers prove is my newer theory that organized
| science isn't scientific at all. It's mostly unverified claims by
| people rewarded for throwing papers out that look scientific,
| have novelty, and achieve policy goals of specific groups.
| There's also little review with dissent banned in many places.
| We've been calling it scientism since it's like a self-
| reinforcing religion.
|
| We need to throw all of this out by default. From public policy
| to courtrooms, we need to treat it like any other eyewitness
| claim. We shouldn't believe anything unless it has strong
| arguments or data backing it. For science, we need the scientific
| method applied with skeptical review and/or replication. Our
| tools, like statistical methods and programs, must be vetted.
|
| Like with logic, we shouldn't allow them to go beyond what's
| proven in this way. So, only the vetted claims are allowed as
| building blocks (premises) in newly-vetted work. The premises
| must be used how they were used before. If not, they are re-
| checked for the new circumstances. Then, the conclusions are
| stated with their preconditions and limitations so they are
| only applied that way.
|
| I imagine many non-scientists and taxpayers assumed what I
| described is how all these "scientific facts" and
| "consensus" claims were produced. The opposite was true in
| most cases. So, we need to not only redo that work but
| apply the scientific method to the institutions themselves,
| assessing their reliability. If they don't become reliable,
| they lose their funding, and quickly.
|
| (Note: There are groups in many fields doing real research and
| experimental science. We should highlight them as exemplars.
| Maybe let them take the lead in consulting for how to fix these
| problems.)
| esseph wrote:
| I have a Growing Concern with our legal systems.
|
| > We need to throw all of this out by default. From public
| policy to courtrooms, we need to treat it like any other
| eyewitness claim.
|
| If you can't trust eyewitness claims, if you can't trust video
| or photographic or audio evidence, then how does one Find
| Truth? Nobody really seems to have a solid answer to this.
| nickpsecurity wrote:
| It's specific segments of people saying we can't trust
| eyewitness claims. They actually work well enough that we run
| on them from childhood to adulthood. Accepting that truth is
| the first step.
|
| Next, we need to understand why that is, which should be
| trusted, and which can't be. Also, what methods to use in
| what contexts. We need to develop education for people about
| how humanity actually works. We can improve steadily over
| time.
|
| On my end, I've been collecting resources that might be
| helpful. That includes Christ-centered theology with real-
| world application, philosophies of knowledge with guides on
| each one, differences between real vs organized science,
| biological impact on these, dealing with media bias (eg
| AllSides), worldview analyses, critical thinking (logic),
| statistical analyses (esp error spotting), writing correct
| code, and so on.
|
| One day, I might try to put it together into a series that
| equips people to navigate all of this stuff. For right now,
| I'm using it as a refresher to improve my own abilities ahead
| of entering the Data Science field.
| esseph wrote:
| > It's specific segments of people saying we can't trust
| eyewitness claims.
|
| Scientists who have studied this over long periods of
| time and across diverse population groups?
|
| I've done this firsthand - remembered an event a particular
| way only to see video (in the old days, before easy video
| editing) and find out it... didn't quite happen as I
| remembered.
|
| That's because human beings aren't video recorders. We're
| encoding emotions into sensor data, and get blinded by
| things like Weapon Focus and Selective Attention.
| nickpsecurity wrote:
| Ok, let me give you examples.
|
| Much of what many learned about life came from their
| parents. That included lots of foundational knowledge
| that was either true or worked well enough.
|
| You learned a ton in school from textbooks that you
| didn't personally verify.
|
| You learned lots from media, online experts, etc. Much of
| which you couldn't verify.
|
| In each case, they are making _eyewitness claims_ that
| are a mix of first-hand and _hearsay_. Many books or
| journals report others' claims. So, even most education
| involves tons of hearsay claims.
|
| So, how do scientists raised, educated, and informed by
| eyewitness claims write reports saying eyewitness
| testimony isn't reliable? How do scientists educated by
| tons of hearsay not believe eyewitness testimony is
| trustworthy?
|
| Or did they personally do the scientific method on every
| claim, technique, machine, circuit, etc they ever
| considered using? And make all of it from first
| principles and raw materials? Did they never believe
| another person's claims?
|
| Also, "scientists that have studied this over long
| periods of times and diverse population groups" is itself
| an eyewitness claim and hearsay if you want us to take
| your word for it. If we look up the studies, we're
| believing _their_ eyewitness claims on faith while we 've
| validated your claim that theirs exist.
|
| It's clear most people have no idea how much they act on
| faith in others' word, even those scientists who claim to
| refute the value of it.
| glitchc wrote:
| This is simply a case of appeal to authority. No reviewer or
| editor would reject a paper from either HBS or LBS, let alone a
| joint paper between the two. Doing so would be akin to career
| suicide.
|
| And therein lies the uncomfortable truth: Collaborative
| opportunities take priority over veracity in publications every
| time.
| cloud-oak wrote:
| That's why double-blind review should be the norm. It's wild to
| me that single-blind is still the norm in most disciplines.
| drob518 wrote:
| We've developed a "leaning tower of science." Someday, it's going
| to fall.
| efitz wrote:
| The paper touches on a point ("sustainability") that is a sacred
| cow for many people.
|
| Even if you support sustainability, criticizing the paper will be
| treated as heresy by many.
|
| Despite our idealistic vision of Science(tm), it is a human
| process done by humans with human motivations and human
| weaknesses.
|
| From Galileo to today, we have repeatedly seen the enthusiastic
| willingness by majorities of scientists to crucify heretics (or
| sit by in silence) and to set aside scientific thinking and
| scientific process when it clashes against belief or orthodoxy or
| when it makes the difference whether you get tenure or
| publication.
| petesergeant wrote:
| I did a Master's at Cambridge Judge Business School, and my
| takeaway is that "Management Science" is to Science what
| "Software Engineering" is to Engineering.
| gyulai wrote:
| I expect that over the next 10 years, one of two things is
| going to happen: Either Software Engineering is going to
| reinvent itself as an actual engineering discipline, or Civil
| Engineering is going to cease to be one and we'll be driving
| over vibe-constructed bridges (and plunging to our certain
| deaths, in case the sarcasm wasn't clear).
| tokai wrote:
| Google Scholar citation numbers are unreliable and cannot be
| used in bibliometric evaluation. They are auto-generated and
| are not limited to the journal literature. This critique is
| completely unserious. At the same time, bad papers also tend
| to get more citations on average than middling papers,
| because they are cited in critiques. This effect should be
| even larger in a dataset that includes more than the
| citations from journal papers. This blog post will in time
| also add to the Google Scholar citation count.
|
| Citation studies are problematic, and their use can and
| should be criticized. But this here is just hot air built on
| a fundamental misunderstanding of how to measure and
| interpret citation data.
| jackconsidine wrote:
| Anyone know the VP who referenced the paper? Doesn't seem to be
| mentioned. My best guess is Gore.
|
| Living VPs:
|
| Joe Biden -- VP 2009-2017 (became President in 2021, so
| later citations would likely call him a former president
| rather than a former VP; not likely the one referenced)
|
| Dan Quayle -- VP 1989-1993, alive through 2026
|
| Al Gore -- VP 1993-2001, alive through 2026
|
| Mike Pence -- VP 2017-2021, alive through 2026
|
| Kamala Harris -- VP 2021-2025, alive through 2026
|
| J.D. Vance -- VP 2025-present (as of 2026)
| kittikitti wrote:
| The gatekeepers were able to convince the American public of such
| heinous things like circumcision at birth based on "science" and
| now they're having to deal with the corruption. People like RFK
| Jr. are able to be put into top positions because what they're
| spewing has no less scientific merit than what's accepted and
| recommended. The state of the scientific literature is
| incredibly sad, and more a product of politics and money
| than of scientific evidence.
| cloche wrote:
| > Because published articles frequently omit key details
|
| This is a frustrating aspect of studies. You have to contact the
| authors for full datasets. I can see why it would not be possible
| to publish them in the past due to limited space in printed
| publications. In today's world, though, every paper should
| be required to have its full dataset published to a website
| so that others can access it to verify and replicate the work.
| lbcadden3 wrote:
| >There's a horrible sort of comfort in thinking that whatever
| you've published is already written and can't be changed.
| Sometimes this is viewed as a forward-looking stance, but science
| that can't be fixed isn't past science; it's dead science.
|
| Actually it's not science at all.
| mwkaufma wrote:
| Conservatives very concerned about academic reproducibility*
| (*except when the paper helps their agenda)
| SegfaultSeagull wrote:
| For all the outrage at Trump, RFK, and their Know-Nothing posture
| toward the world, we should recognize that the ground for their
| rise was fertilized by manure produced in academia.
| recursivecaveat wrote:
| I don't understand why, in the internet age, it has been
| acceptable not to upload a tarball of your data with the
| paper. Maybe the Asset4 database is only available under
| license and they can't publish too much of it. However, the
| key concern with the method is a pairwise matching of
| companies, which is the paper authors' own invention and
| should be entirely fine to publish. The number of stories
| I've heard of people forensically investigating PDF plots
| to recover key data from a paper is absurd.
|
| Of course doing so is not free and it takes time. A paper
| represents at least months of work in data collection, analysis,
| writing, and editing, though. A tarball seems like a
| relatively small amount of effort to provide a huge
| increase in confidence in the result.
| bradley13 wrote:
| This. I did my dissertation in the early '90s, so very early
| days of the internet. All of my data and code was online.
|
| IMHO this should be expected for any, literally any
| publication. If you have secrets, or proprietary information,
| fine - but then, you don't get to publish.
| platz wrote:
| What exactly is 'sustainability'?
| bradley13 wrote:
| _" We should distinguish the person from the deed"_
|
| No, we shouldn't. Research fraud is committed by people, who must
| be held accountable. In this specific case, if the issues had
| truly been accidental, the authors would have responded and
| revised their paper. They did not, ergo their false claims were
| likely deliberate.
|
| That the school and the journal show no interest - equally bad,
| and deserving of public shaming.
|
| Of course, this is also a consequence of "publish or perish."
| slow_typist wrote:
| The problem is, in part, how confirmatory statistics work
| and how journals work. Most journals wouldn't publish "we
| really tried very hard to get significance that x causes y
| but found nothing. Probably, and contrary to our prior
| beliefs, y is completely independent of x."
|
| Even if nobody cheated or massaged data, we would still
| have studies that do not replicate on new data. A 95%
| confidence level (alpha = 0.05) means that one in twenty
| tests of a true null effect finds "significance" that is
| only noise. The reporting of failed hypothesis tests would
| really help to find these cases.
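|
| A quick simulation makes the point (a sketch of mine, not
| from the article): generate pure noise for two groups, run
| a t-test at alpha = 0.05 many times, and about one run in
| twenty comes out "significant":
|
|     import numpy as np
|     from scipy.stats import ttest_ind
|
|     rng = np.random.default_rng(0)
|     alpha, runs, hits = 0.05, 10_000, 0
|     for _ in range(runs):
|         a = rng.normal(size=30)  # "treatment", no real effect
|         b = rng.normal(size=30)  # "control", same distribution
|         hits += ttest_ind(a, b).pvalue < alpha
|     print(hits / runs)           # ~0.05: one in twenty is noise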
|
| So pre-registration helps, and it would also help to establish
| the standard that everything needed to replicate must be
| published, if not in the article itself, then in an accompanying
| repository.
|
| But in the brutal fight for promotion and resources, of course
| labs won't share all their tricks and process knowledge. Same
| problem if there is an interest in using the results
| commercially. E.g. in EE the method is often described in
| general terms, but crucial parts of the code or circuit
| design are held back.
| niccl wrote:
| obligatory xkcd https://xkcd.com/882/
| slow_typist wrote:
| Haha yeah pretty much nails it.
| burgen wrote:
| The discussion has mostly revolved around the scientific system
| (it definitely has plenty of problems), but how about ethics?
|
| The paper in question shows - credibly or not - that companies
| focusing on sustainability perform better in a variety of
| metrics, including generating revenue. In other words: Not only
| can you have companies that do less harm, but these ethically
| superior companies also make more money. You can have your cake
| and eat it too. It likely has given many people a way to align
| their moral compass with their need to gain status and perform
| well within our system.
|
| Even if the paper is a complete fabrication, I'm convinced
| it has made the world a better place. I can't help but wonder if
| Gelman and King paused to consider the possible repercussions of
| their actions, and of what kinds of motivations they might have
| had. The linked post briefly dips into ethics, benevolently
| proclaiming that the original authors of the paper are not
| necessarily bad people.
|
| Which feels ironic, as it seems to me that Gelman and King
| are the ones doing wrong here.
| tejtm wrote:
| It has been a viable strategy at least since Taylor 1911
| j45 wrote:
| Creators of studies reflect their own human flaws and
| shortcomings.
|
| This can directly undermine the scientific process.
|
| There has to be a better path forward.
| wtcactus wrote:
| There's no such thing as management "science".
|
| Social "sciences" are completely bastardizing the word science.
| Then they come complaining that "society doesn't trust science
| anymore". They, the social "scientists", are the ones
| responsible for removing all meaning from the word science.
| gyulai wrote:
| I came to this discussion, specifically looking for the term
| "management 'science'", with the quotation marks where they
| belong, and I found it right here, so thanks for that :-) ...I
| don't think I'd be capable of letting the term roll off my
| tongue without doing the airquotes either.
| invig wrote:
| We tried to scale "University". Turns out it doesn't scale well.
| thayne wrote:
| > and that replicators should tread very lightly
|
| That is not at all how science is supposed to work.
|
| If a result can't be replicated, it is useless. Replicators
| should not be told to "tread lightly", they should be encouraged.
| And replication papers should be published, regardless of the
| result (assuming they are good quality).
| wisty wrote:
| So 6000 people cited a paper, and either didn't properly read it
| (IMO that's academic dishonesty) or weren't able to determine
| that the methodology was infeasible.
|
| No real surprise. I'm pretty sure most academics spend little
| time critically reading sources and just scan to see if it
| broadly supports their point (like an undergrad would). Or just
| cite a source if another paper says it supports a point.
|
| I've heard the most brutal thing an examiner can do in a viva
| voce is to ask what a cited paper is about, lol.
| globalnode wrote:
| non researcher here -- how does one go about checking if a paper
| or article has been reproduced? just google it?
| f30e3dfed1c9 wrote:
| Without looking, first thought was "Are the authors from Harvard
| Business School?" Sure enough, two out of three are. Something's
| gone really wrong at that place, they just keep churning out
| horseshit.
| spwa4 wrote:
| The problem with psychology and the social sciences in general is
| that they're not neutral. The _original_ justification for having
| management at all is something called "scientific management".
|
| The argument starts from a company in the original sense: a
| group of people working together, each paid per item they
| produce, where "new employees" (in quotes because they're
| not paid at that point) essentially train with more
| experienced employees until they start producing more. The
| idea of management, introduced by Frederick Winslow Taylor,
| is to have people specifically dedicated to studying and
| improving the workflow of others, to become experts at
| making the workflow better; this is now known as
| "Taylorism". That justified middle management, in that this
| optimization would increase a company's productivity, and
| it led to people doing what middle management still does.
| Before Taylorism, outside of the company owners, employees
| competed for wages in a competition like in "Monsters,
| Inc", with no one reporting to anyone.
|
| There's a slight issue: Frederick Winslow Taylor was a con man.
| The experiment, introducing management, in reality lowered
| productivity by about 20%. It did not raise it. He kept
| "scientific records", measurements of productivity in a
| notebook, and that notebook was presented to the owners of
| the railway. It turns out he faked the numbers, both
| directly, by simply writing down fake figures, and by
| paying the company (as in, the individual workers) more out
| of faked accounts, resulting in a temporary boost in
| productivity. Oops.
|
| Repeated experiments showed the same. Having everyone in a
| company directly responsible for the functioning of the company
| as a whole, by being held responsible, financially, for their own
| work, works ... better than having management layers, according
| to the experiments done on the subject. You will find the social
| sciences defend an entirely different view. Oops.
|
| Has psychology, or the social sciences more broadly
| (specifically organizational psychology), changed its view
| of either Taylor or Scientific Management? No. They used it
| as one of the bases of the rest of psychology, and of the
| rest of the social sciences, as if it were good science.
|
| This was not the first, not the last, and certainly not the most
| serious problem in psychology or social sciences.
|
| Some other famous problematic science. The Stanford prison
| experiment was faked [1]. Oops. No, that is _not_ why people
| attack each other; it turns out it works far more directly. The
| Freudian view of psychology is not only thoroughly discredited,
| it is now strongly suspected that Sigmund Freud deliberately
| created this view to allow raping of women [2] (Freud is the
| person that created modern psychoanalysis, and he earned the
| equivalent of billions of dollars for getting rapists off
| the hook in court; he even had a few "successes", cases
| where rape victims got imprisoned, by order of a court that
| knew they were rape victims, in cases where the rapist was
| on trial. He got paid the
| very big Guilders for that). With that, of course, comes the
| reality that Freud was not an innocent scientist that came up
| with a wrong conclusion but a con man who caused incredible
| suffering for thousands of women, and hundreds of men (usually
| girls and boys that got raped). Autism is not an explanation of a
| condition of the human mind but, in the words of the
| creator/discoverer of Autism, Hans Asperger "serves to purify the
| genes of the noble Aryan race" [3] (note: yes, Autism's purpose
| was to purify genes by executing children). We know that being
| the victim of a crime raises the odds of _the victim_ later
| imitating the perpetrator and committing the same crime, and to
| make matters worse this is a strong effect in unrelated adults,
| but it's stronger in adolescents, and also again stronger within
| families compared to between unrelated people. Note: this is not
| revenge, it's imitation. _Victims_ commit the crime they were
| victimized by against other people, NOT the perpetrator (although
| if you look at it game theoretically it explains why human
| societies choose revenge punishments). Oops.
|
| Psychology and social sciences are not positivist sciences. The
| purpose is not to explain the human mind, but to justify
| predetermined outcomes. Especially the "discovery of Autism"
| illustrates this perfectly. Autism does not explain the behavior
| of some children, not back then, not now, and back then it
| justified the locking up and even executions of undesirable
| children, something the political climate between the two world
| wars really wanted to happen. Yes, you see the reverse now, but
| only because the political climate has changed again, not because
| the attitude of those sciences has changed, and you should not be
| surprised that if Trump stays and, say, Le Pen gets into power in
| France, new "psychological discoveries" will ... suddenly turn
| out to justify what ICE is doing, and no doubt, worse. In fact
| I'd argue that's exactly what's starting to happen [4].
|
| [1] https://pubmed.ncbi.nlm.nih.gov/31380664/
|
| [2] https://en.wikipedia.org/wiki/Freudian_Coverup (although
| frankly, look up what Freud's theory _actually says_ and do you
| really need someone to tell you that is bullshit? Freud claims
| the _only_ source of motivation for men and boys is to kill their
| father and rape their mother. And the _only_ source of motivation
| for women and girls is to seduce men to rape them, preferably
| their own family. The point of Freud's theories, according to
| Freud's colleagues, is that it played _really_ well in court: if
| a father raped his daughters or granddaughters or nieces or ...
| then he could not help it, it is human nature, and those
| daughters and nieces (and occasionally sons and nephews) really
| were behind getting him to do it. Hence, how does it make
| sense to punish him? Oh and Freud also offered services to treat
| /imprison those children/girls/women, of course at very high
| prices)
|
| [3] https://www.nature.com/articles/d41586-018-05112-1
|
| [4] https://www.nytimes.com/2026/01/24/us/children-genetics-
| race...
___________________________________________________________________
(page generated 2026-01-26 15:01 UTC)