[HN Gopher] Collusion rings threaten the integrity of computer s...
___________________________________________________________________
Collusion rings threaten the integrity of computer science research
Author : djoldman
Score : 127 points
Date : 2021-05-26 20:49 UTC (2 hours ago)
(HTM) web link (m-cacm.acm.org)
(TXT) w3m dump (m-cacm.acm.org)
| temp329192 wrote:
| I long for the long-lost time when research was less of an
| industry. If you read 19th- and early 20th-century research, it
| seems to come from an alien world. You had to be curious about
| your subject, and smart, back then.
| lurker619 wrote:
| Why would exposing the names of the reviewers/conferences do more
| harm than good? We want to discourage such behavior, don't we?
| sfink wrote:
| Because of the strong tendency to scapegoat the specific people
| named, drive them out of academia, and then celebrate victory
| while things continue in exactly the same way. (Ok, not
| _exactly_ -- it improves for a while, people get sneakier, and
| _then_ it continues in exactly the same way.)
|
| Chipping off the tip of an iceberg isn't a good long-term
| strategy.
| buitreVirtual wrote:
| Collusion is one of two major problems with modern research in
| CS. The other one, perhaps even bigger, is its lack of substance
| and relevance. Most research is meant to fill out resumes with
| long lists of papers at impressive-sounding venues and to bump
| institutional numbers, in order to get more money and
| promotions. Never mind how naive, irrelevant, inapplicable or
| unrepresentative that research is.
|
| It's, of course, a very hard problem to solve. It takes a lot of
| effort to evaluate the real impact of research.
| ncmncm wrote:
| It is probably too late to save Computer Science Research.
| Efforts in that direction are likely wasted. More important is to
| keep the contagion from spreading to allied fields. Grants
| probably should stop immediately. People doing serious work will
| need to move to another area where they might be able to
| contribute. People evaluating work in these other areas will need
| to guard against allowing theirs to be overtaken by the same
| downward spiral.
|
| In perhaps a generation, a similar specialty might be
| bootstrapped and begin to take on the problems that had been of
| interest in the old one. What to call the new specialty will be
| its smallest problem.
| wmf wrote:
| Why shouldn't we assume that similar collusion exists in every
| scientific field?
| wolverine876 wrote:
| > It is probably too late to save Computer Science Research.
|
| Why do you say that? Do you have any experience in the field?
| Twisol wrote:
| Without any personal context, this stance appears very "baby
| meets bathwater". In particular, I'm not sure I see anything
| about this particular situation that renders it purely a
| problem of CS research. What makes other fields (presumably)
| immune or less predisposed to these kinds of issues?
|
| Your position also paints CS in very broad strokes; in my
| experience, the only commonality between some subfields of
| computer science is that they use computers. Graphics, hardware
| architecture, programming languages, networks, and so on, are
| all essentially loosely coupled with their own organizing
| communities and directions. Some of these subfields are more
| closely tied to mathematics or electrical engineering than
| strictly to other parts of computer science. If there is an
| incurable "contagion" that afflicts all of these, I must admit I
| find it hard to believe that this contagion would not prove (if
| it has not already proven) effective beyond the artificial
| confines of the term "computer science".
| sfink wrote:
| I'm not in academia, but in the grand tradition of "why don't you
| just..." solutions crossed with "technical solutions to people
| problems":
|
| Would it help at all if, rather than each participant reviewing
| 3 papers, each reviewed 2 papers and validated the reviews of 3
| more?
|
| This is computer science, after all, with things like the
| complexity class NP, whose defining characteristic is that it's
| easier to check a solution than to generate one.
|
| I'm imagining having some standard that reviews are held to in
| order to make them validatable. When validating a review, you are
| just confirming that the issues brought up are reasonable. Same
| for the compliments.
|
| Sure, it's not perfect because the validators wouldn't dive in as
| deep or have as much context as the reviewers, but sitting here
| in my obsidian tower of industry, it seems like it would at least
| make collusion attacks more difficult. Hopefully without
| increasing the already heavy load on reviewers.
|
| (It very much seems like an incomplete solution -- we only have
| to look at politics and regulatory capture to see how far wrong
| things can go, in ways immune to straightforward interventions.
| Really, you need to tear down as many of the obstacles to a
| culture of trust as you can. Taping over the holes in a leaking
| bucket doesn't work for long.)
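A rough sketch of that idea in Python (purely illustrative: the data
shapes, conflict rules, and random assignment here are invented for
the example, not taken from any real conference system):

```python
import random

def assign_reviews_and_validations(papers, people, n_review=2,
                                   n_validate=3, seed=0):
    """Give each person papers to review, then other people's reviews
    to validate, never assigning anyone to their own paper or review."""
    rng = random.Random(seed)
    reviews = []  # (reviewer, paper_id) pairs
    for person in people:
        eligible = [p for p in papers if p["author"] != person]
        for paper in rng.sample(eligible, min(n_review, len(eligible))):
            reviews.append((person, paper["id"]))
    authors = {p["id"]: p["author"] for p in papers}
    validations = {}
    for person in people:
        eligible = [r for r in reviews
                    if r[0] != person and authors[r[1]] != person]
        validations[person] = rng.sample(
            eligible, min(n_validate, len(eligible)))
    return reviews, validations

papers = [{"id": "p1", "author": "ann"}, {"id": "p2", "author": "bob"},
          {"id": "p3", "author": "cat"}, {"id": "p4", "author": "dan"}]
reviews, validations = assign_reviews_and_validations(
    papers, ["ann", "bob", "cat", "dan"])
print(validations["ann"])  # reviews ann double-checks; none are her own
```

The specific assignment rule matters less than the extra layer: a
second, independently chosen pair of eyes makes a quid-pro-quo review
harder to slip through unnoticed.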
| cg563 wrote:
| We actually have a paper at ICML this year on defending against
| exactly these collusion rings: https://arxiv.org/abs/2102.06020
|
| One critical vulnerability in the current reviewing pipeline is
| that the reviewer assignment algorithm places too much weight on
| the bids. Imagine if you bid on only your friend's paper. The
| assignment system, if it assigns you to any paper at all, is
| highly likely to assign you to your friend's paper. If you
| register duplicate accounts or if there are enough colluders, the
| chance of being assigned to that paper is extremely high.
|
| Fortunately, this is also easy to detect because your bid should
| reflect your expertise, and in this case it doesn't. What we
| showed in our paper is that you can reliably remove these
| abnormal bids. It's not a perfect solution, but it helps.
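A toy illustration of the general idea (this is not the linked
paper's actual algorithm; the TF-IDF similarity, threshold, and
example texts are all assumptions made up for the sketch):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def flag_abnormal_bids(reviewer_profile, bid_abstracts, threshold=0.05):
    """Flag bids whose textual similarity to the reviewer's own
    publications falls below `threshold` (indices into bid_abstracts)."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform([reviewer_profile] + bid_abstracts)
    sims = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
    return [i for i, s in enumerate(sims) if s < threshold]

# The reviewer works on reinforcement learning but also bids on an
# unrelated cryptography paper; only the unrelated bid is flagged.
profile = "deep reinforcement learning policy gradient exploration"
bids = ["off-policy reinforcement learning with replay buffers",
        "lattice-based post-quantum key exchange"]
print(flag_abnormal_bids(profile, bids))  # -> [1]
```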
| tomlockwood wrote:
| It is very interesting to me that people can look at this, the
| replication crisis, and Sokal Squared, and not see that there is
| a fundamental flaw in the current model of academia as a for-
| profit publish-or-perish warzone, and instead declare some
| disciplines are somehow more bloody.
| version_five wrote:
| The bigger issue here, and what is threatening the integrity of
| the research, is blind reliance on conference publication count
| as a proxy for research quality.
|
| Maybe it's time to move on from some of these conferences, and
| focus on interactions that maximize sharing research findings. I
| know that is unrealistic, but like every other metric, conference
| acceptance ceases to have value once the metric itself is what
| people care about.
| geraltofrivia wrote:
| The playing field where one can freely share research findings
| is far more uneven, more tilted in favor of prestigious labs
| and individuals, than the current field of conferences and the
| blind review process.
|
| Findings and papers from well-known individuals (read: Twitter
| accounts) do get far more attention, and more citations. Of
| course, one can argue that, broadly, well-known labs and
| individuals are well known because of their tendency to do
| great work and write better papers. And that's true. However,
| the above still holds, in my experience as a PhD student in ML.
| Anecdotally, I have seen instances where a less interesting
| paper from a renowned lab got more attention, and eventually
| more citations, than a better paper on the same topic accepted
| at the same venue by a less renowned lab.
| psychomugs wrote:
| I often go back to this talk from Michael Stonebraker about, in
| his terms, the "diarrhea of papers" [1]. It's difficult to
| justify spending time on anything that doesn't lead to a near-
| term publication or another line on your CV.
|
| [1] https://youtu.be/DJFKl_5JTnA?t=853
| pvelagal wrote:
| Why put any artificial limit on the number of papers accepted?
| If it is quality research, then it should be accepted. A simple
| majority system should be good enough.
| schleiss wrote:
| It's sad to hear that. Other research areas like medicine,
| pharmacy or history probably have the same problem, but nobody
| is looking for it yet. My guess is that the more money there is
| to be made or raised, the higher the chance of nefarious
| practices.
| Jimmc414 wrote:
| Is it possible that computer science research is making collusion
| rings more evident?
| hprotagonist wrote:
| academic cliques are so old that they've spawned their own
| subfields of meta-science just to analyze them.
| darig wrote:
| Collusion Rings Threaten the Integrity of Humanity
| cryptica wrote:
| I've discovered that the software industry is made up of some of
| the most dishonest, insecure, power-hungry people on the face of
| this godless earth.
|
| Only a tiny percentage of developers seem to actually enjoy
| coding - most of them have no interest in it and see it only as
| a mechanism for acquiring money, power and influence.
|
| Disinformation is rampant because contrarians are punished and
| conformists are rewarded. The rot is deep in the guts of the
| industry. Those who have the most power and the loudest voices
| hoard all the attention for themselves and are unwilling to give
| exposure to any alternative views - their deep insecurity
| drives them to surround themselves only with yes-people and to
| block out all critics, avoiding disagreement at all costs.
|
| Powerful people in this industry need to put aside their
| insecurity by embracing disagreement, allow themselves to change
| their minds, and give a voice to contrarian views and ideas; even
| when it risks hurting their interests.
|
| Powerful people should seek the truth and try to promote the
| narratives which make the most sense; not the narratives which
| happen to benefit them the most. Everyone is free to move their
| money to match the changing narratives, so why do powerful people
| invest so much effort in controlling the narrative? To protect
| their friends? To protect the system? That is immoral -
| Capitalism was not designed for this kind of arbitrary altruism.
| For every person you 'help', you hurt 100 others.
| ljhsiung wrote:
| The article mentions a student taking his own life. I remember
| this was huge news a couple years ago at ISCA '19 and something
| that really shook my decision to pursue academia.
|
| After that event, SIGARCH launched an investigation. A couple of
| years later, these were the results of that investigation:
|
| https://www.sigarch.org/other-announcements/isca-19-joint-in...
|
| Worth noting is that the investigation initially found __no__
| misconduct. Imagine that: a student kills himself, and you
| conclude it was the victim's fault, and not the environment
| that drove him to it.
|
| It was only after this post [1] emerged that they relaunched
| the investigation.
|
| [1] https://huixiangvoice.medium.com/evidence-put-doubts-on-
| the-...
| jacquesm wrote:
| That is very good work from the ACM. They don't whitewash
| anything; in fact, they even keep the option open to re-assess
| their position should further details come to light. Impressed.
| tchalla wrote:
| An interesting question: did ACM need the follow-up Medium
| article in order to update its position? I don't know the
| details of the case. However, merely updating positions when
| situations are black and white is one of the easiest scenarios.
| I wouldn't be impressed that black-and-white situations are
| assessed as black and white. That doesn't mean one shouldn't do
| so; I'd expect those scenarios to be a bare minimum requirement.
| jacquesm wrote:
| Yes, _but_ you can't really fault them for that, because
| without any evidence to go on it would have been a fishing
| expedition. So compared to some of the other investigations
| that I'm familiar with, I think they did it by the book.
| anon_tor_12345 wrote:
| Two years ago a prof at my old school had a student commit
| suicide over being pressured to go along with this. You can
| Google your way to figuring out where and who. Last month the
| prof finally resigned. No other repercussions. People think
| academia is some priesthood; it's not. It's a business like any
| other, with a law-of-averages number of bad actors.
| the_benno wrote:
| That incident is the same one referred to in the CACM article
| and elsewhere throughout this thread.
| anon_tor_12345 wrote:
| Yeah, coming back to this post I saw the other comments. Guilty
| as charged of not reading the article.
| djoldman wrote:
| FYI the author is one of the founding professors of the main ML
| course at Georgia Tech's online CS masters program:
|
| https://omscs.gatech.edu/cs-7641-machine-learning
| version_five wrote:
| I took an online course instructed by those guys and it was
| great. Worth watching the videos just for the banter between
| them.
| timy2shoes wrote:
| Yeah, they had one of the best remote talks I've ever seen at
| last year's NeurIPS:
| https://nips.cc/virtual/2020/public/invited_16166.html
| turminal wrote:
| We need a blockchain for peer reviews!
| wolverine876 wrote:
| There's a 'hot' subculture in our society that says 'if you
| ain't cheating, you ain't trying', that refers to lies as
| 'hustle', that rewards and embraces deception as just
| aggressiveness and boldness, as a norm for life and business and
| even a celebration of human nature - as if the worst elements of
| human nature define us any more than the best, as if we don't
| have that choice (at least, that's how I am trying to articulate
| it).
|
| It has predictable results. Where are we going to get reliable
| research, and anything else, if we can't trust each other?
| Trust is an incredible business tool - highly efficient when
| you can take risks, be vulnerable, and not have to worry about
| the other person. Trust is an incredible tool for personal
| relationships, for the same reasons, and because if you can't
| trust them and can't be vulnerable, you have a very limited
| relationship.
| woodruffw wrote:
| > There's a 'hot' subculture in our society that says 'if you
| ain't cheating, you ain't trying', that refers to lies as
| 'hustle', that rewards and embraces deception as just
| aggressiveness and boldness, as a norm for life and business and
| even a celebration of human nature - as if the worst elements
| of human nature define us any more than the best, as if we
| don't have that choice (at least, that's how I am trying to
| articulate it).
|
| I'm not sure that you intended it this way, but this reads as
| very oblique (i.e., "wink and nudge"). _Which_ subculture are
| you referring to, and what particular relationship do you think
| they have to research in Computer Science?
| wolverine876 wrote:
| > I'm not sure that you intended it this way, but this reads
| as very oblique (i.e., "wink and nudge"). _Which_ subculture
| are you referring to, and what particular relationship do you
| think they have to research in Computer Science?
|
| I am referring to no particular subculture. Lots of people
| around me embrace it, including from all over the political
| spectrum (if that's what you are thinking).
|
| I think the broader society sets the norms for computer
| science, as with everything else. For example, when star
| athletes like Barry Bonds, or entire teams like the Houston
| Astros, or much of college sports, cheat with few
| repercussions (whereas in the past players were banned, school
| sports programs were basically shut down, etc.), that affects
| computer science research.
| SQueeeeeL wrote:
| Yeah, honestly, go to any conference in the last decade and
| you'll see some people who are just... out of place in an
| academic setting. Like, when they were 19 they listened to a
| podcast that claimed PhDs made XX% more money, so they decided
| to do that. These people don't care about research, don't care
| to understand research; they just want to publish, get their
| degree, and get paychecks from Google/Facebook/Apple. Luckily
| I've seen a number of these types of people fail to find any
| high-profile jobs after they graduate, so I guess something is
| still working.
| charwalker wrote:
| Could apply to research, application development, web
| development, credential management...
| vmception wrote:
| Lou Pai is the greatest of all time, and every man (or
| breadwinner) would be a fool to follow any other script that
| society presents to them.
|
| Getting a divorce-court judge to force you to sell your assets
| at the top, nuking the regulator's ability to charge you with
| insider trading, while you elope with your younger, hotter,
| high-libido nymph to the mountain you bought?
|
| These are our role models.
| Causality1 wrote:
| Reap what you sow. We spend our whole lives being told it's
| perfectly OK for every corporation and person in power to
| behave like a sociopath, and this is the culture you get in
| return.
| SQueeeeeL wrote:
| Academia WAS designed to be a castle insulated from all of
| that; that's why Newton lived in a shitty apartment while
| getting money from his mom. The institution of research was
| supposed to be a rich man's game, for people who didn't give a
| shit about practicality, just one-upping each other. Once
| people realized academics could be leveraged to do cool shit
| like build A-bombs, it was over.
| [deleted]
| pitaj wrote:
| Normalization plays a huge role in creating this. There are far
| too many rules in our society, at all levels, that make no
| sense and exist seemingly to benefit the rich and powerful.
| Highly demanded goods and services are prohibited (drugs,
| gambling, prostitution). Bureaucratic nonsense at every turn.
| So much schoolwork that it's practically required to cheat to
| succeed. Testing that bears barely any resemblance to real-
| world conditions. Abstract methods that bore students to death.
| wolverine876 wrote:
| Those aren't reasons to act dishonestly and abuse other
| people. In fact, those are reasons to do the opposite - if
| you find the world so terrible, do something to make it
| better.
|
| It's up to you and me. Nobody else is coming to save us.
| pitaj wrote:
| When people are surrounded by rules that make no sense, many
| start to question _all_ rules. It's not a justification for
| acting dishonestly, but it is the cause.
|
| One way we can make the world better is by fixing the rules:
| getting rid of the ones that are unjust, and making the rest
| more consistent.
| andreilys wrote:
| _" If a law is unjust, a man is not only right to disobey
| it, he is obligated to do so."_
|
| The question, of course, is that people's interpretations of
| which laws are just and unjust are subject to bias and
| individual incentives.
| dragonwriter wrote:
| > "If a law is unjust, a man is not only right to disobey
| it, he is obligated to do so."
|
| Incidentally, while I recognize the popularity of this quote,
| it's fairly ridiculous taken literally for laws which are
| prohibitory rather than obligatory.
|
| Viewing a prohibition as unjust does not obligate me to
| violate the prohibition; believing people should be free
| from government constraint to do something doesn't
| require me to do that thing.
|
| "Disregard" or "discount" in place of "disobey" would be
| more generally valid.
| triska wrote:
| The point about bureaucracy is particularly relevant and
| interesting. Joseph Tainter argues in _The Collapse of
| Complex Societies_ that the diminishing returns of complexity
| and increasing layers of bureaucracy are an important reason
| for the collapse of societies:
|
| https://en.wikipedia.org/wiki/Joseph_Tainter
| omgwtfbbq wrote:
| Doesn't this make the case that we should be building
| institutions and systems resistant to this type of cheating?
|
| Why should it be possible at all to game journals in this way?
| Particularly in computer science, where people think about
| edge cases for a living...
| munk-a wrote:
| > 'if you ain't cheating, you ain't trying'
|
| I'm a US expat, and escaping this culture is one of the things
| that's made me happiest. I tend to call it the bullshit
| culture, because my favorite example is... writing a good paper
| for class is admired - but what's really praised is writing a
| paper that gets good marks without ever having read the subject
| matter. Being able to spin lies about a topic you have no
| understanding of, and turn that into a marketable skill, is a
| dark portent for the future of America. I think it's always
| been somewhat present, but since emerging strongly out of the
| business world in the eighties it's gained a lot of steam.
|
| We are a society that can benefit from cooperation, where
| everyone gets a fair slice of the pie, but that society is
| eroded if we praise rather than shame the people who betray
| societal trust and cheat the system.
| etaioinshrdlu wrote:
| YCombinator seems to encourage this. Here's what they say about
| their dinner events: "Talks are strictly off the record to
| encourage candor, because the inside story of most startups is
| more colorful than the one presented later to the public.
| Because YC has been around so long and we have personal
| relationships with most of the speakers, they trust that what
| they say won't get out and tell us a lot of medium-secret
| stuff."
|
| Anecdotally, I've heard stories about Zuckerberg
| confessing/bragging about all sorts of nasty things at these
| dinners.
|
| Really, this stuff should just be shamed. Sadly, too often
| calling out bad behavior just gets you called a "hater"...
| lovich wrote:
| It's just a general breakdown of the rule of law and trust all
| over our society. When everyone around you is breaking the
| rules and suffering no repercussions, following the rules
| yourself is equivalent to choosing to lose.
|
| We used to have strong institutions that helped push groups
| away from choosing the bad corner of the prisoner's dilemma,
| but they all seem to have degraded. I suppose they could have
| always been like this and the curtain has just been removed,
| but I'd argue that the perception that following the rules is
| the best personal choice is almost as valuable as it actually
| being true.
| Retric wrote:
| There is a huge short- vs. long-term bias here. Constantly
| burning everyone around you requires a long line of new
| suckers. But within a community, trust can have great long-term
| benefits.
| Spooky23 wrote:
| This has always existed and always will.
|
| In the old days you had to know your place. WASPs smoked
| cigars and ran things. Those old guys drinking sherry and
| wearing tweed helped each other out. The Irish were cops,
| Italians firemen.
|
| In tech, it's pretty easy to see various constituencies doing
| dishonest shit to help their own out.
| rhacker wrote:
| Indeed. I wonder if the internet also has something to do with
| this feeling. In a way, people are looking for justifications
| for doing things the wrong/illegal way. In the past, the
| information you got about how the world works came from other
| people around you. Now you can look up people in the same boat
| and apply whatever logic they did.
| toomim wrote:
| Yes. Here are some stats on the breakdown of trust in
| society: https://medium.com/@slowerdawn/the-decline-of-trust-
| in-the-u...
|
| Here's an animated gif making it clear:
| https://invisible.college/reputation/declining-trust.gif
| slx26 wrote:
| Trusted _with what_? Given how complicated life is, I'm
| actually impressed by how morally people behave. In that sense,
| I trust that most people are doing what they can, and that they
| are trying to do no harm to others. But that doesn't mean I
| trust people to be very competent. Trusted _to do what_?
| 908B64B197 wrote:
| Reminds me of Jugaad [0], or the Chabuduo mindset in mainland
| China.
|
| Basically, if all it takes is getting citations, then forming a
| citation ring is "Chabuduo", and you'll only lose face if you
| are caught (not good enough).
|
| [0] https://en.wikipedia.org/wiki/Jugaad
| BadInformatics wrote:
| This kind of collusion is driven far more by perverse
| incentives than some alleged cultural phenomenon of half-
| assery people like to attach an incorrect but exotic-sounding
| foreign phrase to. I can't speak for Jugaad, but "Cha Bu Duo"
| ("Chabuduo") is not at all appropriate here [1]. Just call it
| what it is: cheating, collusion and conspiracy.
|
| [1] https://news.ycombinator.com/item?id=27052249. This
| reminds me of the Japanese buzzword bingo of earlier decades.
| bakul wrote:
| Jugaad is very different from this sort of collusion.
| eindiran wrote:
| From the wiki article it sounds like it just refers to
| extremely improvised engineering, something akin to "bubble
| gum and baling wire" or "MacGyvering" something.
| hristov wrote:
| Unfortunately it is not even a subculture. It is often openly
| touted in mainstream culture. Steve Jobs is often brought up as
| a hero of that type of thinking.
|
| You are absolutely right that trust and reliability are very
| valuable. Societies with high trust tend to be richer and much
| more productive than societies overrun by cheating and
| corruption.
| sjg007 wrote:
| Oooh... this sounds like a great computer science problem.
|
| "How to get an objective rating in the presence of adversaries"
|
| It is probably extensible to generic reviews as well... so
| things like Amazon review scams. But in contrast to Amazon,
| conference participants are motivated to review.
|
| I honestly don't see why all participants can't be considered
| part of the peer review pool, with everybody voting. I'd guess
| you run a risk of being scooped, but maybe a conference should
| consist of all submitted papers, with the top N considered
| worthy of publication. Maybe the rest could be considered pre-
| publication... I mean, everything is on arXiv anyway.
|
| So instead of bids you have randomization. Kahneman's latest
| book talks about this, and it's been making the rounds on NPR,
| the NYTimes, etc.:
|
| https://www.amazon.com/Noise-Human-Judgment-Daniel-Kahneman/...
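A toy version of the "everybody votes, accept the top N" scheme
floated above (paper IDs, scores, and the plain mean-score rule are
all made up for illustration; a real system would also need to
handle adversarial voters, which is the whole problem):

```python
from collections import defaultdict

def top_n(votes, n):
    """votes: iterable of (paper_id, score) pairs; return the n
    papers with the highest mean score."""
    by_paper = defaultdict(list)
    for paper, score in votes:
        by_paper[paper].append(score)
    means = {p: sum(s) / len(s) for p, s in by_paper.items()}
    return sorted(means, key=means.get, reverse=True)[:n]

votes = [("p1", 4), ("p1", 5), ("p2", 3),
         ("p3", 5), ("p3", 4), ("p3", 5)]
print(top_n(votes, 2))  # -> ['p3', 'p1']
```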
| anon_tor_12345 wrote:
| This is not a CS problem (unless everything is a CS problem)
| but a very well-known market-design problem:
|
| https://en.m.wikipedia.org/wiki/Collusion
| PeterisP wrote:
| In many such events all participants are required to be part of
| the peer review pool.
|
| However, they each review a limited number of papers (e.g. 3) -
| "everybody votes" presumes that everybody has an opinion on the
| rating of every paper. That does not scale: getting a
| reasonable opinion about a random paper, i.e. reviewing it,
| takes significant effort; an event may have 1,000 or 10,000
| papers, having every participant review 3 papers is already a
| significant amount of work, and getting many more "votes" than
| that for every paper is impractical.
|
| It's infeasible, and even undesirable, for everyone to even
| skim all the submitted papers in their subfield - one big
| purpose of peer review is to filter papers so that everyone
| else can focus on reading only a smaller selection of the best
| papers instead of sifting through everything submitted. The
| deluge of papers (even the "diarrhea of papers", as it's called
| in a lecture linked in another comment) is a real problem; I'm
| a full-time researcher and I still have time to read only a
| fraction of what's getting written.
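The back-of-the-envelope arithmetic behind that scaling point (the
hours-per-review figure is an invented assumption, not data from the
thread):

```python
papers = 10_000
reviews_per_paper = 3
hours_per_review = 4  # assumed effort for one careful review

total_reviews = papers * reviews_per_paper      # 30,000 reviews
total_hours = total_reviews * hours_per_review  # 120,000 hours

# Spread over ~10,000 participants, that's 3 reviews (~12 hours) each.
# "Everybody votes" on every paper would multiply this load by the
# number of participants, i.e. by orders of magnitude.
print(total_reviews, total_hours)
```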
| foota wrote:
| In theory you could probably do something like have three
| runoff rounds, such that low-scoring papers are eliminated
| before people do their second review.
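A quick sketch of that runoff idea (the round count, keep-top-half
rule, and scores are all illustrative assumptions):

```python
def runoff(scores, rounds=3):
    """scores: {paper_id: current score}; keep roughly the top half
    each round so later review effort goes only to survivors."""
    survivors = dict(scores)
    for _ in range(rounds):
        if len(survivors) <= 1:
            break
        ranked = sorted(survivors, key=survivors.get, reverse=True)
        survivors = {p: survivors[p]
                     for p in ranked[:max(1, len(ranked) // 2)]}
        # a real process would gather fresh reviews and re-score the
        # survivors here before the next cut
    return survivors

print(runoff({"p1": 2.1, "p2": 4.5, "p3": 3.3, "p4": 1.0}))
# -> {'p2': 4.5}  (p2 and p3 survive round 1; p2 survives round 2)
```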
| tomaskafka wrote:
| One possible solution is to let individuals choose whom to
| trust. I could choose not to trust a certain person,
| publication, or conference, and have my personalized scientist
| ranking recomputed.
|
| And of course, I could choose to delegate trust and "follow"
| someone, which would mean incorporating their rankings,
| especially in areas where I'm less well oriented.
|
| Do you think this would work?
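A minimal sketch of that delegated-trust idea (the data shapes and
the averaging rule are invented for the example; a real system would
need cycle handling and damping, PageRank-style):

```python
def personalized_score(me, target, ratings, follows, depth=2):
    """My score for `target`: my own rating if I have one, otherwise
    the average score from the people I follow, up to `depth` hops.
    ratings: {person: {target: score}}; follows: {person: [person]}."""
    if target in ratings.get(me, {}):
        return ratings[me][target]
    if depth == 0:
        return None
    scores = [s for f in follows.get(me, [])
              if (s := personalized_score(f, target, ratings,
                                          follows, depth - 1)) is not None]
    return sum(scores) / len(scores) if scores else None

ratings = {"alice": {"lab_x": 0.9}, "bob": {"lab_x": 0.3}}
follows = {"me": ["alice", "bob"]}
print(personalized_score("me", "lab_x", ratings, follows))  # -> 0.6
```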
___________________________________________________________________
(page generated 2021-05-26 23:00 UTC)