[HN Gopher] "They introduce kernel bugs on purpose"
       ___________________________________________________________________
        
       "They introduce kernel bugs on purpose"
        
       Author : kdbg
       Score  : 1905 points
       Date   : 2021-04-21 10:32 UTC (12 hours ago)
        
 (HTM) web link (lore.kernel.org)
 (TXT) w3m dump (lore.kernel.org)
        
       | BTCOG wrote:
       | Now I'm not one for cancel culture, but fuck these guys. Put
       | their fuckin' names out there to get blackballed. Bunch of
       | clowns.
        
       | nspattak wrote:
       | WTF? They are experimenting with people without their consent?
       | And they haven't been kicked out of the academic community????
        
       | shiyoon wrote:
       | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
       | 
        | They seem to have posted some clarifications around this.
        | Worth a read.
        
       | limaoscarjuliet wrote:
       | To me it was akin to spotting volunteers cleaning up streets and,
       | right after they passed, dumping more trash on the same street to
       | see if they come and clean it up again. Low blow if you ask me.
        
       | liendolucas wrote:
        | Could this have happened on other open source projects too,
        | like FreeBSD, OpenBSD, etc., or other popular open source
        | software?
        
         | twic wrote:
         | This is a really important question, and the way to answer it
         | is for someone to try it.
        
       | dcchambers wrote:
       | So I won't lie, this seems like an interesting experiment and I
       | can understand why the professor/research students at UMN wanted
        | to do it, but my god the collateral damage to the University
       | is massive. Banning all contributions from a major University is
       | no joke. I also completely understand the scorched earth response
       | from Greg. Fascinating.
        
       | coward76 wrote:
       | Make an ethics complaint with the state and get their
       | certification and charter pulled.
        
         | dylan604 wrote:
          | That's worse than the death penalty SMU got for paying
          | players. Even the NCAA didn't kill the school, just the guilty
          | sports program. You're asking the state to pull an entire
          | university's charter for a rogue department? Sure, pull the CS
          | department, but I'm sure the other schools at the university
          | had absolutely zero culpability.
        
           | bluGill wrote:
           | As a graduate of the UMN, other departments have had their
           | share of issues as well. When I was there they were trying to
           | figure out how to deal with a professor selling medical drugs
           | without FDA permission (the permission did exist in the past,
           | and the drug probably was helpful, but FDA approval was not
           | obtained).
           | 
            | I suspect that all of the issues I'm aware of are within
            | normal bounds for any university of that size. That is, if
            | you kill the UMN you also need to kill Berkeley, MIT, and
            | Harvard for their issues of similar magnitude that we just by
            | chance haven't heard about. This is a guess though, I don't
            | know how bad things are.
        
             | dylan604 wrote:
             | That was the same thing said of the SMU punishment. They
             | were not the only school doing it. They were just the ones
             | that got caught.
        
               | bluGill wrote:
               | Which is probably true. Doesn't excuse it.
               | 
                | Note that I'm not trying to excuse the UMN either.
        
       | kdbg wrote:
        | I don't think there have been any recent comments from anyone at
        | U.Mn. But back when the original research happened (last year),
        | the following clarification was offered by Qiushi Wu and Kangjie
        | Lu, which at least paints their research in a somewhat better
        | light:
        | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
        | 
        | That said, the current incident seems to have gone beyond the
        | limits of that one and is a new incident. I just thought it
        | would be fair to include their "side".
        
         | kstenerud wrote:
         | From their explanation:
         | 
         | (3). We send the incorrect minor patches to the Linux community
         | through email to seek their feedback.
         | 
         | (4). Once any maintainer of the community responds to the
         | email, indicating "looks good", we immediately point out the
         | introduced bug and request them to not go ahead to apply the
         | patch. At the same time, we point out the correct fixing of the
         | bug and provide our proper patch. In all the three cases,
         | maintainers explicitly acknowledged and confirmed to not move
         | forward with the incorrect patches. This way, we ensure that
         | the incorrect patches will not be adopted or committed into the
         | Git tree of Linux.
         | 
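          | (An illustrative sketch, not one of the actual UMN patches:
          | going by the paper, an "incorrect minor patch" is a small,
          | plausible-looking change that introduces something like a
          | use-after-free. A constructed example in diff form, with
          | made-up file and function names:
          | 
          |     --- a/drivers/foo/foo.c
          |     +++ b/drivers/foo/foo.c
          |     @@ static int foo_probe(struct foo_dev *fd)
          |              if (err) {
          |     +                kfree(fd->ctx);  /* reads as a leak fix... */
          |                      goto out_free;   /* ...but out_free already
          |                                          frees fd->ctx: double
          |                                          free, then UAF */
          |              }
          | 
          | It reads like a cleanup at a glance, which is exactly the
          | point.)
          | 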
         | ------------------------
         | 
         | But this shows a distinct lack of understanding of the problem:
         | 
         | > This is not ok, it is wasting our time, and we will have to
         | report this,
         | 
         | > AGAIN, to your university...
         | 
         | ------------------------
         | 
         | You do not experiment on people without their consent. This is
         | in fact the very FIRST point of the Nuremberg code:
         | 
         | 1. The voluntary consent of the human subject is absolutely
         | essential.
        
           | unyttigfjelltol wrote:
           | In the last year when it came to experimental Covid-19
           | projections, modeling and population-wide recommendations
            | from major academic centers, the IRBs were silent and
           | academics did essentially whatever they wanted, regardless of
           | "consent" from the populations that were the subjects of
           | their speculative hypotheses.
        
           | bezout wrote:
           | You could argue that they are doing the maintainers a favor.
           | Bad actors could exploit this, and the researchers are
           | showing that maintainers are not paying enough attention.
           | 
            | If I were at the receiving end, I'd think about checking a
            | patch multiple times before accepting it.
        
             | UncleMeat wrote:
             | I'm sure that they thought this. But this is a bit like
             | doing unsolicited pentests or breaking the locks on
             | somebody's home at night without their permission. If
             | people didn't ask for it and consent, it is unethical.
             | 
             | And further, pretty much everybody knows that malicious
             | actors - if they tried hard enough - would be able to sneak
              | through hard-to-find vulns.
        
             | jnxx wrote:
             | > Bad actors could exploit this, and the researchers are
             | showing that maintainers are not paying enough attention.
             | 
             | And this is anything new?
             | 
              | And if I hit you over the head with a hammer while you are
              | not expecting it, does this prove anything other than that
              | I am a thug? Does it help you? Honestly?
        
           | chenzhekl wrote:
            | Yeah, it is a bit disrespectful to kernel maintainers to do
            | this without gaining their approval ahead of time.
        
             | moron4hire wrote:
             | Disrespecting some programmers on the internet is, while
             | not nice, also not a high crime.
        
           | fouric wrote:
           | I'm confused - how is this an experiment on humans? Which
           | humans? As far as I can tell, this has nothing to do with
           | humans, and everything to do with the open-source review
           | _process_ - and if one thinks that it counts as a human
            | experiment because humans are involved, wouldn't that logic
           | apply equally to pentesting?
           | 
           | For that matter, what's the difference between this and
           | pentesting?
        
             | db48x wrote:
             | Penetration testing is only ethical when you are hired by
             | the organization you are testing.
             | 
             | Also, IRB review is only for research funded by the federal
             | government. If you're testing your kid's math abilities,
             | you're doing an experiment on humans, and you're entirely
             | responsible for determining whether this is ethical or not,
              | without the aid of an IRB as a second opinion.
             | 
             | Even then, successfully getting through the IRB process
             | doesn't guarantee that your study is ethical, only that it
             | isn't egregiously unethical. I suspect that if this
             | researcher got IRB approval, then the IRB didn't realize
             | that these patches could end up in a released kernel. This
             | would adversely affect the users of billions of Linux
             | machines world-wide. Wasting half an hour of a reviewer's
             | time is not a concern by comparison.
        
           | throwawaybbq1 wrote:
           | Holy cow!! I'm a researcher and don't understand how they
           | thought it would be okay to not do an IRB, and how an IRB
           | would not catch this. The linked PDF by the parent post is
           | quite illustrative. The first few paras seem to be
           | downplaying the severity of what they did (did not introduce
           | actual bugs into the kernel) but that is not the bloody
           | problem. They experimented on people (maintainers) without
           | consent and wasted their time (maybe other effects too ..
            | e.g. making them wary of future commits from universities)!
           | I'm appalled.
        
             | ORioN63 wrote:
             | It's not _the_ problem, but it's an actual problem. If you
             | follow the thread, it seems they did manage to get a few
             | approved:
             | 
             | https://lore.kernel.org/linux-
             | nfs/YH%2F8jcoC1ffuksrf@kroah.c...
             | 
             | I agree this whole thing paints a really ugly picture, but
             | it seems to validate the original concerns?
        
               | Throwaway951200 wrote:
                | Open Source is not waterproof if a known committer, from
                | a well-known faculty (in this case the University of
                | Minnesota), decides to send buggy patches. This was
                | caught relatively quickly, but the behavior even after
                | being caught is reprehensible:
               | 
               | > You, and your group, have publicly admitted to sending
               | known-buggy patches to see how the kernel community would
               | react to them, and published a paper based on that work.
                | >
                | > Now you submit a new series of obviously-incorrect
               | patches again, so what am I supposed to think of such a
               | thing?
               | 
                | That they kept doing it even after being caught is
                | beyond understanding.
        
               | varjag wrote:
                | Even if the ones they did get approved were actual
                | security holes (not benign decoys), all that it validates
                | is that no human is infallible. Well CONGRATULATIONS.
        
               | Tempest1981 wrote:
               | Right. And you would need a larger sample size to
               | determine what % of the time that occurs, on average. But
               | even then, is that useful and valid information? And is
               | it actionable? (And if so, what is the cost of the
               | action, and the opportunity cost of lost fixes in other
               | areas?)
        
             | fao_ wrote:
             | If you actually read the PDF linked in this thread:
             | 
             | * Is this human research? This is not considered human
             | research. This project studies some issues with the
             | patching process instead of individual behaviors, and we
             | did not collect any personal information. We send the
             | emails to the Linux community and seek community feedback.
             | The study does not blame any maintainers but reveals issues
             | in the process. The IRB of UMN reviewed the study and
             | determined that this is not human research (a formal IRB
             | exempt letter was obtained).
        
             | Waterluvian wrote:
             | IRB review: "Looks good!"
        
               | yorwba wrote:
               | Maybe they should conduct a meta-experiment where they
               | submit unethical experiments for IRB review. Immediately
               | when the IRB approves the proposal, they withdraw,
               | pointing out the ways in which it would be unethical.
               | 
               | Meta-meta-experiment: submit the proposal above for IRB
               | review and see what happens.
        
               | makotoNagano wrote:
               | Absolutely incredible
        
             | bokchoi wrote:
             | The irony is that the IRB process failed in the same way
             | that the commit review process did. We're just missing the
             | part where the researchers tell the IRB board they were
             | wrong immediately after submitting their proposal for
             | review.
        
             | nerdponx wrote:
             | Do IRBs typically have a process by which you can file a
             | complaint from outside the university? Maybe they never
             | thought they would need to even check up on computer
             | science faculty...
        
             | chromatin wrote:
             | They did go to the UMN IRB per their paper and received a
             | human subjects exempt waiver.
             | 
             | Edit: I am not defending the researchers who may have
             | misled the IRB, or the IRB who likely have little
             | understanding of what is actually happening
        
           | jeroenhd wrote:
           | In any university I've ever been to, this would be a gross
           | violation of ethics with very unpleasant consequences.
           | Informed consent is crucial when conducting experiments.
           | 
           | If this behaviour is tolerated by the University of Minnesota
           | (and it appears to be so) then I suppose that's another
            | institution on my list of unreliable research sources.
           | 
           | I do wonder what the legal consequences are. Would knowingly
           | and willfully introducing bad code constitute a form of
           | vandalism?
        
             | xucheng wrote:
              | IANAL. In addition to possibly causing the research paper
              | to be retracted due to the ethical violation, I think there
              | is potentially civil or even criminal liability here. US
              | law on hacking is known to be quite vague (see Aaron
              | Swartz's case, for example).
        
             | dd82 wrote:
             | >>>On the Feasibility of Stealthily Introducing
             | Vulnerabilities in Open-Source Software via Hypocrite
             | Commits Qiushi Wu, and Kangjie Lu. To appear in Proceedings
             | of the 42nd IEEE Symposium on Security and Privacy
             | (Oakland'21). Virtual conference, May 2021.
             | 
             | from Lu's list of publications at https://www-
             | users.cs.umn.edu/~kjlu/
             | 
             | Seems like a conference presentation at IEEE at minimum?
        
               | xucheng wrote:
               | IEEE S&P is actually one of the top conferences in the
               | field of computer security. It does mention some guidance
                | on ethical considerations.
               | 
               | > If a paper raises significant ethical and/or legal
               | concerns, it might be rejected based on these concerns.
               | 
               | https://www.ieee-security.org/TC/SP2021/cfpapers.html
               | 
               | So if the kernel maintainers report the issue to the S&P
               | PC, the paper could potentially be rejected.
        
               | anonymousDan wrote:
               | Wow, that is basically the top computer security
               | conference.
        
               | corty wrote:
               | Which shows that IEEE also has a problem with research
               | ethics if they accepted such a paper.
        
               | svarog-run wrote:
                | IEEE is a garbage organization. Or at least their India
                | chapter is. 3 out of 5 professors in our university would
                | recommend avoiding any paper published by Indians through
                | IEEE. Here in India, publishing trash papers with the
                | help of one's 'influence' is a common occurrence.
        
               | [deleted]
        
           | jan_Inkepa wrote:
            | In this post they say the patches come from a static
            | analyser, and they accuse the other person of slander for
            | their criticisms:
           | 
           | > I respectfully ask you to cease and desist from making wild
           | accusations that are bordering on slander.
           | 
           | > These patches were sent as part of a new static analyzer
           | that I wrote and it's sensitivity is obviously not great. I
           | sent patches on the hopes to get feedback. We are not experts
           | in the linux kernel and repeatedly making these statements is
           | disgusting to hear.
           | 
           | ( https://lore.kernel.org/linux-
           | nfs/YH%2FfM%2FTsbmcZzwnX@kroah... )
           | 
           | How does that fit in with your explanation?
        
             | op00to wrote:
             | It's a lie, that's how it fits.
        
             | azernik wrote:
             | From GKH's response, which you linked:
              | > They obviously were _NOT_ created by a static analysis
              | > tool that is of any intelligence, as they all are the
              | > result of totally different patterns, and all of which
              | > are obviously not even fixing anything at all. So what
              | > am I supposed to think here, other than that you and
              | > your group are continuing to experiment on the kernel
              | > community developers by sending such nonsense patches?
              | 
              | > When submitting patches created by a tool, everyone who
              | > does so submits them with wording like "found by tool
              | > XXX, we are not sure if this is correct or not, please
              | > advise." which is NOT what you did here at all. You were
              | > not asking for help, you were claiming that these were
              | > legitimate fixes, which you KNEW to be incorrect.
        
             | temp wrote:
             | > _I sent patches on the hopes to get feedback_
             | 
             | They did not say that they were hoping for feedback on
              | their tool when they submitted the patch; they lied about
             | their code doing something it does not.
             | 
             | > _How does that fit in with your explanation?_
             | 
              | It fits the narrative of making hypocritical changes to
              | the project.
        
               | jan_Inkepa wrote:
               | But lashing out when confronted after the fact? (I can't
               | figure out how to browse to the messages that contain
               | said purported 'slander' - maybe it is indeed terrible
               | slander). Normally after the show is over one stops with
               | the performance...
               | 
               | edit: oh, ok I guess that post with the accusations was
               | mid-performance? Not inconsistent, so, maybe (I'm still
               | not clear what the timeline is).
        
             | jedimastert wrote:
             | > (3). We send the incorrect minor patches to the Linux
             | community through email to seek their feedback.
             | 
             | Sounds like they knew exactly what they were doing.
        
           | jedimastert wrote:
           | They apparently didn't consider this "human research"
           | 
           | As I understand it, any "experiment" involving other people
            | who weren't explicitly informed of the experiment beforehand
            | needs to be a lot more carefully considered than what they
            | did here.
        
             | lithos wrote:
             | Makes sense considering how open source people are treated.
        
           | canadianfella wrote:
           | > This is in fact the very FIRST point of the Nuremberg code
           | 
           | Stretch Armstrong over here.
        
           | ajb wrote:
           | > You do not experiment on people without their consent.
           | 
           | Exactly this. Research involving human participants is
           | supposed to have been approved by the University's
           | Institutional Review Board; the kernel developers can
           | complain to it: https://research.umn.edu/units/irb/about-
           | us/contact-us
           | 
            | It would be interesting to see what these researchers told
            | the IRB they were doing (if they bothered).
           | 
           | Edited to add: From the link in GP: "The IRB of UMN reviewed
           | the study and determined that this is not human research (a
           | formal IRB exempt letter was obtained)"
           | 
           | Okay so this IRB needs to be educated about this. Probably
           | someone in the kernel team should draft an open letter to
           | them and get everyone to sign it (rather than everyone
           | spamming the IRB contact form)
        
             | hobofan wrote:
             | According to their website[0]:
             | 
             | > IRB exempt was issued
             | 
             | [0]: https://www-users.cs.umn.edu/~kjlu/
        
               | throwawaybbq1 wrote:
                | These two sentences from the author's response seem
                | contradictory: "The IRB of UMN reviewed the
               | study and determined that this is not human research (a
               | formal IRB exempt letter was obtained). Throughout the
               | study, we honestly did not think this is human research,
               | so we did not apply for an IRB approval in the
               | beginning."
               | 
               | I would guess their IRB had a quick sanity check process
               | to ensure there was no human subject research in the
               | experiment. This is actually a good thing if scientists
               | use their ethics and apply good judgement. Now, whoever
               | makes that determination does so based on initial
               | documentation supplied by the researchers. If so, the
               | researchers should show what they submitted to get the
               | exemption.
               | 
               | Again, the implication is their University will likely
               | make it harder to get exemptions after this fiasco. This
               | mistake hurts everyone (be it indirectly). Although, and
               | this is being quite facetious and macabre, the
               | researchers have inadvertently exposed a bug in their own
                | institution's IRB process!
        
               | DetroitThrow wrote:
               | Combined with their lack of awareness of a possible
               | breach of ethics in their response to Greg, I find it
               | hard to believe they did not mislead the UMN IRB.
               | 
               | I hope they release what they submitted to the IRB to
               | receive that exemption and there are some form of
               | consequences if the mistake is on their part.
        
               | pacbard wrote:
               | A few things about IRB approval.
               | 
               | 1. You _have to_ submit for review any work involving
               | human subjects _before_ you start interacting with them.
               | The authors clearly state that they sought retroactive
               | approval after being questioned about their work. That
                | would be a big red flag for my IRB and they wouldn't
               | approve work retroactively.
               | 
               | 2. There are multiple levels of IRB approval. The lowest
                | is non-regulated, which means that the research falls
               | outside of human subject research. Individual researchers
                | can self-certify work as non-regulated or get a non-
               | regulated letter from their IRB.
               | 
               | From there, it goes from exempt to various degrees of
               | regulated. Exempt research means that it is research
               | involving human subjects that is exempt from continued
               | IRB review past the initial approval. That means that IRB
               | has found that their research involves human subjects but
               | falls within one (or more) of the exceptions for
               | continued review.
               | 
               | In order to be exempt, a research project must meet one
               | of the exemptions categories (see here
               | https://hrpp.msu.edu/help/required/exempt-categories.html
               | for a list). The requirements changed in 2018, so what
               | they had to show depends on when they first received
               | their exempt status.
               | 
               | The bottom line is that the research needs to (a) have
               | less than minimal risks for participants and (b) needs to
               | be benign in nature. In my opinion, this research doesn't
               | meet these requirements as there are significant risks to
                | participants, to both their professional reputation and
               | future employability for having publicly merged a
               | malicious patch. They also pushed intentionally malicious
               | patches, so I am not sure if the research is benign to
               | begin with.
               | 
               | 3. Even if a research project is found exempt from IRB
               | review, participants still need to consent to participate
               | in it and need to be informed of the risks and benefits
               | of the research project. It seems that they didn't
               | consent their participants before their participation in
               | the research project. Consent letters usually use a
               | common template that clearly states the goals for the
               | research project, lists the possible risks and benefits
               | of participating in it, states the name and contact
               | information of the PI, and data retention policies. IRB
               | could approve projects without proactive participant
               | consent but those are automatically "bumped up" to full
               | IRB approval and approvals are given only in very
               | specific circumstances. Plus, once a participant removes
               | their consent to participate in a research project, the
               | research team needs to stop all interactions with them
               | and destroy all data collected from them. It seems that
               | the kernel maintainers did not receive the informed
               | consent materials before starting their involvement with
               | the research project and have expressed their desire not
               | to participate in the research after finding out they
               | were participating in it, so the interaction with them
               | should stop and any data collected from them should be
               | destroyed.
               | 
               | 4. My impression is that they got IRB approval on a
               | technicality. That is, their research is on the open
               | source community and its processes rather than the
               | individual people that participate in them. My impression
               | of their paper is that they are very careful in
               | addressing the "Linux community" and they really never
               | talk about their interaction with people in the paper
               | (e.g., there is no data collection section or a
               | description of their interactions on the mailing list).
               | Instead, it's my impression that they present the patches
               | that they submitted as happening "naturally" in the
               | community and that they are describing publicly available
               | interactions. That seems to be a little misleading of
               | what actually happened and their role in producing and
               | submitting the patches.
        
               | db48x wrote:
               | I'm interested in MSU's list of exempt categories. Most
               | of them are predicated on the individual subjects not
               | being identifiable. Since this research is being done on
               | a public mailing list that is archived and available for
               | all to read, it is trivial to go through the archive and
               | find the patches they quote in their paper to find out
               | who reviewed them, and their exact responses. Would that
               | disqualify the research from being exempt, even if the
               | researchers themselves do not record that data or present
               | it in their paper?
               | 
               | What if they did a survey of passers-by on a public
               | street, that might be in view of CCTV operated by someone
               | else?
        
               | pacbard wrote:
               | The federal government has updated the rules for
               | exemption in 2018. The MSU link is more of a summary than
               | the actual rules.
               | 
               | The fact that a mailing list is publicly available is
               | what made me worry about the applicability of any sort of
               | exemption. In order for human subject research to be
               | exempt from IRB review, the research needs to be deemed
               | less than minimal risk to participants.
               | 
               | The fact that their experiment happens in public and that
               | anyone can find their patches and individual maintainers'
               | responses (and approval) of them makes me wonder if the
               | participants are at risk of losing professional
               | reputation (in that they approved a patch that was
               | clearly harmful) or even employment (in that their
               | employer might find out about their participation in this
               | study and move them to less senior positions as they
               | clearly cannot properly vet a patch). This might be
               | extreme, but it is still a likely outcome given the
               | overall sentiment of the paper.
               | 
               | All research that poses any harm to participants has to
               | be IRB approved and the researchers have to show that the
               | benefits to participants (and the community at large)
               | surpass the individual risks. I am still not sure what
               | benefits this work has to the OSS community and I am very
               | surprised that this work did not require IRB supervision
               | at all.
               | 
               | As far as work on a public street is concerned, IRB
               | doesn't regulate common activities that happen in public
               | and for which people do not have a reasonable expectation
               | of privacy. But, as soon as you start interacting with
               | them (e.g., intervene in their environment), IRB review
               | is required.
               | 
               | You can read and analyze a publicly available mailing
               | list (and this would even qualify as non human subject
               | research if the data is properly anonymized) without IRB
               | review or at most a deliberation of exempt status but you
               | cannot email the mailing list yourself as a researcher as
               | the act of emailing is an intervention that changes other
               | people's environment, therefore qualifying as human
               | subject research.
        
               | ajb wrote:
               | Thanks (This thread may now read a bit confusingly as I
               | independently found that and edited my comment above)
        
           | ret2plt wrote:
           | > You do not experiment on people without their consent. This
           | is in fact the very FIRST point of the Nuremberg code:
           | 
           | > 1. The voluntary consent of the human subject is absolutely
           | essential.
           | 
           | The Nuremberg code is explicitly about medical research, so
           | it doesn't apply here. More generally, I think that the
           | magnitude of the intervention is also relevant, and that an
           | absolutist demand for informed consent in all - including the
           | most trivial - cases is quite silly.
           | 
           | Now, in this specific case I would agree that wasting
           | people's time is an intervention that's big enough to warrant
           | some scrutiny, but the black-and-white way of some people to
           | phrase this really irks me.
           | 
           | PS: I think people in these kinds of debate tend to talk past
           | one another, so let me try to illustrate where I'm coming
           | from with an experiment I came across recently:
           | 
           | To study how the amount of tips waiters get changes in
           | various circumstances, some psychologists conducted an
           | experiment where the waiter would randomly either give the
           | guests some chocolate with the bill, or not (control
            | condition)[0]. This is, of course, perfectly innocuous, but
            | an absolutist claim about research ethics ("You do not
           | experiment on people without their consent.") would make
           | research like this impossible without any benefit.
           | 
           | [0] https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1559-1
           | 816...
        
           | shoto_io wrote:
           | _> indicating "looks good"_
           | 
           | I wonder how many zero days have been included already, for
           | example by nation state actors...
        
           | [deleted]
        
           | splithalf wrote:
           | There is sometimes an exception for things like interviews
           | when n is only a couple of people. This was clearly unethical
           | and it's certain that at least some of those involved knew
            | that. It's common knowledge at universities.
        
           | dcolkitt wrote:
           | > You do not experiment on people without their consent.
           | 
           | Applied strictly, wouldn't every single A/B test done by a
           | product team be considered unethical?
           | 
           | From a common sense standpoint, it seems to me this is more
            | about _medical_ experiments. Yesterday I put some of my
            | kids' toys away without telling them to see if they'd notice
            | and still play with them. I don't think I need IRB approval.
        
             | Mordisquitos wrote:
             | > Applied strictly, wouldn't every single A/B test done by
             | a product team be considered unethical?
             | 
             | I would argue that ordinary A/B tests, by their very
             | nature, are not "experiments" in the sense that restriction
             | is intended for, so there is no reason for them to be
             | considered unethical.
             | 
             | The difference between an A/B test and an actual experiment
             | that should require the subjects' consent is that either of
              | the test conditions, _A_ or _B_, could have been
             | implemented ordinarily as part of business as usual. In
             | other words, neither _A_ nor _B_ by themselves would need a
             | prior justification as to why they were deployed, and if
             | the reasoning behind either of them was to be disclosed to
             | the subjects, they would find them indistinguishable from
             | any other business decision.
             | 
             | Of course, this argument would not apply if the A/B test
             | involved any sort of artificial inconvenience (e.g. mock
             | errors or delays) applied to either of the test conditions.
             | I only mean A/B tests designed to compare features or
             | behaviours which could both legitimately be considered
             | beneficial, but the business is ignorant of which.
        
             | atq2119 wrote:
             | > wouldn't every single A/B test done by a product team be
             | considered unethical?
             | 
             | Potentially yes, actually.
             | 
             | I still think it should be possible to run some A/B tests,
             | but a lot depends on the underlying motivation. The
             | distance between such tests and malicious psychological
             | manipulation can be very, very small.
        
             | yowlingcat wrote:
             | > Applied strictly, wouldn't every single A/B test done by
             | a product team be considered unethical?
             | 
             | Assuming this isn't being asked as a rhetorical question, I
             | think that's exactly what turned the now infamous Facebook
             | A/B test into a perceived unethical mass manipulation of
             | human emotions. A lot of folks are now justifiably upset
             | and skeptical of Facebook (and big tech) as a result.
             | 
             | So to answer your question: yes, if that test moves into
             | territory that would feel like manipulation once the
             | subject is aware of it. Maybe especially so because users
             | are conceivably making a /choice/ to use said product and
             | may switch to an alternative (or simply divest) if trust is
             | lost.
        
             | pacbard wrote:
             | IRB (as in Institutional Review Board) is a local (as in
             | each research institution has one) regulatory board that
             | ensures that any research conducted by people employed by
             | the institution follows the federal government's common
             | rule for human subject research. Most institutions
             | receiving federal funding for research activities have to
             | show that the funded work follows common rule guidelines
             | for interaction with human subjects.
             | 
             | It is unlikely that a business conducting A/B testing or a
             | parent interacting with their children are receiving
             | federal funds to support it. Therefore, their work is not
             | subject to IRB review.
             | 
             | Instead, if you are a researcher who is funded by federal
             | funds (even if you are doing work on your own children),
             | you have to receive IRB approval for any work involving
             | human interaction before you start conducting it.
        
             | WindyLakeReturn wrote:
             | It should be for all science done for the sake of science,
             | not just medical work. When I did experiments that just
             | involved people playing an existing video game I still had
             | to get approval from IRB and warn people of all the risks
             | that playing a game is associated with (like RSI, despite
             | the gameplay lasting < 15 minutes).
             | 
             | Researchers at a company could arguably be deemed as
             | engaging in unethical research and barred from contributing
             | to the scientific community due to unethical behavior. Even
             | doing experiments on your kids may be deemed crossing the
             | line.
             | 
             | The question I have is when does it apply. If you research
             | on your own kids but never publish, is it okay? Does the
             | act of attempting to publish results retroactively make an
             | experiment unethical? I'm not certain these things have
              | been worked out because of how rarely people try to publish
             | anything that wasn't part of an official experiment.
        
             | avisser wrote:
             | > it seems to me this is more about medical experiments
             | 
             | Psychology and sociology are both subject to the IRB as
             | well.
             | 
             | Regardless of their department, this feels like a
             | psychology experiment.
        
               | bezout wrote:
               | This is a huge stretch. It's more of a technical or
               | operational experiment. They are testing the review
               | process, not the maintainers.
        
               | Werewolf255 wrote:
               | "I was testing how the bank processes having a ton of
               | cash taken out by someone without an account, I wasn't
               | testing the staff or police response, geez!"
        
           | tziki wrote:
           | > You do not experiment on people without their consent.
           | 
            | By this logic, e.g., resume callback studies aiming to study
           | bias in the workforce would be impossible.
        
           | simias wrote:
           | It does seem rather unethical, but I must admit that I find
           | the topic very interesting. They should definitely have asked
           | for consent before starting with the "attack", but if they
           | did manage to land security vulnerabilities despite the
           | review process it's a very worrying result. And as far as I
           | understand they did manage to do just that?
           | 
           | I think it shows that this type of study might well be
           | needed, it just needs to be done better and with the consent
           | of the maintainers.
        
             | bezout wrote:
             | "Hey, we are going to submit some patches that contain
             | vulnerabilities. All right?"
             | 
             | If they do so, the maintainers become more vigilant and the
              | experiment fails. But the key to the experiment is that
              | maintainers are not as vigilant as they should be. It's
              | not an attack on the maintainers though, but on the
              | process.
        
               | tapland wrote:
               | In penetration testing you are doing the same thing, but
               | you get the go-ahead for someone responsible for the
               | project or organization since they are interested in the
               | results as well.
               | 
               | A red team without approval is just a group of criminals.
               | They must have been able to find active projects with a
               | centralized leadership they could ask for permission.
        
               | bezout wrote:
               | I don't know much about penetration testing so excuse me
               | for the dumb question: are you required to disclose the
               | exact methods that you're going to use?
        
               | baremetal wrote:
               | usually the discussion is around the end goals, rather
                | than the means. But both are fair game for discussion.
        
               | hn8788 wrote:
               | It depends on the organization. Most that I've worked
               | with have said everything is fine except for social
               | engineering, but some want to know every tool you'll be
               | running, and every type of vulnerability you'll try to
               | exploit.
        
               | tapland wrote:
               | Yes, and a bank branch for example could be very
               | interested in some social engineering to test physical
               | security.
               | 
               | It is very varied. There are a lot of good and enjoyable
               | stories out there on youtube and podcasts for anyone
               | interested.
        
               | sss111 wrote:
                | I tried googling but there were too many results, haha.
               | Do you have a few that you recommend?
        
               | tapland wrote:
               | Yes. You have agreements about what is fair game and what
               | is off limits. It can be that nothing can be physically
               | altered, what times of day or office locations are OK, if
               | it should only be a test against web services or anything
               | in between.
        
               | SkyBelow wrote:
               | Do you? You have agreement with part of the company and
               | work it out with them, but does this routinely include
               | the people who would be actively looking for your
               | intrusion and trying to catch it? Often that is handled
               | by automated systems which are not updated to have any
                | special knowledge about the upcoming penetration test
               | and most of those supporting the application aren't made
               | aware of the details either. The organization is aware,
               | but not all of the people who may be impacted.
        
               | tapland wrote:
               | Exactly. That's answered higher up in the comment tree
               | you are responding to.
        
               | kokx wrote:
                | What you do during pentesting is against the law if you
               | do not discuss this with your client. You're trying to
               | gain access to a computer system that you should have no
               | access to. The only reason this is OK, is that you have
               | prior permission from the client to try these methods.
               | Thus, it is important to discuss the methods used when
               | you are executing a pentest.
               | 
               | With every pentesting engagement I've had, there always
               | were rules of engagement, and what kind of things you are
               | and are not allowed to do. They even depend on what kind
               | of test you are doing. (for example: if you're testing
               | bank software, it matters a lot if you test against their
               | production environment or their testing environment)
        
               | zaarn wrote:
               | "We're going to, as part of a study, submit various
               | patches to the kernel and observe the mailing list and
               | the behavior of people in response to these patches, in
               | case a patch is to be reverted as part of the study, we
               | immediately inform the maintainer."
        
               | bezout wrote:
               | Your message would push maintainers to put even more
               | focus on the patches, thus invalidating the experiment.
        
               | DetroitThrow wrote:
               | >Your message would push maintainers to put even more
               | focus on the patches, thus invalidating the experiment.
               | 
                | The Tuskegee Study wouldn't have happened if its
                | participants had volunteered, and its effects still
                | haunt the scientific community today. The attitude of
               | "science by any means, including by harming other people"
               | is reprehensible and has lasting consequences for the
               | entire scientific community.
               | 
               | However, unlike the Tuskegee Study, it's totally possible
               | to have done this ethically by contacting the leadership
               | of the Linux project and having them announce to
               | maintainers that anonymous researchers may experiment
               | with the contribution process, and allowing them to opt
               | out if they do not consent, and to ensure that harmful
               | commits never reach stable from these researchers.
               | 
               | The researchers chose to instead lie to the Linux project
               | and introduce vulnerabilities to stable trees, and this
               | is why their research is particularly deplorable - their
               | ethical transgressions and possibly lies made to their
               | IRB were not done out of any necessity for empirical
               | integrity, but rather seemingly out of convenience or
               | recklessness.
               | 
               | And now the next group of researchers will have a harder
               | time as they may be banned and every maintainer now more
               | closely monitors academics investigating open source
               | security :)
        
               | ret2plt wrote:
               | I don't want to defend what these researchers did, but to
               | equate infecting people with syphilis to wasting a bit of
                | someone's time is disingenuous. Informed consent is
               | important, but only if the magnitude of the intervention
               | is big enough to warrant reasonable concerns.
        
               | DetroitThrow wrote:
               | >to wasting a bit of someones time is disingenuous
               | 
               | This introduced security vulnerabilities to stable
               | branches of the project, the impact of which could have
               | severely affected Linux, its contributors, and its users
               | (such as those who trust their PII data to be managed by
               | Linux servers).
               | 
               | The potential blast radius for their behavior being
               | poorly tracked and not reverted is millions if not
               | billions of devices and people. What if a researcher
               | didn't revert one of these commits before it reached a
               | stable branch and then a release was built? Linux users
               | were lucky enough that Greg was able to revert the
               | changes AFTER they reached stable trees.
               | 
               | There was a clear need of informed consent of *at least*
               | leadership of the project, and to say otherwise is very
               | much in defense of or downplaying the recklessness of
               | their behavior.
               | 
                | I acknowledged that lives are not at stake, but that
               | doesn't mean that the only consequence or concern here
               | was wasting the maintainers time, especially when they
               | sought an IRB exemption for "non-human research" when
               | most scientists would consider this very human research.
        
               | zaarn wrote:
               | But it wouldn't let maintainers know what is happening,
               | it only informs them that someone will be submitting some
               | patches, some of which might not be merged. It doesn't
               | push people into vigilance onto a specific detail of the
               | patch and doesn't alert them that there is something
               | specific. If you account for that in your experiment
               | priors, that is entirely fine.
        
               | simias wrote:
               | If the attack surface is large enough and the duration of
               | the experiment long enough it'll return to baseline soon
               | enough I think. It's a reasonable enough compromise.
               | After all if the maintainers are not already considering
               | that they might be under attack I'd argue that something
               | is wrong with the system, a zero-day in the kernel would
               | be invaluable indeed.
               | 
               | And well, if the maintainers become more vigilant in the
               | long run it's a win/win in my book.
        
               | ale_jrb wrote:
               | The maintainers are the process, as they are reviewing
                | it, so it's absolutely attacking the maintainers.
        
           | qPM9l3XJrF wrote:
           | Meh, this means a lot of viral social experiments on Youtube
           | violate the Nuremberg code...
        
             | XorNot wrote:
             | Yes and?
             | 
             | This isn't a "gotcha" - people shouldn't do this.
        
               | qPM9l3XJrF wrote:
               | Yes, and people generally don't seem upset by viral
               | Youtube social experiments. The Nuremberg code may be the
               | status quo and nothing more. No one here is trying to
               | justify the code on its merits, just blindly quoting it
               | as an authority.
               | 
               | Here's another idea: If it's ethical to do it in a non-
               | experimental context, it's also ethical to do it in an
               | experimental context. So if it's OK to walk up to a
               | stranger and ask them a weird question, it's also OK to
               | do it in the context of a Youtube social experiment.
               | Anything other than this is blatantly anti-scientific
               | IMO.
               | 
               | It is IRBs that need reform. They're self-justifying
               | bureaucratic cruft:
               | https://slatestarcodex.com/2017/08/29/my-irb-nightmare/
        
               | vntok wrote:
               | Nah. They aren't experimenting on people, they are
               | experimenting on organizational processes. A very
               | different thing.
        
           | jeltz wrote:
            | But this is all a lie. If you read the linked thread you will
           | see that they refused to admit to their experiment and even
           | sent a new, differently broken patch.
        
           | Blikkentrekker wrote:
           | > _You do not experiment on people without their consent.
           | This is in fact the very FIRST point of the Nuremberg code:_
           | 
           | > _1. The voluntary consent of the human subject is
           | absolutely essential._
           | 
           | Which is rather useless, as for many experiments to work,
           | participants have to either be lied to, or kept in the dark
           | as to the nature of the experiment, so whatever "consent"
           | they give is not informed consent. They simply consent to
           | "participate in an experiment" without being informed as to
           | the qualities thereof so that they truly know what they are
           | signing up for.
           | 
           | Of course, it's quite common in the U.S.A. to perform
           | practice medical checkups on patients who are going under
           | narcosis for an unrelated operation, and they never
           | consented to that, but the hospitals and physicians that
           | partake in that are not sanctioned as it's "tradition".
           | 
           | Know well that so-called "human rights" have always been, and
           | shall always be, a show of air that lacks substance.
        
             | everybodyknows wrote:
             | > quite common in the U.S.A. to perform practice medical
             | checkups on patients who are going under narcosis for an
             | unrelated operation
             | 
             | Fascinating. Can you provide links?
        
               | Blikkentrekker wrote:
               | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7223770/
               | 
               | https://ctexaminer.com/2021/03/20/explicit-consent-for-
               | pelvi...
               | 
               | https://www.forbes.com/sites/paulhsieh/2018/05/14/pelvic-
               | exa...
               | 
               | Most of what one can find also only deals with "intimate
               | parts"; I am quite sceptical that this is the only thing
               | that medical students require practice on, and I think it
               | more likely that the media only cares in this case and
               | that in fact it is routine with many more body parts.
        
         | bogwog wrote:
         | This does paint their side better, but it also makes me wonder
         | if they're being wrongly accused of this current round of
         | patches? That clarification says that they only submitted 3
         | patches, and that they used a random email address when doing
         | so (so presumably no @umn.edu).
         | 
         | These ~200 patches from UMN being reverted might have nothing
         | to do with these researchers at all.
         | 
         | Hopefully someone from the university clarifies what's
         | happening soon before the angry mob tries to eat the wrong
         | people.
        
           | db48x wrote:
           | The study you're quoting was a previous study by the same
           | research group, from last year.
        
         | detaro wrote:
         | The fact that they took the feedback last time and decided
         | "lets do more of that" is already a big red flag.
        
           | dd82 wrote:
           | >>>On the Feasibility of Stealthily Introducing
           | Vulnerabilities in Open-Source Software via Hypocrite Commits
           | Qiushi Wu, and Kangjie Lu. To appear in Proceedings of the
           | 42nd IEEE Symposium on Security and Privacy (Oakland'21).
           | Virtual conference, May 2021.
           | 
           | from https://www-users.cs.umn.edu/~kjlu/
           | 
           | If the original research results in a paper and IEEE
           | conference presentation, why not? There are no professional
           | consequences for this conduct, apparently.
        
             | hobofan wrote:
             | Given that this conference hasn't happened yet, there
             | should still be time for the affected people to report the
             | inappropriate conduct to the organizers and possibly get
             | the paper pulled.
        
               | throwawaybbq1 wrote:
               | FYI .. many ACM conferences are now asking explicitly if
               | an IRB was required, and if so, was it received. This
               | does not prevent researchers from saying IRB doesn't
               | apply, but perhaps it can be caught during peer review.
               | 
               | Btw .. I posted a few times on the thread, and want to
               | acknowledge that researchers are humans, and humans do
               | make mistakes. Thankfully in this case, the direct
               | consequence was time wasted, and this is a teaching
               | moment for all involved. In my humble opinion, the
               | researchers should acknowledge in stronger terms they
               | screwed up, do a post-mortem on how this happened, and
               | everyone (including the researchers) should move on with
               | their lives.
        
               | dd82 wrote:
               | Given that current academia attaches a significant stigma
               | to discussing why research failed, I doubt your idea of
               | post-mortems, public or private, will gain any traction.
               | 
               | https://academia.stackexchange.com/questions/732/why-
               | dont-re.... seems to list out reasons why not to do
               | postmortems
        
               | anonymousDan wrote:
               | There are some venues, e.g. this Asplos workshop:
               | https://nope.pub/
        
               | detaro wrote:
               | The same group did the same thing last year (that's what
               | the paper is about - may 2021 paper obviously got
               | written/submitted last year), when the preprint got
               | published they got criticized publicly. And now they are
               | doing it again, so it's not just a matter of "acknowledge
               | they screwed up".
        
             | ENOTTY wrote:
             | I just wanted to highlight that S&P/Oakland is one of the
             | top 3 or 4 conferences in the academic security community.
             | This is a prestigious venue lending its credibility to
             | this paper.
        
               | UncleMeat wrote:
               | I would go even further and say that Oakland is _the_
               | most prestigious security conference. That this kind of
               | work was accepted is fairly baffling to me, since I'd
               | expect both ethical concerns and also concerns about the
               | "duh" factor.
               | 
               | I'm a little salty because I personally had two papers
               | rejected by Oakland on the primary concern that their
               | conclusions were too obvious already. I'd expect
               | everybody to already believe that it wouldn't be too hard
               | to sneak vulns into OSS patches.
        
             | fencepost wrote:
             | If this is actually presented, someone present should also
             | make the following clear: "As a result of the methods used
             | by the presenters, the entire University of Minnesota
             | system has been banned from the kernel development process
             | and the kernel developers have had to waste time going back
             | and re-evaluating all past submissions from the university
             | system. The kernel team would also like to advise other
             | open-source projects to carefully review all UMN
             | submissions in case these professors have simply moved on
             | to other projects."
        
         | Igorvelky wrote:
         | they are mentally retarded
         | 
         | END OF STATEMENT
        
         | andi999 wrote:
         | Their first suggestion to the process is pure gold:"OSS
         | projects would be suggested to update the code of conduct,
         | something like "By submitting the patch, I agree to not intend
         | to introduce bugs""
         | 
         | Like somebody picking your locks and then suggesting, 'to stop
         | this, one approach would be to post a sign "do not pick"'
        
           | op00to wrote:
           | The sign is to remind honest people that the lock is
           | important, and we do not appreciate game playing here.
        
             | thedanbob wrote:
             | Honest people don't see a lock and think, "Ok, they don't
             | want me going in there, but I bet they would appreciate
             | some free pentesting."
        
             | andi999 wrote:
             | It is ok to put the sign. But not for the person who
             | transgressed to suggest 'why don't you put a sign'
        
       | jvanderbot wrote:
       | The most recent possible-double-free was from a bad static
       | analyzer, wasn't it? That could have been a good-faith commit,
       | which is unfortunate given the deliberate bad-faith commits
       | prior.
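       | 
       | For context, the failure mode at issue looks roughly like
       | this (a minimal userspace sketch with hypothetical names,
       | not the actual patch): a shallow analyzer sees an
       | allocation without a matching free, suggests adding one,
       | and misses that a callee already frees on the error path.
       | 
       |   #include <stdlib.h>
       | 
       |   struct foo { int x; };
       | 
       |   /* frees f itself on failure; this is the detail a
       |    * shallow analyzer misses */
       |   static int helper(struct foo *f)
       |   {
       |       if (f->x < 0) {
       |           free(f);
       |           return -1;
       |       }
       |       return 0;
       |   }
       | 
       |   int main(void)
       |   {
       |       struct foo *f = malloc(sizeof(*f));
       |       if (!f)
       |           return 1;
       |       f->x = -1;
       |       if (helper(f) < 0)
       |           free(f);  /* double free: helper() freed f */
       |       return 0;
       |   }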
        
       | gumby wrote:
       | This is the kind of study (unusual for CS) that requires IRB
       | approval. I wonder if they thought to seek approval, and if they
       | received it?
        
       | ineedasername wrote:
       | I don't know how their IRB approved this, although we also don't
       | know what details the researchers gave the IRB.
       | 
       | It had a high human component because it was humans making many
       | decisions in this process. In particular, there was the potential
       | to cause maintainers personal embarrassment or professional
       | censure by letting through a bugged patch.
       | 
       | If the researchers even considered this possibility, I doubt the
       | IRB would have approved this experimental protocol if laid out in
       | those terms.
        
       | ilammy wrote:
       | So many comments here refrain, "They should have asked for
       | consent first". But wouldn't that be detrimental to the research
       | subject? Specifically, _stealthily_ introducing security
       | vulnerabilities. How should a consent request look to preserve
       | the surprise factor? A university approaches you and says, "Would
       | it be okay for us to submit some patches with vulnerabilities for
       | review, and you try and guess which ones are good and which ones
       | have bugs?" Of course you would be extra careful when reviewing
       | those specific patches. As if real malicious actors would be so
       | kind and ethical as to announce their intentions beforehand.
        
         | plesiv wrote:
         | Ethics in research matters. You don't see vaccine researchers
         | shooting up random unconsenting people from the street with
         | the latest vaccine prototypes. Researchers have to come up
         | with a reasonable research protocol. Just because the ethical
         | way to do what the UMN folks intended isn't immediately
         | obvious to you doesn't mean that it doesn't exist.
        
         | hn8788 wrote:
         | It could have been done similar to how typosquatting research
         | was done for ruby and python packages. The owners of the
         | package repositories were contacted, and the researchers waited
         | for approval before starting. I wasn't a fan of that experiment
         | either for other reasons, but hiding it from everyone isn't the
         | only option. Also, "you wouldn't have allowed me to experiment
         | on you if I'd asked first" is a pretty disgusting attitude to
         | have.
        
           | DetroitThrow wrote:
           | "you wouldn't have allowed me to experiment on you if I'd
           | asked first"
           | 
           | I'm shocked the researchers thought this wasn't a textbook
           | violation of research ethics - we talk about the effects of
           | the Tuskegee Study on the perception of the greater
           | scientific community today.
           | 
           | This is a smaller transgression that hasn't resulted in
           | deaths, but when it would not have been difficult to do the
           | research ethically AND we now spend time educating on the
           | importance of ethics, it's perhaps even more frustrating.
        
         | enneff wrote:
         | Well, yeah, but the priority here shouldn't be to allow the
         | researchers to do their work. If they can't do their research
         | ethically then they just can't do it; too bad for them.
        
         | DetroitThrow wrote:
         | >So many comments here refrain, "They should have asked for
         | consent first".
         | 
         | The Linux kernel is a very large space with many maintainers.
         | It would be possible to reach out to the leadership of the
         | project to ask for approval without notifying maintainers and
         | have the leadership announce "Hey, we're going to start
         | allowing experiments on the contribution process, please let us
         | know if you'd like to opt out", or at least _work towards_
         | creating such a process to allow experiments on maintainers
         | /commit approval process while also under the overall
         | expectation that experiments may happen but that *they will be
         | reverted before they reach stable trees*.
         | 
         | The way they did their work could impact more than just the
         | maintainers and affect the reputation of the Linux project, and
         | to me it's very hard to see how it couldn't have been done in a
         | way that meets standards for ethical research.
        
         | Werewolf255 wrote:
         | Yeah we get to hold people who are claiming to act in good
         | faith to a higher standard than active malicious attackers.
         | Their actions do not comport with ethical research practices.
        
       | mryalamanchi wrote:
       | They just wasted the community's time. No wonder Linus Torvalds
       | goes batshit crazy on these kinds of people!
        
       | bloat wrote:
       | It's been a long time since I saw this usage of the word "plonk".
       | Brought back some memories.
       | 
       | https://en.wikipedia.org/wiki/Plonk_(Usenet)
        
       | rzwitserloot wrote:
       | The professor gets exactly what they want here, no?
       | 
       | "We experimented on the linux kernel team to see what would
       | happen. Our non-double-blind test of 1 FOSS maintenance group has
       | produced the following result: We get banned and our entire
       | university gets dragged through the muck 100% of the time".
       | 
       | That'll be a fun paper to write, no doubt.
       | 
       | Additional context:
       | 
       | * One of the committers of these faulty patches, Aditya Pakki,
       | writes a reply taking offense at the 'slander' and indicating
       | that the commit was in good faith[1].
       | 
       | Greg KH then immediately calls bullshit on this, and then
       | proceeds to ban the entire university from making commits [2].
       | 
       | The thread then gets down to business and starts coordinating
       | revert patches for everything committed by University of
       | Minnesota email addresses.
       | 
       | As was noted, this obviously has a bunch of collateral damage,
       | but such drastic measures seem like a balanced response,
       | considering that this university decided to _experiment_ on the
       | kernel team and then lie about it when confronted (presumably,
       | that lie is simply continuing their experiment of 'what would
       | someone intentionally trying to add malicious code to the kernel
       | do').
       | 
       | * Abhi Shelat also chimes in with links to UMN's Institutional
       | Review Board along with documentation on the UMN policies for
       | ethical review. [3]
       | 
       | [1]: Message has since been deleted, so I'm going by the content
       | of it as quoted in Greg KH's followup, see footnote 2
       | 
       | [2]: https://lore.kernel.org/linux-
       | nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
       | 
       | [3]: https://lore.kernel.org/linux-
       | nfs/3B9A54F7-6A61-4A34-9EAC-95...
        
         | gregkh wrote:
         | Thanks for the support.
         | 
         | I also now have submitted a patch series that reverts the
         | majority of all of their contributions so that we can go and
         | properly review them at a later point in time:
         | https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
        
           | Cpoll wrote:
           | A lot of people are talking about the ethical aspects, but
           | could you talk about the security implications of this
           | attack?
           | 
           | From a different thread: https://lore.kernel.org/linux-
           | nfs/CADVatmNgU7t-Co84tSS6VW=3N...
           | 
           | > A lot of these have already reached the stable trees.
           | 
           | Apologies in advance if my questions are off the mark, but
           | what does this mean in practice?
           | 
           | 1. If UMN hadn't brought any attention to these, would they
           | have been caught, or would they have eventually wound up in
           | distros? 'stable' is the "production" branch?
           | 
           | 2. What are the implications of this? Is it possible that
           | other malicious actors have done things like this without
           | being caught?
           | 
           | 3. Will there be a post-mortem for this attack/attempted
           | attack?
        
             | WmyEE0UsWAwC2i wrote:
             | I agree with the sentiment. For a project of this
             | magnitude, maybe it comes down to developing some kind of
             | static analysis, along with refactoring the code to make
             | that possible.
             | 
             | That would address the attack surface described in the
             | paper (section IV), since the acceptance process (section
             | III) is a manpower issue.
        
               | spullara wrote:
               | Ironically, one of their attempts was submitting changes
               | that were allegedly recommended by a static analysis
               | tool.
        
             | [deleted]
        
             | neoflame wrote:
             | I don't think the attack described in the paper actually
             | succeeded at all, and in fact the paper doesn't seem to
             | claim that it did.
             | 
             | Specifically, I think the three malicious patches described
             | in the paper are:
             | 
             | - UAF case 1, Fig. 11 => crypto: cavium/nitrox: add an
             | error message to explain the failure of
             | pci_request_mem_regions, https://lore.kernel.org/lkml/20200
             | 821031209.21279-1-acostag.... The day after this patch was
             | merged into a driver tree, the author suggested calling
             | dev_err() before pci_disable_device(), which presumably was
             | their attempt at maintainer notification; however, the code
             | as merged doesn't actually appear to constitute a
             | vulnerability because pci_disable_device() doesn't appear
             | to free the struct pci_dev.
             | 
             | - UAF case 2, Fig. 9 => tty/vt: fix a memory leak in
             | con_insert_unipair, https://lore.kernel.org/lkml/2020080922
             | 1453.10235-1-jameslou... This patch was not accepted.
             | 
             | - UAF case 3, Fig. 10 => rapidio: fix get device imbalance
             | on error, https://lore.kernel.org/lkml/20200821034458.22472
             | -1-acostag.... Same author as case 1. This patch was not
             | accepted.
             | 
             | This is not to say that open-source security is not a
             | concern, but IMO the paper is deliberately misleading in an
             | attempt to overstate its contributions.
             | 
             | edit: wording tweak for clarity
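             | 
             | For readers unfamiliar with the bug class: the shape of
             | a use-after-free is roughly the following (a minimal
             | userspace sketch with hypothetical names, not code from
             | the actual patches). The attack only works if the
             | cleanup call really does free the object, which is
             | exactly what pci_disable_device() does not do:
             | 
             |   #include <stdio.h>
             |   #include <stdlib.h>
             | 
             |   struct dev { int id; };
             | 
             |   /* hypothetical cleanup helper that frees its argument */
             |   static void disable_dev(struct dev *d)
             |   {
             |       free(d);
             |   }
             | 
             |   int main(void)
             |   {
             |       struct dev *d = malloc(sizeof(*d));
             |       if (!d)
             |           return 1;
             |       d->id = 42;
             |       disable_dev(d);
             |       /* use-after-free: d was released above */
             |       printf("disabled device %d\n", d->id);
             |       return 0;
             |   }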
        
               | ununoctium87 wrote:
               | > the paper is deliberately misleading in an attempt to
               | overstate its contributions.
               | 
               | Welcome to academia. Where a large number of students are
               | doing it just for the credentials
        
               | DSingularity wrote:
               | What else do you expect? The incentive structure in
               | academia pushes students to do this.
               | 
               | Immigrant graduate students with an uncertain future if they
               | fail? Check.
               | 
               | Vulnerable students whose livelihood is at mercy of their
               | advisor? Check.
               | 
               | Advisor whose career depends on a large number of
               | publication bullet points in their CV? Check.
               | 
               | Students who cheat their way through to publish? Duh.
        
             | cutemonster wrote:
             | I wonder about this too.
             | 
             | To me, it seems to indicate that a nation-state-supported
             | evil hacker org (maybe posing as an individual) could place
             | their own exploits in the kernel. Let's say they contribute
             | 99.9% useful code, solve real problems, build trust over
             | some years, and only rarely write an evil hard to notice
             | exploit bug. And then, everyone thinks that obviously it
             | was just an ordinary bug.
             | 
             | Maybe they can pose as 10 different people, in case some of
             | them get banned.
        
               | Spooky23 wrote:
               | You're still in a better position with open source. The
               | same thing happens in closed source companies.
               | 
               | See: https://www.reuters.com/article/us-usa-security-
               | siliconvalle...
               | 
               |  _" As U.S. intelligence agencies accelerate efforts to
               | acquire new technology and fund research on
               | cybersecurity, they have invested in start-up companies,
               | encouraged firms to put more military and intelligence
               | veterans on company boards, and nurtured a broad network
               | of personal relationships with top technology
               | executives."_
               | 
               | Foreign countries do the same thing. There are numerous
               | public accounts of Chinese nationals or folks with
               | vulnerable family in China engaging in espionage.
        
               | hanselot wrote:
               | Plus, wouldn't it be much easier to do this under the
               | guise of equality with some quickly thought up trash
               | contract enforced on all developers?
               | 
               | One might even say that while this useless attack is
               | taking place, actual people with lifelong commitment to
               | open source software and user freedom get taken out by
               | the "NaN" flavour "NaN" koolaid of the week.
               | 
               | Soon all that is left that is legal to say is whatever is
               | approved by the "NaN" board. Eventually the number 0 will
               | be found to be exclusionary or accused of "NaN" and we
               | will all be stuck coding unary again.
        
               | baby wrote:
               | Read into the socat Diffie-Hellman backdoor; I found it
               | fascinating at the time.
        
               | TheSpiceIsLife wrote:
               | Isn't what you've described pretty much the very
               | definition of _advanced persistent threat_?
               | 
               | It's _difficult_ to protect against _trusted parties_
               | whom you assume, with good reason, are good-faith actors.
        
             | mk89 wrote:
             | I have the same questions. So far we have focused on how
             | bad these "guys" are. Sure, they could have done it
             | differently, etc. However, they proved a big point: how
             | "easy" it is to manipulate the most used piece of software
             | on the planet.
             | 
             | How to solve this "issue" without putting too much process
             | around it? That's the challenge.
        
               | corty wrote:
               | They proved nothing that wasn't already obvious. A
               | malicious actor can get in vulnerabilities the same way a
               | careless programmer can. Quick, call the press!
               | 
               | And as for the solutions, their contribution is nil. No
               | suggestions that haven't been suggested, tried and done
               | or rejected a thousand times over.
        
           | turndown wrote:
           | Just wanted you to know that I think you're an amazing
           | programmer
        
           | g42gregory wrote:
           | My deepest thanks for all your work, as well as for keeping
           | the standards high and the integrity of the project intact!
        
           | fossuser wrote:
           | I hope they take this bad publicity and stop (rather than
           | escalating stupidity by using non university emails).
           | 
           | What a joke - not sure how they can rationalize this as
           | valuable behavior.
        
             | nomel wrote:
             | It was a real world penetration test that showed some
             | serious security holes in the code analysis/review process.
             | Penetration tests are always only as valuable as your
             | response to them. If they chose to do nothing about their
             | code review/analysis process, with these vulnerabilities
             | that made it in (intentional or not), then yes, the
             | exercise probably wasn't valuable.
             | 
             | Personally, I think all contributors should be considered
             | "bad actors" in open source software. NSA, some university
             | mail address, etc. I consider _myself_ a bad actor,
             | whenever I write code with security in mind. This is why I
             | use fuzzing and code analysis tools.
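             | 
             | To make that concrete, a fuzz harness is a tiny
             | adversarial entry point (a minimal libFuzzer-style
             | sketch; parse_input() is a hypothetical stand-in for
             | whatever code is under test):
             | 
             |   #include <stddef.h>
             |   #include <stdint.h>
             | 
             |   /* hypothetical function under test: reject short
             |    * inputs, "parse" the rest */
             |   static int parse_input(const uint8_t *data, size_t size)
             |   {
             |       if (size < 4)
             |           return -1;
             |       return data[0] + data[size - 1];
             |   }
             | 
             |   /* libFuzzer calls this repeatedly with mutated inputs;
             |    * build with: clang -fsanitize=fuzzer,address fuzz.c */
             |   int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
             |   {
             |       parse_input(data, size);
             |       return 0;
             |   }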
             | 
             | Banning them was probably the correct action, but not
             | finding value requires intentionally ignoring the very real
             | result of the exercise.
        
               | ben0x539 wrote:
               | A real world penetration test is coordinated with the
               | entity being tested.
        
               | fossuser wrote:
               | Yeah - and usually stops short of causing actual damage.
               | 
               | You don't get to rob a bank and then when caught say "you
               | should thank us for showing your security weaknesses".
               | 
               | In this case they merged actual bugs and now they have to
               | revert that stuff which depending on how connected those
               | commits are to other things could cost a lot of time.
               | 
               | If they were doing this in good faith, they could have
               | stopped short of actually letting the PRs merge (even
               | then it's rude to waste their time this way).
               | 
               | This just comes across to me as an unethical academic
               | with no real valuable work to do.
        
               | dragonwriter wrote:
               | > You don't get to rob a bank and then when caught say
               | "you should thank us for showing your security
               | weaknesses".
               | 
               | Yeah, there's a reason the US response to 9/11 wasn't to
               | name Osama bin Laden "Airline security researcher of the
               | Millennium", and it isn't that "2001 was too early to make
               | that judgement".
        
               | wildmanx wrote:
               | The result is to make sure not to accept anything with
               | the risk of introducing issues.
               | 
               | Any patch coming from somebody having intentionally
               | introduced an issue falls into this category.
               | 
               | So, banning their organization from contributing is
               | exactly the lesson to be learned.
        
               | nomel wrote:
               | I agree, but I would say the better result, most likely
               | unachievable now, would be to fix the holes that require
               | a human's feelings to ensure security. Maybe some shift
               | in that direction could result from this.
        
               | pertymcpert wrote:
               | What exactly did they find?
        
           | tcelvis wrote:
           | Putting the ethical question of the researcher aside, the
           | fact you want to "properly review them at a later point in
           | time" seems to suggest a lack of confidence in the kernel
           | review process.
           | 
           | Since this researcher is apparently not an established figure
           | in the kernel community, my expectation is that the patches
           | have gone through the most rigorous review process. If you
           | think the risk is high that malicious patches from this
           | person have got in, it means that an unknown attacker
           | deliberately crafting a complex kernel loophole would have an
           | even higher chance of getting patches in.
           | 
           | While I think the researcher's actions are certainly out of
           | line, this "I will ban you and revert all your stuff"
           | retaliation seems like an emotional overreaction.
        
             | lacker wrote:
             | _The fact you want to "properly review them at a later
             | point in time" seems to suggest a lack of confidence in the
             | kernel review process._
             | 
             | Basically, _yes_. The kernel review process does not catch
             | 100% of intentionally introduced security flaws. It isn't
             | perfect, and I don't think anyone is claiming that it is
             | perfect. Whenever there's an indication that a group has
             | been intentionally introducing security flaws, it is just
             | common sense to go back and put a higher bar on reviewing
             | it for security.
        
             | FrameworkFred wrote:
             | In a perfect world, I would agree that the work of a
             | researcher who's not an established figure in the kernel
             | community would be met with a relatively high level of
             | scrutiny in review.
             | 
             | But realistically, when you find out a submitter had
             | malicious intent, I think it's 100% correct to revisit any
             | and all associated submissions since it's quite a different
             | thing to inspect code for correctness, style, etc. as you
             | would in a typical code review process versus trying to
             | find some intentionally obfuscated security hole.
             | 
             | And, frankly, who has time to pick the good from the bad in
             | a case like this? I don't think it's an overreaction at
             | all. IMO, it's a reasonable simplification to assume that
             | all associated contributions may be tainted.
        
             | wildmanx wrote:
             | > This "I will ban you and revert all your stuff"
             | retaliation seems emotional overaction.
             | 
             | Fool me once. Why should they waste their time with extra
             | scrutiny next time? Somebody deliberately misled them, so
             | that's it, banned from the playground. It's just a no-
             | nonsense attitude, without which you'd get nothing done.
             | 
             | If you had a party in your house, and some guest you don't
             | know, whom you invited in assuming good faith, turned
             | out to have deliberately pooped on the rug in your spare
             | guest room while nobody was looking .. next time you have a
             | party, what do you do? Let them in but keep an eye on them?
             | Ask your friends to never let this guest alone? Or just
             | simply to deny entrance, so that you can focus on having
             | fun with people you trust and newcomers who have not shown
             | any malicious intent?
             | 
             | I know what I'd do. Life is too short for BS.
        
               | girvo wrote:
               | You're completely right, except in this case it's banning
               | anyone who happened to live in the same house as the
               | offender, at any point in time...
        
               | hanselot wrote:
               | Sounds more or less like the way punishment is handled in
               | modern society.
        
             | matheusmoreira wrote:
             | > seems to suggest a lack of confidence in the kernel
             | review process
             | 
             | The reviews were done by kernel developers who assumed good
             | faith. That assumption has been proven false. It makes
             | sense to review the patches again.
        
             | philjackson wrote:
             | I mean, it's the linux kernel. Think about what it's
             | powering and how much risk there is involved with these
             | patches. Review processes obviously aren't perfect, but
             | usually patches aren't constructed to sneak sketchy code
             | through. You'd usually approach a review in good faith.
             | 
             | Given that some patches may have made it through with holes,
             | you pull them and re-approach them with a different
             | mindset.
        
             | renewiltord wrote:
             | > _Since this researcher is apparently not an established
             | figure in the kernel community, my expectation is the
             | patches have gone through the most rigorous review process_
             | 
             | I think the best way to make this expectation reality is
             | putting in the work. The second best way is paying. Doing
             | neither and holding the expectation is a way to exist
             | certainly, but has no impact on the outcome.
        
             | nwiswell wrote:
             | > the fact you want to "properly review them at a later
             | point in time" seems to suggest a lack of confidence in the
             | kernel review process.
             | 
             | I don't think this necessarily follows. Rather it is
             | fundamentally a resource allocation issue.
             | 
             | The kernel team obviously doesn't have sufficient resources
             | to conclusively verify that every patch is bug-free,
             | particularly if the bugs are intentionally obfuscated.
             | Instead it's a more nebulous standard of "reasonable
             | assurance", where "reasonable" is a variable function of
             | what must be sacrificed to perform a more thorough review,
             | how critical the patch appears at first impression, and
             | information relating to provenance of the patch.
             | 
             | By assimilating new information about the provenance of the
             | patch (that it's coming from a group of people known to add
             | obfuscated bugs), that standard rises, as it should.
             | 
             | Alternatively stated, there is some desired probability
             | that an approved patch is bug-free (or at least free of any
             | bugs that would threaten security). Presumably, the review
             | process applied to a patch from an anonymous source
             | (meaning the process you are implying suffers from a lack
             | of confidence) is sufficient such that the Bayesian prior
             | for a hypothetical "average anonymous" reviewed patch
             | reaches the desired probability. But the provenance raises
             | the likelihood that the source is malicious, which drops
             | the probability such that the typical review for an
             | untrusted source is not sufficient, and so a "proper
             | review" is warranted.
             | 
             | > it means that an unknown attacker deliberately crafting a
             | complex kernel loophole would have an even higher chance of
             | getting patches in.
             | 
             | That's hard to argue with, and ironically the point of the
             | research at issue. It does imply that there's a need for
             | some kind of "trust network" or interpersonal vetting to
             | take the load off of code review.
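             | 
             | A toy version of that Bayesian update, with purely
             | made-up numbers: suppose review misses 5% of malicious
             | patches, and 1 in 1000 anonymous patches is malicious,
             | versus a 50% prior for a known-bad source.
             | 
             |   #include <stdio.h>
             | 
             |   /* posterior P(malicious | approved) via Bayes' rule;
             |    * miss = P(approved | malicious), prior = P(malicious),
             |    * benign patches are assumed always approved */
             |   static double posterior(double miss, double prior)
             |   {
             |       return miss * prior /
             |              (miss * prior + 1.0 * (1.0 - prior));
             |   }
             | 
             |   int main(void)
             |   {
             |       /* purely illustrative numbers */
             |       printf("anonymous source: %.5f\n",
             |              posterior(0.05, 0.001));   /* ~0.00005 */
             |       printf("known-bad source: %.5f\n",
             |              posterior(0.05, 0.5));     /* ~0.04762 */
             |       return 0;
             |   }
             | 
             | Roughly 1 in 21 approved patches malicious versus 1 in
             | 20,000, about a thousandfold worse; hence the "proper
             | review".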
        
             | tcelvis wrote:
             | I guess what I am trying to get at is that this
             | researcher's action does have its merit. This event does
             | raise awareness of what a sophisticated attacker group
             | might try to do to the kernel community. Admitting this
             | would be the first step to hardening the kernel review
             | process to prevent this kind of harm from happening again.
             | 
             | What I strongly disapprove of is that the researcher
             | apparently took no steps to prevent real-world
             | consequences of malicious patches getting into the kernel.
             | I think the researcher should:
             | 
             | - Notify the kernel community promptly once malicious
             | patches get past all review processes.
             | 
             | - Time these actions well such that malicious patches won't
             | get into a stable branch before they can be reverted.
             | 
             | ----------------
             | 
             | Edit: reading the paper provided above, it seems that they
             | did do both actions above. From the paper:
             | 
             | > Ensuring the safety of the experiment. In the experiment,
             | we aim to demonstrate the practicality of stealthily
             | introducing vulnerabilities through hypocrite commits. Our
             | goal is not to introduce vulnerabilities to harm OSS.
             | Therefore, we safely conduct the experiment to make sure
             | that the introduced UAF bugs will not be merged into the
             | actual Linux code. In addition to the minor patches that
             | introduce UAF conditions, we also prepare the correct
             | patches for fixing the minor issues. We send the minor
             | patches to the Linux community through email to seek their
             | feedback. Fortunately, there is a time window between the
             | confirmation of a patch and the merging of the patch. Once
             | a maintainer confirmed our patches, e.g., an email reply
             | indicating "looks good", we immediately notify the
             | maintainers of the introduced UAF and request them to not
             | go ahead to apply the patch. At the same time, we point out
             | the correct fixing of the bug and provide our correct
             | patch. In all the three cases, maintainers explicitly
             | acknowledged and confirmed to not move forward with the
             | incorrect patches. All the UAF-introducing patches stayed
             | only in the email exchanges, without even becoming a Git
             | commit in Linux branches. Therefore, we ensured that none
             | of our introduced UAF bugs was ever merged into any branch
             | of the Linux kernel, and none of the Linux users would be
             | affected.
             | 
             | So, unless the kernel maintenance team has another side of
             | the story, the questions of ethics could only go as far as
             | "wasting the kernel community's time" rather than creating
             | real-world loopholes.
        
               | db48x wrote:
               | That paper came out a year ago, and they got a lot of
               | negative feedback about it, as you might expect. Now they
               | appear to be doing it again. It's a different PhD student
               | with the same advisor as last time.
               | 
               | This time two reviewers noticed that the patch was
               | useless, and then Greg stepped in (three weeks later)
               | saying that this was a repetition of the same bad
               | behavior from the first study. This got a response from
               | the author of the patch, who said that this and other
               | statements were "wild accusations that are bordering on
               | slander".
               | 
               | https://lore.kernel.org/linux-
               | nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
        
               | MR4D wrote:
               | > The questions of ethics could only go as far as
               | "wasting kernel community's time" rather than creating
               | real world loop holes.
               | 
               | Under that logic, it's ok for me to run a pen test
               | against your computers, right? ...because I'm only
               | wasting your time.... Or maybe to hack your bank account,
               | but return the money before you notice.
               | 
               | Slippery slope, my friend.
        
               | nomel wrote:
               | Ethics aside, warning someone that a targeted penetration
               | test is coming will change their behavior.
               | 
               | > Under that logic, it's ok for me to run a pen test
               | against your computers, right?
               | 
               | I think the standard for an individual user should be
               | different from that for the organization which is, in the
               | end, responsible for the security of millions of those
               | individual users. One annoys one person; the other
               | prevents millions from being annoyed.
               | 
               | Donate to your open source projects!
        
               | josefx wrote:
               | > Ethics aside, warning someone that a targeted
               | penetration test is coming will change their behavior.
               | 
               | They could discuss the idea and then perform the test
               | months later? With the amount of patches that had to be
               | reverted as a precaution, the test would have been well
               | hidden in the usual workload even if the maintainers knew
               | that someone at some point in the past mentioned the
               | possibility of a pen test. How long can the average human
               | stay vigilant if you tell them they will be robbed some
               | day this year?
        
               | ssivark wrote:
               | To add... Ideally, they should have looped in Linus, or
               | someone high-up in the chain of maintainers _before_
               | running an experiment like this. Their actions might have
               | been in good faith, but the approach they undertook
               | (including the email claiming slander) is seriously
               | irresponsible and a sure shot way to wreck relations.
        
               | anonymousiam wrote:
               | Greg KH is "someone high-up in the chain." I remember
               | submitting patches to him over 20 years ago. He is one of
               | Linus's trusted few.
        
               | ssivark wrote:
               | Yes, and the crux of the problem is that they didn't get
               | assent/buy-in from someone like that before running the
               | experiment.
        
               | camjohnson26 wrote:
               | We threw people off buildings to gauge how they would
               | react, but were able to catch all 3 subjects in a net
               | before they hit the ground.
               | 
               | Just because their actions didn't cause damage doesn't
               | mean they weren't negligent.
        
               | nomel wrote:
               | Strangers submitting patches to the kernel is completely
               | normal, where throwing people off is not. A better
               | analogy would involve decades of examples of bad actors
               | throwing people off the bridge, then being surprised when
               | someone who appears friendly does it.
        
               | hervature wrote:
               | Your analogy also isn't the best because it heavily
               | suggests the nefarious behavior is easy to identify
               | (throwing people off a bridge). This is more akin to
               | people helping those in need to cross a street. At first,
               | it is just people helping people. Then, someone comes
               | along and starts to "help" so that they can steal money
               | (introduce vulnerabilities) to the unsuspecting targets.
               | Now, the street-crossing community needs to introduce
               | processes (code review) to look out for these bad actors.
               | Then, someone who works for the city and is wearing the
               | city uniform (University of Minnesota CS department)
               | comes along saying they're here to help, and the community
               | is a bit more trustful as they have dealt with other city
               | workers before. The city worker then steals from the
               | people in need and then proclaims "Aha, see how easy it
               | is!" No one is surprised and just thinks they are
               | assholes.
               | 
               | Sometimes, complex situations don't have simple
               | analogies. I'm not even sure mine is 100% correct.
        
               | camjohnson26 wrote:
               | While submitting patches is normal, submitting malicious
               | patches is abnormal and antisocial. Certainly bad actors
               | will do it, but by that logic these researchers are bad
               | actors.
               | 
               | Just like bumping into somebody on the roof is normal,
               | but you should always be aware that there's a chance they
               | might try to throw you off. A researcher highlighting
               | this fact by doing it isn't helping, even if they
               | mitigate their damage.
               | 
               | A much better way to show what they are attempting is
               | to review historic commits and try to find places where
               | malicious code slipped through, and how the community
               | responded. Or to solicit experimenters to follow normal
               | processes on a fake code base for a few weeks.
        
               | detaro wrote:
               | > _This event does rise awareness of what sophisticated
               | attacker group might try to do to kernel community._
               | 
               | The limits of code review are quite well known, so it
               | appears very questionable what scientific knowledge is
               | actually gained here. (Indeed, especially because of the
               | known limits, you could very likely show them without
               | misleading people, because even people knowing to be
               | suspicious are likely to miss problems, if you really
               | wanted to run a formal study on some specific aspect. You
               | could also study the history of in-the-wild bugs to learn
               | about the review process)
        
               | Supermancho wrote:
               | > The limits of code review are quite well known
               | 
               | That's factually incorrect. The arguments over what
               | constitutes proper code reviews continues to this day
               | with few comprehensive studies about syntax, much less
               | code reviews - not "do you have them" or "how many
               | people" but methodology.
               | 
               | > it appears very questionable what scientific knowledge
               | is actually gained here
               | 
               | The knowledge isn't from the study existing, but the
               | analysis of the data collected.
               | 
               | I'm not even sure why people are upset at this, since
               | it's a very modern approach to investigating how many
               | projects are structured to this day. This was a daring
               | and practical effort.
        
               | acrispino wrote:
               | Does experimenting on people without their knowledge or
               | consent pose an ethical question?
        
               | nixpulvis wrote:
               | Obviously.
        
             | kevingadd wrote:
             | Not all kernel reviewers are being paid by their employer
             | to review patches. Kernel reviews are "free" to the
             | contributor because everyone operates on the assumption
             | that every contributor wants to make Linux better by
             | contributing high-quality patches. In this case, multiple
             | people from the University have decided that reviewers'
             | time isn't valuable (so it's acceptable to waste it) and
             | that the quality of the Kernel isn't important (so it's
             | acceptable to make it worse on purpose). A ban is a
             | completely appropriate response to this, and reverting
             | until you can review all the commits is an appropriate
             | safety measure.
             | 
             | Whether or not this indicates flaws in the review process
             | is a separate issue, but I don't know how you can justify
             | not reverting all the commits. It'd be highly irresponsible
             | to leave them in.
        
             | theknocker wrote:
             | >I've never performed any meaningful debugging or
             | postmortem ever in my life and might not even know how to
             | program at all.
        
           | inglor wrote:
           | Just wanted to say thanks for your work!
           | 
           | As an OSS maintainer (Node.js and a bunch of popular JS libs
           | with millions of weekly downloads) - I feel how _tempting_ it
           | is to trust people and assume good faith. Often since people
           | took the time to contribute you want to be "on their side"
           | and help them "make it".
           | 
           | Identifying and then standing up to bad-faith actors is
           | extremely important and thankless work. Especially ones that
           | apparently seem to think it's fine to experiment on humans
           | without consent.
           | 
           | So thanks. Keep it up.
        
           | saghm wrote:
           | Thank you for all your excellent work!
        
           | nwelna wrote:
           | As an alumnus of the University of Minnesota's program I am
           | appalled this was even greenlit. It reflects poorly on all
           | graduates of the program, even those uninvolved. I am
           | planning to email the department head with my disapproval as
           | an alumnus, and I am deeply sorry for the harm this caused.
        
             | ksec wrote:
             | I am wondering if UMN will now get a bad name in Open
             | Source, and whether any contribution from their email
             | domain will require extra care.
             | 
             | And if this escalates to mainstream media, it might also
             | damage future employment prospects for UMN CS students.
             | 
             | Edit: Looks like they made a statement.
             | https://cse.umn.edu/cs/statement-cse-linux-kernel-
             | research-a...
        
               | djbebs wrote:
               | It should. Ethics begins at the top, and if the
               | university has shown itself to be this untrustworthy, then
               | no trust can be had in them or any students they
               | implicitly endorse.
               | 
               | As far as I'm concerned this university and all of its
               | alumni are radioactive.
        
               | binbag wrote:
               | That's a bit much, surely. I think the ethics committee
               | probably didn't do a great job in understanding that this
               | was human research.
        
               | fighterpilot wrote:
               | Their graduates have zero culpability here (unless they
               | were involved). Your judgement of them is unfair.
        
             | [deleted]
        
             | anp wrote:
             | Based on my time in a university department you might want
             | to cc whoever chairs the IRB or at least oversees its
             | decisions for the CS department. Seems like multiple
             | incentives and controls failed here, good on you for
             | applying the leverage available to you.
        
               | tw04 wrote:
               | I'm genuinely curious how this was positioned to the IRB
               | and if they were clear that what they were actually
               | trying to accomplish was social engineering/manipulation.
               | 
               | Being a public university, I hope at some point they
               | address this publicly as well as list the steps they are
               | (hopefully) taking to ensure something like this doesn't
               | happen again. I'm also not sure how they can continue to
               | employ the prof in question and expect the open source
               | community to ever trust them to act in good faith going
               | forward.
        
               | detaro wrote:
               | first statement + commentary from their associate
               | department head: https://twitter.com/lorenterveen/status/
               | 1384954220705722369
        
               | woofie11 wrote:
               | Wow. Total sleazeball. This appears to not be his first
               | time using unwitting research subjects.
               | 
               | Source:
               | 
               | https://scholar.google.com/scholar?hl=en&as_sdt=0%2C22&q=
               | Lor...
               | 
               | This is quite literally the first point of the Nuremberg
               | code research ethics are based on:
               | 
               | https://en.wikipedia.org/wiki/Nuremberg_Code#The_ten_poin
               | ts_...
               | 
               | This isn't an individual failing. This is an
               | institutional failing. This is the sort of thing which
               | someone ought to raise with OMB.
               | 
               | He literally points to how Wikipedia needed to respond
               | when he broke the rules:
               | 
               | https://en.wikipedia.org/wiki/Wikipedia:What_Wikipedia_is
               | _no...
        
               | chx wrote:
               | They claim they got the IRB to say it's IRB-exempt.
        
               | woofie11 wrote:
               | A lot of IRBs are a joke.
               | 
               | The way I've seen Harvard, Stanford, and a few other
               | university researchers dodge IRB review is by doing
               | research in "private" time in collaboration with a
               | private entity.
               | 
               | There is no effective oversight over IRBs, so they really
               | range quite a bit. Some are really stringent and some
               | allow anything.
        
           | devit wrote:
           | Well, you or whoever was the responsible maintainer
           | completely failed in reviewing these patches, which is your
           | whole job as a maintainer.
           | 
           | Just reverting those patches (which may well be correct)
           | makes no sense, you and/or other maintainers need to properly
           | review them after your previous abject failure at doing so,
           | and properly determine whether they are correct or not, and
           | if they aren't how they got merged anyway and how you will
           | stop this happening again.
           | 
           | Or I suppose step down as maintainers, which may be
           | appropriate after a fiasco of this magnitude.
        
             | lacker wrote:
             | On the contrary, it would be the easy, lazy way out for a
             | maintainer to say "well, this incident was a shame, now let's
             | forget about it." The extra work the kernel devs are
             | putting in here should be commended.
             | 
             | In general, it is the wrong attitude to say, oh we had a
             | security problem. What a fiasco! Everyone involved should
             | be fired! With a culture like that, all you guarantee is
             | that people cover up the security issues that inevitably
             | occur.
             | 
             | Perhaps this incident actually does indicate that kernel
             | code review procedures should be changed in some way. I
             | don't know, I'm not a kernel expert. But the right way to
             | do that is with a calm postmortem after appropriate
             | immediate actions are taken. Rolling back changes made by
             | malicious actors is a very reasonable immediate action to
             | take. After emotions have cooled, then it's the right time
             | to figure out if any processes should be changed in the
             | future. And kernel devs putting in extra work to handle
             | security incidents should be appreciated, not criticized
             | for their imperfection.
        
         | dfrankow wrote:
         | https://twitter.com/UMNComputerSci/status/138496371833373082...
         | 
         | "The University of Minnesota Department of Computer Science &
         | Engineering takes this situation extremely seriously. We have
         | immediately suspended this line of research."
        
         | jvanderbot wrote:
         | So, the patch was about a possible double-free, presumably
         | detected by a bad static analyzer. Couldn't this patch have
         | been done in good faith? That's not at all impossible.
         | 
         | However, the prior activity of submitting bad-faith code is
         | indeed pretty shameful.
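         | 
         | (Edit: for anyone unfamiliar with the bug class, here is a
         | minimal sketch in plain userspace C of the double-free pattern
         | a static analyzer might flag. This is illustrative only, not
         | the patch in question:
         | 
         |   #include <stdlib.h>
         |   #include <string.h>
         | 
         |   struct buf { char *data; };
         | 
         |   /* On failure this helper frees b->data but leaves the
         |    * caller holding a dangling pointer. */
         |   static int fill(struct buf *b, const char *src)
         |   {
         |       b->data = malloc(64);
         |       if (!b->data)
         |           return -1;
         |       if (strlen(src) >= 64) {
         |           free(b->data);   /* first free */
         |           return -1;       /* b->data still points here */
         |       }
         |       strcpy(b->data, src);
         |       return 0;
         |   }
         | 
         |   int main(void)
         |   {
         |       struct buf b = { 0 };
         |       if (fill(&b, "a source string comfortably longer "
         |                    "than sixty-four characters, so fill() "
         |                    "fails") != 0)
         |           free(b.data);    /* second free: the double-free */
         |       return 0;
         |   }
         | 
         | A good analyzer tracks ownership across the call and reports
         | the second free; a bad one misses it, or reports a
         | "double-free" where none exists.)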
        
           | machello13 wrote:
           | I'm not a linux kernel maintainer but it seems like the
           | maintainers all agree it's extremely unlikely a static
           | analyzer could be so wrong in so many different ways.
        
         | killjoywashere wrote:
         | Not that I approve of the methods, but why would an IRB be
         | involved in a computer security study? IRBs are for human
         | subjects research. If we have to run everything that looks like
         | any kind of research through IRBs, the Western gamble on
         | technical advantage is going to run into some very hard times.
        
         | kop316 wrote:
         | The way you phrased that reminds me of a study that showed
         | how parachutes do not in fact save lives (the study was more to
         | show the consequences of extrapolating data, so the result
         | should not be taken seriously):
         | 
         | https://www.bmj.com/content/363/bmj.k5094
        
           | tpoacher wrote:
           | This is now my second favourite paper after the Atlantic
           | salmon in fMRI.
        
             | CommieBobDole wrote:
             | I'm a big fan of Doug Zongker's excellent paper on chicken:
             | 
             | https://isotropic.org/papers/chicken.pdf
        
             | kevmo wrote:
             | link?
        
               | hprotagonist wrote:
               | https://blogs.scientificamerican.com/scicurious-
               | brain/ignobe...
               | 
               | it was, IIRC, a poster, not a paper.
        
               | etiam wrote:
               | Apparently there was a follow-up article:
               | 
               | "Neural Correlates of Interspecies Perspective Taking in
               | the Post-Mortem Atlantic Salmon: An Argument For Proper
               | Multiple Comparisons Correction" (2010, in Journal of
               | Serendipitous and Unexpected Results)
               | 
               | We had the good fortune to have a discussion of the study
               | with comments from the author a few years back:
               | 
               | https://news.ycombinator.com/item?id=15598429
        
             | legostormtroopr wrote:
             | I still prefer the legal article examining Jay-Z's "99
             | Problems" under the Fourth Amendment.
             | 
             | http://pdf.textfiles.com/academics/lj56-2_mason_article.pdf
        
             | optimalsolver wrote:
             | My favorite is "Possible Girls":
             | 
             | https://philpapers.org/archive/sinpg
        
             | slohr wrote:
             | My gateway pub to this type of research was the Stork
             | paper: https://pubmed.ncbi.nlm.nih.gov/14738551/
        
           | joosters wrote:
           | The original referenced paper is also very good:
           | http://elucidation.free.fr/parachuteBMJ.pdf (can't find a
           | better formatted link, sorry)
           | 
           |  _Conclusions: As with many interventions intended to prevent
           | ill health, the effectiveness of parachutes has not been
           | subjected to rigorous evaluation by using randomised
           | controlled trials. Advocates of evidence based medicine have
           | criticised the adoption of interventions evaluated by using
           | only observational data. We think that everyone might benefit
           | if the most radical protagonists of evidence based medicine
           | organised and participated in a double blind, randomised,
           | placebo controlled, crossover trial of the parachute._
           | 
           | With the footnote: _Contributors: GCSS had the original idea.
           | JPP tried to talk him out of it. JPP did the first literature
           | search but GCSS lost it. GCSS drafted the manuscript but JPP
           | deleted all the best jokes. GCSS is the guarantor, and JPP
           | says it serves him right_
        
           | rrmm wrote:
           | I liked this bit, from the footnotes: "Contributors: RWY had
           | the original idea but was reluctant to say it out loud for
           | years. In a moment of weakness, he shared it with MWY and
           | BKN, both of whom immediately recognized this as the best
           | idea RWY will ever have."
        
         | furyofantares wrote:
         | To be clear, the quoted text in your post is presumably your
         | own words, not a quote?
        
         | spullara wrote:
         | Perhaps the Linux kernel team should actively support a Red
         | Team to do this, with a notification sent before any planted
         | patch would be merged into the stable branch.
        
         | Floegipoky wrote:
         | Setting aside the ethical aspects which others have covered
         | pretty thoroughly, they may have violated 18 U.S.C.
         | § 1030(a)(5) or (b). This law is infamously broad and intent is
         | easier to "prove" than most people think, but #notalawyer
         | #notlegaladvice. Please don't misinterpret this as a suggestion
         | that they should or should not be prosecuted.
        
         | ww520 wrote:
         | Well, part of the experiment is to see how deliberately
         | malicious commits are handled. Banning is the result. They got
         | what they
         | wanted. Play stupid game. Win stupid pri[z]e.
        
           | failwhaleshark wrote:
           | Nit: The expression is "Play stupid games, win stupid
           | prizes."
           | 
           | As heard frequently on ASP, along with "Room Temperature
           | Challenge."
        
           | brodock wrote:
           | Isn't trying to break the security of the "entire internet" some
           | kind of crime (despite whatever the excuse is)?
           | 
           | People got swatted for less.
        
           | gervwyk wrote:
           | Well said.
        
         | gher-shyu3i wrote:
         | > The thread then gets down to business and starts coordinating
         | revert patches for everything committed by University of
         | Minnesota email addresses.
         | 
         | What's stopping those bad actors from just using a non-UMN
         | email address?
        
           | failwhaleshark wrote:
           | Were all of the commits from UMN emails GPG signed with
           | countersigned/trusted keys?
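           | 
           | (For what it's worth, per-commit signature status can be
           | checked with something like:
           | 
           |   git log --show-signature -1 <commit>
           | 
           | but kernel patches flow in by email, and what accompanies
           | most commits is the Signed-off-by chain, not a GPG signature
           | from the author.)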
        
           | twic wrote:
           | Nothing. I think the idea is 60% deterrence via collective
           | punishment - "if we punish the whole university, people will
           | be less likely to do this in future" - and 40% "we must do
           | something, and this is something, therefore we must do it".
        
           | blippage wrote:
           | > What's stopping those bad actors from just using a non-UMN
           | email address?
           | 
           | Technically none, but by banning UMN submissions, the kernel
           | team have sent an unambiguous message that their original
           | behaviour is not cool. UMN's name has also been dragged
           | through the mud, as it should be.
           | 
           | Prof Lu exercised poor judgement by getting people to submit
           | malicious patches. To use further subterfuge knowing that
           | you've been already been called out on it would be
           | monumentally bad.
           | 
           | I don't know how far Greg has taken this issue up with the
           | university, but I would expect that any reasonable university
           | would give Lu a strong talking-to.
        
           | tremon wrote:
           | see https://lore.kernel.org/linux-
           | nfs/YIAmy0zgrQW%2F44Hz@kroah.c...
           | 
           |  _If they just want to be jerks, yes. But they can't then
           | use that type of "hiding" to get away with claiming it was
           | done for a University research project as that's even more
           | unethical than what they are doing now._
        
           | mcdonje wrote:
           | How would you catch those?
        
           | Avamander wrote:
           | Literally nothing. Instead of real action to improve the
           | process, it's only feel-good measures without any actual
           | benefit to the kernel's security.
        
             | burnished wrote:
             | Well, it seems unlikely that any other universities will
             | fund or support copycat studies. And I don't mean in the
             | top-down institutional sense I mean in the self-selecting
             | sense. Students will not see messing with the linux kernel
             | as being a viable research opportunity and will not do it.
             | That doesn't seem to be 'feel-good without any actual
             | benefit to the kernel's security'. Sounds like it could
             | function as an effective deterrent.
        
             | yowlingcat wrote:
             | I think you're getting heavily downvoted with your comments
             | on this submission because you seem to be missing a
             | critical sociological dimension of assumed trust. If you
             | submit a patch from a real name email, you get an extra
             | dimension of human trust and likewise an extra dimension of
             | human repercussions if your actions are deemed to be
             | malicious.
             | 
             | You're criticizing the process, but the truth is that
             | without a real name email and an actual human being's
             | "social credit" to be burned, there's no proof these
             | researchers would have achieved the same findings. The more
             | interesting question to me is if they had used anonymous
             | emails, would they have achieved the same results? If so,
             | there might be some substance to your contrarian views that
             | the process itself is flawed. But as it stands, I'm not
             | sure that's the case.
             | 
             | Why? Well, look at what happened. The maintainers found out
             | and blanket banned bad actors. Going to be a little hard to
             | reproduce that research now, isn't it? Arbitraging societal
             | trust for research doesn't just bring ethical challenges
             | but /practical/ ones involving US law and standards for
             | academic research.
        
               | Avamander wrote:
               | > actual human being's "social credit" to be burned
               | 
               | How are kernel maintainers supposed to tell a real
               | person from a fake one? Why is there any inherent
               | trust?
               | 
               | It's clear the system is fallible, but at least now
               | people are humbled enough to not instantly dismiss the
               | risk.
               | 
               | > The maintainers found out and blanket banned bad
               | actors.
               | 
               | With collateral damage.
        
               | burnished wrote:
               | the mail server is usually a pretty good indicator. I'm
               | not an expert but you generally can't get a university
               | email address without being enrolled.
        
             | thenoblesunfish wrote:
             | The point is to make it very obviously not worth it to
             | conduct this kind of unethical research. I don't think UMN
             | is going to be eager to have this kind of attention again.
             | People could always submit bogus patches from random email
             | addresses - this removes the ability to do it under the
             | auspices of a university.
        
               | Avamander wrote:
               | The ethical aspect is separate from the practical aspect
               | that is kernel security.
               | 
               | Sabotage is a very real risk, but we're discussing the
               | ethics of demonstrating the risk instead of potential
               | remediation; that's dangerous and foolish.
        
             | rincebrain wrote:
             | You keep posting all over this discussion about how the
             | Linux maintainers are making a poor choice and shooting the
             | messenger.
             | 
             | What would you like them to do instead or in addition to
             | this?
        
               | GoblinSlayer wrote:
               | Indeed, the situation is bad; nothing can be done. As
               | long as unintentional vulnerabilities can be introduced,
               | they are defenseless against intentional ones, and fixing
               | only the former is already a very big deal.
        
               | Avamander wrote:
               | > What would you like them to do instead or in addition
               | to this?
               | 
               | Update the processes and tools to try and catch such
               | malicious infiltrators. Lynching researchers isn't fixing
               | the actual issue right now.
        
               | rincebrain wrote:
               | I saw at least one developer lamenting that, at the next
               | kernel summit, they may have to bring up mechanisms for
               | treating every committer as malicious by default rather
               | than trusted, so it's quite possible that's going to take
               | place.
        
               | yjftsjthsd-h wrote:
               | > Update the processes and tools to try and catch such
               | malicious infiltrators.
               | 
               | How?
        
               | Avamander wrote:
               | That's what I'm saying kernel maintainers should figure
               | out.
        
               | michaelt wrote:
               | Step 1: When a malicious infiltrator is identified, mount
               | their head on a spike as a warning to others.
        
           | saul_goodman wrote:
           | Nothing. However, if they can't claim ownership of the
           | drama they have caused, it's not usable as publishable
           | research, so it does stop these idiots from causing further
           | drama while working at this institution. For now.
        
             | nickalaso wrote:
             | They don't need to claim ownership of the drama to write
             | the paper, in fact, my first thought was that they would
             | specifically try to avoid taking ownership and instead
             | write a paper "discovering" the vulnerability(ies).
        
           | NotEvil wrote:
           | If they had submitted them from personal or anonymous email
           | addresses, the patches might have come under more scrutiny.
           | 
           | They gained some trust by coming from university email
           | addresses.
        
             | slowmovintarget wrote:
             | Exactly. Users contributing from the University addresses
             | were borrowing against the reputation of the institution.
             | That reputation is now destroyed and each individual
             | contributor must build their own reputation to earn trust.
        
               | nomel wrote:
               | I can't understand this. Why would a university be more
               | trustworthy? Foreign actors have been known to plant
               | students with malicious intent.
               | 
               | edit: Reference to combat the downvote:
               | https://www.nbcnews.com/news/china/american-universities-
               | are...
        
         | flowerlad wrote:
         | But this raises an obvious question: Doesn't Linux need better
         | protection against someone intentionally introducing security
         | vulnerabilities? If we have learned anything from the
         | SolarWinds hack, it is that if there is a way to introduce a
         | vulnerability then someone will do it, sooner or later. And
         | they won't publish a paper about it, so that shouldn't be the
         | only way to detect it!
        
           | corty wrote:
           | That question has been obvious for quite some time. It is
           | always possible to introduce subtle vulnerabilities. Research
           | has tried for decades to come up with a solution, to no real
           | avail.
        
             | danielbot wrote:
             | Assassinating the researchers doesn't help.
        
               | thelean12 wrote:
               | The Linux team found the source of a security threat and
               | have taken steps to prevent that security threat from
               | continuing to attack them.
               | 
               | You can't break into someone's house through their back
               | window, tell the owners what you did, and not expect to
               | get arrested.
        
           | bsder wrote:
           | > Doesn't Linux need better protection against someone
           | intentionally introducing security vulnerabilities?
           | 
           | Yes, it does.
           | 
           | Now, how do you do that other than having fallible people
           | review things?
        
         | nitrogen wrote:
         | Isn't this reaction a bit like the emperor banishing anyone who
         | tells him that his new clothes are fake? Are the maintainers
         | upset that someone showed how easy it is to subvert kernel
         | security?
        
           | enumjorge wrote:
           | It's more like the emperor banning a group of people who put
           | the citizens in danger just so they could show that it could
           | be done. The researchers did something unethical and acted in
           | a self-serving manner. It's no surprise that someone would
           | get kicked out of a community after seriously breaking the
           | trust of that community.
        
           | Mordisquitos wrote:
           | More like the emperor banishing anyone who tries to sell him
           | fake clothes to prove that the emperor will buy fake clothes.
        
         | bluGill wrote:
         | If the IRB is any good the professor doesn't get that.
         | Universities are publish or perish, and the IRB should force
         | the withdrawal of all papers they submitted. This might be
         | enough to fire the professor with cause - including removing
         | any tenure protection they might have - which means they get a
         | bad
         | reference.
         | 
         | I hope we hear from the IRB in about a year stating exactly
         | what happened. Real investigations of bad conduct take time
         | to complete correctly, and I want them to do their job right,
         | so I'll give them that time. (There is the
         | possibility that these are good faith patches and someone in
         | the linux community just hates this person - seems unlikely but
         | until a proper independent investigation is done I'll leave
         | that open.)
        
           | duncaen wrote:
           | See page 9 of the already published paper:
           | 
           | https://raw.githubusercontent.com/QiushiWu/qiushiwu.github.i.
           | ..
           | 
           | > We send the emails to the Linux community and seek their
           | feedback. The experiment is not to blame any maintainers but
           | to reveal issues in the process. The IRB of University of
           | Minnesota reviewed the procedures of the experiment and
           | determined that this is not human research. We obtained a
           | formal IRB-exempt letter. The experiment will not collect any
           | personal data, individual behaviors, or personal opinions. It
           | is limited to studying the patching process OSS communities
           | follow, instead of individuals.
        
             | protomyth wrote:
             | _The IRB of University of Minnesota reviewed the procedures
             | of the experiment and determined that this is not human
             | research._
             | 
             | How is this not human research? They experimented on the
             | reactions of people in a non-controlled environment.
        
               | temp8964 wrote:
               | For an IRB, human research means humans as the subject
               | of the research study. The subject of the study is the
               | kernel
               | patch review process. Yes, the review process does
               | involve humans, but the humans (reviewers) are not the
               | research subject. Not defending the study in any way.
        
               | dragonwriter wrote:
               | > Yes, the review process does involve humans
               | 
               | It doesn't just "involve humans" it is first and foremost
               | the behavior of specific humans.
               | 
               | > but the humans (reviewers) are not the research
               | subject.
               | 
               | The study is exactly studying their behavior in a
               | particular context. They are absolutely the subjects.
        
               | temp8964 wrote:
               | Not sure why you are so obsessed with this. Yes this
               | process does involve humans, but the process has aspects
               | that can be examined independently of humans.
               | 
               | This study does not care about the reviewers; it cares
               | about the process. For example, you can certainly improve
               | the process without replacing any reviewers. It is just
               | blatantly false to claim the process is all about humans.
               | 
               | Another example: the review process could even be
               | totally conducted by AIs. See? The process is not all
               | about
               | humans, or human behavior.
               | 
               | To make this even more understandable, consider the
               | process of building a LEGO set: you need a human to
               | build it, but you can examine the building process
               | without examining the humans who build it.
        
               | fishycrackers wrote:
               | People are obsessed because you're trying to excuse the
             | researchers' behavior as ethical.
               | 
               | "Process" in this case is just another word for people
               | because ultimately, the process being evaluated here is
               | the human interaction with the malicious code being
               | submitted.
               | 
               | Put another way, let's just take out the human reviewer,
               | pretend the maintainers didn't exist. Does the patch get
               | reviewed? No. Does the patch get merged into a stable
               | branch? No. Does the patch get evaluated at all? No. The
               | whole research paper breaks down and becomes worthless if
               | you remove the human factor. The human reviewer is
               | _necessary_ for this research, so this research should be
               | deemed as having human participants.
        
               | temp8964 wrote:
               | See my top comment. I didn't "try to excuse the
               | researchers' behavior as ethical".
        
               | protomyth wrote:
               | _This study does not care about the reviewers, it cares
               | about the process. For example, you can certainly improve
               | the process without replacing any reviewers. It is just
               | blatantly false to claim the process is all about
               | humans._
               | 
               | This was all about the reaction of humans. They sent in
               | text with a deceptive description and tried to get a
               | positive answer even though the text was not wholly what
               | was described. It was a psych study in an uncontrolled
               | environment with people who did not know they were
               | participating in a study.
               | 
               | How they thought this was acceptable with their own
               | institution's Participants' Bill of Rights
               | https://research.umn.edu/units/hrpp/research-
               | participants/pa... is a true mystery.
        
               | temp8964 wrote:
               | No. This is not all about the reaction of humans. This is
               | not a psych study. I have explained this clearly in
               | previous comments. If you believe the process of doing
               | something is all about humans, I have nothing to add.
        
               | protomyth wrote:
               | I'm not the only one https://mobile.twitter.com/SarahJami
               | eLewis/status/1384871385...
        
               | temp8964 wrote:
               | How does this link have anything to do with my comments?
        
             | dekhn wrote:
             | This is exactly what I would have said: this sort of
             | research isn't 'human subjects research' and therefore is
             | not covered by an IRB (whose job it is to protect the
             | university from legal risk, not to identify ethically
             | dubious studies).
             | 
             | It is likely the professor involved here will be fired if
             | they are pre-tenure, or sanctioned if post-tenure.
        
               | dfranke wrote:
               | How in the world is conducting behavioral research on
               | kernel maintainers to see how they respond to subtly-
               | malicious patches not "human subject research"?
        
               | dekhn wrote:
               | are you expecting that science and institutions are
               | rational? If I was on the IRB, I wouldn't have considered
               | this since it's not a sociological experiment on kernel
               | maintainers, it's an experiment to inject vulnerabilities
               | into source code. That's not what IRBs are qualified to
               | evaluate.
        
               | alxlaz wrote:
               | In the restricted sense of Title 45, Part 46, it's
               | probably not quite human subject research (see
               | https://www.hhs.gov/ohrp/regulations-and-
               | policy/regulations/... ).
               | 
               | Of course, there are other ethical and legal requirements
               | that you're bound to, not just this one. I'm not sure
               | which requirements IRBs in the US look into though, it's
               | a pretty murky situation.
        
               | dlgeek wrote:
               | How so?
               | 
               | It seems to qualify per § 46.102(e)(1)(i) ("Human subject
               | means a living individual about whom an investigator [..]
               | conducting research: (i) Obtains information [...]
               | through [...] interaction with the individual, and uses,
               | studies, or analyzes the information [...]")
               | 
               | I don't think it'd qualify for any of the exemptions in
               | 46.104(d): 1 requires an educational setting, 2 requires
               | standard tests, 3 requires pre-consent and interactions
               | must be "benign", 4 is only about the use of PII with no
               | interactions, 5 is only about public programs, 6 is only
               | about food, 7 is about storing PII and not applicable and
               | 8 requires "broad" pre-consent and documentation of a
               | waiver.
        
               | dekhn wrote:
               | rather than arguing about the technical details of the
               | law, let me just clarify: IRBs would actively reject a
               | request to review this. It's not in their (perceived)
               | purview.
               | 
               | It's not worth arguing about this; if you care, you can
               | try to change the law. In the meantime, IRBs will do what
               | IRBs do.
        
               | shkkmo wrote:
               | If the law, as written, does actually classify this as
               | human research, it seems like the correct response is to
               | sue the University for damages under that law.
               | 
               | Since IRBs exist to minimize liability, it seems like
               | that would be the fastest route towards change (assuming
               | you have legal standing).
        
               | dfranke wrote:
               | If there's some deeply legalistic answer explaining how
               | the IRB correctly interpreted their rules to arrive at
               | the exemption decision, I believe it. It'll just go to
               | show the rules are broken.
               | 
               | IRBs are like the TSA. Imposing annoyance and red tape on
               | the honest vast-majority while failing to actually filter
               | the 0.0001% of things they ostensibly exist to filter.
        
               | Tobu wrote:
               | That's still 10 thousand words you're linking to...
               | 
               | I had a look at § 46.104
               | https://www.hhs.gov/ohrp/regulations-and-
               | policy/regulations/... since it mentioned exemptions, and
               | at (d)(3) inside that. It still doesn't apply: there's
               | no agreement to participate, it's not benign, it's not
               | anonymous.
        
             | nkurz wrote:
             | > The IRB of University of Minnesota reviewed the
             | procedures of the experiment and determined that this is
             | not human research.
             | 
             | I'm not sure how it affects things, but I think it's
             | important to clarify that they did not obtain the IRB-
             | exempt letter in advance of doing the research, but after
             | the ethically questionable actions had already been taken:
             | 
             |  _The IRB of UMN reviewed the study and determined that
             | this is not human research (a formal IRB exempt letter was
             | obtained). Throughout the study, we honestly did not think
             | this is human research, so we did not apply for an IRB
             | approval in the beginning. ... We would like to thank the
             | people who suggested us to talk to IRB after seeing the
             | paper abstract._
             | 
             | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
             | hc....
        
               | catgary wrote:
               | I'm a bit shocked that the IRB gave an exemption letter -
               | are they hoping that the kernel maintainers won't take
               | the (very reasonable) step towards legal action?
        
             | volta83 wrote:
             | > We send the emails to the Linux community and seek their
             | feedback.
             | 
             | That's not really what they did.
             | 
             | They sent the patches, and the patches were either merged
             | or rejected.
             | 
             | And they never let anybody know that they had introduced
             | security vulnerabilities into the kernel on purpose until
             | they got caught and people started reverting all the
             | patches from their university and banned the whole
             | university.
        
               | incrudible wrote:
               | > And they never let anybody know that they had
               | introduced security vulnerabilities into the kernel on
               | purpose...
               | 
               | Yes, that's the whole point! The _real_ malicious actors
               | aren't going to notify anyone that they're injecting
               | vulnerabilities either. They may be plants at reputable
               | companies, and they'll make it look like an "honest
               | mistake".
               | 
               | Had this not been caught, it would've exposed a major
               | flaw in the process.
               | 
               | > ...until they got caught and people started reverting
               | all the patches from their university and banned the
               | whole university.
               | 
               | Either these patches are valid fixes, in which case they
               | should remain, or they are intentional vulnerabilities,
               | in which case they should've already been reviewed and
               | rejected.
               | 
               | Reverting and reviewing them "at a later date" just makes
               | me question the process. If they haven't been reviewed
               | properly yet, it's better to do it now instead of messing
               | around with reverts.
        
               | simiones wrote:
               | This reminds me of that story about Go Daddy sending
               | everyone "training phishing emails" announcing that they
               | had received a company bonus - with the explanation that
               | this is ok because it is a realistic pretext that real
               | phishing may use.
               | 
               | While true, it's simply not acceptable to abuse trust in
               | this way. It causes real emotional harm to real humans,
               | and while it also may produce some benefits, those do not
               | outweigh the harms. Just because malicious actors don't
               | care about the harms shouldn't mean that ethical people
               | shouldn't either.
        
               | incrudible wrote:
               | This isn't some employer-employee trust relationship. The
               | whole point of the test is that you _can't trust_ a
               | patch just because it's from some university or some
               | major company.
        
               | simiones wrote:
               | The vast majority of patches are not malicious. Sending a
               | malicious patch (one that is known to introduce a
               | vulnerability) is a malicious action. Sending a buggy
               | patch that creates a vulnerability by accident is not a
               | malicious action.
               | 
               | Given the completely unavoidable limitations of the
               | review and bug testing process, a maintainer has to react
               | very differently when they have determined that a patch
               | is malicious - all previous patches from that same
               | source (person or even organization) have to be either
               | re-reviewed at a much higher standard or reverted
               | indiscriminately; and any future patches have to be
               | rejected outright.
               | 
               | This puts a heavy burden on a maintainer, so
               | intentionally creating this type of burden is a malicious
               | action regardless of intent. Especially given that the
               | intent was useless in the first place - everyone knows
               | that patches can introduce vulnerabilities, either
               | maliciously or by accident.
        
               | incrudible wrote:
               | > The vast majority of patches are not malicious.
               | 
               | The vast majority of drunk drivers never kill anyone.
               | 
               | > Sending a malicious patch (one that is known to
               | introduce a vulnerability) is a malicious action.
               | 
               | I disagree that it's malicious in this context, but
               | that's irrelevant really. If the patch gets through, then
               | that proves one of the most critical pieces of software
               | could relatively easily be infiltrated by a malicious
               | actor, which means the review process is broken. That's
               | what we're trying to figure out here, and there's no
               | better way to do it than replicate the same conditions
               | under which such patches would ordinarily be reviewed.
               | 
               | > Especially given that the intent was useless in the
               | first place - everyone knows that patches can introduce
               | vulnerabilities, either maliciously or by accident.
               | 
               | Yes, everyone knows that patches can introduce
               | vulnerabilities _if they are not found_. We want to know
               | whether they are found! If they are not found, we need to
               | figure out how they slipped by and how to prevent that
               | from happening in the future.
        
               | c22 wrote:
               | Since humanity still hasn't fixed the problem of drunk
               | drivers I guess I'll start driving drunk on the weekends
               | to illustrate the flaws of the system.
        
               | bluesnowmonkey wrote:
               | Open source runs on trust, of both individuals and
               | institutions. There's no alternative. Processes like code
               | review can supplement but not replace it.
        
               | nomel wrote:
               | So my question is, what is a kernel? Is it a security
               | project? Should security products rely on trust, or
               | assume malicious intent?
        
               | duncaen wrote:
               | This is not what happened according to them:
               | 
               | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
               | hc....
               | 
               | > (4). Once any maintainer of the community responds to
               | the email, indicating "looks good", we immediately point
               | out the introduced bug and request them to not go ahead
               | to apply the patch. At the same time, we point out the
               | correct fixing of the bug and provide our proper patch.
               | In all the three cases, maintainers explicitly
               | acknowledged and confirmed to not move forward with the
               | incorrect patches. This way, we ensure that the incorrect
               | patches will not be adopted or committed into the Git
               | tree of Linux.
        
               | tecleandor wrote:
               | It'd be great if they pointed to those "please don't
               | merge" messages on the mailing list or anywhere.
               | 
               | Seems like there are some patches already on stable trees
               | [1], so they're either lying, or they didn't check
               | whether anybody actually acted on those "don't merge"
               | messages.
               | 
               | 1 - https://lore.kernel.org/linux-
               | nfs/CADVatmNgU7t-Co84tSS6VW=3N...
        
               | treesknees wrote:
               | The paper doesn't cite specific commits used. It's
               | possible that any of the commits in stable are actually
               | good commits and not part of the experiment. I support
               | the ban/revert, I'm just pointing out there's a 3rd
               | option you didn't touch on.
        
               | coriny wrote:
               | Patches with built-in bugs made it to stable:
               | https://lore.kernel.org/linux-
               | nfs/YIAta3cRl8mk%2FRkH@unreal/.
        
               | bmcahren wrote:
               | Here's the commit specifically identified by Leon
               | Romanovsky as having a "built-in bug"
               | 
               | https://github.com/torvalds/linux/commit/8e949363f017
        
               | rigden33 wrote:
               | That commit is from Aditya Pakki who I don't believe is
               | affiliated with the paper in question, whose only authors
               | are Qiushi Wu, and Kangjie Lu.
        
               | computershit wrote:
               | Aditya Pakki is an RA under Kangjie Lu.
        
               | rurban wrote:
               | We have 4 people, with the students Qiushi Wu and Aditya
               | Pakki introducing the faulty patches, and the 2 others,
               | Prof. Kangjie Lu and Asst. Prof. Wengwen Wang, patching
               | vulnerabilities in the same area. Banning the leader
               | seems ok to me, even if he produced some good fixes and
               | SW to detect it. The only question is Wang, who is now
               | in Georgia and was never caught. Maybe he left Lu at UMN
               | because of his questionable ethics.
        
               | bombcar wrote:
               | At least one of Wang's patches has been double reviewed
               | and the reversion NACK'd - in other words it was a good
               | patch.
        
               | corty wrote:
               | Also, they are talking of three cases. However, the list
               | of patches to be reverted by gregkh is far longer than
               | three, more than a hundred. Most of the first batch look
               | sufficiently similar that I would guess all of them are
               | part of this "research". So the difference in numbers
               | alone points to them most probably lying.
        
               | finnthehuman wrote:
               | I was more ambivalent about their "research" until I read
               | that "clarification." It's weaselly bullshit.
               | 
               | >> The work taints the relationship between academia and
               | industry
               | 
               | > We are very sorry to hear this concern. This is really
               | not what we expected, and we strongly believe it is
               | caused by misunderstandings
               | 
               | Yeah, misunderstandings by the university that anyone,
               | ever, in any line of endeavor would be happy to be
               | purposely fucked with as long as the perpetrator
               | eventually claims it's for a good cause. In this case the
               | cause isn't even good; they're proving the jaw-droppingly
               | obvious.
        
               | dwild wrote:
               | > they're proving the jaw-droppingly obvious.
               | 
               | Yet we do nothing about it? I wouldn't call that jaw-
               | droppingly obvious; if anything, without this, I'm pretty
               | sure anyone would have argued that it would be caught
               | well before making its way into stable.
        
               | rurp wrote:
               | I've literally never come across an open source project
               | that was thought to have a bulletproof review process or
               | had a lack of people making criticisms.
               | 
               | What they do almost universally lack is enough people
               | making positive contributions (in time, money, or both).
               | 
               | This "research" falls squarely into the former category
               | and burns resources that could have been spent on the
               | latter.
        
               | rualca wrote:
               | Even their choice of wording ("We are very sorry to hear
               | this concern.") is the blend of word fuckery that conveys
               | the idea they care nothing about what they did or why it
               | negatively affected others.
        
               | PeterisP wrote:
               | The first step of an apology is admitting the misdeed.
               | Here they are explicitly _not_ acknowledging that what
               | they did was wrong, they are still asserting that this
               | was a misunderstanding.
        
             | karlmdavis wrote:
             | Communities aren't people? What in the actual fuck is going
             | on with this university's IRB?!
        
               | tikiman163 wrote:
               | They weren't studying the community; they were studying
               | the patching process used by that community, which a
               | normal IRB would and should consider to be research on a
               | process and therefore not human research. That's how they
               | presented it to the IRB so it got passed even if what
               | they were claiming was clearly bullshit.
               | 
               | This research had the potential to cause harm to people
               | despite not being human research and was therefore
               | ethically questionable at best. Because they presented
               | the research as not posing potential harm to real people,
               | they lied to the IRB, which is grounds for
               | dismissal and potential discreditation of all
               | participants (their post-graduate degrees could be
               | revoked by their original school or simply treated as
               | invalid by the educational community at large).
               | Discreditation is unlikely, but loss of tenure for
               | something like this is not out of the question, which
               | would effectively end the professor's career anyway.
        
               | DetroitThrow wrote:
               | In my experience in university research, the burden of
               | correctly portraying the ethical impact unfortunately
               | falls on the researchers, and in my view the most
               | plausible explanation, given their lack of documentation
               | of the request for IRB exemption, is that they
               | misconstrued the impact of the research.
               | 
               | It seems very possible to me that an IRB wouldn't have
               | accepted their proposed methodology if they hadn't
               | received an exemption.
        
             | emeraldd wrote:
             | > The IRB of University of Minnesota reviewed the
             | procedures of the experiment and determined that this is
             | not human research. We obtained a formal IRB-exempt letter.
             | 
             | Is there anyone on hand who could explain how what looks
             | very much like a social engineering attack is not "human
             | research"?
        
           | rzwitserloot wrote:
           | > I hope we hear from the IRB in about a year stating exactly
           | what happened. Real investigations of bad conduct should take
           | time to complete correctly and I want them to do their job
           | correctly so I'll give them that time
           | 
           | That'd be great, yup. And the linux kernel team should then
           | strongly consider undoing the blanket ban, but not until this
           | investigation occurs.
           | 
           | Interestingly, if all that happens, that _would_ be an
           | intriguing data point in research on how FOSS teams deal with
           | malicious intent, heh.
        
             | gomijacogeo wrote:
             | Personally, I think their data points should include
             | "...and we had to explain ourselves to the FBI."
        
           | alxlaz wrote:
           | This is, at the very least, worth an investigation from an
           | ethics committee.
           | 
           | First of all, this is completely irresponsible: what if the
           | patches had made their way into a real-life device? The
           | paper does mention a process through which they tried to
           | ensure that doesn't happen, but it's pretty finicky. It's one
           | missed email or one bad timezone mismatch away from releasing
           | the kraken.
           | 
           | Then playing the slander victim card is outright stupid; it
           | hurts the credibility of _actual_ victims.
           | 
           | The mandate of IRBs in the US is pretty weird but the debate
           | about whether this was "human subject research" or not is
           | silly, there are many other ethical and legal requirements to
           | academic research besides Title 45.
        
             | kortex wrote:
             | > there are many other ethical and legal requirements to
             | academic research besides Title 45.
             | 
             | Right. It's not just human subjects research. IRBs vet all
             | kinds of research: polling, surveys, animal subjects
             | research, genetics/embryo research (potentially even if not
             | human/mammal), anything which could be remotely interpreted
             | as ethically marginal.
        
               | owlbite wrote:
               | If we took the case into the real world and it became "we
               | decided to research how many supports we could remove
               | from this major road bridge before someone noticed", I'd
               | hope the IRB wouldn't just write it off as "not human
               | research so we don't care".
        
             | fishycrackers wrote:
             | I agree. I personally don't care if it meets the official
             | definition of human subject research. It was unethical,
             | regardless of whether it met the definition or not. I think
             | the ban is appropriate and wouldn't lose any sleep if the
             | ban were also enacted by other open-source projects and
             | communities.
             | 
             | It's a real shame because the university probably has good,
             | experienced people who could contribute to various OSS
             | projects. But how can you trust any of them when the next
             | guy might also be running an IRB exempt security study.
        
         | BearsAreCool wrote:
         | I'm really not sure what the motive to lie is. You got caught
         | with your hand in the cookie jar; time to explain what happened
         | before they continue to treat you like a common criminal. Doing
         | a pentest and refusing to state it was a pentest is mind
         | boggling.
         | 
         | Has anyone from the "research" team commented and confirmed
         | this was even them or a part of their research? It seems like
         | the only defense is from people who did google-fu for a
         | potentially outdated paper. At this point we can't even be sure
         | if this isn't a genuinely malicious actor using compromised
         | credentials to introduce vulnerabilities.
        
           | mort96 wrote:
           | It's also not a pen test. Pen testing is explicitly
           | authorized: you play the role of an attacker, with
           | consent from your victim, in order to report security issues
           | to your victim. This is just straight-up malicious behavior,
           | where the "researchers" play the role of an attacker, without
           | consent from their victim, for personal gain (in this case,
           | publishing a paper).
        
             | NotEvil wrote:
             | Because of the nature of the research, an argument can be
             | made that it was like a bug bounty (not defending them,
             | just making the argument), but they should have come clean
             | when the patch was merged and told the community about the
             | research, or at least submitted the right patch.
             | 
             | Intentionally leaving bugs in the kernel that only you
             | know about is very bad.
        
               | opnitro wrote:
               | The primary difference being the organization being
               | tested explicitly sets up a bug bounty with terms, as
               | opposed to this.
        
               | [deleted]
        
               | unanswered wrote:
               | I'll take People Who Don't Understand Consent for $400,
               | Alex.
        
               | shkkmo wrote:
               | This is the rare HN joke that is not only hilarious but
               | also succinctly makes clear the core point being
               | disagreed about.
        
       | kleiba wrote:
       | How is such a ban going to be effective? The "researchers" could
       | easily continue their experiments using different credentials,
       | right?
        
         | NtrllyIntrstd wrote:
         | I think it is more of a message than a solution
        
         | ajross wrote:
         | Arbitrary anonymous submissions don't go into the kernel in
         | general. The point[1] behind the Signed-off-by line is to
         | associate a physical human being with real contact information
         | with the change.
         | 
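         | For concreteness, the trailer is just a line at the end of
         | the commit message, which `git commit -s` adds for you (the
         | name and address here are made up):
         | 
         |   Signed-off-by: Jane Developer <jane.dev@example.edu>
         | 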
         | One of the reasons this worked is likely that submissions from
         | large US research universities get a "presumptive good faith"
         | pass. A small company in the PRC, for an example, might see
         | more intensive review. But given the history of open source, we
         | trust graduate students maybe more than we should.
         | 
         | [1] Originally legal/copyright driven and not a security
         | feature, though it has value in both domains.
        
           | wjertyj wrote:
           | They do if the patch "looks good" to the right people.
           | 
           | In late January I submitted a patch with no prior
           | contributions, and it was pushed to drm-misc-next within an
           | hour. It's now filtered its way through drm-next and will
           | likely land in 5.13.
        
             | ajross wrote:
             | But your signed-off-by was a correct email address with
             | your real identity, as per:
             | 
             | https://github.com/torvalds/linux/blob/master/Documentation
             | /...
             | 
             | Right? It's true that all systems can be gamed and you
             | could no doubt fool the right maintainer into taking a
             | patch from a fraudulent source. But the point is that it's
             | not as
             | simple as this grad student just resubmitting work under a
             | different name.
        
               | wjertyj wrote:
               | > But your signed-off-by was a correct email address with
               | your real identity, as per
               | 
               | Maybe?
               | 
               | My point with the above comment was more to point out
               | that there is no special '"presumptive good faith" pass'
               | that comes along with a .edu e-mail address, not that
               | it's possible to subvert the system (that's already well
               | known).
               | 
               | Everyone, including some random dude with a Hackers
                | (1995) reference for an e-mail address (myself), gets that
               | "presumptive good faith" pass.
        
           | Cpoll wrote:
           | > A small company in the PRC, for an example, might see more
           | intensive review.
           | 
            | Which is a bit silly, isn't it? Grad students are poor and
            | overworked, so it seems easy to find one to trick/bribe
            | into signing off your code, if you wanted to do something
            | malicious.
        
             | whatshisface wrote:
             | Well, there's nothing easier to corrupt than a small
             | company (not just in the PRC), because you could found one
             | specifically to introduce vulnerabilities without breaking
             | any laws in any country I know of.
        
             | angry_octet wrote:
             | Grad students have invested years of their life, for no
             | reward, in research on a niche topic. Any ding to their
              | reputation will adversely affect their entire career. I
             | doubt this guy would get a post doc fellowship anywhere
             | after this.
        
               | Cpoll wrote:
                | > Any ding to their reputation will adversely affect
                | their entire career.
               | 
                | If this were foolproof, then no one would be talking
                | about the replication crisis.
               | 
               | People don't do bad things _expecting_ to be caught, if
               | they haven't already convinced themselves they're not
               | doing anything bad at all. And I suspect it's
               | surprisingly easy to convince people that they won't get
               | caught.
        
               | angry_octet wrote:
               | But they published papers about their misconduct... I
               | don't know how they haven't been sanctioned already.
               | 
               | Replication is really a different problem. It's possible
               | for you to do nothing wrong, run hundreds of trials, get
               | a great result and publish it. But it was due to
               | noise/error/unknown factors, and can't be replicated. The
               | crisis is also that replication receives no academic
               | recognition.
               | 
                | When people fabricate results they know it's an offence;
               | the problem with these guys is they don't even
               | acknowledge/understand the ethical rule they are
               | breaking.
        
         | krig wrote:
         | Thus moving from merely unethical to actually fraudulent?
         | Although from the email exchanges it seems they are already
         | making fraudulent statements...
         | 
         | At least it might prompt the University to take action against
         | the researchers.
        
         | rrmm wrote:
         | The ban is aimed more at the UMN dept overseeing the research
         | than at preventing continued "experiments." I imagine it would
         | also make continued experiments even more unethical.
        
         | unmole wrote:
         | Any data collected from such "research" would be unpublishable
         | and therefore worthless.
        
         | notyourday wrote:
         | > How is such a ban going to be effective?
         | 
         | It trashes the University of Minnesota in the press. What is
         | going to happen is that the president of the university is now
         | going to hear about it, as will the provost and the people in
         | charge of doling out money. That will _rapidly_ fix the
         | professor problem.
         | 
         | While people may think that tenured professors get to do what
         | they want, they never win in a war with a president and a
         | provost. That professor is toast. And so are his researchers.
        
           | chaosite wrote:
           | The professor's site says that he is an assistant professor,
           | i.e., he doesn't actually have tenure yet.
        
             | voidfunc wrote:
             | Well his career is over. He's now unemployable in academia.
        
               | pumaontheprowl wrote:
               | Oh no, now he'll just have to make twice the salary in
               | private industry. We really stuck it to him.
        
         | nodja wrote:
         | I believe this is so that the university treats the reports
         | seriously. It's basically a "shit's broken, fix it". The
         | researchers are probably under a lot of pressure from the rest
         | of the university right now.
        
         | LegitShady wrote:
         | Their whole department/university just got officially banned.
         | If they attempt to circumvent that, the authorities would
         | probably be involved due to fraud.
        
         | darig wrote:
         | If you're a young hacker who wants to get into kernel
         | development as a career, are you going to consider going to a
         | university that has been banned from officially participating
         | in development of arguably the most widely deployed kernel?
         | 
         | The next batch of "researchers" won't be attending the
         | University of Minnesota, and other universities scared of the
         | same fate (missing out on tuition money) will preemptively ban
         | such research themselves.
         | 
         | "Effective" isn't binary, and this is a move in the right
         | direction.
        
           | neatze wrote:
           | The kernel that runs on Mars now, and on home/work desktops.
        
       | atleta wrote:
       | It's already being discussed on HN [1] but for some reason it's
       | down to the 3rd page despite having ~1200 upvotes at the moment
       | and ~600 comments, including from Greg KH. (And the submission is
       | only 5 hours old.)
       | 
       | [1] https://news.ycombinator.com/item?id=26887670
        
         | bitcharmer wrote:
         | This is another example of HN's front page submission getting
         | aggressively moderated for no good reason. It's been happening
         | a lot lately.
        
           | dang wrote:
           | Perhaps you've been seeing it more for some reason, or it has
           | seemed more aggressive to you for some reason, but I can tell
           | you that the way we moderate HN's front page hasn't changed
           | in many years.
           | 
           | It's clear to me now that this case was a moderation mistake.
           | We make them sometimes (alas), but that's also been true for
           | many years. Moderation is guesswork. https://hn.algolia.com/?
           | dateRange=all&page=0&prefix=true&que...
        
         | dang wrote:
         | Sorry, we got that wrong. Fixed now.
        
       | qwertox wrote:
       | How is this any different to littering in order to research if it
       | gets cleaned up properly? Or like dumping hard objects onto a
       | highway to research if they cause harm before authorities notice
       | it?
       | 
       | I mean, the Kernel is now starting to run in cars and even on
       | Mars, and getting those bugs into stable is definitely no
       | achievement one should be proud of.
        
       | mikaeluman wrote:
       | Usually I am very skeptical of "soft" subjects like the
       | humanities, but clearly this is unethical research.
       | 
       | In addition to wasting people's time, you are potentially messing
       | with software that runs the world.
        
         | NationalPark wrote:
         | Considering how often you post about free speech and
         | censorship, maybe you would find some interesting perspectives
         | within the humanities.
        
       | [deleted]
        
       | b0rsuk wrote:
       | Think of potential downstream effects of a vulnerable patch being
       | introduced into Linux kernel. Buggy software in mobile devices,
       | servers, street lights... this is like someone introducing a bug
       | into a university grading system.
       | 
       | Someone should look into who sponsored this research. Was a
       | state actor behind it?
        
       | freewilly1040 wrote:
       | Is there some tool that provides a nicer view of these types of
       | threads? I find them hard to navigate and read.
        
       | ne38 wrote:
       | It is not done for research purposes. The NSA is behind them.
        
       | Klwohu wrote:
       | Whoa this is some heavy DC, a Chinese spy got busted trying to
       | poison the Linux kernel. And then he came up with an excuse.
        
         | jlduan wrote:
         | Just because they chose to use Chinese names doesn't make them
         | less American. Are you suggesting non-Chinese Americans can't
         | be spies?
        
           | Klwohu wrote:
           | Are you saying, in this particular case, that the Chinese
           | researcher is an American citizen? That's a very bold claim.
           | Source?
        
             | burnished wrote:
             | > a Chinese spy got busted trying to poison the Linux
             | kernel
             | 
             | this you?
        
               | Klwohu wrote:
               | Are you going to pretend that Chinese penetration of
               | academia isn't real? There've been some recent high-level
               | prosecutions which prove it's a big problem.
        
       | returningfory2 wrote:
       | Commenters have been reasonably accusing the researchers of bad
       | practice, but I think there's another possible take here based on
       | Hanlon's razor: "never attribute to malice that which is
       | adequately explained by stupidity".
       | 
       | If you look at the website of the PhD student involved [1], they
       | seem to be writing mostly legitimate papers about, for example,
       | using static analysis to find bugs. In this kind of research,
       | having a good reputation in the kernel community is probably
       | pretty valuable because it allows you to develop and apply
       | research to the kernel and get some publications/publicity out of
       | that.
       | 
       | But now, by participating in this separate unethical research
       | about OSS process, they've damaged their professional reputation
       | and probably set back their career somewhat. In this
       | interpretation, their other changes were made in good faith, but
       | now have been tainted by the controversial paper.
       | 
       | [1] https://qiushiwu.github.io/
        
         | MattGaiser wrote:
         | I suppose it depends on what you make of Greg's opinion (I am
         | only vaguely familiar with this topic, so I have none).
         | 
         | > They obviously were _NOT_ created by a static analysis tool
         | that is of any intelligence, as they all are the result of
         | totally different patterns, and all of which are obviously not
         | even fixing anything at all. So what am I supposed to think
         | here, other than that you and your group are continuing to
         | experiment on the kernel community developers by sending such
         | nonsense patches?
         | 
         | Greg didn't think that the static analysis excuse could be
         | legitimate as the quality was garbage.
        
         | qiqing wrote:
         | That looks like a different person from the name in the
         | article.
        
       | nabla9 wrote:
       | If it was up to me, I would
       | 
       | 1) send ethics complaint to the University of Minnesota, and
       | 
       | 2) report this to FBI cyber crime division.
        
       | WaitWaitWha wrote:
       | There is already so much disdain for unethical, ivory-tower
       | thinking in universities, and this is not helping.
       | 
       | But allow me to pull a different thread. How liable are the
       | professor, the IRB, and the university if there is any calamity
       | caused by the known code?
       | 
       | What is the high level difference between their action, and
       | spreading malware intentionally?
        
       | leeuw01 wrote:
       | In a follow-up [1], the author suggests: OSS projects would be
       | suggested to update the code of conduct, something like "By
       | submitting the patch, I agree to not intend to introduce bugs"
       | 
       | How can one be so short-sighted?...
       | 
       | [1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
       | hc....
        
       | grae_QED wrote:
       | This is insulting. The whole premise behind the paper is that
       | open source developers aren't able to parse commits for malicious
       | code. From a security standpoint, sure, I'm sure a bad actor
       | could attempt to do this. But the fact that he tried this on the
       | Linux kernel, an almost sacred piece of software IMO, and
       | expected it to work takes me aback. This guy either has a huge
       | ego or knows very little about those devs.
        
       | karsinkk wrote:
       | Here's a clarification from the Researchers over at UMN[1].
       | 
       | They claim that none of the bogus patches were merged to the
       | stable code line:
       | 
       | >Once any maintainer of the community responds to the
       | email,indicating "looks good",we immediately point out the
       | introduced bug and request them to not go ahead to apply the
       | patch. At the same time, we point out the correct fixing of the
       | bug and provide our proper patch. In all the three cases,
       | maintainers explicitly acknowledged and confirmed to not move
       | forward with the incorrect patches. This way, we ensure that the
       | incorrect patches will not be adopted or committed into the Git
       | tree of Linux.
       | 
       | I haven't been able to find out which 3 patches they are
       | referring to, but the discussion on Greg's UMN revert patch [2]
       | does indicate that some of the fixes have indeed been merged to
       | stable and are actually bogus.
       | 
       | [1] : https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
       | hc....
       | 
       | [2] :
       | https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
        
         | capableweb wrote:
         | In the end, the damage has been done and the Linux developers
         | are now going back and removing all patches from any user with
         | a @umn.edu email.
         | 
         | Not sure how the researchers didn't see how this would
         | backfire, but it's a hopeless misuse of their time. I feel
         | really bad for the developers who now have to spend their time
         | fixing shit that shouldn't even be there, just because someone
         | wanted to write a paper and their peers didn't see any problems
         | either. How broken is academia really?
        
           | q00r42 wrote:
           | This, in and of itself, is a finding. The researchers will
           | justify their research with "we were banned which is a
           | possible outcome of this kind of research..." I find this
           | disingenuous. When a community of open source contributors is
           | partially built on trust, then violators can and will be
           | banned.
           | 
           | The researchers should have approached the maintainers to
           | get buy-in, and set up a methodology where a maintainer
           | would not interfere until a code merge was imminent, and
           | just play referee in the meantime.
        
             | db48x wrote:
             | I don't mind them publishing that result, as long as they
             | make it clear that everyone from the university was banned,
             | even people not affiliated with their research group. Of
             | course anyone can get around that ban just by using a
             | private email address (and the prior paper from the same
             | research group started out using random gmail accounts
             | rather than @umn.edu accounts), but making this point will
             | hopefully prevent anyone from trying the same bad ideas.
        
           | [deleted]
        
           | croh wrote:
           | I feel the same way. People don't understand how difficult
           | it is to be a maintainer. This is very selfish behaviour. I
           | appreciate Greg's strong stance against it.
        
         | foota wrote:
         | It seems to me like the other patches were submitted in good
         | faith, but that the maintainer no longer trusts them because of
         | the other bad commits.
        
         | bogwog wrote:
         | > I haven't been able to find out what the 3 patches which the
         | reference are, but the discussions on Greg's UMN Revert patch
         | [2] does indicate that some of the fixes have indeed been
         | merged to Stable and are actually Bogus.
         | 
         | That's because those are two separate incidents. The study
         | which resulted in 3 patches was completed some time last year,
         | but this new round of patches is something else.
         | 
         | It's not clear whether the patches are coming from the same
         | professor/group, but it seems like the author of these bogus
         | patches is a PhD student working with the professor who
         | conducted that study last year. So there is at least one
         | connection.
         | 
         | EDIT: also, those 3 patches were supposedly submitted using a
         | fake email address according to the "clarification" document
         | released after the paper was published. So they probably didn't
         | use a @umn.edu email at all.
        
         | lbarrow wrote:
         | The response makes the researchers seem clueless, arrogant, or
         | both - are they really surprised that kernel maintainers would
         | get pissed off at someone deliberately wasting their time?
         | 
         | From the post:                 * Does this project waste
         | certain efforts of maintainers?       Unfortunately, yes. We
         | would like to sincerely apologize to the maintainers involved
         | in the corresponding patch review process; this work indeed
         | wasted their precious time. We had carefully considered this
         | issue, but could not figure out a better solution in this
         | study. However, to minimize the wasted time, (1) we made the
         | minor patches as simple as possible (all of the three patches
         | are less than 5 lines of code changes); (2) we tried hard to
         | find three real bugs, and the patches ultimately contributed to
         | fixing them
         | 
         | "Yes, this wastes maintainers time, but we decided we didn't
         | care."
        
           | ball_of_lint wrote:
           | Or more charitably: "Yes, this spent some maintainers time,
           | but only a small amount and it resulted in bugfixes, which is
           | par for the course of contributing to Linux"
        
           | 2pEXgD0fZ5cF wrote:
           | > clueless, arrogant, or both
           | 
           | I'm going to go with "both" here.
        
             | TediusAndBrief wrote:
             | Had him as a TA, can confirm. Rudest and most arrogant TA
             | I've ever worked with. Outright insulted me for asking
             | questions as a transfer who had never used Linux systems
             | before. Him implying he was ignorant and new is laughable
             | when his whole demeanor was that you're an imbecile for not
             | knowing how these things work.
        
           | SirSavary wrote:
           | Fascinating that the research was judged not to involve human
           | subjects....
           | 
           | As someone not part of academia, how could this research be
           | judged to not involve people? It _seems_ obvious to me that
           | the entire premise is based around tricking/deceiving the
           | kernel maintainers.
        
             | dumpsterdiver wrote:
             | Yeah, especially when the researcher begins gaslighting his
             | subjects. He had the gall to call the maintainer's response
             | "disgusting to hear", and then went on to ask for "the
             | benefit of the doubt" after publishing a paper admitting
             | that he deceived them.
             | 
             | For comparison, imagine that you attended a small
             | conference and unknowingly became a test subject, and when
             | you sit down the chair shocks you with a jolt of
             | electricity. You jump out of your chair and exclaim, "This
             | seat shocked me!" Then the person giving the presentation
             | walks to your seat and sits down and it doesn't shock him
             | (because he's the one holding the button), and he then
             | accuses you of wasting everyone's time. That's essentially
             | what happened here.
        
             | shadowgovt wrote:
             | And given the apparent failure of UMN's IRB, banning UMN
             | from contributing to the Linux kernel honestly seems like a
             | correct approach to resolve the underlying issue.
        
             | rococode wrote:
             | That was my thinking too, surely their school's IRB would
             | have a field day with this. The question is whether they
             | ran this by their IRB at all. If they did, there would be
             | implications for the ethics of everything coming out of
             | UMN. If they didn't, then the same for their lab. I know at
             | my school things were quite clear - if your work requires
             | _any_ interaction with _any_ human not in your lab, you
             | need IRB approval. This is literally just a social
             | engineering experiment, so of course IRB should have
             | reviewed it.
             | 
             | https://research.umn.edu/units/irb
        
               | klodolph wrote:
               | They ran it by the IRB after publishing the paper, and
               | the IRB issued a post-hoc exemption.
               | 
               | Disgusting.
        
           | mshockwave wrote:
           | > Unfortunately, yes
           | 
           | That is the perfect example of being arrogant
        
           | slaw_pr wrote:
           | These idiots' time is probably worth nothing, so why should
           | they care :D
        
           | sombragris wrote:
           | Indeed! "we could not figure out a better solution in this
           | study".
           | 
           | There IS a better solution: not to proceed with that "study"
           | at all.
        
             | hannasanarion wrote:
             | Exactly. Since when is "people will sometimes believe lies"
               | an uncertain question that needs experimental review to
             | confirm?
             | 
             | Maybe that cop convicted yesterday was actually just a UMN
             | researcher investigating the burning scientific question
             | "does cutting off someone's airway for 9 minutes cause
             | death?".
        
               | db48x wrote:
               | Careful, the prosecution's witnesses testified on cross-
               | examination that there was no evidence of bruising on
               | Floyd's neck, which is inconsistent with "cutting off
               | someone's airway for 9 minutes".
        
             | kerng wrote:
             | Well, or do what an ethical researcher would do and seek
             | authorization from the Linux Foundation board before
             | running any (who knows, potentially illegal) social
             | engineering attacks on team members.
        
           | matheusmoreira wrote:
           | > We had carefully considered this issue, but could not
           | figure out a better solution in this study.
           | 
           | Couldn't figure out that "not doing it" was an option
           | apparently.
        
           | floatingatoll wrote:
           | This is an indignant rebuttal, not an apology.
           | 
           | No one says "wasted their precious time" in a sincere
           | apology. The word 'precious' here is exclusively used for
           | sarcasm in the context of an apology, as it does not
           | represent a specific technical term such as might appear in a
           | gemology apology.
        
             | smsm42 wrote:
             | I think this may be unintended. It is very hard to
             | formulate a message that essentially says both "we
             | recognize your time is valuable" and "we know we waste your
             | time, but we decided it's not very important" at the same
             | time, without it sounding sarcastic on some level. The
             | inherent contradiction of the message would get through,
             | regardless of the wording chosen.
        
               | floatingatoll wrote:
               | "We determined after careful evaluation of the potential
               | outcomes that the time wasted by kernel maintainers was,
               | in total, sufficiently low that no significant impact
               | would occur over a multi-day time scale."
               | 
               | If I can come up with the scientific paper gibberish for
               | that in real-time, and I don't even write science papers,
               | then these people who understand how to navigate an
               | ethical review board process surely know how to massage
               | an unpleasant truth into dry and dusty wording.
               | 
               | I think that they just screwed up and missed the word
               | "precious" in editing, and thus got caught being
               | dismissive and snide towards their experiment's
               | participants. Without that word, it's a plausible enough
               | paragraph. With it, it's no longer plausibly innocent.
        
             | zucker42 wrote:
             | Your particular criticism is not fair in my opinion. Both
             | researchers went to undergrad outside the U.S., so they may
             | not speak English as a first language. Therefore, it's not
             | fair to assume they intended that connotation.
        
               | temac wrote:
               | That would mean they can say virtually anything and,
               | when criticized, pretend it was just a miscommunication
               | problem because they don't speak English well enough.
               | Depending on the consequences, that does not necessarily
               | absolve them of responsibility, even if it provides
               | grounds for excuses; at least the first time, and
               | certainly not if they continue their bullshit after it!
               | 
               | If they have a problem mastering English, they can take
               | lessons, and have a native speaker review their
               | communication in the meantime.
               | 
               | The benefit of the doubt cannot stick forever to people
               | caught red-handed. It can be restored, of course, but
               | they have drastically shifted perception through their
               | own actions, and thus can't really complain about the
               | results of their own doing. Yes, they cannot afford to
               | make mistakes anymore, and everything they did in the
               | past will be reviewed harshly; not to condemn them
               | further without reason, but just to be sure they did not
               | actually break things while carrying out their malicious
               | activities.
        
               | ncann wrote:
               | I also think it's not fair criticism. While "precious"
               | can indeed have a sarcastic connotation, I don't detect
               | that tone in the paragraph at all.
        
               | floatingatoll wrote:
               | That single word alone is enough to alter the tone of the
               | paragraph when read. That the rest of the paragraph is
               | plausible does not excuse it.
        
               | ramraj07 wrote:
               | I am from outside the US, and it's perfectly fair to
               | criticise a professional's ability to use language the
               | way it's supposed to be used; that's your job, and if
               | you can't do that then don't take the job.
        
               | FloayYerBoat wrote:
               | I disagree. The text of their paper and their emails show
               | a firm grasp of English.
        
               | floatingatoll wrote:
               | I'm not inclined to be particularly forgiving, given the
               | overall context of their behaviors and the ethical
               | violations committed. I choose to consider that context
               | when parsing their words. You must make your own decision
               | in that regard.
        
         | megous wrote:
         | How can they be trusted though?
        
         | notdang wrote:
         | The main issue here is that it wastes the time of the reviewers
         | and they did not address it in their reply.
        
           | karsinkk wrote:
           | Agreed. This feels more like an involuntary social
           | experiment, and it just uses up the kernel maintainers'
           | bandwidth. Reviewing code is difficult, even more so when
           | the committer has set out to introduce bad code in the
           | first place.
        
           | alanning wrote:
           | To help clarify, for purposes of continuing the discussion:
           | the original research did address the issue of minimizing the
           | time of the reviewers [1] [2]. It seems the maintainers were
           | OK with that, as no actions were taken other than an implied
           | request to stop that kind of research.
           | 
           | Now a different researcher from UMN, Aditya Pakki, has
           | submitted a patch which contains bugs and seems to be
           | attempting the same type of pen testing, although the PhD
           | student denied it.
           | 
           | 1. Section IV.A of the paper, as pointed out by user
           | MzxgckZtNqX5i in this comment:
           | https://news.ycombinator.com/item?id=26890872
           | 
           | > Honoring maintainer efforts. The OSS communities are
           | understaffed, and maintainers are mainly volunteers. We
           | respect OSS volunteers and honor their efforts.
           | Unfortunately, this experiment will take certain time of
           | maintainers in reviewing the patches. To minimize the
           | efforts, (1) we make the minor patches as simple as possible
           | (all of the three patches are less than 5 lines of code
           | changes); (2) we find three real minor issues (i.e., missing
           | an error message, a memory leak, and a refcount bug), and our
           | patches will ultimately contribute to fixing them.
           | 
           | 2. Clarifications on the "hypocrite commit" work (FAQ)
           | 
           | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
           | hc....
           | 
           | "* Does this project waste certain efforts of maintainers?
           | Unfortunately, yes. We would like to sincerely apologize to
           | the maintainers involved in the corresponding patch review
           | process; this work indeed wasted their precious time. We had
           | carefully considered this issue, but could not figure out a
           | better solution in this study. However, to minimize the
           | wasted time, (1) we made the minor patches as simple as
           | possible (all of the three patches are less than 5 lines of
           | code changes); (2) we tried hard to find three real bugs, and
           | the patches ultimately contributed to fixing them."
        
         | arendtio wrote:
         | I wonder why they didn't just ask in advance. Something like
         | 'we would like to test your review process over the next 6
         | months and will inform you before a critical patch hits the
         | users' might have been a win-win scenario.
        
       | Dumbdo wrote:
       | In the follow up chain it was stated that some of their patches
       | made it to stable: https://lore.kernel.org/linux-
       | nfs/YH%2F8jcoC1ffuksrf@kroah.c...
       | 
       | Can someone who's more involved in kernel devel find them and
       | analyze their impact? That sounds pretty interesting to me.
       | 
       | Edit: This is the patch reverting all commits from that mail
       | domain:
       | https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
       | 
       | Edit 2: Now that the first responses to the reversion are
       | trickling in, some merged patches were indeed discovered to be
       | malicious, like the following. Most of them seem to be fine
       | though, or at least non-malicious.
       | https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
        
       | mycologos wrote:
       | I agree with most commenters here that this crosses the line of
       | ethical research, and I agree that the IRB dropped the ball on
       | this.
       | 
       | However, zooming out a little, I think it's kind of useful to
       | look at this as an example of the incentives at play for a
       | regulatory bureaucracy. Comments bemoaning such bureaucracies are
       | pretty common on HN (myself included!), with specific examples
       | ranging from the huge timescale of public works construction in
       | American cities to the FDA's slow approval of COVID vaccines. A
       | common request is: can't these regulators be a little less
       | conservative?
       | 
       | Well, this story is an example of why said regulators might avoid
       | that -- one mistake here, and there are multiple people in this
       | thread promising to email the UMN IRB and give them a piece of
       | their mind. One mistake! And when one mistake gets punished with
       | public opprobrium, it seems very rational to become conservative
       | and reject anything close to borderline to avoid another mistake.
       | And then we end up with the cautious bureaucracies that we like
       | to complain about.
       | 
       | Now, in a nicer world, maybe those emails complaining to the IRB
       | would be considered valid feedback for the people working there,
       | but unfortunately it seems plausible that it's the kind of job
       | where the only good feedback is no feedback.
        
       | LanceH wrote:
       | Committing non-volunteers to work for your experiment, and
       | attempting to destroy the product of their work, surely isn't
       | ethical research.
        
       | darau1 wrote:
       | So FOSS is insecure if maintainers are lazy? This would hold true
       | for any piece of software, wouldn't it? The difference here is
       | that even though the "hypocrite commits" /were/ accepted, they
       | were spotted soon after. Something that might not have happened
       | quite as quickly in a closed source project.
        
       | jcun4128 wrote:
       | Huh, I never knew of _plonk_. I bet I've been plonked before.
        
       | thayne wrote:
       | After they successfully got buggy patches in, did they submit
       | patches to fix the bugs? And were they careful to make sure their
       | buggy patches didn't make it into stable releases? If not, then
       | they risked causing real damage, and that is at least toeing
       | the line of being genuinely malicious.
        
       | throwawayffffas wrote:
       | As a user of the linux kernel, I feel legal action against the
       | "researchers" should be pursued.
        
         | Avamander wrote:
         | Your feelings do not invalidate the results unfortunately.
        
           | AntiImperialist wrote:
           | In a fairer country, they would be hanged.
        
         | weagle05 wrote:
         | I agree, I think they should be looking at criminal charges.
         | This is the equivalent of getting a job at Ford on the assembly
         | line and then damaging vehicles to see if anyone notices. I've
         | been in software security for 13 years and the "Is Open Source
         | Really Secure" question is so over done. We KNOW there is risk
         | associated with open source.
        
         | jnxx wrote:
         | I feel somewhat similar. Since I am using Linux, they
         | ultimately were trying to break the security of _my_ computers.
         | If I do that with any company without their consent, I can
         | easily end up in jail.
        
           | foobar33333 wrote:
           | >they ultimately were trying to break the security of my
           | computers.
           | 
           | No they weren't. They made sure the bad code never made it
           | in. They are only guilty of wasting peoples time.
        
             | azernik wrote:
             | Except, from that email chain, it turns out that some of
             | the bad code _did_ make it into the stable branch. Clearly,
             | they weren't keeping very close tabs on their bad code's
             | progress through the system.
        
           | throwawayffffas wrote:
           | It's more than that: if there are no consequences for this
           | kind of action, we are going to get a wave of "security
           | researcher" wannabes trying to pull similar bullshit.
           | 
           | Ps: I have put security researcher in quotes because this
           | kind of thing is not security research, it's a publicity
           | stunt.
        
           | Avamander wrote:
           | How dare they highlight the vulnerability that exists in the
           | process! The blasphemy!
           | 
           | How about you think about what they just proved, about the
           | actors that *actually* try to break the security of the
           | kernel.
        
       | kingsuper20 wrote:
       | Since there is bound to be a sort of trust hierarchy in these
       | commits, is it possible that bonafide name-brand university
       | people/email addresses come with an imprimatur that has now been
       | damaged generally?
       | 
       | Given the size and complexity of the Linux (/GNU) codeworld, I
       | have to wonder if they are coming up against (or already did) the
       | practical limits of assuring safety and quality using the current
       | model of development.
        
       | anarticle wrote:
       | Ah yes, showing those highly paid Linux kernel developers how
       | broken their system of trust and connection is! Great work.
       | 
       | Now if we can only find more open source developers to punish for
       | trusting contributors!
       | 
       | Enjoy your ban.
       | 
       | Sorry if this comment seems off base, this research feels like a
       | low blow to people trying to do good for a largely thankless job.
       | 
       | I would say they are violating some ideas of Ken Thompson:
       | https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...
        
       | freewizard wrote:
       | Using fake identities and fake papers to expose loopholes and
       | issues in an institution is not news in the science community.
       | The kernel community is, I assume, not immune to the common
       | challenges of any sizable institution, so some ethical hacking
       | here seems reasonable.
       | 
       | However, doing it repeatedly with real names seems unhelpful to
       | the community and indicates a questionable motivation.
        
       | crazypython wrote:
       | Trust is currency. Trust is an asset.
        
       | beshrkayali wrote:
       | This seems like a pretty scummy way to do "research". I mean I
       | understand that people in academia are becoming increasingly
       | disconnected from the real world, but wow this is low. It's not
       | that they're doing this, I'm sure they're not the first to think
       | of this (for research or malicious reasons), but having the gall
       | to brag about it is a new low.
        
         | DyslexicAtheist wrote:
         | LKML should consider not just banning @umn.edu at the SMTP
         | level but sinkholing the whole University of MN network
         | address space. Demand a public apology and payment for
         | compute for the next 3 years, or get yeeted.
        
         | restingrobot wrote:
         | To me, this seems like a convoluted way to hide malicious
         | actions as research, (not the other way around). This smells of
         | intentional vulnerability introduction under the guise of
         | academic investigation. There are millions of other, less
         | critical, open source solutions this "research" could have
         | tested on. I believe this was an intentional targeted attack,
         | and it should be treated as such.
        
         | mihaaly wrote:
         | I believe this violates research ethics hard, very hard. It
         | reminds me of someone aiming to research children's mental
         | development through the study of inflicting mental damage.
         | The subjects and the likely damages are not similar, but the
         | approach and mentality are uncomfortably so.
        
           | jpm48 wrote:
           | Yep, the first thing I thought was: how did this get through
           | the research ethics panel (all research at my university has
           | to get approval)?
        
             | oh_sigh wrote:
             | What I don't understand is how this is ethical, but the
             | Sokal hoax was deemed unethical. I assume it's because in
             | Sokal's case academia was humiliated, whereas here the
             | target is outside academia.
        
         | ec109685 wrote:
         | Agree, and it seems like at least this patch, despite the
         | researcher's protestations, actually landed sufficiently that
         | it could have caused harm?
         | https://lore.kernel.org/patchwork/patch/1062098/
        
           | whoopdedo wrote:
           | I've been scratching my head at this one and admit I can't
           | spot how it can be harmful. Why wouldn't you release the
           | buffer if the send fails?
        
             | Natsu wrote:
             | It might be a double free if the buffer is released
             | elsewhere.
        
               | whoopdedo wrote:
               | The buffer should only be released by its own complete
               | callback, which only gets called after being successfully
               | queued. Moreover, other uses of `mlx5_fpga_conn_send`,
               | and the related `mlx5_fpga_conn_post_recv` will free
               | after error.
               | 
               | The other part of the patch, which checks for `flow`
               | being NULL, may be unnecessary since it looks like the
               | handle is always from an active context. But that's a
               | guess. And
               | it's only unreachable code.
               | 
               | My takeaway is that despite the other patches being bad
               | ideas, this one doesn't look like one. And since the
               | other patches didn't make it past the mailing list, it
               | demonstrates that the maintainers are doing a good
               | enough job.
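               | 
               | To make the ownership rule concrete, here's a minimal
               | sketch (hypothetical names, not the actual mlx5 code)
               | of the convention I'm describing:
               | 
               |     #include <stdlib.h>
               | 
               |     struct buf { char data[64]; };
               | 
               |     /* Completion callback: frees the buffer, but
               |      * only ever runs for buffers that were
               |      * successfully queued. */
               |     static void send_complete(struct buf *b)
               |     {
               |         free(b);
               |     }
               | 
               |     /* Takes ownership and returns 0 on success. This
               |      * stub pretends the queue is full, so it fails
               |      * before the buffer is ever queued and
               |      * send_complete() never fires. */
               |     static int conn_send(struct buf *b)
               |     {
               |         (void)b;
               |         return -1;
               |     }
               | 
               |     int main(void)
               |     {
               |         struct buf *b = malloc(sizeof(*b));
               |         if (!b)
               |             return 1;
               |         if (conn_send(b)) {
               |             /* Safe only because conn_send() never
               |              * queues on failure. If it could fail
               |              * *after* queueing, this free() plus the
               |              * later send_complete() would be a
               |              * double free. */
               |             free(b);
               |             return 1;
               |         }
               |         return 0; /* now owned by send_complete() */
               |     }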
        
         | radiator wrote:
         | Unfortunately, we cannot be sure it is low for today's
         | academia. So many people working there, with nothing useful to
         | do other than flooding the conferences and journals with
         | papers. They are desperate for anything that could be
         | published. Plus, they know that the standards are low, because
         | they see the other publications.
        
         | EthanHeilman wrote:
         | >I mean I understand that people in academia are becoming
         | increasingly disconnected from the real world, but wow this is
         | low.
         | 
         | I don't have data to back this up, but I've been around a while
         | and I can tell you papers are rejected from conferences for
         | ethics violations. My personal observation is that
         | infosec/cybersecurity academia has been steadily moving to
         | higher ethical standards in research. That doesn't mean that
         | all academics follow this trend, but that unethical research is
         | more likely to get your paper rejected from conferences.
         | 
         | Submitting bugs to an open source project is the sort of stunt
         | hackers would have done in 1990 and then presented at a defcon
         | talk.
        
           | [deleted]
        
           | [deleted]
        
           | roblabla wrote:
           | > I don't have data to back this up, but I've been around a
           | while and I can tell you papers are rejected from conferences
           | for ethics violations.
           | 
           | IEEE seems to have no problem with this paper though.
           | 
           | >>> On the Feasibility of Stealthily Introducing
           | Vulnerabilities in Open-Source Software via Hypocrite Commits
           | Qiushi Wu, and Kangjie Lu. To appear in Proceedings of the
           | 42nd IEEE Symposium on Security and Privacy (Oakland'21).
           | Virtual conference, May 2021.
           | 
           | from https://www-users.cs.umn.edu/~kjlu/
        
             | khuey wrote:
             | Decent odds their paper gets pulled by the conference
             | organizers now.
        
             | MzxgckZtNqX5i wrote:
             | _Section IV.A_:
             | 
             | > We send the minor patches to the Linux community through
             | email to seek their feedback. Fortunately, there is a time
             | window between the confirmation of a patch and the merging
             | of the patch. Once a maintainer confirmed our patches,
             | e.g., an email reply indicating "looks good", we
             | immediately notify the maintainers of the introduced UAF
             | and request them to not go ahead to apply the patch. At the
             | same time, we point out the correct fixing of the bug and
             | provide our correct patch. In all the three cases,
             | maintainers explicitly acknowledged and confirmed to not
             | move forward with the incorrect patches. All the UAF-
             | introducing patches stayed only in the email exchanges,
             | without even becoming a Git commit in Linux branches.
             | Therefore, we ensured that none of our introduced UAF bugs
             | was ever merged into any branch of the Linux kernel, and
             | none of the Linux users would be affected.
             | 
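             | (For anyone unfamiliar with the term, a UAF is a
             | use-after-free. A minimal sketch of the bug class, with
             | hypothetical names, nothing from the actual patches:
             | 
             |     #include <stdio.h>
             |     #include <stdlib.h>
             | 
             |     struct ctx { int ok; char name[16]; };
             | 
             |     static int do_work(struct ctx *c)
             |     {
             |         if (!c->ok) {
             |             /* the "fix": free on the error path */
             |             free(c);
             |             return -1;
             |         }
             |         return 0;
             |     }
             | 
             |     int main(void)
             |     {
             |         struct ctx *c = calloc(1, sizeof(*c));
             |         if (!c)
             |             return 1;
             |         if (do_work(c) < 0) {
             |             /* UAF: c was freed inside do_work() but
             |              * is dereferenced here anyway */
             |             printf("failed for %s\n", c->name);
             |             return 1;
             |         }
             |         free(c);
             |         return 0;
             |     }
             | 
             | The whole trick is that the extra free looks like a
             | plausible cleanup fix in review.)
             | 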
             | It seems that the research in this paper has been done
             | properly.
             | 
             | EDIT: since several comments come to the same point, I
             | paste here an observation.
             | 
             | They answer these objections as well. Same section:
             | 
             | > Honoring maintainer efforts. The OSS communities are
             | understaffed, and maintainers are mainly volunteers. We
             | respect OSS volunteers and honor their efforts.
             | Unfortunately, this experiment will take certain time of
             | maintainers in reviewing the patches. To minimize the
             | efforts, (1) we make the minor patches as simple as
             | possible (all of the three patches are less than 5 lines of
             | code changes); (2) we find three real minor issues (i.e.,
             | missing an error message, a memory leak, and a refcount
             | bug), and our patches will ultimately contribute to fixing
             | them.
             | 
             | And, coming to ethics:
             | 
             | > The IRB of University of Minnesota reviewed the
             | procedures of the experiment and determined that this is
             | not human research. We obtained a formal IRB-exempt letter.
        
               | kwertyoowiyop wrote:
               | A "simple" change can still require major effort to
               | evaluate. Bogus logic on their part.
        
               | pdonis wrote:
               | In their "clarifications" [1], they say:
               | 
               | "In the past several years, we devote most of our time to
               | improving the Linux kernel, and we have found and fixed
               | more than one thousand kernel bugs"
               | 
               | But someone upthread posted that this group has a total
               | of about 280 commits in the kernel tree. That doesn't
               | seem like anywhere near enough to fix more than a
               | thousand bugs.
               | 
               | Also, the clarification then says:
               | 
               | "the extensive bug finding and fixing experience also
               | allowed us to observe issues with the patching process
               | and motivated us to improve it"
               | 
               | And the way you do that is _to tell the Linux kernel
               | maintainers about the issues you observed and discuss
               | with them ways to fix them_. But of course that's not at
               | all what this group did. So no, I don't agree that this
               | research was done "properly". It shouldn't have been done
               | at all the way it was done.
               | 
               | [1] https://www-
               | users.cs.umn.edu/~kjlu/papers/clarifications-hc....
        
               | bonniemuffin wrote:
               | I'm surprised that the IRB determined this to be not
               | human subjects research.
               | 
               | When I fill out the NIH's "is this human research" tool
               | with my understanding of what the study did, it tells me
               | it IS human subjects research, and is not exempt. There
               | was an interaction with humans for the collection of data
               | (observation of behavior), and the subjects haven't
               | prospectively agreed to the intervention, and none of the
               | other very narrow exceptions apply.
               | 
               | https://grants.nih.gov/policy/humansubjects/hs-
               | decision.htm
        
               | karaterobot wrote:
               | In my admittedly limited interaction with human subjects
               | research approval, I would guess that this would not have
               | been considered a proper setup. For one thing, there was
               | no informed consent from any of the test subjects.
        
               | HarryHirsch wrote:
               | The piss-weak IRB decided that no such thing was
               | necessary, hence no consent was requested. It's
               | impossible not to get cynical about these review boards,
               | their only purpose seems to be to deflect liability.
        
               | LeegleechN wrote:
               | If something is determined not to be human research, that
               | doesn't automatically make it ethical.
        
               | mratsim wrote:
               | TIL that opensource project maintainers aren't humans.
        
               | moxvallix wrote:
               | Something I've expected for years, but have never had
               | evidence... until now.
        
               | notriddle wrote:
               | Or, alternatively, that submitting buggy patches on
               | purpose is not research.
        
               | capableweb wrote:
               | > It seems that the research in this paper has been done
               | properly.
               | 
               | How is wasting the time of maintainers of one of the most
               | popular open source projects "done properly"?
               | 
               | Also, someone correct me if I'm wrong, but I think if you
               | do experiments that involve other humans, you need to
               | have their consent _before_ starting the experiment,
               | otherwise you're breaking a bunch of rules around ethics.
        
               | MzxgckZtNqX5i wrote:
               | They answer this objection as well. Same section:
               | 
               | > Honoring maintainer efforts. The OSS communities are
               | understaffed, and maintainers are mainly volunteers. We
               | respect OSS volunteers and honor their efforts.
               | Unfortunately, this experiment will take certain time of
               | maintainers in reviewing the patches. To minimize the
               | efforts, (1) we make the minor patches as simple as
               | possible (all of the three patches are less than 5 lines
               | of code changes); (2) we find three real minor issues
               | (i.e., missing an error message, a memory leak, and a
               | refcount bug), and our patches will ultimately contribute
               | to fixing them.
               | 
               | And, coming to ethics:
               | 
               | > The IRB of University of Minnesota reviewed the
               | procedures of the experiment and determined that this is
               | not human research. We obtained a formal IRB-exempt
               | letter.
        
               | Natsu wrote:
               | They appear to have told the IRB they weren't
               | experimenting on humans, but that doesn't make sense to
               | me given that the reaction of the maintainers is
               | precisely what they were looking at.
               | 
               | Inasmuch as the IRB marked this as "not human research"
               | they appear to have erred.
        
               | Grimm1 wrote:
               | Sounds like the IRB may need to update their ethical
               | standards then. Pointing to the IRB exemption doesn't
               | necessarily make it fine, it could just mean the IRB has
               | outdated ethical standards when it comes to research with
               | comp sci implications.
        
               | pdpi wrote:
               | It doesn't make it fine, no. But it does make a massive
               | difference -- It's the difference between being
               | completely reckless about this and asking for at least
               | token external validation.
        
               | castlecrasher2 wrote:
               | If by "it does make a massive difference" you mean it
               | implicates the university as an organization rather than
               | these individuals then you're right.
        
               | capableweb wrote:
               | > They answer to this objection as well. Same section:
               | 
               | Not sure how that passage justifies wasting the time of
               | these people working on the kernel. Because the issues
               | they pretend to fix are real issues and once their
               | research is done, they also submit the fixes? What about
               | the patches they submitted (like
               | https://lore.kernel.org/linux-
               | nfs/20210407001658.2208535-1-p...) that didn't make any
               | sense and didn't actually change anything?
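                | 
                | To give a sense of what "didn't actually change
                | anything" can look like, here is a hypothetical
                | sketch (my own illustration, not the linked patch):
                | a "fix" that adds a NULL check on a pointer only
                | after it has already been dereferenced, so the added
                | check is dead code:
                | 
                |     #include <stdio.h>
                | 
                |     struct device { int id; };
                | 
                |     static int do_work(struct device *dev)
                |     {
                |         return dev->id; /* dev dereferenced here */
                |     }
                | 
                |     static int handler(struct device *dev)
                |     {
                |         int ret = do_work(dev);
                | 
                |         /* Added by the "fix": unreachable with
                |          * dev == NULL, since do_work() above would
                |          * already have crashed. The patch changes
                |          * no observable behavior. */
                |         if (dev == NULL)
                |             return -1;
                | 
                |         return ret;
                |     }
                | 
                |     int main(void)
                |     {
                |         struct device d = { 42 };
                |         printf("%d\n", handler(&d));
                |         return 0;
                |     }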
               | 
               | > And, coming to ethics:
               | 
                | So it seems that they didn't just mislead the
                | developers of the kernel, they also misled the IRB,
                | which would never have approved the experiment
                | without consent from the developers, since
                | experimenting on humans requires consent.
               | 
                | Even in the section you quoted above, they confess they
               | need to interact with the developers ("this experiment
               | will take certain time of maintainers in reviewing the
               | patches"), so how can they be IRB-exempt?
               | 
               | The closer you look, the more sour this whole thing
               | smells.
        
               | xbar wrote:
               | At least one human, GKH, disagrees.
        
               | fauigerzigerk wrote:
               | _> The IRB of University of Minnesota reviewed the
               | procedures of the experiment and determined that this is
               | not human research. We obtained a formal IRB-exempt
               | letter._
               | 
               | I was wondering why he banned the whole university and
               | not just these particular researchers. I think your quote
               | is the answer to that. I'm not sure on what basis this
               | exemption was granted.
               | 
               | Here's what the NIH says about it:
               | 
               | Definition of Human Subjects Research
               | 
               | https://grants.nih.gov/policy/humansubjects/research.htm
               | 
               | Decision Tool: Am I Doing Human Subjects Research?
               | 
               | https://grants.nih.gov/policy/humansubjects/hs-
               | decision.htm
               | 
               | And even if they did find some way to justify it under
               | their own rules, some of the research subjects clearly
               | disagree.
        
               | rurban wrote:
                | Because the paper states that they partly used
                | made-up names. So far only 4 names of real @umn.edu
                | people from Kangjie Lu's lab have been found, which
                | could easily be blocked; most come from two of his
                | students, Aditya Pakki and Qiushi Wu, plus his
                | colleague Wenwen Wang.
               | The Wenwen Wang fixes look like actual fixes though, not
               | malicious. Some of Lu's earlier patches also look good.
               | 
               | https://lore.kernel.org/lkml/20210421130105.1226686-8-gre
               | gkh... for the full list
        
               | bronson wrote:
               | Is "we acknowledge that this will waste their time but
               | we're going to do it anyway" really an adequate answer to
               | that objection?
        
               | linepupdesign wrote:
               | To me, this further emphasizes the idea that Academia has
               | some serious issues. If some academic institution wasted
               | even 10 minutes of my time without my consent, I'd have a
               | bad taste in my mouth about them for a long time. Time is
               | money, and if volunteers believe their time is being
               | wasted, they will cease to be volunteers, which then
                | affects a much larger ecosystem.
        
               | _jal wrote:
               | This points to a serious disconnect between research
               | communities and development communities.
               | 
               | I would have reacted the same way Greg did - I don't care
               | what credentials someone has or what their hidden purpose
               | is, if you are intentionally submitting malicious code, I
               | would ban you and shame you.
               | 
               | If particular researchers continue to use methods like
               | this, I think they will find their post-graduate careers
               | limited by the reputation they're already establishing
               | for themselves.
        
               | ineedasername wrote:
               | Not really done properly: They were testing out the
               | integrity of the system. This includes the process by
               | which they notified the maintainers not to go ahead. What
               | if that step had failed and the maintainers missed that
               | message?
               | 
                | Essentially, the researchers had no way to stop the
                | experiment themselves if it deviated from
                | expectations. They
               | were relying on the exact system they were testing to
               | trigger its halt.
               | 
               | We also don't know what details they gave the IRB. They
                | may have passed through due to the IRB's naivete on this: It
               | had a high human component because it was humans making
               | many decisions in this process. In particular, there was
               | the potential to cause maintainers personal embarrassment
               | or professional censure by letting through a bugged
               | patch. If the researchers even considered this
               | possibility, I doubt the IRB would have approved this
               | experimental protocol if laid out in those terms.
        
               | rohansingh wrote:
               | The goal of ethical research wouldn't be to protect the
               | Linux kernel, it would be to protect the rights and
               | wellbeing of the people being studied.
               | 
                | Even if none of the patches made it into the kernel (which
               | doesn't seem to be true, according to other accounts),
               | it's still possible to do permanent damage to the
               | community of kernel maintainers.
        
               | querez wrote:
               | Depends on your notion of "properly". IMO "ask for
               | forgiveness instead of permission" is not an acceptable
               | way to experiment on people. The "proper" way to do this
               | would've been to request permission from the higher
               | echelons of Linux devs beforehand, instead of blindly
               | wasting the time of everyone involved just so you can
               | write a research paper.
        
               | andrewflnr wrote:
               | That's still not asking permission from the actual humans
               | you're experimenting on, i.e. the non-"higher echelons"
               | humans who actually review the patch.
        
               | lou1306 wrote:
               | But still, this kind of research puts undue pressure on
               | the kernel maintainers who have to review patches that
                | were not submitted in good faith (where "good faith"
                | = the author of the patch was trying to improve the
                | kernel).
        
               | kevinventullo wrote:
               | I think that was kind of the point of the research:
               | submitting broken patches to the kernel represents a
               | feasible attack surface which is difficult to mitigate,
               | precisely _because_ kernel maintainers already have such
               | a hard job.
        
               | varjag wrote:
                | So what's the null hypothesis here? That human
                | maintainers are infallible? Why does this even need
                | to be researched?
        
               | soneil wrote:
               | "in all the three cases" is mildly interesting, as 232
               | commits have been reverted from these three actors. To my
               | reading this means they either have a legitimate history
               | of contributions with three red herrings, or they have a
               | different understanding of the word "all" than I do.
        
               | gnramires wrote:
               | Saying something is ethical because a committee approved
               | it is dangerously tautological (you can't justify any
               | unethical behavior because someone at some time said it
               | was ethical!).
               | 
                | We can independently conclude this kind of research
                | has put open source projects in danger by introducing
                | vulnerabilities that could carry serious real-world
                | consequences. I could imagine many other ways of
                | carrying out this experiment without the consequences
                | it appears to have had, like inviting developers to a
                | private repository and keeping the patch from going
                | public, or collaborating with maintainers to set up a
                | more controlled experiment without risks.
                | 
                | By all appearances this was unilateral and egoistic
                | behavior without much thought given to its real-world
                | consequences.
               | 
               | Hopefully researchers learn from it and it doesn't
               | discourage future ethical kernel research.
        
             | lr1970 wrote:
             | The IEEE Symposium on Security and Privacy should remove
             | this paper at once for gross ethics violations. The message
             | should be strong and unequivocal that this type of behavior
             | is not tolerated.
        
             | [deleted]
        
             | phw wrote:
             | > IEEE seems to have no problem with this paper though.
             | 
             | IEEE is just the publishing organisation and doesn't review
             | research. That's handled by the program committee that each
             | IEEE conference has. These committees consist of several
             | dozen researchers from various institutions that review
             | each paper submission. A typical paper is reviewed by 2-5
             | people and the idea is that these reviewers can catch
             | ethical problems. As you may expect, there's wide variance
             | in how well this works.
             | 
             | While problematic research still slips through the cracks,
             | the field as a whole is getting more sensitive to ethical
             | issues. Part of the problem is that we don't yet have well-
             | defined processes and expectations for how to deal with
             | these issues. People often expect IRBs to make a judgement
             | call on ethics but many (if not most) IRBs don't have
             | computer scientists that are able to understand the nuances
              | of a given research project and are therefore ill-equipped
             | to reason about the implications.
        
             | neatze wrote:
             | "To appear"
        
               | JadeNB wrote:
               | "To appear" has a technical meaning in academia, though--
               | it doesn't mean "I hope"; it means "it's been formally
               | accepted but hasn't actually been put in 'print' yet."
               | 
               | That doesn't stop someone from lying about it, but it's
               | not a casual claim, and doing so would probably bring
               | community censure (as well as being easily falsifiable
               | after time).
        
               | neatze wrote:
               | "To appear" to me meant; it is under revision by IEEE,
               | otherwise why not just to state paper was accepted by
               | IEEE.
        
               | flakiness wrote:
                | It's jargon in academia.
        
               | andi999 wrote:
               | It is a bit more complicated, since this is a conference
               | paper. Usually, if a conference paper is accepted, it is
               | only published if the presentation was held (so if the
                | speaker cancels, or doesn't show up, the publication is
               | revoked).
               | 
                | Edit: All conferences are different, I don't know if
                | this applies to that one.
        
               | cardiffspaceman wrote:
               | I have only ever attended one conference, but I attended
               | it about 32 times, and the printed proceedings were in my
               | hands before I attended any talks in the last dozen or
               | so. How does revocation work in that event?
        
               | andi999 wrote:
               | Well, it depends on the conference. I know this to be
               | true for a certain IEEE conference, so I assumed it to be
                | the same for this IEEE one, but I have to admit, I
                | didn't check. You are right, I also remember the
                | handouts at a different conference being handed out
                | on a USB stick at arrival.
        
               | neatze wrote:
                | It makes sense, thank you for the explanation.
        
               | [deleted]
        
               | detaro wrote:
               | What are you trying to suggest? It's an accepted paper,
               | the event just hasn't happened yet.
        
               | corty wrote:
               | I'm not holding my breath. I don't think they will pull
               | that paper.
               | 
               | Security research is not always the most ethical branch
              | of computer science, to put it mildly. Those are the
               | people selling exploits to oppressive regimes, allowing
               | companies to sit on "responsibly reported" bugs for years
               | while hand-wringing about "that wasn't in the attacker
               | model, sorry our 'secure whatever' we sold is practically
               | useless". Of course the overall community isn't like
               | that, but the bad apples spoil the bunch. And the
               | aforementioned unethical behaviour even seems widely
               | accepted.
        
         | shuringai wrote:
          | How is this different from blackhats contributing to
          | general awareness of web security practices? Considering
          | open source secure just because it's up on GitHub is no
          | different from considering plaintext HTTP GET params
          | secure just because "who the hell will read your params in
          | the browser", which would still be the status quo if some
          | hackers hadn't stooped to the "lowest of the low" and
          | shown the world this lesson.
        
         | rob74 wrote:
         | Yup, it's basically stating the obvious: that any system based
         | on an assumption of good faith is vulnerable to bad faith
         | actors. The kernel devs are probably on the lookout for someone
         | trying to introduce backdoors, but simply introducing a bug for
         | the sake of introducing a bug (without knowing if it can be
         | exploited), which is obviously much easier to do stealthily -
         | why would anyone do that? Except for "academic research" of
         | course...
        
           | freeflight wrote:
           | _> why would anyone do that?_
           | 
           | I can think of a whole lot of three letter agencies with
           | reasons to do that, most of whom recruit directly from
           | universities.
        
           | cptskippy wrote:
            | In theory wouldn't it be possible to introduce bugs that
            | are seemingly innocuous when reviewed independently but
            | when combined form an exploit?
            | 
            | Could a number of seemingly unrelated individuals
            | introduce a number of bugs over time to form an exploit
            | without being detected?
        
             | corty wrote:
              | I think in some cases you wouldn't even need multiple
              | patches; sometimes very small things can be exploits.
              | See:
             | http://www.ioccc.org/
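              | 
              | And for the combined case, a hypothetical sketch (my
              | own illustration, not from the paper) of two changes
              | that each look like harmless cleanup but together open
              | a classic signedness hole:
              | 
              |     #include <string.h>
              | 
              |     #define MAX_LEN 64
              | 
              |     struct msg { size_t len; char data[MAX_LEN]; };
              | 
              |     /* Patch 1 ("cleanup"): length parameter changed
              |      * from size_t to int "to match the callers".
              |      * Patch 2 ("simplify"): bound check relaxed from
              |      * (len < 0 || len > MAX_LEN) to (len > MAX_LEN),
              |      * because "a length can't be negative". Each
              |      * patch looks plausible on its own. */
              |     int msg_set(struct msg *m, const char *src, int len)
              |     {
              |         if (len > MAX_LEN)
              |             return -1;
              | 
              |         /* With both patches applied, a negative len
              |          * passes the check and becomes a huge size_t
              |          * here: buffer overflow. */
              |         memcpy(m->data, src, (size_t)len);
              |         m->len = (size_t)len;
              |         return 0;
              |     }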
        
               | evilotto wrote:
               | Another source on such things, although no longer an
               | ongoing effort: http://underhanded-c.org/_page_id_2.html
        
             | dec0dedab0de wrote:
             | yes, of course, and I'm fairly certain it's happened before
             | or at least there have been suspicions of it happening.
              | That's why trust is important, and why I'm glad kernel
             | development is not very friendly.
             | 
             | Doing code review at work I am constantly catching
             | blatantly obvious security bugs. Most developers are so
             | happy to get the thing to work, that they don't even
             | consider security. This is in high level languages, with a
             | fairly small team, only internal users, and pretty simple
             | code base. I can't imagine trying to do it for something as
             | high stakes and complicated as the kernel. Not to mention
             | how subtle bugs can be in C. I suspect it is impossible to
             | distinguish incompetence from malice. So aggressively
             | weeding out incompetence, and then forming layers of trust
             | is the only real defense.
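              | 
              | A hypothetical example (mine, in the style of Apple's
              | 2014 "goto fail" bug, not kernel code) of how a
              | one-line slip can be indistinguishable from malice:
              | 
              |     #include <stdio.h>
              | 
              |     static int step_ok(void) { return 0; }
              |     static int verify_signature(void) { return -1; }
              | 
              |     int handshake(void)
              |     {
              |         int err;
              | 
              |         if ((err = step_ok()) != 0)
              |             goto fail;
              |             goto fail; /* duplicated line: always
              |                         * jumps, so the signature
              |                         * check below never runs */
              |         if ((err = verify_signature()) != 0)
              |             goto fail;
              | 
              |         return 0;   /* "verified" */
              |     fail:
              |         return err; /* err == 0 from step_ok(), so
              |                      * this reports success */
              |     }
              | 
              |     int main(void)
              |     {
              |         printf("handshake: %d\n", handshake());
              |         return 0;
              |     }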
        
             | rgrs wrote:
             | In that scenario, it is a genuine bug. Not a malicious
              | actor.
        
             | LinuxBender wrote:
              | Yes. binfmt and some other parts of systemd are
              | examples that introduce vulnerabilities that existed
              | in Windows 95. Not going into detail because it still
              | needs to be fixed, assuming it was not intentional.
        
           | shadowgovt wrote:
           | Academic research, cyberwarfare, a rival operating system
           | architecture attempting to diminish the quality of an
           | alternative to the system they're developing, the lulz of
           | knowing one has damaged something... The reasons for bad-
           | faith action are myriad, as diverse as human creativity.
        
         | lpapez wrote:
         | The "scientific" question answered by the mentioned paper is
         | basically:
         | 
         | "Can open-source maintainers make a mistake by accepting faulty
         | commits?"
         | 
         | In addition to being scummy, this research seems utterly
         | pointless to me. Of course mistakes can happen, we are all
         | humans, even the Linux maintainers.
        
         | mistersquid wrote:
         | This observation may very well get downvoted to oblivion: what
         | UMN pulled is the Linux kernel development version of the Sokal
         | Hoax.
         | 
         | Both are unethical, disruptive, and prove nothing about the
         | integrity of the organizations they target.
        
           | neoburkian wrote:
            | Except that Linux is actively running on 99% of all
            | servers on the planet. Vulnerabilities in Linux can
            | _literally_ kill
           | people, open holes for hackers, spies, etc.
           | 
           | Submitting a fake paper to a journal read by a few dozen
            | academics is a threat to someone's ego. It is not in the same
           | ballpark as a threat to IT infrastructure everywhere.
        
           | alexpetralia wrote:
           | The main difference is that the Sokal Hoax worked (that is
           | why it is notable).
        
         | a3n wrote:
         | The researchers have a future at Facebook, which experimented
         | on how to make users feel bad.
         | 
         | https://duckduckgo.com/?q=facebook+emotional+study&t=fpas&ia...
        
         | andrepd wrote:
         | Devil's advocate, but why? How is this different from any other
            | white/gray-hat pentest? They tried to submit buggy
            | patches; once approved, they immediately let the
            | maintainers know not to merge them. Then they published a
            | paper with their findings, which weak parts in the process
            | they think are responsible, and which steps they recommend
            | be taken to mitigate this.
        
           | burnished wrote:
           | You can read the (relatively short) email chains for
            | yourself, but to try and answer your question: as I
            | understood it, the problem wasn't only the patches
            | submitted for the paper, it was the followup bad patches
            | and the ridiculous defense. Essentially, they sent
            | patches that were purportedly the result of static
            | analysis but did nothing, broke social convention by
            | failing to signal that the patches were the result of a
            | tool, and were deemed indistinguishable from more
            | attempts to send bad code and perform tests on the Linux
            | maintainers.
        
           | kerng wrote:
            | Very easy: if it's not authorized, it's not a pentest or
            | red team operation.
           | 
           | Any pentester or red team considers their profession an
           | ethical one.
           | 
           | By the response of the Linux Foundation, this is clearly not
           | authorized nor falling into any bug bounty rules/framework
           | they would offer. Social engineering attacks are often out of
           | bounds for bug bounty - and even for authorized engagements
           | need to follow strict rules and procedures.
           | 
            | I wonder if there are even legal steps that could be
            | taken by the Linux Foundation.
        
         | utopcell wrote:
         | Agreed. Plus, I find the "oh, we didn't know what we were
         | doing, you're not an inviting community" social engineering
          | response completely slimy and off-putting.
        
         | alpaca128 wrote:
         | > having the gall to brag about it is a new low
         | 
         | Even worse: They bragged about it, then sent a new wave of
         | buggy patches to see if the "test subjects" fall for it once
         | again, and then tried to push the blame on the kernel
         | maintainers for being "intimidating to newbies".
         | 
         | This is thinly veiled and potentially dangerous bullying.
        
           | cosmie wrote:
           | > This is thinly veiled and potentially dangerous bullying.
           | 
           | Which itself could be the basis of a follow up research
           | paper. The first one was about surreptitiously slipping
           | vulnerabilities into the kernel code.
           | 
           | There's nothing surreptitious about their current behavior.
           | They're now known bad actors attempting to get patches
           | approved. First nonchalantly, and after getting called out
           | and rejected they framed it as an attempt at bullying by the
           | maintainers.
           | 
           | If patches end up getting approved, everything about the
           | situation is ripe for another paper. The initial rejection,
           | attempting to frame it as bullying by the maintainers (which
           | ironically, is thinly veiled bullying itself), impact of
           | public pressure (which currently seems to be in the
           | maintainers' favor, but the public is fickle and could turn
           | on a dime).
           | 
           | Hell, even if the attempt isn't successful you could probably
           | turn it into another paper anyway. Wouldn't be as splashy,
           | but would still be an interesting meta-analysis of techniques
           | bad actors can use to exploit the human nature of the open
           | source process.
        
             | dcow wrote:
             | Yep, while the downside is that it wastes maintainers' time
             | and they are rightfully annoyed, I find the overall topic
             | fascinating not repulsive. This is a real world red team
             | pen test on one of the highest profile software projects.
             | There is a lot to learn here all around! Hope the UMN
             | people didn't burn goodwill by being too annoying, though.
             | Sounds like they may not be the best red team after all...
        
               | [deleted]
        
               | TheSpiceIsLife wrote:
               | A _real world red team_?
               | 
               | Wouldn't the correct term for that be: _malicious threat
               | actor_?
               | 
                |  _Red team_ penetration testing doesn't involve the
               | element of surprise, and is pre-arranged.
               | 
                | Intentionally wasting people's time, and then going
               | further to claim you weren't, is a _malicious act_ as it
               | _intends to do harm_.
               | 
               | I agree though, it's fascinating but only in the _true
               | crime_ sense.
        
               | brobdingnagians wrote:
               | Totally agree. It is a threat, not pen testing. Pen
               | testing would stop when it was obvious they would or had
               | succeeded and notify the project so they could remedy the
               | process and prevent it in the future. Reverting to name
               | calling and outright manipulative behavior is immature
               | and counterproductive in any case except where the action
               | is malicious.
        
               | kemonocode wrote:
               | A good red team pentest would have been to just stop
               | after the first round of patches, not to try again and
               | then cry foul when they get rightfully rejected. Unless,
               | of course, social denunciation is part of the attack- and
               | yes, it's admittedly a pretty good sidechannel- but
               | that's a rather grisly social engineering attack,
               | wouldn't you agree?
        
             | throw14082020 wrote:
             | > Which itself could be the basis of a follow up research
             | paper.
             | 
             | Seems more like low grade journalism to me.
        
             | derefr wrote:
             | But the first paper is a Software Engineering paper
             | (social-exploit-vector vulnerability research), while the
             | hypothetical second paper would be a Sociology paper about
             | the culture of FOSS. Kind of out-of-discipline for the
             | people who were writing the first paper.
        
               | cosmie wrote:
               | There's certainly a sociology aspect to the whole thing,
               | but the hypothetical second paper is just as much social-
               | exploit-vector vulnerability research as the first one.
               | The only change being the state of the actor involved.
               | 
               | The existing paper researched the feasibility of unknown
                | actors introducing vulnerable code. The hypothetical
               | second paper has the same basis, but is from the vantage
               | point of a known bad actor.
               | 
               | Reading through the mailing list (as best I can), the
               | maintainer's response to the latest buggy patches seemed
               | pretty civil[1] in general, and even more so considering
               | the prior behavior. And the submitter's response to that
               | (quoted here[2]) went to the extreme end of
               | defensiveness. Instead of addressing or acknowledging
               | anything in the maintainer's message, the submitter:
               | 
               | - Rejected the concerns of the maintainer as "wild
               | accusations bordering on slander"
               | 
               | - Stating their naivety of the kernel code, establishing
               | themselves as a newbie
               | 
               | - Called out the unfriendliness of the maintainers to
                | newbies and non-experts
               | 
               | - Accused the maintainer of having preconceived biases
               | 
               | An empathetic reading of their response is that they
               | really are a newbie trying to be helpful and got
               | defensive after feeling attacked. But a cynical reading
                | of their response is that they're attempting to
                | exploit high-visibility social issues to pressure or
               | coerce the maintainers into accepting patches from a
               | known bad actor.
               | 
               | The cynical interpretation is as much social-exploit-
               | vector vulnerability research as what they did before.
               | Considering how they deflected on the maintainer's
               | concerns stemming from their prior behavior and
               | immediately pulled a whole bunch of hot-button social
               | issues into the conversation at the same time, the
               | cynical interpretation seems at least plausible.
               | 
               | [1] https://lore.kernel.org/linux-
               | nfs/YH5%2Fi7OvsjSmqADv@kroah.c...
               | 
               | [2] https://lore.kernel.org/linux-
               | nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
        
             | brobdingnagians wrote:
             | I agree. If it quacks like a duck and waddles like a duck,
             | then it is a duck. Anyone secretly introducing exploitable
             | bugs in a project is a malicious threat actor. It doesn't
             | matter if it is a "respectable" university or a teenager,
             | it matters what they _do_.
        
               | wolverine876 wrote:
               | They did not secretly introduce exploitable bugs:
               | 
               |  _Once any maintainer of the community responds to the
                | email, indicating "looks good", we immediately point out
               | the introduced bug and request them to not go ahead to
               | apply the patch. At the same time, we point out the
               | correct fixing of the bug and provide our proper patch.
               | In all the three cases, maintainers explicitly
               | acknowledged and confirmed to not move forward with the
               | incorrect patches. This way, we ensure that the incorrect
               | patches will not be adopted or committed into the Git
               | tree of Linux._
               | 
               | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
               | hc....
               | 
               | > If it quacks like a duck and waddles like a duck, then
               | it is a duck.
               | 
               | A lot of horrible things have happened on the Internet by
               | following that philosophy. I think it's imperative to
               | learn the rigorous facts and different interpretations of
                | them, or we will continue to do great harm and be easily
               | manipulated.
        
           | smsm42 wrote:
            | This looks like a very cynical attempt to leverage PC
            | language to manipulate people. Basically a social
            | engineering attack. They surely will try to present it
            | as a pentest, but IMHO it should be treated as an
            | attack.
        
           | bobthechef wrote:
           | Why not just call it what it is: fraud. They tried to
           | _deceive_ the maintainers into incorporating buggy code under
            | false pretenses. They _lied_ (yes, let's use that word)
            | about it, then doubled down on the lie when caught.
        
           | twic wrote:
           | I don't see any sense in which this is bullying.
        
             | tingol wrote:
              | I come to your car, cut your brakes, and tell you just
              | before you go for a ride that it's just research and I
              | will repair them. What would you call a person like
              | that?
        
               | twic wrote:
               | I'm not sure, but i certainly wouldn't call them a bully.
        
           | sergiotapia wrote:
           | >then tried to push the blame on the kernel maintainers for
           | being "intimidating to newbies".
           | 
           | As soon as I read that all sympathy for this clown was out
           | the window. He knows exactly what he's doing.
        
           | spfzero wrote:
           | There are some activities that _should_ be  "intimidating to
           | newbies" though, shouldn't there? I can think of a lot of
           | specific examples, but in general, anything where significant
           | preparation is helpful in avoiding expensive (or dangerous)
           | accidents. Or where lack of preparation (or intentional
           | "mistakes" like in this case) would shift the burden of work
           | unfairly onto someone else. Also, a "newbie" in the context
           | of Linux system programming would still imply reasonable
           | experience and skill in writing code, and in checking and
           | testing your work.
        
           | ISL wrote:
           | It isn't even bullying. It is just dumb?
           | 
           | Fortunately, the episode also suggests that the kernel-
           | development immune-system is fully-operational.
        
             | tigerBL00D wrote:
             | Not sure. From what I read they've successfully introduced
             | a vulnerability in their first attempt. Would anyone have
             | noticed if they didn't call more attention to their
             | activities?
        
               | mirchibajji wrote:
               | Can you point to this please? From my reading, it appears
               | that their earlier patches were merged, but there is no
               | mention of them being actual vulnerabilities. The lkml
               | thread does mention they want to revert these patches,
               | just in case.
        
               | lb7000 wrote:
               | From LKML
               | 
               | "A lot of these have already reached the stable trees. I
               | can send you revert patches for stable by the end of
               | today (if your scripts have not already done it)."
               | 
               | https://lore.kernel.org/linux-
               | nfs/YH%2F8jcoC1ffuksrf@kroah.c...
        
               | PeterisP wrote:
               | It's not saying that those are introduced bugs; IMHO
               | they're just proactively reverting all commits from these
               | people.
        
               | lfc07 wrote:
               | Yes because the UMN guys have made their intent clear,
               | and even went on to defend their actions. They should
                | have apologised and asked for their patches to be reverted.
        
               | remexre wrote:
               | Which kind of sucks for everyone else at UMN, including
               | people who are submitting actual security fixes...
        
               | capybara_2020 wrote:
               | > > > They introduce kernel bugs on purpose. Yesterday, I
               | took a look on 4 > > > accepted patches from Aditya and 3
               | of them added various severity security > > > "holes".
               | 
               | It looks like actual security vulnerabilities were
               | successfully added to the stable branch based on that
               | comment.
        
               | dr-detroit wrote:
                | don't most distros come with systemd, so they're
                | already compromised?
        
           | deagle50 wrote:
           | And they tried to blow the "preconceived biases" dog whistle.
           | I read that as a threat.
        
             | fartcannon wrote:
             | Intimidating new people is the same line that was lobbed at
             | Linus to neuter his public persona. It would not surprise
             | me if opportunists utilize this kind of language more
             | frequently in the future.
        
             | forkwhilefork wrote:
             | Context for those curious: https://lore.kernel.org/linux-
             | nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
        
               | kstrauser wrote:
               | WTF. I didn't have strong feelings about that until
               | reading this thread. Nothing like doubling down on the
               | assholishness after getting caught, Aditya.
        
           | mintplant wrote:
           | I'm gonna go against the grain here and say I don't think
           | this is a continuation of the original research. It'd be a
           | strange change in methodology. The first paper used temporary
           | email addresses, why switch to a single real one? The first
           | paper alerted maintainers as soon as patches were approved,
           | why switch to allowing them to make it through to stable? The
           | first paper focused on a few subtle changes, why switch to
           | random scattershot patches? Sure, this person's advisor is
           | listed as a co-author of the first paper, but that really
           | doesn't imply the level of coordination that people are
           | assuming here.
        
             | powersnail wrote:
             | They had already done it once without asking for consent.
             | At least in my eye, that makes them--everyone in the team--
             | lose their credibility. Notifying the kernel maintainers
             | _afterwards_ is irrelevant.
             | 
              | It is not the job of the kernel maintainers to justify
              | the team's new nonsense patches. If the team has
              | stopped bullshitting, they should defend the merit of
              | their own
             | patches. They have failed to do so, and instead tried to
             | deflect with recriminations, and now they are banned.
        
             | legacynl wrote:
             | It doesn't really matter that he/they changed MO, because
              | they've already shown themselves to be untrustworthy.
              | You can only get the benefit of the doubt once.
             | 
              | I'm not saying people or institutions can't change. But the
             | burden of proof is on them now to show that they did. A
             | good first step would be to acknowledge that there IS a
             | good reason for doubt, and certainly not whine about
             | 'preconceived bias'.
        
           | IAmNotAFix wrote:
            | At this point how do you even tell the difference
            | between their genuine behavior and the behavior that is
            | part of the research?
        
             | fellellor wrote:
             | It would be hard to show this wasn't genuine behaviour but
             | a malicious attempt to infect the Linux kernel. That still
             | doesn't give them a pass though. Academia is full of
             | copycat "scholars". Kernel maintainers would end up wasting
             | significant chunks of their time fending off this type of
             | "research".
        
               | winkeltripel wrote:
               | The kernel maintainers don't need to show or prove
               | anything, or owe anyone an explanation. The University's
               | staff/students are banned, and their work will be undone
               | within a few days.
               | 
               | The reputational damage will be lasting, both for the
               | researchers, and for UMN.
        
             | Mordisquitos wrote:
             | I would say that, from the point of view of the kernel
              | maintainers, that question is irrelevant, as they never
              | agreed to take part in any research. Therefore, from
             | their perspective, all the behaviour is genuinely
             | malevolent regardless of the individual intentions of each
             | UMN researcher.
        
               | nsood wrote:
               | It does prevent anyone with a umn.edu email address, be
                | it a student or professor, from submitting patches of _any
               | kind,_ even if they're not part of research at all. A
               | professor might genuinely just find a bug in the Linux
               | kernel running on their machines, fix it, and be unable
               | to submit it.
               | 
               | To be clear, I don't think what the kernel maintainers
               | did is wrong; it's just sad that all past and future
               | potentially genuine contributions to the kernel from the
               | university have been caught in the crossfire.
        
               | olliej wrote:
                | I think explicitly stating that no one from the
               | university is allowed to submit patches includes
               | disallowing them from submitting using personal/spoof
               | addresses.
               | 
               | Sure they can only automatically ban the .edu address,
               | but it would be pretty meaningless to just ban the
               | university email host, but be ok with the same people
               | submitting patches from personal accounts.
               | 
               | I would also explicitly ban every person involved with
               | this "research" and add their names to a hypothetical ban
               | list.
        
               | ipaddr wrote:
               | As a Minnesota U employee/student you cannot submit
               | officially from campus or using the minn. u domain.
               | 
               | As Joe Blow at home who happens to go to school or work
               | there you could submit even if you were part of the
               | research team. Because you are not representing the
               | university. The university is banned.
        
               | thatguy0900 wrote:
               | The professor, or any students, can just use a non edu
               | email address, right? It really doesn't seem like a big
               | deal to me. It's not like they can personally ban anyone
               | who's been to that campus, just the edu email address.
        
               | olliej wrote:
               | no, that would get them around an automatic filter, but
               | the ban was on people from the university, not just
               | people using uni email addresses.
               | 
               | I'm not sure how the law works in such cases, but surely
               | the IRB would eventually have to realize that an explicit
               | denouncement by the victims means that the "research"
               | cannot go ahead
        
               | tmp538394722 wrote:
               | For one, it's a way of punishing the university.
               | 
                | E.g., if you want to do kernel-related research,
                | don't go to the University of Minnesota.
        
               | janoc wrote:
                | Which is completely fine, IMO, because, as pointed out
               | already, the university's IRB has utterly failed here.
               | There is no way how this sort of "research" could have
               | passed an ethics review.
               | 
                | - Human subjects
                | 
                | - Intentionally misleading/misrepresenting things,
                | potential for a lot of damage, given how widespread
                | Linux is
                | 
                | - No informed consent at all!
               | 
               | Sorry but one cannot use unsuspecting people as guinea
               | pigs for research, even if it is someone from a reputable
               | institution.
        
               | vtail wrote:
               | However, if you use a personal email, you can't hide
               | behind "I'm just doing my research".
        
               | mort96 wrote:
               | I looked into it (https://old.reddit.com/r/linux/comments
               | /mvd6zv/greg_khs_resp...). People from the University of
                | Minnesota have 280 commits to the Linux kernel. Of those,
               | 232 are from the three people directly implicated in this
               | attack (that is, Aditya Pakki and the two authors of the
                | paper), and the remaining 28 commits are from one
               | individual who might not be directly involved.
        
               | lfc07 wrote:
               | He writes "We are not experts in the linux kernel..."
               | after pushing so many changes since 2018. I am left
               | scratching my head.
        
               | radiator wrote:
               | And what about the other 20 commits? (not that it is so
               | important, but sometimes a missing detail can be
               | annoying)
        
               | lfc07 wrote:
               | Haha
        
               | nanna wrote:
               | This. This research says something about Minnesota's
               | ethics approval process.
        
               | ineedasername wrote:
               | I'm surprised it passed their IRB. Any research has to go
               | through them, even if it's just for the IRB to confirm
               | with "No this does not require a full review". Either the
               | researchers here framed it in a way that there was no
                | damage being done, or they relied on their IRB
                | lacking the technical understanding to realize what
                | was going on.
        
               | cbkeller wrote:
               | According to one of the researchers who co-signed a
               | letter of concern over the issue, the Minnesota group
               | also only received IRB approval retroactively, after said
               | letter of concern [1].
               | 
               | [1] https://twitter.com/SarahJamieLewis/status/1384871385
               | 5379087...
        
               | op00to wrote:
               | lol it didn't. looks like some spots are opening up at
               | UMN's IRB. :)
        
               | ineedasername wrote:
               | Yeah, I don't think they can claim that human subjects
               | weren't part of this when there is outrage on the part of
               | the humans working at the targeted organization and a ban
               | on the researchers' institution from doing any research
               | in this area.
        
               | sfshaw wrote:
               | In the paper they state that they received an exemption
               | from the IRB.
        
               | ineedasername wrote:
               | I'd love to see what they submitted to their IRB to get
               | the determination of no human subjects:
               | 
               | It had a high human component because it was humans
               | making decisions in this process. In particular, there
               | was the potential to cause maintainers personal
               | embarrassment or professional censure by letting through
               | a bugged patch. If the researchers even considered this
               | possibility, I doubt the IRB would have approved this
               | experimental protocol if laid out in those terms.
        
               | sonofgod wrote:
               | https://research.umn.edu/units/irb/how-submit/new-study ,
               | find the document that points to "determining that it's
               | not human research", leads you to https://drive.google.co
               | m/file/d/0Bw4LRE9kGb69Mm5TbldxSVkwTms...
               | 
               | The only relevant question is: "Will the investigator use
               | ... information ... obtained through ... manipulations of
               | those individuals or their environment for research
               | purposes?"
               | 
               | which could be idly thought of as "I'm just sending an
               | email, what's wrong with that? That's not manipulating
               | their environment".
               | 
               | But I feel they're wrong.
               | 
               | https://grants.nih.gov/policy/humansubjects/hs-
               | decision.htm would seem to agree that it's non-exempt
               | (i.e. potentially problematic) human research if "there
               | will be an interaction with subjects for the collection
               | of ... data (including ... observation of behaviour)" and
               | there's not a well-worn path (survey/public observation
               | only/academic setting/subject agrees to study) with
               | additional criteria.
        
               | ineedasername wrote:
               | Agreed: sending an email is certainly manipulating their
               | environment when the action taken (or not taken) as a
               | result has the potential for harm. Imagine an extreme
               | example of an email death-threat: That is an undeniable
               | harm, meaning email has such potential, so the IRB should
               | have conducted a more thorough review.
               | 
               | Besides, all we have to do is look at the outcome:
               | Outrage on the part of the organization targeted, and a
               | ban by that organization that will limit the researcher's
               | institution from conducting certain types of research.
               | 
                | That this human-level harm was the actual outcome
                | means the experiment was a de facto experiment on
                | human subjects.
        
               | soneil wrote:
               | I have to admit, I can completely understand how
               | submitting source code patches to the linux kernel
               | doesn't sound like human testing to the layman.
               | 
               | Not to excuse them at all, I think the results are
               | entirely appropriate. What they're seeing is the immune
               | system doing its job. Going easy on them just because
               | they're a university would skew the results of the
               | research, and we wouldn't want that.
        
               | ineedasername wrote:
               | Agreed: I can understand how the IRB overlooked this. The
               | researchers don't get a pass though. And considering the
               | actual harm done, the researchers could not have
               | presented an appropriate explanation to their IRB.
        
               | light_hue_1 wrote:
               | This research is not exempt.
               | 
               | One of the important rules you must agree to is that you
               | cannot deceive anyone in any way, no matter how small, if
               | you are going to claim that you are doing exempt
               | research.
               | 
               | These researchers violated the rules of their IRB.
               | Someone should contact their IRB and tell them.
        
               | ShamblingMound wrote:
               | This was (1) research with human subjects (2) where the
               | human subjects were deceived, and (3) _there was no
               | informed consent_!
               | 
               | If the IRB approved this as exempt and they had an
               | accurate understanding of the experiment, it makes me
               | question the IRB itself. Whether the researchers were
               | dishonest with the IRB or the IRB approved this as
               | exempt, it's outrageous.
        
               | eieiei wrote:
               | Yes!! Minnesota sota caballo rey. Spanish cards dude
        
               | oriolpr wrote:
               | Hi
        
               | oriolpr wrote:
               | hii
        
               | [deleted]
        
               | [deleted]
        
             | cjohansson wrote:
              | One could probably do a paper about evil universities
              | doing stupid things. Anyway, evil actions are evil
              | regardless of the context; research 100 years ago was
              | intentionally evil without being questioned, but today
              | ethics should filter what research should be done.
        
         | [deleted]
        
         | seoaeu wrote:
         | There is no separate real world distinct from academia. Saying
         | that scientists and researchers whose job it is to understand
         | and improve the world are somehow becoming "increasingly
         | disconnected from the real world" is a pretty cheap shot.
         | Especially without any proof or even a suggestion of how you
         | would quantify that.
        
         | nborwankar wrote:
         | Technically analogous to pen testing except that it wasn't done
         | at the behest of the target, as legal pen testing is done.
          | Hence it is indistinguishable from, and must be
          | considered, a malicious attack.
        
       | knz_ wrote:
       | The bad actors here should be expelled and deported. The
       | nationalities involved make it clear this is likely a backfired
       | foreign intelligence operation and not just 'research'.
       | 
       | They were almost certainly expecting an obvious bad patch to be
       | reverted while trying to sneak by a less obvious one.
        
       | im3w1l wrote:
       | I can't help but think of the Sokal affair. But I'll leave the
       | comparison to someone more knowledgeable about them both.
        
         | whatshisface wrote:
         | I'd bet that it was inspired by the Sokal affair. The
         | difference in reaction is probably because people think the
         | purity of Linux is important but the purity of obscure academic
         | journals isn't. (They're probably right, because one fault in
         | Linux will make the whole system insecure, whereas one dumb
         | paper would go in next to the other dumb papers and leave the
         | good papers unharmed.)
         | 
         | The similarities are that reviewers can get sleepy no matter
         | what they're reviewing. Troll doll QC staff get sleepy. Nuclear
         | reactor operators get sleepy too.
        
           | BugsJustFindMe wrote:
           | > _The similarities are that reviewers can_
           | 
           | Most people in the outgroup who know about the Sokal Affair
           | but who know nothing about the journal they submitted to
           | aren't aware of this, but Social Text was known to be not
           | peer reviewed at the time. It's not that reviewers failed
           | some test; there explicitly and publicly wasn't a review
           | process. Everyone reading Social Text at the time would have
           | known that and interpreted contents accordingly, so Sokal
           | didn't demonstrate anything of value and was just being a
           | jackass.
        
       | devillius wrote:
       | An appropriate place to make a report:
       | https://compliance.umn.edu/
        
       | mfringel wrote:
       | When James O' Keefe tries to run a fake witness scam on the
       | Washington Post, and the newspaper successfully detects it, the
       | community responds with "Well played!"
       | 
       | When a university submits intentionally buggy patches to the
       | Linux Kernel, and the maintainers successfully detect it, the
       | community responds with "That was an incredibly scummy thing to
       | do."
       | 
       | I sense a teachable moment, here.
        
         | noofen wrote:
         | Being a Linux Kernel maintainer is a thankless job. Being a
         | Washington Post journalist is nothing more than doing Bezos'
         | bidding and dividing the country in the name of profit.
        
         | loeg wrote:
         | I think O'Keefe is scummy, too.
        
       | Taylor_OD wrote:
        | The professor is going to give a TED talk in about a year talking
       | about how he got banned from open source development and the five
       | things he learned from it.
        
       | f46f7ab7de71 wrote:
       | Chink spies
        
       | Luker88 wrote:
       | The discussion points link to the github of the research
       | 
       | https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
       | 
       | It has yet to be published (due next month)
       | 
       | How about opening a few bug reports to correctly report the
       | final response of the community and the actual impact?
       | 
       | Not asking to harass them: if anyone should do it, it would be
       | the kernel devs, and I'm not one of them.
        
       | ficiek wrote:
       | Is introducing bugs into computer systems on purpose like this in
       | some way illegal in the USA? I understand that Linux is run by a
       | ton of government agencies as well, would they take interest in
       | this?
        
       | devwastaken wrote:
       | this is not surprising to me given the quality of Minnesota
       | universities. U of M should be banned from existence. I
       | remember vividly how they'd break their budgets redesigning
       | cafeterias, hiring low-quality 'professors' who refused to
       | digitize paper assignments. (They didn't know how.)
       | Artificially inflated dorm costs without access to affordable
       | cooking. (Meal plans only.) They have bankrupted plenty of
       | students who were forced to drop out due to their policies on
       | mental health. It's essentially against policy to be depressed
       | or suicidal. They prey on kids in high school who don't at all
       | know what they're signing up for.
       | 
       | Defund federal student loans. Make these universities stand on
       | their own two feet or be replaced by something better.
        
       | francoisp wrote:
       | I fail to see how this does not amount to vandalism of public
       | property. https://www.shouselaw.com/ca/defense/penal-code/594/
        
       | inquisitivemind wrote:
       | I have a question for this community:
       | 
       | Insofar as this specific method of injecting flaws matches a
       | foreign country's work done on U.S. soil - as many people in this
       | thread have speculated - do people here think that U.S. three
       | letter agencies (in particular NSA/CIA) should have the ability
       | to look at whether the researchers are foreign agents/spies, even
       | though the researchers are operating from the United States? For
       | example, should the three letter agencies have the ability to
       | review these researchers' private correspondence and social
       | graphs?
       | 
       | Insofar as those agencies _should_ have this ability, then, when
       | should they use it? If they do use it, and find that someone is a
       | foreign agent, in what way and with whom should they share their
       | conclusions?
        
       | jl2718 wrote:
       | I would check their ties to nation-state actors.
       | 
       | In closed source, nobody would even check. Modern DevOps has
       | essentially replaced manual code review with unit tests.
        
         | JohnWhigham wrote:
         | I don't understand why this isn't a more widely-held sentiment.
         | There's been instance after instance of corporate espionage in
         | Western companies involving Chinese actors in the past 2
         | decades.
        
         | jnxx wrote:
         | That gives me goose bumps.
        
       | dboreham wrote:
       | The replies here have been fascinating to read. Yes, it's bad
       | that subterfuge was used against the kernel devs. But don't
       | the many
       | comments here expressing outrage at the actions of these
       | researchers sound exactly like the kind of outrage commonly
       | expressed by those in power when their misdeeds are exposed? e.g.
       | Republican politicians outraged at a "leaker" who has leaked
       | details of their illegal activity. It honestly looks to me like
       | the tables have been turned here. Surely the fact that the
       | commonly touted security advantages of OSS have been shown to be
       | potentially fictitious, is at least as worrying as the
       | researchers' ethics breaches?
        
         | not2b wrote:
         | One very good security practice is that if you find that you
         | have a malicious contributor, you fire that contributor. The
         | "misdeeds" were committed by the UMN researchers, not by the
         | Linux maintainers.
        
         | mratsim wrote:
         | Vulnerabilities in OSS are fixed over time. They are fixed by
         | people running the code and contributing back, by fuzzing
         | efforts, by testing a release candidate.
         | 
         | The difference between OSS and closed source is not the number
         | of reviewers for the initial commit, it's the number of
         | reviewers over years of usage.
        
       | macspoofing wrote:
       | Linux maintainers should log a complaint with the University's
       | ethics board. You can't just experiment on people without
       | consent.
        
         | smnrchrds wrote:
         | According to duncaen, the researchers had gotten the green
         | light from the ethics board before conducting the experiment.
         | 
         | https://news.ycombinator.com/item?id=26888978
        
           | rurban wrote:
           | Because they lied to them. They promised not to do any actual
           | harm. But they did.
        
           | formerly_proven wrote:
           | The IRB makes a decision based on the study protocol and
           | design, so if you intentionally mislead it or make false
           | statements there, IRB approval doesn't really mean
           | anything.
        
             | ajarmst wrote:
             | It means they either lied to the IRB or the IRB is
             | absolutely incompetent. Possibly actionably so. I've sat on
             | an IRB. This experiment would have been punted on initial
             | review. It wouldn't even have made the agenda for
             | discussion and vote.
        
         | kevincox wrote:
         | I agree. They are attempting to put security vulnerabilities
         | into a security-critical piece of software that is used by
         | billions of people. This is clearly unethical and unacceptable.
        
         | [deleted]
        
         | Frost1x wrote:
         | I always find the dichotomy we have in the US regarding
         | human subject experimentation interesting. We essentially
         | have two ecosystems of human-subject research, with
         | different rules about what is and isn't allowed: publicly
         | funded and privately funded. The contrast is a bit stark.
         | 
         | We have publicly funded rules (typically derived from, or
         | pressured by, the availability of federal or state monies
         | and resources) which are quite strict, have ethics and IRB
         | boards, and cover even behavioral studies like this one,
         | where no direct physical harm is induced but people's
         | behavior is still manipulated. This is the type of
         | experiment you're referring to, where you can't experiment
         | on people without their consent (and by the way, I agree
         | with this opinion).
         | 
         | Meanwhile, we have privately funded research, which has a
         | far looser set of constraints and falls under everyday
         | regulations.
         | You can't really physically harm someone or inject syphilis in
         | them (Tuskegee experiments) which makes sense, but when we
         | start talking about human subjects in terms of data, privacy of
         | data, or behavioral manipulation most regulation goes out the
         | window.
         | 
         | These people likely could be reprimanded, even fired, and
         | scarlet-lettered, making their careers going forward more
         | difficult (maybe not so much in this specific case because
         | it's really not _that_ harmful), but enough to screw them
         | over financially and potentially in terms of career growth.
         | 
         | Meanwhile, some massive business could do this with their own
         | funding and not bat an eye. Facebook could do this (I don't
         | know why they would) but they could. Facebook is a prime
         | example of largely unregulated human subject experimentation
         | though. Social networks are a hotbed for data, interactions,
         | and setting up experimentation. It's not just Facebook though
         | (they're an obvious easy target), it's slews of businesses
         | collecting data and manipulating it around consumers:
         | marketing/advertising, product design/UX focusing on
         | 'engagement', and all sorts of stuff. Every industry does this
         | and that sort of human subject experimentation is accepted
         | because $money$. Meanwhile, researchers from public funding
         | sources are crucified for similar behaviors.
         | 
         | I'm not defending this sort of human subject experimentation,
         | it's ethically questionable, wrong, and should involve
         | punishment. I am however continually disgusted by the double
         | standard we have. If we as a society really think this sort of
         | experimentation on human subjects or human subject data is so
         | awful, why do we allow it to occur under private capital and
         | leave it _largely_ unregulated?
        
         | hn8788 wrote:
         | One of the other emails in the chain says they already did.
         | 
         | > This is not ok, it is wasting our time, and we will have to
         | report this, AGAIN, to your university...
        
         | judge2020 wrote:
         | "Is it ethical to A/B test humans on web pages?"
        
         | s_dev wrote:
         | I'm not sure it is experimenting on people without consent.
         | Though it's certainly shitty and opportunistic of UoM to do
         | this.
         | 
         | Linux bug fixes are open to the public. The experiment isn't
         | on people but on bugs. It would be like filing different
         | customer support complaints to change the behavior of a
         | company -- you're not experimenting on people but on the
         | process by which that company interfaces with the public.
         | 
         | I see no wrong here including the Linux maintainers banning
         | submissions from UoM which is completely justified as time
         | wasting.
        
           | HPsquared wrote:
           | It's experimenting with how specific people manage bugs.
        
           | garyfirestorm wrote:
           | UoM generally refers to University of Michigan. You probably
           | meant UMN.
        
           | koheripbal wrote:
           | I'm not sure which form of ethical violation this is, but
           | it's malicious and should be reported.
        
           | travisjungroth wrote:
           | I assure you that customer support reps and Linux maintainers
           | are in fact people.
        
         | walrus01 wrote:
         | I have a theory that while the university's ethics board may
         | have people on it who are familiar with the myriad of issues
         | surrounding, for instance, biomedical research, they have
         | nobody on it with even the most cursory knowledge of open
         | source software development. And nobody who has even the
         | faintest idea of how critically important the Linux kernel is
         | to global infrastructure.
        
       | [deleted]
        
       | omginternets wrote:
       | I did my Ph.D in cognitive neuroscience, where I conducted
       | experiments on human subjects. Running these kinds of experiments
       | required approval from an ethics committee, which for all their
       | faults (and there are many), are quite good at catching this kind
       | of shenanigans.
       | 
       | Is there not some sort of equivalent in this field?
        
         | angry_octet wrote:
         | It seems they lied to the ethics committee. But I'm not holding
         | my breath for the University to sanction them or withdraw/EoC
         | their papers, because Universities prefer to have these things
         | swept under the carpet.
        
           | forgotpwd16 wrote:
           | >they lied to the ethics committee
           | 
           | That would be fraud, no?
        
       | endisneigh wrote:
       | Though I disagree with the research in general, if you _did_ want
       | to research  "hypocrite commits" in an actual OSS setting, there
       | isn't really any other way to do it other than actually
       | introducing bugs per their proposal.
       | 
       | That being said, I think it would've made more sense for them to
       | have created some dummy complex project for a class and have say
       | 80% of the class introduce "good code", 10% of the class review
       | all code and 10% of the class introduce these "hypocrite"
       | commits. That way you could do similar research without having to
       | potentially break legit code in use.
       | 
       | I say this since the crux of what they're trying to discover is:
       | 
       | 1. In OSS anyone can commit.
       | 
       | 2. Though people are incentivized to reject bad code,
       | complexities of modern projects make 100% rejection of bad code
       | unlikely, if not impossible.
       | 
       | 3. Malicious actors can take advantage of (1) and (2) to
       | introduce code that does both good and bad things such that an
       | objective of theirs is met (presumably putting in a back-door).
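       | 
       | To make (3) concrete, here is a sketch of the shape such a
       | commit could take (a hypothetical example, not one of the
       | actual patches): the visible change plugs a real memory leak
       | on an error path, but it frees memory the caller still uses.
       | 
       |       err = setup(dev->priv);
       |       if (err) {
       |               kfree(dev->priv);  /* added "fix": plugs a leak */
       |               return err;        /* but the caller goes on to
       |                                     use dev->priv: use-after-
       |                                     free */
       |       }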
        
         | sigstoat wrote:
         | > Though I disagree with the research in general, if you did
         | want to research "hypocrite commits" in an actual OSS setting,
         | there isn't really any other way to do it other than actually
         | introducing bugs per their proposal.
         | 
         | they could've done the much harder work of studying all of the
         | incoming patches looking for bugs, and then just not reporting
         | their findings until the kernel team accepts the patch.
         | 
         | the kernel has a steady stream of incoming patches, and surely
         | a number of bugs in them to work with.
         | 
         | yeah it would've cost more, but would've also generated
         | significant value for the kernel.
        
           | endisneigh wrote:
           | The point of the research isn't to study bugs, it's to
           | study hypocrite commits. Given that a hypocrite commit
           | requires _intention_, there's no other way except to
           | submit commits yourself, as the submitter would obviously
           | know their own intention.
        
             | cgriswald wrote:
             | In what way does a hypocrite commit differ from a commit
             | which unintentionally has the same effect?
        
               | endisneigh wrote:
               | They explain everything in their paper:
               | https://github.com/QiushiWu/qiushiwu.github.io/blob/main/pap...
        
         | Sanzig wrote:
         | They could have contacted a core maintainer and explained to
         | them what they planned to do. That core maintainer could have
         | then spoken to other senior core maintainers in confidence
         | (including Greg and Linus) to decide if this type of pentest
         | was in the best interest of Linux and the OSS community at
         | large. That decision would need to weigh the possibility of
         | testing and hardening Linux's security review process against
         | possible reputational damage as well as alienating contributors
         | who might quite rightly feel they've been publicly duped.
         | 
         | If leadership was on board, they could have then proceeded
         | with the test under the supervision of those core
         | maintainers, who would ensure the introduced security holes
         | didn't find their way into stable. The insiders themselves
         | would abstain from reviewing those patches, to see whether
         | review by others caught them.
         | 
         | If leadership was _not_ on board, they should have respected
         | the wishes of the Linux team and found another high-
         | visibility open-source project whose maintainers were more
         | amenable to the experiment. There are lots of big open-
         | source projects to choose from; the kernel simply happens to
         | be high-profile.
        
           | endisneigh wrote:
           | I don't disagree, but the point of the research is more to
           | point out a flaw in how OSS supposedly is conducted, not to
           | actually introduce bugs. If you agree with what they were
           | researching (and I don't) any sort of pre-emptive disclosure
           | would basically contradict the point of their research.
           | 
           | I still think the best thing for them would be to simply
           | create their own project and force their own students to
           | commit, but they probably felt that doing that would be too
           | contrived.
        
             | mratsim wrote:
              | Pentesting has widely accepted standards and protocols.
             | 
             | You don't test a bank or Fortune 500 security system
             | without buy-in of leadership ahead of time.
        
               | xxs wrote:
               | Doing otherwise would likely amount to a crime in a lot
               | of cases.
        
               | endisneigh wrote:
               | Those things aren't open source and don't take random
               | submissions though.
               | 
               | In any case as I mentioned before I disagree with what
               | they did.
        
           | not2b wrote:
           | Exactly. A test could have been conducted with the
           | knowledge of Linus and Greg K-H, but not of the other
           | maintainers. If the
           | proposed patch made it all the way through, it could be
           | blocked at the last stage from making it into an actual
           | release or release candidate. But it should be up to the
           | people in charge of the project whether they want to be
           | experimented on.
        
       | bigbillheck wrote:
       | So how does this differ from the Sokal hoax thing?
        
         | cblconfederate wrote:
         | Sokal didn't try to pass harmful ideas, just nonsense.
        
           | sltkr wrote:
           | The patch in the posted mail thread is mostly harmless
           | nonsense too. It's a no-op change that doesn't introduce a
           | bug; at worst it makes the code slightly less readable.
        
             | cblconfederate wrote:
             | Then the title of this post is false, but given their
             | previous paper they probably wanted to inject more
             | serious bugs. If only no-op code can pass, then if
             | anything that's good for Linux.
             | 
             | (This is not really a no-op; it would add some slight
             | delay.)
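             | 
             | For illustration only (a hypothetical sketch, not the
             | patch in question), a change of this shape is
             | semantically a no-op but still costs a few cycles --
             | do_work() stands in for whatever call was already there:
             | 
             |     get_device(dev);     /* take an extra reference... */
             |     ret = do_work(dev);  /* the unchanged call         */
             |     put_device(dev);     /* ...and drop it again       */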
        
       | WrtCdEvrydy wrote:
       | I just want you to know that it is extremely unethical to
       | create a paper where you attempt to discredit others by using
       | your university's reputation to create vulnerabilities on
       | purpose.
       | 
       | I back your decision and fuck these people. I will
       | additionally be sending a strongly worded email to this
       | person, their advisor, and whoever's in charge of this joke of
       | a computer science school. Sometimes I wish we had the ABA
       | equivalent for computer science.
        
         | incrudible wrote:
         | I completely disagree with this framing.
         | 
         | A real malicious actor is going to be planted in some reputable
         | institution, creating errors that look like honest mistakes.
         | 
         | How do you test whether the process catches such
         | vulnerabilities? You do it just the way these researchers
         | did.
         | 
         | Yes, it creates extra homework for some people with certain
         | responsibilities, but that doesn't mean it's unethical.
         | Don't shoot the messenger.
        
           | michelpp wrote:
           | It _is_ unethical. You cannot experiment on people without
           | their consent. Their own university has explicit rules
           | against this.
        
           | hctaw wrote:
           | These are _real_ malicious actors.
        
             | incrudible wrote:
             | You don't know that, but that's also irrelevant. There's
             | _always_ plausible deniability with such bugs. The point
             | is that you need to catch the errors no matter where
             | they come from, because you can't trust anyone.
        
               | hctaw wrote:
               | Carrying out an attack for personal gain is malicious. It
               | doesn't matter if the payload is for crypto mining,
               | creating a backdoor for the NSA, or a vulnerability you
               | can cite in a paper.
               | 
               | Pentesting unwitting participants is malicious, and in
               | many cases illegal.
        
               | mannykannot wrote:
               | It is ironic that you introduce plausible deniability
               | here. No one as concerned about security as you profess
               | to be should consider the presence of plausible
               | deniability as being grounds for terminating a threat
               | analysis. In the real world, where we cannot be sure of
               | catching every error, identifying actual threats, and
               | their capabilities and methods, is a security-enhancing
               | analysis.
        
               | WrtCdEvrydy wrote:
               | But that's the point, you're a security researcher
               | wanting to get the honors of getting a PhD, not a petty
               | criminal, so you're supposed to have a strong ethical
               | background.
               | 
               | A security researcher doesn't just delete a whole hard
               | drive's worth of data to prove they have the rights to
               | delete things, they are trusted for this reason.
        
           | WrtCdEvrydy wrote:
           | > A real malicious actor
           | 
           | They introduced a real vulnerability into a codebase used
           | by billions, lowering worldwide cybersecurity, so they
           | could jerk themselves off over a research paper.
           | 
           | They are real malicious actors and I hope they get hit by
           | the CFAA.
        
             | toomuchtodo wrote:
             | There is a specific subsection of the CFAA that applies
             | to this situation (deployment of unauthorized code that
             | makes its way into non-consenting systems).
             | 
             | This was a bold and unwise exercise, especially for any
             | participant who is an academic in the country on a
             | revocable visa.
        
               | sharken wrote:
               | Others would call it stupid to submit the patches,
               | and it would be fine if there were further
               | consequences to deter others.
        
               | toomuchtodo wrote:
               | Was attempting politeness. You're not wrong.
        
           | andrewzah wrote:
           | No. There are processes to do such sorts of penetration
           | testing. Randomly sending buggy commits or commits with
           | security vulns to "test the process" is extremely unethical.
           | The linux kernel team are not lab rats.
        
             | xiphias2 wrote:
             | It's not simply unethical, it's a national security
             | risk. Is there proof that the Chinese government was not
             | sponsoring this "research", for example?
        
               | andrewzah wrote:
               | Linux kernel vulnerabilities affect the entire world. The
               | world does not revolve around the U.S., and I find it
               | extremely unlikely a university professor in the U.S.
               | doing research for a paper did this on behalf of the
               | Chinese government.
               | 
               | It's far more likely that professor is so out of touch
               | that they honestly think their behavior is acceptable.
        
               | discoduck1 wrote:
               | The bio of the assistant professor, Kangjie Lu, is here:
               | https://www-users.cs.umn.edu/~kjlu/
               | 
               | It probably IS from being out of touch, or perhaps
               | desperation to become tenured. However, he is also an
               | alumnus of Chongqing University:
               | http://www.cse.cqu.edu.cn/info/2095/5360.htm
        
               | kodah wrote:
               | How about that question gets asked when there's actually
               | some semblance of evidence that supports that theory.
                | When you just throw what I call "dual loyalty" out as
                | an immediate possibility, just because the person is
                | from China, it starts to sound real nasty from the
                | observer's point of view.
        
               | cameronh90 wrote:
               | Although there's nothing to suggest that this professor
               | is in any way supported by the Chinese state, I don't
               | think it's completely unreasonable to wonder.
               | 
               | The UK government has already said that China is
               | targeting the UK via academics and students. China is a
               | very aggressive threat with a ton of resources. It's
               | certainly a real scenario to consider.
               | 
               | Just as this "research" has burnt the trust between the
               | kernel maintainers and the UMN, if China intentionally
               | installs spies into western academia, at some point you
               | have to call into question the background of any Chinese
               | student. It's not fair, but currently China is relying on
               | the fact that we care about fairness and due process.
        
               | kodah wrote:
               | I acknowledged it's a possibility and there is precedence
               | for it, at least in industry, in the US.
               | 
               | That said, prove what they did was wrong, prove whether
               | controls like the IRB were used properly and informed
               | correctly, prove or disqualify the veracity of their
               | public statements (like the ones they made to the IEEE),
               | then start looking at possible motivations other than the
               | ones stated. I get that's difficult because these folks
                | have already proven to be integrity violators, but I
                | think it's worthwhile to try to stick to that
                | process.
               | 
               | If you jump straight to dual loyalty it is unfortunately
               | also a position that will be easily co-opted by _other_
               | bad faith actors and needlessly muddies the conversation
               | because not all good faith and reasonable possibilities
                | have been explored yet. I'm promoting the idea of a
               | well-defined process here so that nobody can claim that
               | it's just bigoted people making these accusations.
        
               | splistud wrote:
               | So asking 'why?' in this situation is in some way
               | unethical because the person in question is from China?
               | Or is it that we have to limit the answers to our
               | question because the person is from China? Please advise,
               | and further clarify what thoughts are not permitted based
               | on the nationality of the person in question.
        
               | Woden501 wrote:
               | It's a very real threat and possibility thus an
               | absolutely appropriate question to be asking. There are
               | numerous documented instances of espionage performed by
               | Chinese nationals while operating within the US
               | educational system.
               | 
               | https://www.nbcnews.com/news/china/american-universities-
               | are...
        
               | delaynomore wrote:
               | If that's the case, why would they publish a paper and
               | announce their "research" to the world?
        
             | incrudible wrote:
             | > There are processes to do such sorts of penetration
             | testing.
             | 
             | What's the process then? I doubt there is such a process
             | for the Linux kernel, otherwise the response would've been
             | "you did not follow the process" instead of "we don't like
             | what you did there".
        
               | andrewzah wrote:
               | Well, if there's no process, then it's not ethical (and
               | sometimes, not legal) to purposefully introduce bad
               | commits or do things like that. You need consent.
               | 
                | Firstly, it accomplishes nothing. We already all know
                | that PRs and code submissions are a potential vector
                | for buggy code or security vulnerabilities. This is
                | like saying water is wet.
               | 
                | Secondly, it wastes the time of the people working on
                | the Linux kernel and ruins the trust in code coming
                | from the University of Minnesota.
               | 
               | All of this happened due to caring about one's own
               | research more than the ethics of doing this sort of
               | thing. And continuing to engage in this behavior after
               | receiving a warning.
        
               | incrudible wrote:
               | First of all, whether something is ethical is an opinion,
               | and in my opinion, it is not unethical.
               | 
               | Even if I considered it unethical, I would still want
               | this test to be performed, because I value kernel
               | security above petty ideological concerns.
               | 
               | If this is illegal, then I don't think it should be
               | illegal. There's always debates about the legality of
               | hacking, but there's no doubt that many illegal (and
               | arguably unethical) acts of hacking have improved
               | computer security. If you remember the dire state of
               | computer security in the early 2000s, remember that the
                | solution was not to throw all the hacker kids in jail.
        
               | andrewzah wrote:
               | The kernel team literally already does this by the very
               | nature of reviewing code submission. What do you think
               | they do if not examining the incoming code to determine
               | what, exactly, it does?
               | 
               | "because I value kernel security above petty ideological
               | concerns"
               | 
               | This implies that this is the only or main way security
               | is achieved. This is not true. Also, "valuing kernel
               | security above other things"... is an ideological
               | concern. You just happen to value this ideology more than
               | other ideological concerns.
               | 
               | "whether something is ethical is an opinion"
               | 
               | It is, but there are bases for forming opinions on what
               | is moral and ethical. In my opinion, secretly testing
               | people is not ethical. Again, the difference here is
               | consent. Plenty of organizations agree to
               | probing/intrusion attempts; there is no reason to
               | secretly do this. Again, security is not improved only by
               | secret intrusion attempts.
               | 
               | "there's no doubt that many illegal (and arguably
               | unethical) acts of hacking have improved computer
               | security"
               | 
                | I don't believe in the "ends justify the means" argument.
               | Either it's ethical or it isn't; whether or not security
               | improved in the meantime is irrelevant. Security also
               | improves in its own regard over time.
               | 
                | I do agree that the current laws regarding "hacking"
                | are badly worded and very punitive, but crimes are
                | crimes. Just because you like the hack or think it
                | may be beneficial does not change the fact that it
                | was unauthorized access or an intentional attempt to
                | submit bad, buggy code, etc.
               | 
               | We have to look at it exactly like we look at
                | unauthorized access to, e.g., business properties or
                | people's homes. That doesn't change just because it's
               | digital. You don't randomly walk up to your local
               | business with a lock picking kit to "test their
               | security". You don't randomly steal someone's wallet to
               | "test their security". Why is the digital space any
               | different?
        
               | incrudible wrote:
               | > The kernel team literally already does this by the very
               | nature of reviewing code submission. What do you think
               | they do if not examining the incoming code to determine
               | what, exactly, it does?
               | 
               | Maybe that's what they _claim_ to do, but how do you know
               | for sure? How do you _test_ for it?
               | 
               | > This implies that this is the only or main way security
               | is achieved.
               | 
               | It doesn't, there are many facets of security, social
               | engineering being one of them. Maybe it's controversial
               | to test something that requires misleading people, but
               | realistically the only alternative is to ignore the
               | problem. I prefer not to do that.
               | 
               | > Plenty of organizations agree to probing/intrusion
               | attempts; there is no reason to secretly do this.
               | 
               | Yes there is: Suppose you use some company's service and
               | they refuse to cooperate in regards to pentesting: The
               | "goody two-shoes" type of person just gives up. The
               | "hacker type" puts on their grey hat and plays some golf.
               | Is that unethical? What if they expose some massive flaw
               | that affects millions of unwitting people?
               | 
                | > I don't believe in the "ends justify the means"
                | argument.
               | 
               | Not all ends justify all means, but some ends do justify
               | some means. In fact, if it's a justification to some
               | means, it's almost certainly an end.
               | 
                | > I do agree that the current laws regarding
                | "hacking" are badly worded and very punitive, but
                | crimes are crimes.
               | 
               | Tautologically speaking, crimes are indeed crimes, but
               | what are you trying to say here? Just because it's a
               | crime doesn't mean it is unethical. Sometimes, _not_
               | performing a crime is unethical.
               | 
               | > You don't randomly walk up to your local business with
               | a lock picking kit to "test their security".
               | 
               | Yes, but only because that's illegal, not because it is
               | _unethical_.
               | 
               | > You don't randomly steal someone's wallet to "test
               | their security".
               | 
               | Again, there's nothing morally wrong with "stealing"
               | someone's wallet and then giving it back to them. Better
               | I do it than some pickpocket. I have been tempted on
               | numerous occasions to do exactly that, but it's rather
               | hard explaining yourself in such a situation...
               | 
               | > Why is the digital space any different?
               | 
               | Because the risk of running into a physical altercation
               | is quite low, as is the risk of getting arrested.
        
               | andrewzah wrote:
               | "Maybe that's what they claim to do,"
               | 
               | Our society is built on trust. Do you test the water from
               | the city every time you drink it? Etc. Days like today
               | show that, yes, the kernel team is doing their job.
               | 
               | How about -you- prove that they -aren't- doing their job?
               | 
               | "Suppose you use some company's service and they refuse
               | to cooperate in regards to pentesting ... Is that
               | unethical?"
               | 
               | Yes. You are doing it without their consent. It is
               | unethical. Just because you think you are morally
               | justified in doing something without someone's consent
               | does not mean that it is not unethical. Just because you
               | think the overall end result will be good does not mean
               | that the current action is ethical.
               | 
               | "Yes, but only because that's illegal, not because it is
               | unethical."
               | 
               | This is very pedantic. It's both illegal and unethical.
                | How would you like it if you had a business and random
               | people came by and picked locks, etc, in the "name of
               | security"? That makes zero sense. It's not your
               | prerogative to make other people more secure. If they are
               | insecure and don't want to test it, then it's their own
               | fault when a malicious actor comes in.
               | 
               | "Again, there's nothing morally wrong with "stealing"
               | someone's wallet and then giving it back to them"
               | 
               | Yes, it is morally wrong. In that scenario, you -are- the
               | pickpocket. This is a serious boundary that is being
               | crossed. You are not their parent. You are not their
               | caretaker or guardian. You are not considering their
               | consent -at all-. You have no right to "teach people
               | lessons" just because you feel like you are okay with
               | doing that. If you did that to me I would not hang out
               | with you ever again, and let people know that you might
               | randomly take their stuff or cross boundaries for
               | "ideological reasons".
               | 
               | "Because the risk of running into a physical altercation
               | is quite low, as is the risk of getting arrested. "
               | 
               | This is admission that you know what you're doing is
               | wrong, and the only reason you do it digitally is because
               | it's more difficult to receive consequences for it.
               | 
               | I strongly urge you to start considering consent of other
               | people before taking actions. You can voice your
               | concerns, but things like taking a wallet or picking a
                | lock are crossing the line. Either they will take the
               | advice or not, but you cannot force it by doing things
               | like that.
        
               | incrudible wrote:
               | > Our society is built on trust.
               | 
                | Doveriai, no proveriai -- "trust, but verify."
               | 
               | > Do you test the water from the city every time you
               | drink it?
               | 
               | Not every time, but on a regular basis.
               | 
               | > Days like today show that, yes, the kernel team is
               | doing their job.
               | 
               | ...and I am happy to report that my water test results
               | did not raise concerns.
               | 
               | > Yes. You are doing it without their consent. It is
               | unethical.
               | 
               | I disagree that it is unethical just because it lacks
               | consent. Whistleblowing also implies that there is no
               | consent, yet it is considered ethical. Suppose that
               | Facebook leaves private data out in the open, then
               | refuses to allow anyone to test their system for such
               | vulnerabilities. It would be unethical _not_ to ignore
               | their consent in this regard.
               | 
                | > How would you like it if you had a business and
                | random people came by and picked locks, etc, in the
                | "name of security"? That makes zero sense.
               | 
                | I would find it annoying, of course. Computer hackers
                | _are_ annoying. It's not fun to be confronted with
                | flaws.
               | 
                | The thing is, security is not about how I _feel_. We
                | need to look at things in proportion. If my business
                | were a random shoe store, then perhaps it doesn't
                | matter that my locks aren't that great; perhaps these
                | lockpickers are idiots. If my business houses
                | critical files that absolutely must not be tampered
                | with, then I _cannot afford_ to have shitty locks,
                | and frankly I should be grateful that someone is
                | testing them, for free.
               | 
               | > Yes, it is morally wrong. In that scenario, you -are-
               | the pickpocket. This is a serious boundary that is being
               | crossed. You are not their parent. You are not their
               | caretaker or guardian...
               | 
               | Can we just agree to disagree on _morals_?
               | 
               | > This is admission that you know what you're doing is
               | wrong, and the only reason you do it digitally is because
               | it's more difficult to receive consequences for it.
               | 
               | Not at all, those are two entirely separate things. I
               | wouldn't proclaim my atheism in public while visiting
               | Saudi Arabia - not because I think there's anything
               | morally wrong with that, but because I don't want the
               | trouble.
               | 
               | > I strongly urge you to start considering consent of
               | other people before taking actions.
               | 
               | You use "consent" as if it was some magical bane word in
               | every context. In reality, there's always a debate to be
               | had on what should and should not require consent. For
               | example, you just _assumed_ my consent when you quoted my
               | words, yet I have never given it to you.
        
               | DyslexicAtheist wrote:
               | Human Research Protection Program Plan & IRB determines
               | if something is unethical. and while these documents are
               | based on opinions they have weight due to consensus.
               | 
                | The way these (intrusive) tests (e.g. anti-phishing)
                | are performed within organizations would be with the
                | knowledge of, and a very strongly worded contract
                | between, the owners of the company and the party
                | conducting the tests.
               | 
               | It is illegal in most of the world today. Even if you
               | disagree with responsible disclosure you would be well
               | advised not to send phishing mail to companies (whether
               | your intention was to improve their security or not is
               | beside the point).
        
               | WrtCdEvrydy wrote:
               | > I would still want this test to be performed, because I
               | value kernel security above petty ideological concerns.
               | 
               | The biggest issue around this is consent. You can totally
               | send an email saying "we're doing research on the
               | security implications of the pull request process, can we
                | send you a set of pull requests and you can give us
                | approve/deny on each one?"
               | 
               | > If you remember the dire state of computer security in
               | the early 2000s, remember that the solution was not throw
               | all the hacker kids in jail.
               | 
                | You weren't there when Mirai caused havoc due to
               | thousands of insecure IoT devices getting pwned and
               | turned into a botnet... and introducing more
               | vulnerabilities is never the answer.
        
           | willtemperley wrote:
           | This would absolutely be true if this were an authorised
           | penetration test, however it was unauthorised and therefore
           | unethical.
        
             | incrudible wrote:
             | How exactly do you "authorize" these tests? Giving advance
             | notice would defeat the purpose, obviously.
        
               | WrtCdEvrydy wrote:
               | "We're writing research on the security systems involved
               | around the Linux kernel, would it be acceptable to submit
               | a set of patches to be reviewed for security concerns
               | just as if it was a regular patch to the Linux kernel?"
               | 
               | This is what you do as a grownup and the other side is
               | expected to honor your request and perform the same thing
               | they do for other commits... the problem is that people
               | think of pen testing as an adversarial relationship where
               | one person needs to win over the other one.
        
               | yjftsjthsd-h wrote:
                | To play Devil's Advocate, I suspect that this would
                | yield different results because people behave
                | differently when they know that there is something
                | going on.
        
               | WrtCdEvrydy wrote:
               | That's the thing, you just told the person to review the
                | request for security... in a true double-blind, you
                | submit 10 PRs and see how many get rejected/approved.
               | 
               | If all 10 are rejected but only one had a security
               | concern, then the process is faulty in another way.
               | 
               | Edit: There is this theory that penetration testing is
               | adversarial but in the real world people want the best
               | outcome for all. The kernel maintainers are professionals
               | so I would expect the same level of caring for a "special
               | PR" versus a "normal PR"
        
               | captn3m0 wrote:
                | There are ways to reach the Kernel Security team that
                | don't notify all the reviewers. It is up to the Kernel
               | team to decide if they want to authorize such a test, and
               | what kind of testing is permissible.
        
               | MaxBarraclough wrote:
               | In a corporate setting, the solution would presumably be
               | to get permission from further up the chain of command
               | than the individuals being experimented upon. I think
               | that would resolve the ethical problem, as no individual
               | or organisation/project is then being harmed, although
               | there is still an element of deception.
               | 
               | I don't know enough about the kernel's process to comment
               | on whether the same approach could be taken there.
               | 
               | Alternatively, if the time window is broad enough,
               | perhaps you could be almost totally open with everyone,
               | withholding only the identity of the submitter. For a
               | sufficiently wide time window, _Be on your toes for
                | malicious or buggy commits_ doesn't change the behaviour
               | of the reviewers, as that's part of their role anyway.
        
               | incrudible wrote:
               | That's not really testing the process, because now you
               | have introduced bias. Once you know there's a bug in
               | there, you can't just act as if you didn't know.
               | 
               | I guess you could receive "authorization" from a
                | confidant who then delegates the work to unwitting
               | reviewers, but then you could make the same "ethical"
               | argument.
               | 
                | Again, from a hacker ethos perspective, none of this
                | was unethical. From a "research ethics committee"
                | perspective, maybe it was unethical, but that's not
                | the standard I want applied to the Linux kernel.
        
               | jcranmer wrote:
               | This is the sort of situation where the best you could do
               | is likely to be slightly misleading about the purpose of
               | the experiment. So you'd lead off with "we're interested
               | in conducting a study on the effectiveness of the Linux
               | code review processes", and then use patches that have a
               | mix of no issues, issues only with the Linux coding style
               | (things go in the wrong place, etc.), only security
               | issues, and both.
               | 
                | But at the end of the day, sometimes there's just no
                | way to ethically do the experiment you want to do,
                | and the right solution to that is to just live with
                | being unable to do certain experiments.
        
               | WrtCdEvrydy wrote:
               | > from a hacker ethos perspective, none of this was
               | unethical.
               | 
               | It totally is if your goal as a hacker is generating a
               | better outcome for security. Read the paper, see what
               | they actually did, they just jerked themselves off over
               | how they were better than the open source community, and
               | generated a sum total of zero helpful recommendations.
               | 
                | So they subverted a process, introduced a use-after-
                | free vulnerability, and didn't do jack shit to
                | improve it.
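                | 
                | (For reference, the textbook shape of a use-after-
                | free, reduced to a standalone sketch -- illustrative
                | only, not the code from the paper:
                | 
                |     #include <stdlib.h>
                | 
                |     int main(void)
                |     {
                |         char *buf = malloc(64);
                |         if (!buf)
                |             return 1;
                |         free(buf);     /* object released */
                |         buf[0] = 'x';  /* still written: UAF */
                |         return 0;
                |     }
                | 
                | An attacker who controls what gets reallocated at
                | that address can often escalate this into code
                | execution.)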
        
               | incrudible wrote:
               | > It totally is if your goal as a hacker is generating a
               | better outcome for security. Read the paper, see what
               | they actually did, they just jerked themselves off over
               | how they were better than the open source community, and
               | generated a sum total of zero helpful recommendations.
               | 
               | The beauty of it is that by "jerking themselves off",
               | they _are_ generating a better outcome for security. In
               | spirit, this reaction of the kernel team is not that
               | different from Microsoft attempting to bring asshole
               | hacker kids behind bars for exposing them. When Microsoft
               | realized that this didn 't magically make Windows more
               | secure, they fixed the actual problems. Windows security
               | was a joke in the early 2000s, now it's arguably better
               | than Linux. Why? Because those asshole hacker kids
               | actually changed the process.
               | 
                | > So they subverted a process, introduced a use-
                | after-free vulnerability, and didn't do jack shit to
                | improve it.
               | 
                | The value added here is to show that the process
                | could be subverted; the lessons are to be learned by
                | someone else.
        
               | WrtCdEvrydy wrote:
                | > is to show that the process could be subverted; the
                | lessons are to be learned by someone else.
               | 
               | If you show up to a kernel developer's house, put a gun
               | to their head and tell them to approve the PR, that
               | process can also be subverted...
        
               | incrudible wrote:
                | It can also be subverted by abducting and replacing
                | the entire development team with impostors. What's
                | your point?
               | That process security is hopeless and we should all just
               | go home?
        
               | WrtCdEvrydy wrote:
               | > What's your point? That process security is hopeless
               | and we should all just go home?
               | 
                | That there's an ethical way of testing processes: ask
                | for permission and use proven methods, like sending N
                | items, of which X are compromised and Y are not, and
                | then comparing the rejection rates -- the share of
                | compromised items rejected (K_x/X) versus the share
                | of clean items rejected (K_y/Y).
               | 
               | By breaking the ethical component, the entire scientific
               | method of this paper is broken... now I have to go check
               | the kernel pull requests list to see if they sent 300
               | pull requests and got one accepted or if it was a 1:1
               | ratio.
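                | 
                | A toy version of that arithmetic, with made-up
                | numbers (nothing here is from the paper):
                | 
                |     #include <stdio.h>
                | 
                |     int main(void)
                |     {
                |         int X = 5, Y = 5;    /* compromised / clean */
                |         int kx = 4, ky = 1;  /* rejected from each  */
                | 
                |         printf("catch rate   kx/X = %.2f\n",
                |                (double)kx / X);
                |         printf("false reject ky/Y = %.2f\n",
                |                (double)ky / Y);
                |         return 0;
                |     }
                | 
                | The process only looks good if kx/X is high while
                | ky/Y stays low; rejecting everything would also
                | "catch" every bad patch.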
        
               | incrudible wrote:
                | > That there's an ethical way of testing processes:
                | ask for permission and use proven methods, like
                | sending N items, of which X are compromised and Y are
                | not, and then comparing the rejection rates -- the
                | share of compromised items rejected (K_x/X) versus
                | the share of clean items rejected (K_y/Y).
               | 
               | Again, that's not the same test. You are introducing
               | bias. You are not observing the same thing. Maybe you
               | think that observation is of equal value, but I don't.
               | 
               | > By breaking the ethical component, the entire
               | scientific method of this paper is broken...
               | 
               | Not at all. The _scientific method_ is amoral. The
               | absolute highest quality of data could only be obtained
                | by performing experiments that would make Josef Mengele
               | faint.
               | 
               | There's always an ethical balance to be struck. For
               | example, it's not _ethical_ to perform experiments on
               | rats to develop insights that are of no benefit to these
               | rats, nor the broader rat population. If we applied our
               | human ethical standards to animals, we could barely
               | figure anything out. So what do we do? We accept the
               | trade-off. Ethical concerns are not the be-all-end-all.
               | 
               | In this case, I'm more than happy to have the kernel
               | developers be the labrats. I think the tradeoff is worth
               | it. Feel free to disagree, but I consider the ethical
               | argument to be nothing but hot air.
        
               | coldpie wrote:
               | Perhaps the research just simply shouldn't be done. What
               | are the benefits of this research? Does it outweigh the
               | costs?
        
               | incrudible wrote:
               | What's the harm exactly? Greg becomes upset? Is there
               | evidence that any intentional exploits made it into the
               | kernel? The process worked, as far I can see.
               | 
               | What's the benefit? You raise trust in the process behind
               | one of the most critical pieces of software.
        
               | coldpie wrote:
               | > What's the harm exactly?
               | 
               | It is wasting a lot of peoples' time.
               | 
               | > What's the benefit? You raise trust in the process
               | behind one of the most critical pieces of software.
               | 
               | I'm skeptical that a research paper by some nobodies from
               | a state university will accomplish this.
        
               | incrudible wrote:
               | > It is wasting a lot of peoples' time.
               | 
               | If you run a test on your codebase and it passes, do you
               | find that writing the test was a waste of time?
               | 
               | > I'm skeptical that a research paper by some nobodies
               | from a state university will accomplish this.
               | 
               | It did for me.
        
               | coldpie wrote:
               | Let's take a peek at how the people whose time is being
               | wasted feel about it:
               | 
               | > This is not ok, it is wasting our time, and we will
               | have to report this, AGAIN, to your university...
               | 
               | > if you have a list of these that are already in the
               | stable trees, that would be great to have revert patches,
                | it would save me the extra effort this mess is causing
               | us to have to do...
               | 
               | > Academic research should NOT waste the time of a
               | community.
               | 
               | > The huge advantage of being "community" is that we
               | don't need to do all the above and waste our time to fill
               | some bureaucratic forms with unclear timelines and
               | results.
               | 
               | Seems they don't think it is a good use of their time,
               | no. But I'm sure you know a lot more about kernel
               | development and open source maintenance than they do,
               | right?
        
               | incrudible wrote:
               | I didn't intend to convey that the answer to my question
               | is "no". That's the whole problem with tests: Most of the
                | time, it's drudge work and it does _feel_ like they're a
               | waste of time when they never signal anything. That
               | doesn't mean they are a waste of time.
               | 
               | Similarly, if a research paper shows that its hypothesis
               | is false, the author might _feel_ that it was a waste of
               | time having worked on it, which can lead to publication
               | bias.
        
         | [deleted]
        
         | mbauman wrote:
         | Don't harass people.
        
           | WrtCdEvrydy wrote:
           | I'm not harassing anyone... I'm politely reminding this
           | person that ethics are a real thing.
        
             | jdsully wrote:
             | Unless you are an involved party you're just adding to the
             | mob. The kernel maintainers said their piece there's no
             | need for everyone to pile on.
        
               | WrtCdEvrydy wrote:
               | I am a user of the Linux kernel which a publicly funded
               | university using my tax dollars just attempted to subvert
               | in order to jerk themselves off over a research paper.
        
               | burnished wrote:
                | Maybe there is? I'm not convinced of my own position, but
                | I'd suggest there is a difference between an outcry
                | sourced in people who are well informed about their
                | complaint and the sort of brigading/bandwagoning you can
                | see come from personal details being posted under a
                | complaint.
        
               | jdsully wrote:
               | I guess the question is what more is there to accomplish?
                | They've been banned and I guarantee they already know the
               | community is pissed. Is filling up their inbox with angry
               | emails going to actually help in any way?
        
         | DyslexicAtheist wrote:
          | just write to irb@umn.edu and ask a) whether this was reviewed
          | and b) who approved it. It seems they have violated the Human
          | Research Protection Program Plan anyway.
         | 
         | The researchers should not have done this, but ultimately it's
         | the faculty that must be held accountable for allowing this to
         | happen in the first place. They are a for-profit institution
         | and should not get away with harassing people who are
         | contributing their personal time. So nail them to the
         | proverbial cross but make sure the message is heard by those
         | who slipped up (not the researchers who should have been
         | stopped before it happened).
        
         | dang wrote:
         | Please don't fulminate on HN (see
         | https://news.ycombinator.com/newsguidelines.html). It's not
         | what this site is for, because it leads to considerably worse
         | conversation.
         | 
         | Even if you're justifiably steaming about something, please
         | wait to cool down before posting here.
         | 
         | We detached this subthread from
         | https://news.ycombinator.com/item?id=26889743.
        
           | WrtCdEvrydy wrote:
           | Are you serious? If I publish a paper on the social
           | engineering vulnerabilities we have used over the last three
           | months to gain access to your password and attempt to take
           | over Hacker News, you would be fine with it? No outburst, no
           | angrily banning my account...
        
             | burnished wrote:
             | It seems odd that you are responding with a threat, or at
             | least a threatening hypothetical to a (the?) moderator.
             | 
             | The way I understand it is that unnecessarily angry or
             | confrontational posts tend to lower the overall tone. They
             | are cathartic/fun to write, fast to write, and tend to get
             | wide overall agreement/votes. So if they are allowed then
             | most of the discussion on a topic gets pushed down beneath
             | that sort of post.
             | 
             | Hence why we are asked to refrain, to permit more room for
             | focused and substantive discussion.
        
               | WrtCdEvrydy wrote:
               | No, I'm asking if he thinks as a person who built Hacker
               | News if this is what we want out of the technology
               | ecosystem from the cybersecurity professionals.
               | 
               | Edit: dang is a good person and I don't understand how
               | he's taking sides here with people sending out malware
               | (because that's what this sums up to). I understand I
               | came on a little hot, but that was unexpected.
        
               | burnished wrote:
               | I'm glad we agree on the merits of dang.
               | 
               | After reviewing the thread I don't see any of what you
                | are asking, here, upstream. I don't see dang coming out
               | on the same side as people sending out malware, and I
               | don't really see that question present. I wish I had
               | something more concrete to say, but I think your take
               | here (and only here) is wrong and that you might have
               | just entered this one on the wrong foot?
        
       | forgotpwd16 wrote:
        | Not to play the devil's advocate here, but scummy as it was,
        | they still successfully introduced vulnerabilities into the
        | kernel. Suppose the paper hadn't been released, or an adversary
        | had done it. How long would those bugs have lingered, if they
        | were ever removed at all? The paper makes a case that FOSS
        | projects shouldn't merely trust authority for security (neither
        | the ones submitting nor the ones reviewing) but should use
        | tools to find potential vulnerabilities in every commit.
        
         | rrss wrote:
         | > utilize tools to find potential vulnerabilities for every
         | commit.
         | 
         | The paper doesn't actually have concrete suggestions for tools,
         | just hand-waving about "use static analysis tools, better than
         | the ones you already use" and "use fuzzers, better than those
         | that already exist."
         | 
         | The work was a stunt to draw attention to the problem of
         | malicious committers. In that regard, it was perhaps
         | successful. The authors' first recommendation is for the kernel
         | community to increase accountability and liability for
         | malicious committers, and GregKH is doing a fantastic job at
         | that by holding umn.edu accountable.
        
         | bombcar wrote:
         | Coverity found at least one:
         | 
         | vvv CID 1503716: Null pointer dereferences (REVERSE_INULL) vvv
         | Null-checking "rm" suggests that it may be null, but it has
         | already been dereferenced on all paths leading to the check.
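          | 
          | To make that concrete: REVERSE_INULL flags code shaped roughly
          | like the sketch below (invented names, not the actual patch),
          | where a pointer is dereferenced first and only NULL-checked
          | afterwards, so either the check is dead code or the
          | dereference can crash.
          | 
          |     struct res_mon { int busy; };
          | 
          |     /* stands in for whatever produced "rm" */
          |     extern struct res_mon *lookup(int id);
          | 
          |     void reset(int id)
          |     {
          |             struct res_mon *rm = lookup(id);
          | 
          |             rm->busy = 0;   /* dereferenced on every path... */
          |             if (!rm)        /* ...so this check is dead code */
          |                     return; /* or the line above can oops.  */
          |     }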
         | 
          | and tools are useful, but given the resources and the know-how
          | of those who compete in the IOCCC, I think we'd have to assume
          | they'd be able to get something through. It'd have an even
          | higher chance of success if it were built to target a
          | particular hardware combination (of a desired victim), as you
          | could make the exploit dependent on multiple parts of the code
          | (and likely nobody would ever determine its full extent, as
          | they'd find parts of it and fix them independently).
        
       | stakkur wrote:
       | When you test in production...
        
       | brundolf wrote:
       | What a bizarre saga.
        
       | qalmakka wrote:
       | Well, they had it coming. They abused the community's trust once
       | in order to gain data for their research, and now it's
       | understandable GKH has very little regard for them. Any action
       | has consequences.
        
       | foolfoolz wrote:
       | how can i see these prs?
        
         | jessaustin wrote:
         | A search on the linked mailing list seems to include a lot of
         | this junk:
         | 
         | https://lore.kernel.org/linux-nfs/?q=@umn.edu&x=t
        
       | neatze wrote:
        | Interesting. If they described this to the NSF's human subject
        | research section, then to me this is a potential research
        | ethics issue.
        | 
        | Imagine saying we would like to test how the fire department
        | responds to fires, by setting buildings on fire in NYC.
        
         | andi999 wrote:
          | Well, just a small fire which you promise to extinguish
          | yourself if they don't show up on time. Of course nobody can
          | blame you if you didn't manage to extinguish it...
         | 
         | Also the buildings are not random, but safety critical
         | infrastructure, but this is good, you can advise later:'put a
         | "please do not ignite" sign on the building'.
        
       | arkh wrote:
       | > I will not be sending any more patches due to the attitude that
       | is not only unwelcome but also intimidating to newbies and non
       | experts.
       | 
       | Maybe not being nice is part of the immune system of open source.
        
         | jasonwatkinspdx wrote:
         | There is nothing about enforcing high standards that requires
         | hostility or meanness. In this case the complaint that greg is
         | being intimidating is being made entirely in bad faith. I don't
         | think anyone else has a problem with greg's reply. So this
         | doesn't really come across as an example that demonstrates your
         | "not being nice is necessary" view.
        
         | walrus01 wrote:
         | Every time I have seen Theo from the OpenBSD project come down
         | hard on someone, it was deserved.
        
         | mccr8 wrote:
         | Being rude isn't going to discourage malicious actors, who are
         | motivated by fame or wealth.
         | 
          | If you ran a bank and had a bunch of rude bank tellers, you
          | would only dissuade customers, not bank robbers.
        
           | motohagiography wrote:
           | Being nice is expensive, and sending bad code imposes costs
           | on maintainers, so the sharp brevity of maintainers is
           | efficient, and in cases where the submitter has wasted the
            | maintainer's time, the maintainer _should_ impose a
           | consequence by barking at them.
           | 
           | Sustaining the belief that every submitter is an earnest,
           | good, and altruistic person is painfully expensive and a
           | waste of very valuable minds. Unhinged is unhinged and that
           | needs to be managed, but keeping up the farce that there is
           | some imaginary universe where the submitter is not wasting
           | your time and working the process is wrong.
           | 
           | I see this in architecture all the time, where people feign
           | ignorance and appeal to this idea you are obligated to keep
           | up the pretense that they aren't being sneaky. Competent
           | people hold each other accountable. If you can afford
           | civility, absolutely use it, but when people attempt to tax
           | your civility, impose a cost. It's the difference between
           | being civil and harmless.
        
           | devenson wrote:
           | Enforcing formal behavior makes the deviant behavior more
           | noticeable.
        
           | mirchibajji wrote:
           | A better analogy: Attempting to pee in the community pool to
           | research if the maintainers are doing a good job of managing
           | the hygiene standards.
        
           | betaby wrote:
           | Proper analogy would be 'rude SWAT team', not 'rude bank
           | tellers'.
        
         | jallen_dot_dev wrote:
         | "We're banning you for deliberately submitting buggy patches as
         | an experiment."
         | 
         | "Well if you're gonna be a jerk about it, I won't be sending
         | any more patches."
        
           | stuffbyspencer wrote:
           | "I can excuse racism wasting OSS maintainers time, but I draw
           | the line at rudeness!" - (community)
        
           | [deleted]
        
         | john-radio wrote:
         | But G. K-H's correspondence here is completely cordial and
         | professional, and still gets all the results that were needed?
        
         | Ar-Curunir wrote:
         | Instead of not being nice, maybe Linux should adopt some sort
         | of CI and testing infrastructure.
        
           | da_big_ghey wrote:
            | Linux has plenty of testing machines, but testing the whole
            | kernel is not as simple as you seem to think. There is no
            | way to catch every possible case, so "not nice" remains
            | important. And the greater part of the kernel is drivers;
            | drivers need the device to work, so CI for them is hard.
        
           | eropple wrote:
           | https://kernelci.org is a Linux Foundation project; there are
           | others, but that's just the main one I know of offhand.
           | 
           | The idea that "not being nice" is necessary is plainly
           | ridiculous, but this post is pretty wild--effectively you're
           | implying that they're just amateurs or something and that
           | this is a novel idea nobody's considered, while billions and
           | billions of dollars of business run atop Linux-powered
           | systems.
           | 
            | What they _don't_ do is hand over CI resources to randos
           | submitting patches. That's why kernel developers receive and
           | process those patches.
        
         | colechristensen wrote:
          | I think so. With a large project, I think a realist attitude
          | that rises to the level of mean when there's bullshit around
          | is somewhat necessary to prevent decay.
          | 
          | If not, you get cluttered up with bad code and people there
          | for the experience. Like how stackoverflow is lost to rule
          | zealots who are there for the game, not for the purpose.
         | 
         | Something big and important should be intimidating and isn't a
         | public service babysitter...
        
           | admax88q wrote:
           | I'm not sure why you think you have to be mean to avoid bad
           | code. Being nice doesn't mean accepting any and all
           | contributions. It just means not being a jerk or _overly_
           | harsh when rejecting.
        
           | ethbr0 wrote:
           | It feels like a corollary of memetic assholery in online
           | communities. Essentially the R0 [0] of being a dick.
           | 
           | If I have a community, bombarded by a random number of
           | transient bad actors at random times, then if R0 > some
           | threshold, my community _inevitably_ trends to a cesspool, as
           | each bad actor creates more negative members.
           | 
           | If I take steps to decrease R0, one of which may indeed be
           | "blunt- and harshness to new contributors", then my community
           | may survive in the face of equivalent pressures.
           | 
           | It's a valid point, and seems to have historical support via
           | evidence of many egalitarian / welcoming communities
           | collapsing due to the accumulation of bad faith participants.
           | 
           | The key distinction is probably "Are you being blunt / harsh
           | in the service of the primary goal, or ancillary to the
           | mission?"
           | 
           | [0] https://en.m.wikipedia.org/wiki/Basic_reproduction_number
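            | 
            | As a toy illustration of that threshold (all numbers made
            | up): below R0 = 1 the bad-actor population dies out; above
            | it, the growth compounds.
            | 
            |     #include <stdio.h>
            | 
            |     /* each bad actor converts r0 members per generation */
            |     int main(void)
            |     {
            |             double r0s[] = { 0.8, 1.2 };
            | 
            |             for (int i = 0; i < 2; i++) {
            |                     double bad = 1.0;
            |                     printf("r0 = %.1f:", r0s[i]);
            |                     for (int gen = 0; gen < 10; gen++) {
            |                             printf(" %.1f", bad);
            |                             bad *= r0s[i];
            |                     }
            |                     printf("%s\n", r0s[i] > 1.0
            |                            ? "  (compounds)" : "  (dies out)");
            |             }
            |             return 0;
            |     }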
        
             | depressedpanda wrote:
             | > It's a valid point, and seems to have historical support
             | via evidence of many egalitarian / welcoming communities
             | collapsing due to the accumulation of bad faith
             | participants.
             | 
             | Could you provide references to some of this historical
             | support?
        
               | yifanl wrote:
               | Kinda a silly example, but several subreddits that
               | started out with the aim of making fun of some subject
                | (e.g. r/prequelmemes and the Star Wars prequels, or
               | r/the_donald and then Presidential candidate Donald
               | Trump) were quickly turned into communities earnestly
               | supporting the initial subject of parody.
        
               | ethbr0 wrote:
               | I think Reddit is the broadest example, because it's
               | evidence of both outcomes due to the diversity in
               | moderation policy between subs.
               | 
               | Some can tolerate a steady influx of bad actors: some
               | fall apart. There's probably a valid paper in there
               | somewhere.
        
               | alexeldeib wrote:
               | I don't think this is silly at all. And the fact that
               | reddit's admins occasionally have to step in with a
               | forceful hand over what the mods do only speaks louder to
               | GP's point.
        
           | ABCLAW wrote:
           | You can create a strict, high functioning organization
           | without being an asshole. Maintaining high standards and
           | expecting excellence isn't an exercise in babysitting; it's
           | an exercise in aligning contributors to those same standards
           | and expectations.
           | 
           | You don't need to do that by telling them they're garbage.
           | You can do it by getting them invested in growth and
           | improvement.
        
             | da_big_ghey wrote:
              | That depends on who you ask. If I take a "no nonsense"
              | approach, then some people have no problem with it. But
              | other people, especially women, say that it is not "nice"
              | and that there is a problem even if it is not "mean".
              | 
              | Also, here we are seeing people who have no interest in
              | "growth and improvement"; they are not even making good
              | faith contributions to the project.
        
           | tester756 wrote:
           | > Like how stackoverflow is lost to rule zealots there for
           | the game not for the purpose.
           | 
           | Like?
        
             | skeletal88 wrote:
             | Honest questions getting downvoted, closed for being too
             | broad, duplicates or just "Wrong" in the eyes of
             | overzealous long time members
        
               | tester756 wrote:
                | There's nothing wrong with closing duplicates.
                | 
                | If the mods weren't doing it, the quality of SO would
                | decrease for all of us.
                | 
                | It's in our interest to have strict mods on SO.
        
               | burnished wrote:
               | You haven't seen a question closed as a duplicate when it
               | was clear that time or details had made the linked
               | question not an actual duplicate?
        
               | JadeNB wrote:
               | I think the idea is that it's better to err on the side
               | of too-strict moderation than too-lax. People can always
               | come back to re-try a question at another time, but, once
               | the spirit of the community is lost, there's not much you
               | can do about it.
               | 
               | (Not to say I like the StackExchange community much. It's
               | far too top-down directed for me. But I'm very much
               | sympathetic to the spirit of strict moderation.)
        
               | burnished wrote:
                | I agree with you. It's silly to act like the phenomenon
               | doesn't exist though.
        
               | threatofrain wrote:
               | I've never seen it develop into a serious problem, just
               | as I've never seen rule driven Wikipedia have problems
               | with rule obsession.
               | 
               | There are all sorts of community websites around the
                | world. Which have developed into a serious SO contender?
               | IMO many things are threatening SO's relevance, but they
               | don't look anything like it, which suggests that what SO
               | is doing wrong isn't the small details.
               | 
               | For example, I'd argue that Discord has become the next
               | place for beginners to get answers, but chat rooms are
               | very different from SO. For one thing, the help is better
               | because someone else can spend their brain power to
               | massage your problem. And another is that knowledge dies
               | almost instantly.
        
               | burnished wrote:
               | EDIT: I removed quoted portions and snippy replies.
               | 
               | Forgive me if I wasn't being clear. It seems like your
               | core point is that SO's rules are on the whole good for
               | keeping it focused, and it seems like you are assuming
               | I'm a beginner programmer who is frustrated with SO for
               | not being more beginner friendly and thus advising me on
               | what I should do instead. I feel like you are shadow
               | boxing a little.
               | 
               | I think we probably mostly agree; I think SO gets an
               | unfortunate reputation as a good place for beginners (as
               | opposed to a sort of curated wiki of asked and answered
               | questions on a topic, a data store of wisdom), and that
               | in general beginners are probably best served by smaller
               | 1-1 intervention. I usually suggest people seek out a
               | mentor, it had never occurred to me that Discord could be
               | a good way to go about this.
               | 
               | The original point I was trying to make is simply that
                | you can see overzealous rule following on SO, and that
                | one form of it is questions inappropriately closed as
                | duplicates.
        
               | JadeNB wrote:
               | Kind of like how the zero-tolerance of the HN community
               | for joke-y / quick-take comments kills the fun sometimes
               | --but also means that people (like me who came here from
               | Reddit and discovered what wasn't welcome right quick)
               | learn the culture, and get to remain part of the culture
               | we signed up for rather than something that morphs over
               | time to the lowest common denominator.
        
         | wyager wrote:
         | Absolutely. The derision that people like Linus get for being
         | "mean" to big corpos trying to submit shitty patches is totally
         | misplaced.
        
         | devmor wrote:
         | I was enjoying Linus being less aggressive, but maybe we do
         | need angry Linus.
        
           | depressedpanda wrote:
           | I enjoyed (and now miss) angry Linus.
        
           | rataata_jr wrote:
           | I enjoy Linus's wit in insulting people. He's good.
        
           | c2h5oh wrote:
           | Angry Linus would risk a stroke responding in that email
           | thread.
        
             | pvarangot wrote:
             | Because he's old. I think young Linus wouldn't have held
             | back in making judgement about the quality and usefulness
             | of the research being done here.
        
               | c2h5oh wrote:
               | No, because people introducing bad code through lack of
               | skill/not enough effort were enough to get him going.
               | 
               | People introducing bad code on purpose, for a social
               | experiment, are on a whole new level of bad and so would
               | his anger be.
        
           | disintegore wrote:
           | Angry Greg is doing a great job. Effective, and completely
           | without expletives or personal insults.
        
             | corty wrote:
             | He is doing a great job. But I think a few insults earlier
             | on might have prevented a whole lot of trouble.
        
             | [deleted]
        
         | slver wrote:
          | Honestly, WTF would a "newbie and non-expert" have to do with
          | sending KERNEL PATCHES?
        
           | hmfrh wrote:
           | Nobody is an expert on every subject. You could have PhD
           | level knowledge of the theory behind a specific filesystem or
           | allocator but know next to nothing about the underlying
           | hardware.
        
             | slver wrote:
             | My point is that "we're newbies on the topic of the Linux
             | Kernel, so be friendly to us when sending Linux Kernel
             | patches" is the worst argument I've heard about anything in
             | years.
        
               | themulticaster wrote:
               | I'd say that's a very valid argument in principle. If you
               | want to start contributing to the Linux kernel, you'll
               | have to start somewhere - but you can't start refactoring
               | entire subsystems, rather you'll start with small patches
               | and it's very natural to make minor procedural and
               | technical mistakes at that stage. [1]
               | 
               | However, in this particular case, I agree that it is not
               | a valid argument since it is doubtful whether the
               | beginning kernel contributor's patches are in good faith.
               | 
               | [1] Torvalds encouraging new contributors to send in
               | trivial patches back in 2004:
               | https://lkml.org/lkml/2004/12/20/255
        
               | slver wrote:
               | You have plenty of places to start. Fork and patch away.
               | You don't start by patching the distribution the entire
               | world uses.
               | 
               | It's like "be kind I'm new to the concept of airplanes,
               | let me fly this airplane with hundreds of passengers"
        
             | afr0ck wrote:
             | How is it possible that you have a PhD in filesystems but
             | you don't know how to write an acceptable patch for the
             | Linux kernel? That's what I call a scam PhD.
        
           | tolbish wrote:
           | So they can tell companies "I am a contributor to the Linux
           | kernel"..there are charlatans are in every field. Assuming
           | this wasn't malicious and "I'm a newbie" isn't just a cover.
        
             | MeinBlutIstBlau wrote:
             | I had a professor in college who gave out references to
             | students and made his assignments so easy a middle school
             | kid could do the homework. He never said why but I'm 100%
             | positive he was gaming the reviews sent back so that he'd
             | stay hired on as an instructor while he built up his lesson
             | plans. I think he figured out how to game it pretty quick
             | seeing as the position had like 3 instructors before him
             | whom students universally hated for either being
             | ridiculously strict or lying through their teeth about even
             | knowing what the difference between public and private
             | meant.
        
           | scott00 wrote:
           | Personally I don't think you can become an expert in Linux
           | kernel programming without sending patches. So over the long
           | term, if you don't let non-experts submit patches then no new
           | experts will ever be created, the existing ones will die or
           | move on, and there won't be any experts at all. At that point
           | the project will die.
        
             | da_big_ghey wrote:
              | But Greg was correct that many of the patches sent were
              | easily seen as bad by anyone who knows C. Everyone must at
              | some point be new to C, and at some point be new to the
              | kernel, but those times should not be the same.
        
         | praptak wrote:
         | > Maybe not being nice is part of the immune system of open
         | source.
         | 
         | Someone for whom being a bad actor is a day job will not get
         | deterred by being told to fuck off.
         | 
         | Being nasty _might_ deter some low key negative contributors -
         | maybe someone who overestimates their ability or someone  "too
         | smart to follow the rules". But it might also deter someone who
         | could become a good contributor.
        
         | dmos62 wrote:
          | Not being nice is always about protecting yourself. It's not
          | always effective, though, and not always necessary.
        
         | medium_burrito wrote:
         | I'm curious what sort of lawsuits might be possible here. I for
         | one would donate $1000 to a non-profit trust formed to find
         | plaintiffs for whatever possible cause and then sue the
         | everloving shit out of the author + advisor + university as
         | many times as possible.
         | 
         | EDIT: University is fair game too.
        
         | alexfromapex wrote:
          | I disagree; I think it's important to be nice and welcoming to
          | contributors, but the immune system should be a robust code of
          | conduct which explicitly lists things like this that will
          | result in a temporary or permanent ban.
        
         | cptskippy wrote:
         | Attacking those critical of your questionable behavior and then
         | refusing to participate further is a common response people
         | have when caught red handed.
         | 
         | This is just a form of "well I'll just take my business
         | elsewhere!". Chances are he'll try again under a pseudonym.
        
         | motohagiography wrote:
         | On the other thread, I suggested this was an attack on critical
         | infrastructure using a university as cover and that this was a
          | criminal/counter-intelligence matter, and then asked whether any
         | of these bug submitters also suggested the project culture was
         | too aggressive and created an unsafe environment, to reduce
         | scrutiny on their backdoors.
         | 
         | Talk about predictive power in a hypothesis.
        
           | CoolGuySteve wrote:
           | Given its ubiquity in so many industries, tampering with
           | Linux kernel security sounds an awful lot like criminal
           | sabotage under US law.
           | 
           | Getting banned from contributing is a light penalty.
        
             | _n_b_ wrote:
             | > criminal sabotage under US law
             | 
             | It is pretty comfortably not sabotage under 18 USC 105,
             | which requires proving intent to harm the national defense
             | of the United States. Absent finding an email from one of
             | the researchers saying "this use-after-free is gonna fuck
             | up the tankz," intent would otherwise be nearly impossible
             | to prove.
        
               | LinuxBender wrote:
               | Intent would be hard to prove without emails / chat
               | conversations for sure. As for damages, Linux is used by
               | DoD, NASA and a myriad of other agencies. All the 2 and 3
               | letter agencies use it. Some of them contribute to it.
        
               | rvba wrote:
               | If congress wasn't full of old people who don't
               | understand computers, that university professor could
               | spend years in jail or be executed for treason.
        
               | dragonwriter wrote:
               | > It is pretty comfortably not sabotage under 18 USC 105,
               | which requires proving intent to harm the national
               | defense of the United States.
               | 
               | Presumably, this reference is intended to be to 18 USC
                | ch. 105 (18 USC §§ 2151-2156). However, the
                | characterization of required intent is inaccurate; the
                | most relevant provision (18 USC § 2154) doesn't require
               | intent if the defendant has " _reason to believe_ that
               | his act may injure, interfere with, or obstruct the
               | United States or any associate nation in preparing for or
               | carrying on the war or defense activities" (emphasis
               | added) during either a war or declared national
               | emergency.
               | 
               | It wouldn't take much more than showing evidence that the
               | defendant was aware (or even was in a position likely to
               | be exposed to information that would make him aware) that
               | Linux is used somewhere in the defense and national
               | security establishment to support the mental state aspect
               | of the offense.
               | 
               | https://www.law.cornell.edu/uscode/text/18/2154
        
             | webinvest wrote:
             | Either that or the CFAA
        
           | andrewzah wrote:
           | "I suggested this was an attack on critical infrastructure
           | using a university as cover and that this was a
           | criminal/counter-intellgence matter"
           | 
           | There is absolutely zero evidence of this. None. In my
           | opinion it's baseless speculation.
           | 
           | It's far more likely that they are upset over being called
           | out, and are out of touch with regards as to what is ethical
           | testing.
        
             | mratsim wrote:
             | Sure, don't attribute to malice what can be attributed to
             | ignorance. But you have to admit that backdooring Linux
             | would be huge and worth billions.
        
               | webinvest wrote:
               | Yes, Hanlon's razor is apt but if you read TFA, you can
               | see heavy amounts of both malice and ignorance.
               | 
               | From TFA: "The UMN had worked on a research paper dubbed
               | "On the Feasibility of Stealthily Introducing
               | Vulnerabilities in Open-Source Software via Hypocrite
               | Commits". Obviously, the "Open-Source Software" (OSS)
               | here is indicating the Linux kernel and the University
               | had stealthily introduced Use-After-Free (UAF)
               | vulnerability to test the susceptibility of Linux."
        
             | SuoDuanDao wrote:
             | GP had a hypothesis and made a prediction based on it. The
             | prediction turned out to be right. What more do you want?
        
               | andrewzah wrote:
               | I want proof that the motive was in any way, shape, or
               | form, related to or sponsored by a foreign government
               | under the cover of university research. Not speculation
               | based -solely- on the nationality or ethnicity of the
               | accused.
        
               | SuoDuanDao wrote:
               | With the utmost possible respect, 'criminal or
               | counterintelligence' in no way implies the involvement of
               | a foreign government, and trying to allege racism on such
                | flimsy grounds is a rhetorical tactic well past its
               | sell-by date.
        
               | motohagiography wrote:
               | What is their ethnicity? I just assumed they were all
               | American citizens. My previous comment included how U.S.
               | based attackers alleged they did something similar to
               | openbsd's VPN libraries over a decade ago.
               | 
               | Suggesting a foreign government could be leaping to
               | conclusions as well, given domestic activists with an
               | agenda may do the same thing. A linux kernel backdoor is
               | valuable to a lot of different interests. Hence why
               | counter-intelligence should be involved.
               | 
               | However, I just looked at the names of the people
               | involved and I don't know. Even if they were Taiwanese,
               | that's an allied country, so I wouldn't expect it. Who
               | were you thinking of?
        
       | booleandilemma wrote:
       | _The UMN had worked on a research paper dubbed "On the
       | Feasibility of Stealthily Introducing Vulnerabilities in Open-
       | Source Software via Hypocrite Commits"._
       | 
       | I guess it's not as feasible as they thought.
        
       | icedchai wrote:
        | Seems like completely pointless "research." Clearly it wasted
        | the maintainers' time, but the "researchers" also wasted their
        | own investigating something so obviously possible. Weren't
        | there any real projects to work on?
        
       | tiziniano wrote:
       | Unsurprising given the current open sores model of software
        
       | williesleg wrote:
       | Fucking Chiners
        
       | GRBurst wrote:
       | Actually I do understand BOTH sides, BUT:
       | 
        | The way the university did these tests, and its reactions
        | afterwards, are just bad.
        | 
        | What the Uni of Minnesota seems to have neglected is: 1. the
        | financial damage (time is wasted) and 2. the ethics of
        | experimenting on human beings.
        | 
        | As a result, the university should give a clear statement on
        | both and should donate a generous amount of money as
        | compensation for (1).
        | 
        | For part (2), a simple but honest apology can do wonders!
        | 
        | ---
        | 
        | Having said that, I think there are other, ethically better
        | ways to achieve these measurements.
        
       | WrtCdEvrydy wrote:
       | Yes, and robbing a bank to show that the security is lax is
       | totally fine because the real criminals don't notify you before
       | they rob a bank.
       | 
       | Do you understand how dumb that sounds?
        
         | incrudible wrote:
         | > Do you understand how dumb that sounds?
         | 
         | If you make a dumb analogy, that's on you.
        
           | WrtCdEvrydy wrote:
           | Same analogy... there's a vulnerability and you want to test
           | it? Go set up a test, and notify the people.
           | 
           | You really think the Linux kernel guys would change their
           | process if you did this? They'd still do the same things they
           | do.
        
             | incrudible wrote:
             | > Go set up a test, and notify the people.
             | 
             | The vulnerability is in the process, and _this was the
             | test_.
             | 
             | > You really think the Linux kernel guys would change their
             | process if you did this? They'd still do the same things
             | they do.
             | 
             | If they're vulnerable to accepting patches with exploits
             | because the review process fails, then the process is
             | broken. Linux isn't some toy, it's critical infrastructure.
        
               | WrtCdEvrydy wrote:
               | You can test the process without pushing exploits to the
               | real kernel.
        
               | incrudible wrote:
               | > You can test the process without pushing exploits to
               | the real kernel.
               | 
               | No, you can't, because _that is the test_! If you manage
               | to push exploits to the real kernel, the test failed. If
               | you get caught, the test passes. They did get caught.
        
               | WrtCdEvrydy wrote:
               | You totally can... contact the kernel maintainers, tell
               | them you want them to review a merge for security and
               | they can give you a go/no go. If they approve your merge,
               | then it's the same effect as purposely compromising
               | millions without compromising millions.
        
               | incrudible wrote:
               | Again, that's not the same, because then they _will_ look
                | for problems. What you want to test is that they're
               | looking for problems all the time, on every patch,
               | without you telling them to do so.
               | 
                | If they don't, then _that's_ the vulnerability in the
               | process.
        
               | WrtCdEvrydy wrote:
               | > because then they will look for problems. What you want
               | to test is that they're looking for problems all the
               | time, on every patch, without you telling them to do so.
               | 
               | That's what they do every time.
        
               | darkwinx wrote:
                | Telling them in advance would potentially make them more
                | alert to problems coming from a specific source. It
                | would introduce bias.
                | 
                | The best they can do is notify the maintainers after
                | they get the results of their research, and give the
                | maintainers an easy way to recover from the
                | vulnerabilities they intentionally created.
        
         | dang wrote:
         | Please review https://news.ycombinator.com/newsguidelines.html
         | and omit name-calling and swipes from your comments here. See
         | also https://news.ycombinator.com/item?id=26893776.
         | 
         | We detached this subthread from
         | https://news.ycombinator.com/item?id=26890035.
        
           | [deleted]
        
       | jokoon wrote:
       | I'm pretty confident the NSA has been doing this for at least two
       | decades, it's not a crazy enough conspiracy theory.
       | 
       | Inserting backdoors in the form of bugs is not difficult. Just
       | hijack the machine of a maintainer, insert a well placed
       | semicolon, done!
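        | 
        | To illustrate with the oft-cited 2003 attempt to backdoor the
        | kernel via a compromised CVS mirror (re-created from memory
        | here as a standalone sketch, so treat the details as
        | approximate): a single missing "=" turns a comparison into an
        | assignment that grants root.
        | 
        |     #include <stdio.h>
        | 
        |     struct task { int uid; };
        | 
        |     /* The bug: "uid = 0" ASSIGNS root instead of comparing
        |      * with "==". The assignment evaluates to 0, so the error
        |      * path is never taken and the check looks harmless. */
        |     int check(struct task *current, int options,
        |               int wclone, int wall)
        |     {
        |             int retval = 0;
        | 
        |             if ((options == (wclone | wall)) &&
        |                 (current->uid = 0))
        |                     retval = -1; /* dead code */
        |             return retval;
        |     }
        | 
        |     int main(void)
        |     {
        |             struct task t = { .uid = 1000 };
        | 
        |             /* caller passes the magic option combination */
        |             check(&t, 1 | 2, 1, 2);
        |             printf("uid after check: %d\n", t.uid); /* 0: root */
        |             return 0;
        |     }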
       | 
        | Do you remember the quote from Linus Torvalds? "Given enough
        | eyeballs, all bugs are shallow." Do you really believe the
        | Linux source code is being reviewed for bugs?
       | 
       | By the way, how do you write tests for a kernel?
       | 
       | I like open source, but security implies a lot of different
       | problems and open source is not always better for security.
        
       | resoluteteeth wrote:
       | I suspect the university will take some sort of action now that
       | this has turned into incredibly bad press (although they really
       | should have done something earlier).
        
       | ajarmst wrote:
       | I used to sit on a research ethics board. This absolutely would
       | not have passed such a review. Not a 'revise and resubmit' but a
        | hard pass accompanied by 'what the eff were you thinking?'. And,
       | yes, this should have had a REB review: testing the
       | vulnerabilities of a system that includes people is experimenting
       | on human subjects. Doing so without their knowledge absolutely
       | requires a strict human subject review and these "studies" would
       | not pass the first sniff test. I don't think it's even legal in
       | most jurisdictions.
        
         | neatze wrote:
          | This is my understanding as well, but then how was such a
          | paper accepted by IEEE?
        
       | MR4D wrote:
       | Looks like vandalism masquerading as "research".
       | 
       | Greg's response is totally right.
        
       | mnouquet wrote:
       | In other news: the three little pigs ban wolves after wolves
       | exposed the dubious engineering of the straw house by blowing on
       | it for a research paper.
        
         | burnished wrote:
          | So if an identifiable group messes with a project, but says
          | "it's for research!", then it's OK? I'm just confused by your
         | comment because it seems like you are upset with the
         | maintainers for protecting their time from sources of known bad
         | patches. And just... why? Where does the entitlement come from?
        
       | [deleted]
        
       | g42gregory wrote:
       | Reading this email exchange, I worry about the state of our
       | education system, including computer science departments. Instead
       | of making coherent arguments, this PhD student speaks about
       | "preconceived biases". I loved Greg's response. The spirit of
       | Linus lives within the Kernel! These UMN people should be nowhere
       | near the kernel. I guess they got the answer to their research on
       | what would happen if you keep submitting stealth malicious
       | patches to the kernel: you will get found out and banned. Made my
       | day.
        
         | Igelau wrote:
         | The tone of Pakki's reply made me cringe:
         | 
         | > Attitude that is not only unwelcome but also intimidating to
         | newbies and non experts
         | 
         | Between that and the "Clarifications" document suggesting they
         | handle it by updating their Code of Conduct, they're _clearly_
         | trying really hard to frame all of this as some kind of toxic
          | culture in kernel development. That's a hideous defense. It's
         | like a bad MMA fight where one fighter refuses to stand up
         | because he insists on keeping it a ground fight. Maybe it works
         | sometimes, but it's shameful.
        
       | Aissen wrote:
       | Greg does not joke around:
       | https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
       | [PATCH 000/190] Revertion of all of the umn.edu commits
        
         | sesuximo wrote:
         | How does the kernel still run after reverting like this?
        
           | dcchambers wrote:
           | I was wondering the same thing. From the Patch itself:
           | 
           | > This patchset has the "easy" reverts, there are 68
           | remaining ones that need to be manually reviewed. Some of
           | them are not able to be reverted as they already have been
           | reverted, or fixed up with follow-on patches as they were
           | determined to be invalid. Proof that these submissions were
           | almost universally wrong.
        
           | finnthehuman wrote:
           | The answer is somewhere between "it's 'only' 190 patches" and
           | "Greg posting this patch series doesn't mean it's applied to
           | stable yet"
        
           | [deleted]
        
       | duerra wrote:
        | I'll give you one guess what nation states do.
        
       | kerng wrote:
        | What I don't get... why not ask the board of the Linux
        | Foundation if they could attempt social engineering attacks and
        | get authorization? If the Linux Foundation saw value in it,
        | they'd approve it, and who knows, maybe such tests (hiring
        | pentesters to do social engineering) are done by the Linux
        | Foundation anyway.
        
       | nwsm wrote:
       | I would say the research was a success. They found that when a
       | bad actor submits malicious patches they are appropriately banned
       | from the project.
        
       | honeybutt wrote:
        | Very unethical, and extremely inconsiderate of the maintainers'
        | time, to say the least.
        
       | [deleted]
        
       | pypie wrote:
       | I was expecting this to be about introducing strange bugs and
       | then claiming to fix them in order to get a publication. But the
       | publication is titled "On the Feasibility of Stealthily
       | Introducing Vulnerabilities in Open-Source Software via Hypocrite
       | Commits"! So I guess it's less feasible than they imagined, at
       | least in this instance.
        
       | ltfish wrote:
       | Some clarifications since they are unclear in the original
       | report.
       | 
       | - Aditya Pakki (the author who sent the new round of seemingly
       | bogus patches) is not involved in the S&P 2021 research. This
       | means Aditya is likely to have nothing to do with the prior round
       | of patching attempts that led to the S&P 2021 paper.
       | 
       | - According to the authors' clarification [1], the S&P 2021 paper
       | did not introduce any bugs into Linux kernel. The three attempts
       | did not even become Git commits.
       | 
       | Greg has all reasons to be unhappy since they were unknowingly
       | experimented on and used as lab rats. However, the round of
        | patches that triggered his anger *is very likely* to have
       | nothing to do with the three intentionally incorrect patch
       | attempts leading to the paper. Many people on HN do not seem to
       | know this.
       | 
       | [1] https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
       | hc....
        
         | oraged wrote:
          | This is Aditya Pakki's website:
         | 
         | https://adityapakki.github.io/
         | 
         | In this "About" page:
         | 
         | https://adityapakki.github.io/about/
         | 
         | he claims "Hi there! My name is Aditya and I'm a second year
         | Ph.D student in Computer Science & Engineering at the
         | University of Minnesota. My research interests are in the areas
         | of computer security, operating systems, and machine learning.
         | I'm fortunate to be advised by Prof. Kangjie Lu."
         | 
         | so he in no uncertain terms is claiming that he is being
         | advised in his research by Kangjie Lu. So it's incorrect to say
         | his patches have nothing to do with the paper.
        
           | probably_wrong wrote:
           | I would encourage you not to post people's contact
            | information publicly, especially in a thread as volatile as
           | this one. Writing "He claims in his personal website" would
           | bring the point across fine.
           | 
           | This being the internet, I'm sure the guy is getting plenty
           | of hate mail as it is. No need to make it worse.
        
             | whimsicalism wrote:
             | They are named in the comment above. Aditya Pakki's
             | personal website is the first result upon Googling that
             | name.
             | 
             | I doubt HN has the volume of readership/temperament to lead
             | to substantial hate mail (unlike, say, Twitter).
        
           | Denvercoder9 wrote:
            | _> So it's incorrect to say his patches have nothing to do
           | with the paper._
           | 
           | Professors usually work on multiple projects, which involve
           | different grad students, at the same time. Aditya Pakki could
           | be working on a different project with Kangjie Lu, and not be
           | involved with the problematic paper.
        
             | varjag wrote:
             | Sucks to be him then. He can thank his professor.
        
               | g42gregory wrote:
               | Based on the tone of his email, I would say that the ban
               | is not entirely undeserved.
        
         | [deleted]
        
         | [deleted]
        
         | blippage wrote:
         | > S&P 2021 paper did not introduce any bugs into Linux kernel.
         | 
         | I used to work as an auditor. We were expected to conduct our
         | audits to neither expect nor not expect instances of
         | impropriety to exist. However, once we had grounds to suspect
          | malfeasance, we were "on alert" and conducted tests accordingly.
         | 
         | This is a good principle that could be applied here. We could
         | bat backwards and forwards about whether the other submissions
         | were bogus, but the presumption must now be one of guilt rather
         | than innocence.
         | 
         | Personally, I would have been furious and said, in no uncertain
          | terms, that the university keep a low profile and STFU lest I
          | be sufficiently provoked into taking actions that would lead
          | to someone's balls being handed to me on a plate.
        
         | peanut_worm wrote:
         | It is worth noting that Pakki is an assistant of one of the
         | paper's writers (Lu).
         | 
         | https://adityapakki.github.io/experience/
        
         | bradleyjg wrote:
         | It shouldn't be up to the victim to sort that out. The only
         | thing that could perhaps have been done differently here is for
         | the university-wide ban to have been announced earlier. Perhaps the
         | kernel devs assumed that no one would be so shameless as to
         | continue to send students back to someone they had already
         | abused.
        
         | hpoe wrote:
         | It doesn't matter. I think this is totally appropriate. A group
         | of students are submitting purposely buggy patches? It isn't
         | the kernel team's job to sift through and distinguish them;
         | they come down and nuke the entire university. This sends a
         | message to any other university thinking of a similar stunt:
         | you try this bull hockey, and you and your entire university
         | are going to get caught in the blast radius.
         | 
         | In short "f** around, find out"
        
           | seoaeu wrote:
           | I seriously doubt this policy would have been adopted if
           | other unrelated groups at the same university were submitting
           | constructive patches.
        
           | shadowgovt wrote:
           | On the plus side, I guess they get a hell of a result for
           | that research paper they were working on.
           | 
           | "We sought to probe vulnerabilities of the open-source
           | public-development process, and our results include a
           | methodology for getting an entire university's email domain
           | banned from contributing."
        
             | mirchibajji wrote:
             | Given the attitude of "the researchers" and an earlier
             | paper [1] so far, somehow I doubt they will act in good
             | faith this time.
             | 
             | For instance:
             | 
             | "D. Feedback of the Linux Community. We summarized our
             | findings and suggestions, and reported them to the Linux
             | community. Here we briefly present their feedback. First,
             | the Linux community mentioned that they will not accept
             | preventive patches and will fix code only when it goes
             | wrong. They hope kernel hardening features like KASLR can
             | mitigate impacts from unfixed vulnerabilities. Second, they
             | believed that the great Linux community is built upon
             | trust. That is, they aim to treat everyone equally and
             | would not assume that some contributors might be malicious.
             | Third, they mentioned that bug-introducing patches is a
             | known problem in the community. They also admitted that the
             | patch review is largely manual and may make mistakes.
             | However, they would do their best to review the patches.
             | Forth, they stated that Linux and many companies are
             | continually running bug-finding tools to prevent security
             | bugs from hurting the world. Last, they mentioned that
             | raising the awareness of the risks would be hard because
             | the community is too large."
             | 
             | [1] https://raw.githubusercontent.com/QiushiWu/qiushiwu.git
             | hub.i...
        
               | _jal wrote:
               | That is just appalling. I'm glad these jokers used their
               | real names; it will be easier to avoid them in the
               | future.
        
             | the_duke wrote:
             | Which will (hopefully) not be accepted by any reputable
             | journal.
        
         | ncann wrote:
         | I read through that clarification doc. I don't like their
         | experiment, but I have to admit their patch submission process
         | is responsible (after receiving a "looks good" for the bad
         | patch, they point out the flaw in the patch, give the correct
         | fix, and make sure the bad patch doesn't get into the tree).
        
         | rideontime wrote:
         | Aditya's story about the new patches is that he was writing a
         | static analysis tool and was testing it by... submitting PRs to
         | the Linux kernel? He's either exploiting the Linux maintainers
         | to test his new tool, or that story's bullshit. Even taking his
         | story at face value is justification to at least ban him
         | personally IMO.
        
           | amluto wrote:
           | People do this with static analysis tools all the time. It's
           | obnoxious but not necessarily malicious.
        
             | rideontime wrote:
             | To be clear: asking Linux maintainers to verify the results
             | of static analysis tools they wrote themselves, without
             | admitting to it until they're accused of being malicious?
        
               | MikeHolman wrote:
               | As someone who used to maintain a large C++ codebase, I
               | can say people usually bug-dump static analysis results
               | rather than actually submitting fixes, but blindly
               | "fixing" code that a static analysis tool claims to have
               | an issue with is not surprising to see either.
               | 
               | If the patches were accepted, the person could have used
               | those fixes to justify the benefits of the static
               | analysis tool they wrote.
        
               | amluto wrote:
               | Asking Linux maintainers to apply patches or fix "bugs"
               | resulting from home-grown static analysis tools, which
               | usually flag all kinds of things that aren't bugs. This
               | happens regularly.
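               | 
               | To make that concrete, here is a hypothetical sketch of
               | the kind of false positive such home-grown tools produce
               | (all names here are invented for illustration):
               | 
               |     /* Every caller of update_rx_stats() already passes a
               |      * valid dev, but a naive checker that wants a NULL
               |      * check on every pointer parameter still demands the
               |      * guard below; "fix" patches add exactly this noise. */
               |     static void update_rx_stats(struct net_device *dev)
               |     {
               |             if (!dev)    /* tool-suggested, never true */
               |                     return;
               | 
               |             dev->stats.rx_packets++;
               |     }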
        
         | shakna wrote:
         | > According to the authors' clarification [1], the S&P 2021
         | paper did not introduce any bugs into Linux kernel. The three
         | attempts did not even become Git commits.
         | 
         | Except that at least one of those three did [0]. The author is
         | incorrect that none of their attempts became git commits.
         | Whatever process that they used to "check different versions of
         | Linux and further confirmed that none of the incorrect patches
         | was adopted" was insufficient.
         | 
         | [0] https://lore.kernel.org/patchwork/patch/1062098/
        
           | re wrote:
           | > The author is incorrect that none of their attempts became
           | git commits
           | 
           | That doesn't appear to be one of the three patches from the
           | "hypocrite commits" paper, which were reportedly submitted
           | from pseudonymous gmail addresses. There are hundreds of
           | other patches from UMN, many from Pakki[0], and some of those
           | did contain bugs or were invalid[1], but there's currently no
           | _hard_ evidence that Pakki was deliberately making bad-faith
           | commits--just the association of his advisor being one of the
           | authors of the  "hypocrite" paper.
           | 
           | [0] https://github.com/torvalds/linux/commits?author=pakki001
           | @um...
           | 
           | [1] Including his most recent that was successfully applied:
           | https://lore.kernel.org/lkml/YH4Aa1zFAWkITsNK@zeniv-
           | ca.linux...
        
         | timdorr wrote:
         | Aditya's advisor [1] is one of the co-authors of the paper. He
         | at least knew about this work and was very likely involved with
         | it.
         | 
         | [1] https://adityapakki.github.io/assets/files/aditya_cv.pdf
        
           | [deleted]
        
             | frakkingcylons wrote:
             | Uh yeah no. Renaissance Technologies LLC is a hedge fund.
             | The Renaissance listed on his resume is related to gaming,
             | not securities trading.
        
           | ltfish wrote:
           | There is no doubt that Kangjie is involved in Aditya's
           | research work, which led to bogus patches being sent to Linux
           | devs. However, based on my understanding of how CS research
           | groups usually function, I do not think Kangjie knew the
           | exact patches that Aditya sent out. In this specific case, I
           | feel Aditya is more likely the one to blame: he should have
           | examined these automatically generated patches more carefully
           | before sending them in for review.
        
             | de6u99er wrote:
             | >based on my understanding of how CS research groups
             | usually function
             | 
             | If you mean supervisors adding their names to publications
             | without having contributed any work, then this is not
             | limited to CS research groups. Authorship misrepresentation
             | is widespread in academia and unfortunately mostly ignored.
             | Those who speak up are singled out and isolated instead.
        
             | bradleyjg wrote:
             | Kangjie should not have approved any research plan
             | involving kernel patches knowing that he had already set
             | that relationship on fire in order to get a prior paper
             | published.
        
         | rflrob wrote:
         | But Kangjie Lu, Pakki's advisor, was one of the authors. The
         | claim that "You, and your group, have publicly admitted to
         | sending known-buggy patches" may not be totally accurate (or it
         | might be--Pakki could be on other papers I'm not aware of), but
         | it's not totally inaccurate either. Most academic work is
         | variations on a theme, so it's reasonable to be suspicious of
         | things from Lu's group.
        
           | otherme123 wrote:
           | As Greg KH notes, he has no time to deal with such BS, even
           | when it was suggested he write a formal complaint. He has no
           | time to play detective: you are involved in a group that does
           | BS, and this smells like BS again, so you're banned.
           | 
           | Unfair? Maybe: complain to your advisor.
        
       | closeparen wrote:
       | This is a community that thinks it's gross negligence if
       | something with a real name on it fails to be airgapped.
       | 
       | Social shame and reputation damage may be useful defense
       | mechanisms in general, but in a hacker culture where the right to
       | make up arbitrarily many secret identities is a moral imperative,
       | people who burn their identities can just get new ones. Banning
       | or shaming is not going to work against someone with actual
       | malicious intent.
        
       | djohnston wrote:
       | Wow this "researcher" is a complete disaster. Who nurtures such a
       | toxic attitude of entitlement and disregard for others' time and
       | resources? Not to mention the possible real world consequences of
       | introducing bugs into this OS. He and his group need to be
       | brought before an IRB.
        
         | bgorman wrote:
         | Victim mentality is being cultivated on campuses all over the
         | US. This will not be the last incident like this.
        
       | mort96 wrote:
       | The previous discussion seems to have suddenly disappeared from
       | the front page:
       | 
       | https://news.ycombinator.com/item?id=26887670
        
         | zulban wrote:
         | Thanks for pointing that out. 4 hours old, 1000+ points, it
         | seems to have been hit with an invisible penalty.
        
           | speedgoose wrote:
           | From what I understood, when a new post has a lot of
           | comments, it disappears from the frontpage.
        
             | slumpt_ wrote:
             | Absolutely absurd and illogical
        
               | thatguy0900 wrote:
               | It's also related to the upvotes. A post with a bad
               | comment-to-upvote ratio usually means that a flame war
               | is going on.
        
               | kevingadd wrote:
               | It's a measure to halt flamewars
        
         | dang wrote:
         | We made a mistake. I'm not sure what happened but it's possible
         | that we mistook this post for garden-variety mailing-list
         | drama. A lot of that comes up on HN, and is mostly not
         | interesting; same with Github Issues drama.
         | 
         | In reality, this post is clearly above that bar--it's a
         | genuinely interesting and significant story that the community
         | has a ton of energy to discuss, and is well on topic. I've
         | restored the thread now, and merged in the dupe that was on the
         | front page in its stead.
         | 
         | Sorry everybody! Our only priority is to serve what the
         | community finds (intellectually) interesting, but moderation is
         | guesswork and it's not always easy to tell what's chaff.
        
         | [deleted]
        
       | Metacelsus wrote:
       | Yikes, and what are they hoping to accomplish with this
       | "research"?
        
         | pikzel wrote:
         | Why don't you read the article to find out?
         | https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
        
           | Igorvelky wrote:
           | They are sending BUGS, and wasting the time of the people
           | approving patches, in a useless and idiotic manner
        
         | Macuyiko wrote:
         | What any researcher needs to accomplish: more publications
        
           | Ma8ee wrote:
           | That's about as useful as to answer the question "what is
           | this company doing?" with "trying to make money".
        
             | xwolfi wrote:
             | But that question is as deep and important to answer as
             | yours :D What can anyone hope to accomplish by doing fake
             | research? Progress, wealth, peer approval, mating,
             | pleasure?
             | 
             | So answering that they hope to get more material for
             | papers, which is the only goal of researchers (and their
             | main KPI), is a rather deeper answer than the question
             | required.
        
               | guipsp wrote:
               | I wouldn't call this fake research. Maybe unethical, but
               | they did do research, and they did obtain data, and they
               | did (attempt to?) publish it.
        
           | voxadam wrote:
           | It's a near-perfect example of the dangers of 'publish or
           | perish'.
        
           | DangerousPie wrote:
           | What journal is going to accept a study like this if they
           | haven't obtained proper consent?
        
             | vbezhenar wrote:
             | That might be an interesting topic for research LoL
        
             | timkam wrote:
             | My guess is: a journal that does not focus on studies of
             | human behavior and whose editors are a) not aware of the
             | ethical problems or b) happy to ignore ethics concerns if
             | the publication is prone to receive much attention (which
             | it is).
        
             | dd82 wrote:
             | IEEE, see the publications list at https://www-
             | users.cs.umn.edu/~kjlu/
             | 
             | >>>On the Feasibility of Stealthily Introducing
             | Vulnerabilities in Open-Source Software via Hypocrite
             | Commits Qiushi Wu, and Kangjie Lu. To appear in Proceedings
             | of the 42nd IEEE Symposium on Security and Privacy
             | (Oakland'21). Virtual conference, May 2021.
        
               | j16sdiz wrote:
               | May 2021 -- I guess some IEEE member can complain loudly
               | to take it down.
        
             | JustFinishedBSG wrote:
             | The IEEE apparently. It is a clear breach of ethics but
             | apparently they don't care
        
               | jnxx wrote:
               | Sadly, that only consolidates my view of that
               | organization.
        
         | mariuolo wrote:
         | Perhaps they wish to improve kernel security by pushing
         | reviewers to be more careful.
         | 
         | Or to prove its overall insecurity.
        
         | jedimastert wrote:
         | They apparently made a tool to find vulnerabilities that could
         | later lead to bugs if a different patch was introduced.
         | 
         | And for some insane reason, they decided to test whether these
         | kinds of bugs would be caught by inventing some and just
         | submitting the patches, without informing anyone beforehand.
         | 
         | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
        
       | gjvc wrote:
       | I hope USENIX et al ban this student / professor / school /
       | university associated with this work from submitting anything to
       | any of their conferences for 10 years.
       | 
       | This was his clarification https://www-
       | users.cs.umn.edu/~kjlu/papers/clarifications-hc....
       | 
       | ...in which they have the nerve to say that this is not
       | considered "human research". It most definitely is, given that
       | their attack vector is the same one many people would be keen on
       | using for submitting legitimate requests for getting involved.
       | 
       | If anything, this "research" highlights the notion that coding is
       | but a small proportion of programming and delivery of a product,
       | feature, or bugfix from start-to-finish is a much bigger job than
       | many people like to admit to themselves or others.
        
       | Radle wrote:
       | @gregkh
       | 
       | These patches look like bombs under bridges to me.
       | 
       | Do you believe that some open source projects should have legal
       | protection against such actors? The Linux Kernel is pretty much a
       | piece of infrastructure that keeps the internet going.
        
       | readme wrote:
       | these people have no ethics
        
       | francoisp wrote:
       | (I posted this on another entry that dropped out of the first
       | page of HN? sorry for the dupe)
       | 
       | I fail to see how this does not amount to vandalism of public
       | property. https://www.shouselaw.com/ca/defense/penal-code/594/
        
       | francoisp wrote:
       | those that can't do teach, and those that can't teach troll open
       | source devs?
        
       | sadfev wrote:
       | Dang, I am not sure how to feel about this kind of "research"
        
       | kome wrote:
       | Does the University of Minnesota have an ethical review board or
       | research ethics board? They need to be contacted ASAP.
        
         | parrellel wrote:
         | Apparently, they were and did not care.
        
           | maze-le wrote:
           | It's possible that they didn't know what they were doing
           | (approving this project) -- not sure if this is better or
           | worse.
        
       | dumpsterdiver wrote:
       | Could someone clarify: this made it to the stable branch, so does
       | that mean that it made it out into the wild? Is there action
       | required here?
        
       | [deleted]
        
       | devpbrilius wrote:
       | Weirdly enough
        
       | aisio wrote:
       | One reviewer's comments on a patch of theirs from 2 weeks ago:
       | 
       | "Plainly put, the patch demonstrates either complete lack of
       | understanding or somebody not acting in good faith. If it's the
       | latter[1], may I suggest the esteemed sociologists to fuck off
       | and stop testing the reviewers with deliberately spewed
       | excrements?"
       | 
       | https://lore.kernel.org/lkml/YH4Aa1zFAWkITsNK@zeniv-ca.linux...
        
         | bombcar wrote:
         | Interesting - follow that thread and you find
         | https://lore.kernel.org/linux-next/202104081640.1A09A99900@k...
         | where coverity-bot says "this is bullshit":
         | 
         |     vvv   CID 1503716: Null pointer dereferences (REVERSE_INULL)
         |     vvv   Null-checking "rm" suggests that it may be null, but
         |           it has already been dereferenced on all paths leading
         |           to the check.
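         | 
         | For anyone unfamiliar with REVERSE_INULL, here is a minimal
         | hypothetical sketch of the pattern (invented names, not the
         | actual patch): the pointer is dereferenced first, so the
         | later NULL check is either dead code or comes too late to
         | prevent a crash.
         | 
         |     static int rm_get_id(struct rm_ctx *rm)
         |     {
         |             int id = rm->id;   /* rm dereferenced here... */
         | 
         |             if (!rm)           /* ...so this check is too late:
         |                                 * if rm could be NULL we have
         |                                 * already crashed; if not, the
         |                                 * check is misleading dead
         |                                 * code. */
         |                     return -EINVAL;
         | 
         |             return id;
         |     }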
        
       | largehotcoffee wrote:
       | This was absolutely the right move. Smells really fishy given the
       | history. I imagine this is happening in other parts of the
       | community (attempting to add malicious code), albeit under a
       | different context.
        
       | atleta wrote:
       | Now one of the problems with research in general is that negative
       | results don't get published. While in this case it probably
       | resolved itself automatically, if they have any ethical standards
       | then they'll write a paper about how it ended. Something like
       | "our assumption was that it's relatively easy to deliberately
       | sneak in bugs into the Linux kernel but it turns out we were
       | wrong. We managed to get our whole university banned and all
       | former patches from all contributors from our university,
       | including from those outside of your our research team,
       | reversed."
       | 
       | Also, while their assumption is interesting, there surely was an
       | ethical and safe way to conduct this, especially one that didn't
       | allow their bugs to slip into a release.
        
       | Apofis wrote:
       | Minnesota being Minnesota.
        
       | charonn0 wrote:
       | It seems like this debacle has created a lot of extra work for
       | the kernel maintainers. Perhaps they should ask the university to
       | compensate them.
        
       | psim1 wrote:
       | UMN is still sore that http took off and gopher didn't.
        
       | devit wrote:
       | The project is interesting, but how can they be so dumb as to
       | post these patches under an @umn.edu address instead of using a
       | new pseudonymous identity for each patch?!?
       | 
       | I mean, sneakily introducing vulnerabilities obviously only works
       | if you don't start your messages by announcing you are one of the
       | guys known to be trying to do so...
        
         | EamonnMR wrote:
         | That's kind of the rub. They used a university email to exploit
         | the trust afforded to them as academics and then violated that
         | trust. As a result that trust was revoked. If they want to
         | submit future patches they'll need to do it with random email
         | addresses and will be subject to the scrutiny afforded random
         | email addresses.
        
           | devit wrote:
           | I doubt a university e-mail gives you significantly
           | increased trust in the kernel community, since those are
           | given to all students in all majors (most of which are of
           | course much less competent at kernel development than the
           | average kernel developer).
        
             | acd10j wrote:
             | University students could be naive and could be rapped by
             | the community if they unintentionally commit harmful
             | patches, but if they send intentionally harmful patches,
             | maintainers can report them to the university and they risk
             | getting expelled. In this particular case the research was
             | approved and encouraged by the university, and in the
             | process they broke the trust placed in the university.
        
             | azernik wrote:
             | There are two different kinds of trust: trust that you're a
             | legitimate person with good intentions, and trust that
             | you're competent.
             | 
             | A university or corporate e-mail address helps with the
             | former: even if the individual doesn't put their real name
             | into their email address, the institution still maintains
             | that mapping. The possibility of professional, legal, or
             | social consequences attaching to your real-world identity
             | (as is likely to happen here) is a generally-effective
             | deterrent.
        
           | bargle0 wrote:
           | Why should an academic institution be afforded any extra
           | trust in the first place?
        
             | Jiejeing wrote:
             | Because there are quite a few academics working on the
             | kernel in the first place (not in a similar order of
             | magnitude to industry, of course). Even GKH gets invited
             | by academics to work together regularly.
        
             | seoaeu wrote:
             | One guess would be that an edu address would be tied to
             | your real identity, whereas a throwaway email could be
             | pseudonymous.
        
       | veltas wrote:
       | Regardless of whether consent (which was not given) was required,
       | it's worth pointing out that the emails sent to the mailing list
       | were also intentionally misleading, or fraudulent, so some ethical
       | standard has obviously been violated there.
        
       | FrameworkFred wrote:
       | This feels like the kind of thing that "white hat" hackers have
       | been doing forever. UMN may have introduced useful knowledge into
       | the world in the same way some random hacker is potentially
       | "helping" a company by pointing out that they've left a security
       | hole exposed in their system.
       | 
       | With that said, kernel developers and companies with servers on
       | the internet are busy doing work that's important to them. This
       | sort of thing is always an unwelcome distraction.
       | 
       | And, if my neighbors walk in my door at 3 a.m. to let me know I
       | left it unlocked, I'm going to treat them the same way UMN is
       | getting treated in this situation. Or worse.
        
         | threatofrain wrote:
         | A modification of your metaphor would have a reputable
         | institution in your life enter your apartment on the strength
         | of that institution's credibility. It is not surprising when
         | that institution then has its credibility downranked.
        
         | mort96 wrote:
         | Your analogy doesn't work. A true "white hat" hacker would hack
         | a system to expose a security vulnerability, then immediately
         | inform the owners of the system, all without using their
         | unintended system access for anything malicious. In this case,
         | the "researchers" submitted bogus patches, got them accepted
         | and merged, then said nothing, and pushed back against
         | accusations that they've been malicious, all for personal gain.
         | 
         | EDIT: Also, even if you do no harm and immediately inform your
         | victim, this sort of stuff might rather be categorized as grey-
         | hat. Maybe a "true" white-hat would only hack a system with
         | explicit consent from the owner. These terms are fuzzy. But my
         | point is, attacking a system for personal gain without
         | notifying your victim afterwards and leaving behind malicious
         | code is certainly not white-hat by any definition.
        
           | qw3rty01 wrote:
           | That's gray-hat, a white-hat wouldn't have touched the system
           | without permission from the owners in the first place.
        
             | mort96 wrote:
             | Haha, I just realized that and added an edit right as you
             | commented.
        
           | FrameworkFred wrote:
           | You make a fair point. I'm just saying that, while it might
           | ultimately be interesting and useful to someone or even lots
           | of someones, it remains a crappy thing to do, and the
           | consequences UMN is facing as a result are predictable and
           | make perfect sense to me, a guy who has had to rebuild a few
           | servers and databases over the years because of intrusions,
           | a couple of which came with messages about how we should
           | consult with the intruder who had less-than-helpfully found
           | some security issue for us.
        
         | matheusmoreira wrote:
         | Hacking on software is one thing. Running experiments on
         | _people_ is something completely different.
         | 
         | In order to do this ethically, all that's needed is respect
         | towards our fellow human beings. This means informing them
         | about the nature of the research, the benefits of the collected
         | data, the risks involved for test subjects as well as asking
         | for their consent and permission to be researched on. Once
         | researchers demonstrate this respect, they're likely to find
         | that a surprising number of people will allow them to perform
         | their research.
         | 
         | We all hate it when big tech tracks our every move and draws
         | all kinds of profitable conclusions based on that data at our
         | expense. We hate it so much we deploy active countermeasures
         | against it. It's fundamentally the same issue.
        
       | protomyth wrote:
       | FYI The IRB for University of Minnesota
       | https://research.umn.edu/units/irb has a Human Research
       | Protection Program https://research.umn.edu/units/hrpp where I
       | cannot find anything on research on people without their
       | permission. There is a Participant's Bill of Rights
       | https://research.umn.edu/units/hrpp/research-participants/pa...
       | that would seem to indicate uninformed research is not allowed. I
       | would be curious how doing research on the reactions of people to
       | test stimuli in a non-controlled environment is not human
       | research.
        
       | spinny wrote:
       | Are they legally liable in any way for including deliberate flaws
       | in a piece of software they know is widely used, thereby creating
       | an attack surface for _any_ attacker with the skill to do so and
       | putting private and public infrastructure at risk?
        
       | tester34 wrote:
       | Researcher(s) show that it's relatively easy to introduce
       | bugs into the kernel
       | 
       | HN: let's hate researcher(s) instead of process
       | 
       | Wow.
       | 
       | Assume good faith, I guess?
        
         | nonethical wrote:
         | With that logic you can conduct research on how easy it is to
         | rob elderly people in the street, inject poison in supermarket
         | yogurts, etc.
        
         | brabel wrote:
         | There are two separate issues with this story.
         | 
         | One is that what the researchers did is beyond reckless. Some
         | of the bugs they've introduced could be affecting real world
         | critical systems.
         | 
         | The other issue is that the research is actually good in
         | proving by practical means that pretty much anyone can
         | introduce vulnerabilities into software as important and
         | sensitive as the Linux kernel. This further damages the
         | industry's already-shaky confidence that we can have secure
         | systems.
         | 
         | While some praise may be appropriate for the latter, they
         | absolutely deserve the heat they're getting for the former.
         | There may be many better ways to prove a point.
        
         | jnxx wrote:
         | It is not hard to point a gun at someone's head.
         | 
         | But let's assume your girlfriend points an (unknown to you)
         | empty gun at your head, because she wants to know how you will
         | react. What do you think is the appropriate reaction?
        
         | saagarjha wrote:
         | Wasting the time of random open source maintainers who have not
         | consented to your experiment to try to get your paper published
         | is highly unethical; I don't see why this is a bad faith
         | interpretation.
        
           | tester34 wrote:
           | State-level actors / nation-state actors (fancy terms lately,
           | heh) will not ask anyone for consent
        
             | saagarjha wrote:
             | This is also unethical.
        
         | jeroenhd wrote:
         | The concept of the research is quite good. The way this
         | research was carried out, is downright unethical.
         | 
         | By submitting their bad code to the actual Linux mailing list,
         | they have made Linux kernel developers part of their research
         | without their knowledge or consent.
         | 
         | Some of this vandalism has made it down into the Linux kernel
         | already. These researchers have sabotaged other people's
         | software for their personal gain, another paper to boast about.
         | 
         | Had this been done with the developers' consent and with a way
         | to pull out the patches before they actually hit the stable
         | branches, then this could have been valuable research. It's
         | the way that the research was carried out that's the problem,
         | and that's why everybody is hating on the researchers (rather
         | than the research matter itself).
        
           | mratsim wrote:
           | To provide some parallel on how the research was carried
           | out:
           | 
           | I see it as similar to
           | 
           | - allowing recording of people without their consent (or
           | warrant),
           | 
           | - experimenting on PTSD by inducing PTSD without people
           | consent,
           | 
           | - or medical experimentation without the subject consent.
           | 
           | And the arguments about not having anyone know:
           | 
           | Try sneaking into the White House, and when you get caught
           | tell them "I was just testing your security procedures".
        
             | InsomniacL wrote:
             | submitting a patch for review to test the strength of the
             | review process is not equivalent to inducing PTSD in people
             | without consent or breaking into the White House. You're
             | being ridiculous. Linux runs many of the world's financial,
             | medical, and other institutions, and they have exposed how
             | easy it is to introduce a backdoor.
             | 
             | If this was Facebook and not Linux, everyone would look upon
             | this very differently.
        
               | mratsim wrote:
               | The fact that issues in Linux can kill people is exactly
               | why they need leadership buy in first.
               | 
               | There are ways to test social vulnerabilities
               | (pentesting) and they all involve asking for permission
               | first.
        
       | biffstallion wrote:
       | Ahh Minnesota... land of out-of-control and local government-
       | supported rioting... so I guess shenanigans are expected.
        
       | [deleted]
        
       | Pensacola wrote:
       | <conspiracy theory>This is intentionally malicious activity
       | conducted with a perfect cover story</conspiracy theory>
        
       | devmunchies wrote:
       | Aditya: _I will not be sending any more patches due to the
       | attitude that is not only unwelcome but also intimidating to
       | newbies and non experts._
       | 
       | Greg: _You can 't quit, you're fired._
        
       | libpcap wrote:
       | Their action needs to be reported to the FBI.
        
       | yosito wrote:
       | Can someone explain what the kernel bugs were that were
       | introduced, in general terms?
        
         | Avamander wrote:
         | None.
        
       | HelloNurse wrote:
       | They seem to be teaching social engineering. Using a young,
       | possibly foreign student as a front is a classy touch.
        
         | oraged wrote:
         | The author of the patches, Aditya Pakki, is a second-year
         | Ph.D. student as per his website
         | https://adityapakki.github.io/about/
         | 
         | He himself is to blame for submitting these kinds of patches
         | and claiming innocence. If a person as old as he is can't
         | figure out what's ethical and what's not, then that person
         | deserves what comes out of actions like these.
        
       | rincebrain wrote:
       | Prior discussion: https://news.ycombinator.com/item?id=26887670
        
         | alexfromapex wrote:
         | Here's the research article linked there, for those interested:
         | https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
        
           | amir734jj wrote:
           | Please correct me if I'm wrong. So he (a PhD student) was
           | introducing bad code as part of research? And published a
           | paper to show how he successfully introduced bad code.
        
             | alexfromapex wrote:
             | It seems that Aditya Pakki was the one introducing shady
             | code to the kernel and was caught. He is listed as an
             | author on several other very similar papers (https://schola
             | r.google.com/citations?user=O9WEZuoAAAAJ&hl=en) with
             | authors Wu and Lu about automatically detecting "missing-
             | check bugs" and other security issues which they purport to
             | want to fix but this research paper explicitly discusses
             | submitting "fixes" that have latent security bugs in them.
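             | 
             | For context, a "missing-check bug" typically looks like the
             | first snippet below; the "hypocrite commits" idea is that a
             | patch adding the missing check can itself smuggle in a new
             | bug, as in the second snippet (both are hypothetical,
             | simplified sketches, not taken from the actual patches):
             | 
             |     /* Missing check: kmalloc() can return NULL. */
             |     buf = kmalloc(len, GFP_KERNEL);
             |     memcpy(buf, src, len);   /* NULL deref if it failed */
             | 
             |     /* A plausible-looking "fix" that adds the check but
             |      * plants a new bug on the error path: */
             |     buf = kmalloc(len, GFP_KERNEL);
             |     if (!buf) {
             |             kfree(ctx->shared_buf);  /* still in use
             |                                       * elsewhere: latent
             |                                       * use-after-free */
             |             return -ENOMEM;
             |     }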
        
         | dang wrote:
         | Merging them now...
        
       | TedShiller wrote:
       | TLDR?
        
       | unanswered wrote:
       | I am concerned that the kernel maintainers might be falling into
       | another trap: it is possible that some patches were designed such
       | that they are legitimate fixes, and moreover such that _reverting
       | them_ amounts to introducing a difficult-to-detect malicious bug.
       | 
       | Maybe I'm just too cynical and paranoid though.
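       | 
       | To illustrate with a hypothetical, simplified diff (invented
       | code): the lines being removed below were a genuinely correct
       | fix, so the revert quietly reintroduces an out-of-bounds write.
       | 
       |     /* Revert of a legitimate bounds-check fix: */
       |     -        if (idx >= ARRAY_SIZE(tbl->slots))
       |     -                return -EINVAL;
       |              tbl->slots[idx] = val;   /* OOB write is back */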
        
       | kleiba wrote:
       | This is bullshit research. I mean, what they have actually found
       | out through their experiments is that you can maliciously
       | introduce bugs into the Linux kernel. But did anyone have doubts
       | about this being possible prior to this "research"?
       | 
       | Obviously, bugs get introduced into all software projects _all
       | the time_. And the bugs don't know whether they've been put
       | there intentionally or accidentally. All bugs that ever appeared
       | in the Linux kernel obviously made it through the review process,
       | even when no-one actively tried to introduce them.
       | 
       | So, why should it not be possible to intentionally insert bugs if
       | it already "works" unintentionally? What is the insight gained
       | from this innovative "research"?
        
       | dumbDev wrote:
       | What is this? A "science" way of saying it's a prank bro?
        
       | [deleted]
        
       | nemoniac wrote:
       | Who funds this? They acknowledge funding from the NSF but you
       | could imagine that it would benefit some other large players to
       | sow uncertainty and doubt about Open Source Software.
        
       | slaw_pr wrote:
       | It seems we hit another level of "progressive" stupidity, and
       | that has to be the most "woke" IT university if they allow such
       | BS to happen. Of course they deserve the ban. To be that stupid
       | and self-centered, and to despise someone else's work to this
       | extent, then play the victim - you need to be brainwashed by some
       | university professor. Not to mention they confuse being friendly
       | to beginners with tolerance for brainless parasites.
       | 
       | They exploited the authority of an educational institution, and
       | of everyone SANE who is studying there, to intentionally break
       | someone else's work for their own profit.
       | 
       | Not sure what the severity of this attack is, but if these
       | "patches" got into critical parts of the kernel like NFS, they
       | should not only be expelled but prosecuted. Because what's next?
       | Another bunch of MORONS will launch attacks on medical equipment
       | to see if they're able to kill someone and then cry if they fail?
        
       | treesknees wrote:
       | The full title is "Linux bans University of Minnesota for sending
       | buggy patches in the name of research" and it seems to justify
       | the ban. It's not as though these students were just bad
       | programmers; they were intentionally introducing bugs, performing
       | unethical experimentation on volunteers and members of another
       | organization without their consent.
       | 
       | Unfortunately even if the latest submissions were sent with good
       | intentions and have nothing to do with the bug research, the
       | University has certainly lost the trust of the kernel
       | maintainers.
        
         | xibalba wrote:
         | From the looks of the dialogue, it was all of the above with
         | the addition of lying about what they were up to when
         | confronted. I would think all of this constitutes a serious
         | violation of any real university's research ethics standards.
        
         | xroche wrote:
         | The full title should actually be "Linux bans University of
         | Minnesota for sending buggy patches in the name of research and
         | thinking they can add insult to injury by playing the victims"
         | 
         | > I respectfully ask you to cease and desist from making wild
         | accusations that are bordering on slander.
         | 
         | > These patches were sent as part of a new static analyzer that
         | I wrote and it's sensitivity is obviously not great. I sent
         | patches on the hopes to get feedback. We are not experts in the
         | linux kernel and repeatedly making these statements is
         | disgusting to hear.
         | 
         | > Obviously, it is a wrong step but your preconceived biases
         | are so strong that you make allegations without merit nor give
         | us any benefit of doubt. I will not be sending any more patches
         | due to the attitude that is not only unwelcome but also
         | intimidating to newbies and non experts.
         | 
         | This idiot should be banned from the University, not from the
         | linux kernel.
        
           | nyolfen wrote:
           | His department presumably allowed this to proceed
        
       | duxup wrote:
       | I don't like this university ban approach.
       | 
       | Universities are places with lots of different students,
       | professors, and different people with different ideas, and
       | inevitably people who make bad choices.
       | 
       | Universities don't often act with a single purpose or intent.
       | That's what makes them interesting. Prone to failure and bad
       | ideas, but also new ideas that you can't do at corporate HQ
       | because you've got a CEO breathing down your neck.
       | 
       | At the University of Minnesota there's 50k+ students at the Twin
       | Cities campus alone, 3k plus instructors. Even more at other
       | University of Minnesota campuses.
       | 
       | None of those people did anything wrong. Putting the onus on them
       | to effect change to me seems unfair. The people banned didn't do
       | anything wrong.
       | 
       | Now the kernel doesn't 'need' any of their contributions, but I
       | think this is a bad method / standard to set to penalize /
       | discourage everyone under an umbrella when they've taken no bad
       | actions themselves.
       | 
       | Although I can't put my finger on why, this ban on whole swaths
       | of people in some ways seems very not open source.
       | 
       | The folks who did the thing were wrong to do so, but the vast
       | majority of people now impacted by this ban didn't do the thing.
        
         | krastanov wrote:
         | I am writing this as someone who is very much "career
         | academic". I am all on board with banning the whole university
         | (and reconsidering the ban once the university shows they have
         | some ethics guidelines in place). This research work should not
         | have passed ethics review. On the other hand, it sounds
         | preposterous that we would even need formal ethics review
         | CS research... But this "research" really embodies the whole
         | "this is why we can not have nice things" attitude.
        
         | AndyMcConachie wrote:
         | The University of Minnesota IRB never should have approved this
         | research. So this is an institutional level problem. This is
         | not just a problem with some researchers.
         | 
         | It's unfortunate that many people will get caught up in this
         | ban that had nothing to do with it, but the university deserves
         | to take a credibility hit here. The ball is now in their court.
         | They need to either make things right or suffer the ban for all
         | of their staff and students.
        
         | Denvercoder9 wrote:
         | It's not a ban on people, it's a ban on the institution that
         | has demonstrated they can't be trusted to act in good faith.
         | 
         | If people affiliated with the UMN want to contribute to the
         | Linux kernel, they can still do that in a personal capacity.
         | They just can't do it as part of UMN research, but given that
         | UMN has demonstrated they don't have safeguards to prevent bad-
         | faith research, that seems reasonable.
        
         | slrz wrote:
         | I don't like it either but it's not as bad as it sounds: the
         | ban almost certainly isn't enforced mindlessly and with no
         | recourse for the affected.
         | 
         | I'm pretty sure that if someone from the University of
         | Minnesota would like to contribute something of value to the
         | Linux kernel, dropping a mail to GregKH will result in that
         | being possible.
        
         | xtorol wrote:
         | Agree that universities don't (and shouldn't) act with a single
         | purpose or intent, but they need to have institutional controls
         | in place that prevent really bad ideas from negatively
         | affecting the surrounding community. Those seem to be lacking
         | in this case, and in their absence I think the kernel
         | maintainers' actions are entirely justified.
        
         | avs733 wrote:
         | I understand where this is coming from, and empathize with this
         | but also empathize with the Kernel.org folx here. I think I'm
         | okay with this because it isn't some government actor.
         | 
         | It is not always easy to identify who works for whom at a
         | university in regards to someone's research. The faculty member
         | who seems to be directing this is identifiable, obviously. But
         | it is not so easy to identify anyone acting on his behalf -
         | universities don't maintain public lists of grad or undergrad
         | students working for an individual faculty member. Add in that
         | there seems to be a pattern of obfuscating these patches
         | through different submission accounts, specifically to hide the
         | role of the faculty advisor (my interpretation of what I'm
         | reading).
         | 
         | Putting the onus on others is unfair...but from the perspective
         | of Kernel.org, they do not know who from the population is bad
         | actors and who isn't. The goal isn't to penalize the good
         | folks, the goal is to prevent continued bad behavior under
         | someone else's name. It's more akin to flagging email from a
         | certain server as spam. The goal of the policy isn't to get
         | people to effect change, its to stop a pattern of introducing
         | security holes in critical software.
         | 
         | It is perfectly possible that this was IRB approved, but that
         | doesn't necessarily mean the IRB really understood the
         | implications. There are specific processes for research
         | involving deception and getting IRB approval for deception, but
         | there is no guarantee that IRB members have the knowledge or
         | experience with CS or Open Source communities to understand
         | what is happening. The backgrounds of IRB members vary
         | enormously...
        
         | formerly_proven wrote:
         | > The people banned didn't do anything wrong.
         | 
         | There are ways to do research like this (involve top-level
         | maintainers, prevent patches going further upstream etc.), just
         | sending in buggy code on purpose, then lying about where it
         | came from, is not the way. It very much is wrong in my opinion.
         | And like some other people pointed out, it could quite possibly
         | be a criminal offense in several jurisdictions.
        
           | dev_tty01 wrote:
           | >There are ways to do research like this (involve top-level
           | maintainers, prevent patches going further upstream etc.)
           | 
           | This is what I can't grok. Why would you not contact GKH and
           | work together to put a process in place to do this in an
           | ethical and safe manner? If nothing else, it is just basic
           | courtesy.
           | 
           | There is perhaps some merit to better understanding and
           | avoiding the introduction of security flaws but this was not
           | the way to do it. Boggles the mind that this group felt that
           | this was appropriate behavior. Disappointing.
           | 
           | As far as banning the University, that is precisely the right
           | action. This will force the institution to respond. UMN will
           | have to make changes to address the issue and then the ban
           | can be lifted. It is really the only effective response the
           | maintainers have available to them.
        
         | zulban wrote:
         | It sends a strong message - universities need to make sure
         | their researchers apply ethics standards to any research done
         | on software communities. You can't ignore ethics guidelines
         | like consent and harm just because it's a software community
         | instead of a meatspace community. I doubt the university would
         | have taken any action at all without such a response.
        
           | kahrl wrote:
           | Has the university taken action yet? All I heard was that,
           | after the blowback, UMN had their institutional review board
           | retroactively review the paper. They investigated themselves
           | and found no wrongdoing. (The IRB concluded this was not
           | human research.)
           | 
           | UMN hasn't admitted to any wrongdoing. The professor wasn't
           | punished in any form whatsoever. And they adamantly state
           | that their research review processes are solid and worked in
           | this case.
           | 
           | An indefinite ban is 100% warranted until such a time that
           | UMN can demonstrate that their university sponsored research
           | is trustworthy and doesn't act in bad faith.
        
           | duxup wrote:
           | >It sends a strong message
           | 
           | At a cost mostly to people who didn't, and I'll even say
           | wouldn't, do the bad thing.
        
             | belval wrote:
             | I understand the point that you are making, but you have to
             | look at it from the optics of the maintainer. The email
             | made it clear that they submitted an official complaint to
             | the ethics board and they didn't do anything. In that
             | spirit it effectively means that any patch coming from that
             | university could be vulnerability injection misrepresented
             | as legitimate patches.
             | 
             | The Linux kernel has limited resources, and if one
             | university's lack of oversight is causing the whole process
             | to be stretched thinner than it already is, then a ban
             | seems like a valid solution.
        
             | zeroxfe wrote:
             | In this case, the cost is justified. The potential cost of
             | kernel vulnerabilities is extremely high, and in some cases
             | they cause irrecoverable harm.
        
               | maxerickson wrote:
               | If that cost is high, why are they accepting and
               | rejecting code based on email addresses?
               | 
               | https://twitter.com/FiloSottile/status/1384883910039986179
               | 
               | (Clearly the academic behavior is also a problem; there's
               | no good justification for asking for reviews of known-bad
               | patches.)
        
             | davidkuhta wrote:
             | @denvercoder9 had a good comment that might assuage your
             | concern:
             | 
             | > It's not a ban on people, it's a ban on the institution
             | that has demonstrated they can't be trusted to act in good
             | faith. If people affiliated with the UMN want to contribute
             | to the Linux kernel, they can still do that in a personal
             | capacity. They just can't do it as part of UMN research,
             | but given that UMN has demonstrated they don't have
             | safeguards to prevent bad-faith research, that seems
             | reasonable.
        
         | walrus01 wrote:
         | > I don't like this university ban approach.
         | 
         | I do, because the university needs to dismiss everyone
         | involved, sever their connections with the institution, and
         | then have a person in a senior position email the kernel
         | maintainers with news that such has taken place. At which time
         | the ban can be publicly lifted.
        
           | timkam wrote:
           | I think the ban hits the right institution, but I'd reason
           | the other way around: is it really the primary fault of the
           | individual PhD student (arguably somewhat immature,
           | considering the tone of the email)? The problem in academia
           | is not "bad apples", but problematic organizational culture
           | and misaligned incentives.
        
             | belval wrote:
             | To me it depends on whether they lied to the ethics board
             | or not. If they truly framed their research as "sending
             | emails" then the individual is 100% at fault. If they
             | clearly defined what they were trying to do and no one
             | raised an issue then it is absolutely the university's
             | fault.
        
               | walrus01 wrote:
               | I think it's more than whether they lied, it's whether
               | the ethics board is even plausibly equipped to fully
               | understand the ramifications of what they proposed to do:
               | https://news.ycombinator.com/item?id=26890490
        
               | belval wrote:
               | Well if the ethics board is not decently equipped to
               | understand the concerns with this type of research I
               | would say a full ban is perfectly understandable.
        
         | the_af wrote:
         | It's definitely killing a mosquito with a nuke, but what are
         | the alternatives? The kernel maintainers claim these bogus
         | commits already put too much load on their time. I understand
         | they banned the whole university out of frustration and also
         | because they simply don't have the time to deal with them in a
         | more nuanced way.
        
           | duxup wrote:
           | Are they even killing a mosquito?
           | 
           | Someone wants to introduce bugs, they can.
           | 
           | Meanwhile, lots of people are banned for another
           | person's actions.
        
             | bluGill wrote:
             | Nobody else at the UMN is even contributing patches
             | other than these bad-faith ones, so this is only
             | banning one set of bugs. Given that a lot of bugs have
             | come from one source, banning that source bans a lot
             | of bugs. It doesn't stop them all, but it stops some.
        
           | megous wrote:
           | There's a real cost. What's your estimate for going
           | through each of these 190 patches individually, looking
           | at the context of the code change, checking whether the
           | "ref counting or whatever" bug fix is real, and doing
           | some real testing to ensure that?
           | 
           | https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh.
           | ..
           | 
           | That looks like quite a significant effort. And if most
           | of those fixes were real, then after the revert there
           | will be 190 known bugs in the kernel until it's all
           | cleaned up. That may have some cost too.
           | 
           | Looks like a large and expensive mess that someone other
           | than that university will have to clean up, because
           | they're not trustworthy at the moment.
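           | 
           | For illustration, here is a made-up userspace sketch
           | (the names are invented; it's not one of the actual
           | patches) of the kind of "plausible fix" each of those
           | re-reviews has to rule out:
           | 
           |   #include <stdlib.h>
           | 
           |   struct foo {
           |       int refcount;
           |       int data;
           |   };
           | 
           |   static void foo_put(struct foo *f)
           |   {
           |       if (--f->refcount == 0)
           |           free(f);
           |   }
           | 
           |   /* Documented rule: the caller holds a reference
           |    * across foo_do_work() and drops it afterwards. */
           |   static int foo_do_work(struct foo *f)
           |   {
           |       if (f->data < 0) {
           |           foo_put(f); /* reads as a leak fix, but it is
           |                        * a second put: the caller drops
           |                        * its reference too, so the error
           |                        * path now double-frees. The
           |                        * caller's put is outside this
           |                        * hunk, so the diff alone looks
           |                        * correct. */
           |           return -1;
           |       }
           |       f->data++;
           |       return 0;
           |   }
           | 
           |   int main(void)
           |   {
           |       struct foo *f = calloc(1, sizeof(*f));
           |       if (!f)
           |           return 1;
           |       f->refcount = 1;
           |       f->data = -1;   /* take the error path */
           |       foo_do_work(f); /* drops the only reference */
           |       foo_put(f);     /* caller's put: double free */
           |       return 0;
           |   }
           | 
           | Spotting that requires reading the callers, not just the
           | patch, for every one of the 190.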
        
         | luckylion wrote:
         | > None of those people did anything wrong. Putting the onus on
         | them to effect change to me seems unfair. The people banned
         | didn't do anything wrong.
         | 
         |  _Some_ of the people banned didn't do anything wrong.
         | Others tried to intentionally introduce bugs into the
         | kernel. Their ethics board allowed that or was misled by
         | them. Obviously they have serious issues with ethics and
         | processes.
         | 
         | I'm sure the ban can be reversed if they can plausibly
         | claim they've changed. Since this was apparently already
         | their _second_ chance and they've been reported to the
         | university before, and the university apparently decided
         | not to act on that complaint... I have some doubts that
         | "we've totally changed. This time we mean it" will fly.
        
           | duxup wrote:
           | "Some"
           | 
           | How many did, and how many didn't? The numbers seem
           | absurd.
        
             | luckylion wrote:
             | No way to tell. How many people at UMN usually submit
             | kernel patches that aren't malicious? In any case, it
             | did hit the right people, even if it potentially
             | causes collateral damage.
             | 
             | Since it's an institutional issue (otherwise it would've
             | stopped after they were reported the first time), it
             | doesn't seem wrong to also deal with the institution.
        
         | aritmo wrote:
         | A university-wide ban helps convert the issue into an
         | internal issue for that university. The university
         | officials will have to figure out what went wrong and
         | rectify it.
        
           | bluGill wrote:
           | Probably not, because nobody else at the university is
           | affected, and probably won't be for a dozen more years,
           | until someone else happens to get interested in kernel
           | work. Even in CS there are a ton of legitimate projects
           | to work on, so a ban on just one common one isn't going
           | to be noticed without more attention.
           | 
           | That said, I suspect enough people have taken notice via
           | the press by now.
        
       | rurban wrote:
       | This is the big revert, a good overview of all the damage
       | they did. Some patches were good, most were malicious, and
       | most author names were made up.
       | 
       | https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
        
       | toxik wrote:
       | The problem here is really that they're wasting the
       | maintainers' time without their approval. Any ethics board
       | would require prior consent for this. It wouldn't even be
       | hard to obtain.
        
         | jnxx wrote:
         | > The problem here is really that they're wasting the
         | maintainers' time without their approval.
         | 
         | Not only that; they are also doing experiments on a
         | community of people, which is against its interests and
         | could be harmful by creating mistrust. Trust is a big
         | issue: without it, it is almost impossible for people to
         | work together meaningfully.
        
           | aflag wrote:
           | Besides that, if their "research" patch gets into a release,
           | it could potentially put thousands or millions of users at
           | risk.
        
           | dmitryminkovsky wrote:
           | Yeah, this actually seems more like sociological
           | research, except that since it's in the comp sci
           | department, the investigators don't seem to be trained
           | in the acceptable (and legal) standards for conducting
           | such research on human subjects. You definitely need
           | prior consent when doing this sort of thing. Ideally
           | this would be escalated to a research ethics committee
           | at UMN, because these researchers need to be trained in
           | acceptable practices when dealing with human subjects.
           | So to me it makes sense that the subjects "opted out"
           | and escalated to the university.
        
             | DistressedDrone wrote:
             | Already cited in another comment:
             | 
             | > We send the emails to the Linux community and seek
             | their
             | feedback. The experiment is not to blame any maintainers
             | but to reveal issues in the process. The IRB of University
             | of Minnesota reviewed the procedures of the experiment and
             | determined that this is not human research. We obtained a
             | formal IRB-exempt letter. The experiment will not collect
             | any personal data, individual behaviors, or personal
             | opinions. It is limited to studying the patching process
             | OSS communities follow, instead of individuals.
             | 
             | So they did think of that. Either they misconstrued their
             | research or the IRB messed up. Either way, they can now see
             | for themselves exactly how human a pissed off maintainer
             | is.
        
               | [deleted]
        
               | bagacrap wrote:
               | how is this not experimentation on humans? "can we trick
               | this human" is the entire experiment.
        
         | InsomniacL wrote:
         | 1) They identified vulnerabilities in a process.
         | 
         | 2) They contributed the correct code after showing the
         | maintainer the security vulnerability they missed.
         | 
         | 3) Getting the consent of the people behind the process
         | would invalidate the results.
        
           | nicklecompte wrote:
           | > 3) Getting the consent of the people behind the process
           | would invalidate the results.
           | 
           | This has not been a valid excuse since the 1950s. Scientists
           | are not allowed to ignore basic ethics because they want to
           | discover something. Deliberately introducing bugs into any
           | open source project is plainly unethical; doing so in the
           | Linux kernel is borderline malicious.
        
             | InsomniacL wrote:
             | No bugs were introduced, and they didn't intend to
             | introduce any. In fact, they have resolved over 1,000
             | bugs in the Linux kernel.
             | 
             | >> https://www-
             | users.cs.umn.edu/~kjlu/papers/clarifications-hc.... "We did
             | not introduce or intend to introduce any bug or
             | vulnerability in the Linux kernel. All the bug-introducing
             | patches stayed only in the email exchanges, without being
             | adopted or merged into any Linux branch, which was
             | explicitly confirmed by maintainers. Therefore, the bug-
             | introducing patches in the email did not even become a Git
             | commit in any Linux branch. None of the Linux users would
             | be affected. The following shows the specific procedure of
             | the experiment"
        
               | rincebrain wrote:
               | And now all their patches are getting reverted because
               | nobody trusts them to have been made in good faith, so
               | their list of resolved bugs goes to 0.
        
               | InsomniacL wrote:
               | So instead of fixing the issue they found, that
               | backdoors can be slipped into the code, they are
               | going to roll back a thousand-plus other bug fixes.
               | 
               | That's more of a story than what the researchers
               | have done...
        
               | rincebrain wrote:
               | What would you do if you had a group of patch
               | authors whose contributions you no longer trust,
               | other than set aside the time for someone trusted to
               | audit all 390 commits they've had since 2014?
        
               | rincebrain wrote:
               | Yup, he's really ripping them all out.
               | 
               | https://lore.kernel.org/lkml/20210421130105.1226686-1-gre
               | gkh...
        
               | Avamander wrote:
               | It's indeed unfortunate what a few bruised egos will
               | result in.
        
               | rincebrain wrote:
               | I don't think it's necessarily a bruised ego here. I
               | think what upset him is that the paper was published
               | a few months ago and yet, based on this patch, the
               | author seems to still be submitting deeply flawed
               | patches to LKML, and complaining when people don't
               | trust them to be innocent mistakes.
        
             | bezout wrote:
             | We should ban A/B testing then. Google didn't tell me they
             | were using me to understand which link color is more
             | profitable for them.
             | 
             | There are experiments and experiments. Apart from the fact
             | that they provided the fix right away, they didn't do
             | anyone harm.
             | 
             | And, by the way, it's their job. Maintainers must
             | approve patches only after they've ensured that the
             | patch is fine. It's okay to make mistakes, but don't
             | tell me "you're wasting my time" after I showed you
             | that maybe there's something wrong with the process.
             | If anything, you should thank me and review the
             | process.
             | 
             | If your excuse is "you knew the patch was vulnerable", then
             | how are you going to defend the project from bad actors?
        
               | corty wrote:
               | > they didn't do anyone harm.
               | 
               | Several of the patches are claimed to have landed in
               | stable. Also, distributions and others (like the
               | grsecurity people) pick up lkml patches that are not
               | included in stable but might have security benefits.
               | So even just publishing such a patch is harmful.
               | Also, it seems fixes were only provided to the
               | maintainers privately, and unsuccessfully - or not
               | at all.
               | 
               | > If your excuse is "you knew the patch was vulnerable",
               | then how are you going to defend the project from bad
               | actors?
               | 
               | Exactly the same way as without that "research".
               | 
               | If you try to pry open my car door, I'll drag you to the
               | next police station. "I'm just researching the security
               | of car doors" won't help you.
        
               | account42 wrote:
               | > We should ban A/B testing then. Google didn't tell me
               | they were using me to understand which link color is more
               | profitable for them.
               | 
               | Yes please.
        
               | [deleted]
        
               | wccrawford wrote:
               | Actually, I think participants in an A/B test _should_ be
               | informed of it.
               | 
               | I think people _should_ be informed when market research
               | is being done on them.
               | 
               | For situations where they are already invested in the
               | situation, it should be optional.
               | 
               | For other situations, such as new customer acquisition,
               | the person would have the option of simply leaving the
               | site to avoid it.
               | 
               | But either way, they should be informed.
        
           | arcatek wrote:
           | Getting specific consent from the project leads is entirely
           | doable, and would have avoided most of the concerns.
        
             | Avamander wrote:
             | It really wouldn't have, and it would've meant the
             | patches didn't pass through all levels of review.
        
               | arcatek wrote:
               | How do you think social engineering audits work? You
               | first coordinate with the top layer (in private, of
               | course) and only after getting their agreement do you
               | start your tests. This isn't any different.
        
               | Avamander wrote:
               | > You first coordinate with the top layer (in private, of
               | course) and only after getting their agreement do you
               | start your tests.
               | 
               | The highest level is what had to be tested as well, or do
               | you imagine only consulting Linus? Do you think that
               | wouldn't've gotten him lynched?
        
           | twic wrote:
           | You're right, and it is depressing how negative the reaction
           | has been here. This work is the technical equivalent of
           | "Sokalling", and it is a good and necessary thing.
           | 
           | The thing that people should be upset about is that such an
           | important open source project so easily accepts patches which
           | introduce security vulnerabilities. Forget the researchers
           | for a moment - if it is this easy, you can be certain that
           | malicious actors are also doing it. The only difference is
           | that they are not then disclosing that they have done so!
           | 
           | The Linux maintainers should be grateful that researchers are
           | doing this, and researchers should be doing it to every
           | significant open source project.
        
             | anonymousab wrote:
             | > The thing that people should be upset about is that such
             | an important open source project so easily accepts patches
             | which introduce security vulnerabilities
             | 
             | They were trusting of contributors to not be malicious, and
             | in particular, were trusting of a university to not be
             | wholly malicious.
             | 
             | Sure, there is a possible threat model where they would
             | need to be suspicious of entire universities.
             | 
             | But in general, human projects will operate under some
             | level of basic trust, with some sort of means to
             | establish that trust, in order to actually get
             | anything done; you cannot perfectly formally review
             | everything with finite human resources. I don't see
             | where they went wrong with any of that here.
             | 
             | There's also the very simple fact that responding to
             | an incident is also a part of the security process,
             | and broadly banning a group wholesale will be more
             | secure than not. So both they and you are getting what
             | you want out of it - more of the process to research,
             | and more security.
             | 
             | If the changes didn't make it out to production systems,
             | then it seems like the process worked? Even if some of it
             | was due to admissions that would not happen with truly
             | malicious actors, so too were the patches accepted because
             | the actors were reasonably trusted.
        
               | twic wrote:
               | The Linux project _absolutely cannot trust contributors
               | to not be malicious_. If they are doing that, then this
               | work has successfully exposed a risk.
        
               | anonymousab wrote:
               | Then they would not be accepting any patches from
               | any contributors, as the only truly safe option when
               | dealing with a known malicious actor - whether
               | explicitly admitted or merely assumed - is to
               | disregard their work entirely. You cannot know the
               | scope of a malicious plot in advance, and any
               | individually benign piece of work can prove fatal as
               | part of some unknown larger whole.
               | 
               | As with all human projects, some level and balance
               | of trust and security is needed to get work done.
               | And the gradient shifts as downstream forks have
               | higher security demands / less trust, and (in the
               | case of nation states) more resources and time to
               | move slower, validate changes, and establish and
               | verify trust.
        
           | waihtis wrote:
           | Go hack a random organization without a vulnerability
           | disclosure program in place and see how much goodwill you
           | have. There is a very established best practice in how to do
           | responsible disclosure and this is far from it.
        
             | XorNot wrote:
             | Also by and large reputation is a good first step in a
             | security process.
             | 
             | While any USB stick might have malware on it if it's ever
             | been out of your sight, that one you found in the parking
             | lot is a much bigger problem.
        
             | Avamander wrote:
             | Propose a way to test this without invalidating the
             | results.
        
               | waihtis wrote:
               | 1) Contact a single maintainer and explore the
               | feasibility of the study.
               | 
               | 2) Create a group of maintainers who know the
               | experiment is going to happen, but leave a certain
               | portion of the org out of it.
               | 
               | 3) Orchestrate it so that someone outside of the
               | knowledge group approves one or more of these
               | patches.
               | 
               | 4) Interfere before any further damage is done.
               | 
               | Besides, are you arguing that ends justify the means if
               | the intent behind the research is valid?
        
               | mahogany wrote:
               | > 3) Orchestrate it so that someone outside of the
               | knowledge group approves one or more of these patches
               | 
               | Isn't this part still experimenting on people without
               | their consent? Why does one group of maintainers get to
               | decide that you can experiment on another group?
        
               | waihtis wrote:
               | It is, but that is how security testing is generally
               | done in the commercial world. On its application to
               | research and ethics, I'm not much of an authority.
        
               | ncann wrote:
               | In general you try to obtain consent from their
               | boss, so that if the people you pentested complain,
               | you can point to their boss and say "Hey, they
               | agreed to it", and that will be the end of the
               | story. In this case it's not clear who the "boss"
               | is, but something like the Linux Foundation would be
               | a good start.
        
               | bezout wrote:
               | It depends.
               | 
               | Does creating a vaccine justify the death of some lab
               | animals? Probably.
               | 
               | Does creating supermen justify mutilating people
               | physically and psychologically without their consent?
               | Hell no.
               | 
               | You can't just ignore the context.
        
               | Avamander wrote:
               | > 1) Contact a single maintainer and explore feasibility
               | of the study
               | 
               | That has the risk that the contacted maintainer is later
               | accused of collaborating with saboteurs or that they
               | consult others. Either very awful or possibly invalidates
               | results.
               | 
               | > 2) Create a group of maintainers who know the
               | experiment is going to happen, but leave a certain
               | portion of the org out of it
               | 
               | Assuming the leadership agrees and won't break
               | confidentiality, which they might if the results could
               | make them look bad. Results would be untrustworthy or
               | potentially increase complacency.
               | 
               | > 4) Interfere before any further damage is done
               | 
               | That was done, was it not?
               | 
               | > Besides, are you arguing that ends justify the means if
               | the intent behind the research is valid?
               | 
               | Linux users are lucky they got off this easy.
        
               | rincebrain wrote:
               | > That was done, was it not?
               | 
               | The allegation being made on the mailing list is that
               | some incorrect patches of theirs made it into git and
               | even the stable trees. As there is not presently an
               | enumeration of them, or which ones are alleged to be
               | incorrect, I cannot state whether this is true.
               | 
               | But that's the claim.
               | 
               | edit: And looking at [1], they have a bunch of relatively
               | tiny patches to a lot of subsystems, so depending on how
               | narrowly gregkh means "rip it all out", this may be a big
               | diff.
               | 
               | edit 2: On rereading [2], I may have been incorrectly
               | conflating the assertion about "patches containing
               | deliberate bugs" with "patches that have been committed".
               | Though if they're ripping everything out anyway, it
               | appears they aren't drawing a distinction either...
               | 
               | [1] - https://git.kernel.org/pub/scm/linux/kernel/git/sta
               | ble/linux...
               | 
               | [2] - https://lore.kernel.org/linux-
               | nfs/YH%2F8jcoC1ffuksrf@kroah.c...
        
               | rincebrain wrote:
               | Too late for the edit deadline, but [1] is a claim of an
               | example patch that made it to stable with a deliberate
               | bug.
               | 
               | [1] - https://lore.kernel.org/linux-
               | nfs/YIAta3cRl8mk%2FRkH@unreal/
        
               | MaxBarraclough wrote:
               | Perhaps I'm missing something obvious, but what's the
               | point of all this subterfuge in the first place? Couldn't
               | they just look at the history of security vulnerabilities
               | in the kernel, and analyze how long it took for them to
               | be detected? What does it matter whether the contributor
               | knew ahead of time that they were submitting insecure
               | code?
               | 
               | It seems equivalent to vandalising Wikipedia to see
               | how long it takes for someone to repair the damage
               | you caused. There's no point doing this; you can
               | just search Wikipedia's edits for corrections, and
               | start your analysis from there.
        
               | TeMPOraL wrote:
               | > _What does it matter whether the contributor knew ahead
               | of time that they were submitting insecure code?_
               | 
               | It's a specific threat model they were exploring: a
               | malicious actor introducing vulnerability on purpose.
               | 
               | > _Couldn't they just look at the history of
               | security vulnerabilities in the kernel, and analyze
               | how long it took for them to be detected?_
               | 
               | Perhaps they could. I guess it'd involve _much_ more
               | work, and could've yielded zero results - after all,
               | I don't think there are any documented examples of a
               | vulnerability proven to have been introduced on
               | purpose.
               | 
               | > _What's the point of all this subterfuge in the
               | first place?_
               | 
               | Control over the experimental setup, which is
               | important for the validity of research. Notice how
               | most research involves gathering up _fresh_ subjects
               | and controls - scientists don't chase around the
               | world looking for people or objects that, by chance,
               | already did the things they're testing for. They
               | want fresh subjects to better account for possible
               | confounders, and hopefully make the experiment
               | reproducible.
               | 
               | (Similarly, when chasing software bugs, you could analyze
               | old crash dumps all day to try and identify a bug - and
               | you may start with that - but you always want to
               | eventually reproduce the bug yourself. Ultimately, "I can
               | and did that" is always better than "looking at past
               | data, I guess it could happen".)
               | 
               | > _It seems equivalent to vandalising Wikipedia to
               | see how long it takes for someone to repair the
               | damage you caused._
               | 
               | Honestly, I wouldn't object to that experiment either. It
               | wouldn't do much harm (little additional vandalism
               | doesn't matter on the margin, the base rate is already
               | absurd), and could yield some social good. Part of the
               | reason to have public research institutions is to allow
               | researchers to do things that would be considered bad if
               | done by random individual.
               | 
               | Also note that both Wikipedia and Linux kernel are
               | essentially infrastructure now. Running research like
               | this against them makes sense, where running the same
               | research against a random small site / OSS project
               | wouldn't.
        
               | MaxBarraclough wrote:
               | > I guess it'd involve much more work, and could've
               | yielded zero results - after all, I don't think
               | there are any documented examples of a vulnerability
               | proven to have been introduced on purpose.
               | 
               | In line with UncleMeat's comment, I'm not convinced it's
               | of any consequence that the security flaw was introduced
               | deliberately, rather than by accident.
               | 
               | > scientists don't chase around the world looking for
               | people or objects that, by chance, already did the things
               | they're testing for
               | 
               | That doesn't sound like a fair description of what's
               | happening here.
               | 
               | There are two things at play. Firstly, an analysis of the
               | survival function [0] associated with security
               | vulnerabilities in the kernel. Secondly, the ability of
               | malicious developers to _deliberately_ introduce new
               | vulnerabilities. (The technical specifics detailed in the
               | paper are not relevant to our discussion.)
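               | 
               | (By "survival function" I just mean the fraction of
               | bugs still undetected t days after they were
               | introduced. A toy sketch of the estimate - the
               | lifetimes and names here are invented, not taken
               | from any real study:
               | 
               |   #include <stdio.h>
               | 
               |   /* Fraction of bugs that stayed undetected
               |    * for more than t days. */
               |   static double survival(const int *days,
               |                          int n, int t)
               |   {
               |       int alive = 0;
               |       for (int i = 0; i < n; i++)
               |           if (days[i] > t)
               |               alive++;
               |       return (double)alive / n;
               |   }
               | 
               |   int main(void)
               |   {
               |       /* invented detection latencies, in days */
               |       int life[] = { 5, 30, 90, 400, 1200 };
               |       int n = sizeof(life) / sizeof(life[0]);
               |       for (int t = 0; t <= 1200; t += 300)
               |           printf("S(%4d) = %.2f\n", t,
               |                  survival(life, n, t));
               |       return 0;
               |   }
               | 
               | A real analysis would need to handle censoring,
               | since some bugs are still undetected, but the point
               | stands: the data is already in the history.)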
               | 
               | I'm not convinced that this unethical study demonstrates
               | anything of interest on either point. We already know
               | that security vulnerabilities make their way into the
               | kernel. We already know that malicious actors can write
               | code with intentional vulnerabilities, and that it's
               | possible to conceal these vulnerabilities quite
               | effectively.
               | 
               | > Honestly, I wouldn't object to that experiment either.
               | It wouldn't do much harm (little additional vandalism
               | doesn't matter on the margin, the base rate is already
               | absurd), and could yield some social good.
               | 
               | That's like saying _It's ok to deface library books,
               | provided it's a large library, and provided other
               | people are also defacing them._
               | 
               | Also, it would not yield a social good. As I already
               | said, it's possible to study Wikipedia's ability to
               | repair vandalism, without committing vandalism. This
               | isn't hypothetical, it's something various researchers
               | have done. [0][1]
               | 
               | > Part of the reason to have public research institutions
               | is to allow researchers to do things that would be
               | considered bad if done by random individual.
               | 
               | It isn't. Universities have ethics boards. They are held
               | to a higher ethical standard, not a lower one.
               | 
               | > Running research like this against them makes sense
               | 
               | No one is contesting that Wikipedia is worthy of study.
               | 
               | [0] https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Sig
               | npost/2...
               | 
               | [1] https://en.wikipedia.org/wiki/Wikipedia:Counter-
               | Vandalism_Un...
        
               | UncleMeat wrote:
               | > It's a specific threat model they were exploring: a
               | malicious actor introducing vulnerability on purpose.
               | 
               | But does that matter? We can imagine that the error-prone
               | developer who submitted the buggy patch just had a
               | different mindset. Nothing about the patch changes. In
               | fact, a malicious actor is explicitly trying to act like
               | an error-prone developer and would (if skilled) be
               | indistinguishable from one. So we'd expect the maintainer
               | response to be the same.
        
               | waihtis wrote:
               | Ah, but you're missing the fact that discovered
               | vulnerabilities are now trophies in the security
               | industry. This is potentially gold on your CV.
        
               | splithalf wrote:
               | It potentially has a long-term negative impact on
               | the experimental subjects involved and has no
               | research benefit. The researchers should be removed
               | from the university, and the university itself
               | should be sued and lose enough money that it acts
               | more responsibly in the future. It's a very slippery
               | slope from casual IRB waivers to Tuskegee
               | experiments.
        
               | dcminter wrote:
               | 1. Get permission.
               | 
               | 2. Submit patches from a cover identity.
        
               | occamrazor wrote:
               | If you can't run an experiment without violating
               | ethical standards, you simply don't run it; you
               | can't use the experiment as an excuse to violate
               | ethical standards.
        
               | Avamander wrote:
               | Misplaced trust was broken, that's it. Linux users are
               | incredibly lucky this was a research group and not an
               | APT.
        
               | tetha wrote:
               | In every commercial pentest I have been in, you have
               | one or two (usually senior) employees on the blue
               | team in the know. Their job is to stop employees
               | from going too far on defense, as well as to stop
               | the pentesters from going too far. The rest of the
               | team stays in the dark to test their response and
               | observation.
               | 
               | In this case, in my opinion, a small set of
               | maintainers and Linus as "management" would have to
               | be in the know to e.g. stop a merge of such a patch
               | once it was accepted by someone in the dark.
        
               | ProblemFactory wrote:
               | There doesn't have to be a way.
               | 
               | Kernel maintainers are volunteering their time and effort
               | to make Linux better, not to be entertaining test
               | subjects for the researchers.
               | 
               | Even if there is no ethical violation, they are justified
               | to be annoyed at having their time wasted, and taking
               | measures to discourage and prevent such malicious
               | behaviour in the future.
        
               | Avamander wrote:
               | > There doesn't have to be a way.
               | 
               | Given the importance of the Linux kernel, there has to be
               | a way to make contributions safer. Some people even
               | compare it to the "water supply" and others bring in
               | "national security".
               | 
               | > they are justified to be annoyed at having their time
               | wasted, and taking measures to discourage and prevent
               | such malicious behaviour in the future.
               | 
               | "Oh no, think of the effort we have to spend at defending
               | a critical piece of software!"
        
               | [deleted]
        
       | Fordec wrote:
       | In Ireland, during the referendum to repeal the ban on
       | abortion, there were very heated arguments, bot twitter
       | accounts and general toxicity. For the sake of people's
       | sanity, a "Repeal Shield" was implemented that blocked
       | bad-faith actors.
       | 
       | This news makes me wish to implement my own block on the
       | same contributors in any open source I'm involved with. At
       | the end of the day, their ethics is their ethics. Those
       | ethics are not Linux-specific; Linux was just the
       | high-profile target in this instance. I would totally
       | subscribe to or link to a group-sourced file, similar to a
       | README.md or CONTRIBUTORS.md (CODERS_NON_GRATA.md?), that
       | pulled such things together.
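       | 
       | The mechanics of consuming such a file are trivial; a toy
       | sketch of the check (the file name and format here are
       | invented, one address or domain per line):
       | 
       |   #include <stdio.h>
       |   #include <string.h>
       | 
       |   /* Return 1 if from_line matches an entry in the
       |    * shared denylist file, 0 otherwise. */
       |   static int is_denied(const char *path,
       |                        const char *from_line)
       |   {
       |       char entry[256];
       |       FILE *fp = fopen(path, "r");
       |       if (!fp)
       |           return 0;   /* no list, no ban */
       |       while (fgets(entry, sizeof(entry), fp)) {
       |           entry[strcspn(entry, "\r\n")] = '\0';
       |           if (entry[0] == '\0' || entry[0] == '#')
       |               continue;   /* blanks and comments */
       |           if (strstr(from_line, entry)) {
       |               fclose(fp);
       |               return 1;
       |           }
       |       }
       |       fclose(fp);
       |       return 0;
       |   }
       | 
       |   int main(void)
       |   {
       |       const char *from =
       |           "From: Jane Doe <jane@example.edu>";
       |       if (is_denied("CODERS_NON_GRATA.txt", from))
       |           printf("rejected: author is denylisted\n");
       |       else
       |           printf("not listed, review as usual\n");
       |       return 0;
       |   }
       | 
       | The hard part is the social one: who curates the list, and
       | on what evidence.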
        
         | dm319 wrote:
         | I think that is a sensible way to deal with this problem.
         | The Linux community is based on trust (as are a lot of
         | other very successful communities), and ideally we trust
         | until we have reason not to. But at that point we do need
         | to record who we don't trust. It is the same in academia
         | and sports.
        
           | Fordec wrote:
           | The tech community, especially in sub-niches, is far
           | smaller than people think it is. It's easy for it to
           | feel like a sea of tech when it's all behind a screen.
           | But reputation is a powerful thing in both directions.
           | 
           | There is also a more nuclear option, which I'm
           | specifically _not_ advocating for quite yet, but will
           | note nonetheless:
           | 
           | We're starting to see discourse regarding companies co-
           | opting open source projects for their own profit (
           | _cough_ Amazon) and license agreements that limit them
           | more than regular contributors. That has come about, at
           | the core of it, because of a demonstrated trend of bad
           | faith combined with a larger surface area of contact
           | with society. I could foresee a potential future trend
           | where _individuals_ who act in bad faith are excluded
           | from use of open source projects through their licenses.
           | Imagine if the license for some core infrastructure
           | tech, like a networking library or the Linux kernel,
           | banned "Joe Blackhat" from professional use. Now he
           | still could use it, but in reputable companies,
           | particularly larger ones with a legal department, that
           | person would be more of a liability than they are worth.
           | There could be huge professional consequences of a kind
           | that do not really exist in the industry today.
        
       | jtdev wrote:
       | University of Minnesota is involved with the Confucius
       | Institute... what could go wrong when a U.S. university accepts
       | significant funding from a hostile foreign power?
       | 
       | https://experts.umn.edu/en/organisations/confucius-institute
        
       | nitinreddy88 wrote:
       | Can anyone enlighten me as to why these were not caught in
       | the review process itself?
        
       | hzzhang wrote:
       | This type of research looks like trying to prove that
       | people will die if killed, by actually killing someone.
        
       | incrudible wrote:
       | From an infosec perspective, I think this is a knee-jerk
       | response to someone attempting a penetration test _in good
       | faith_ and failing.
       | 
       | The system appears to have worked, so that's _good news_
       | for Linux. On the other hand, now that the university has
       | been banned, they won't be able to find holes in the
       | process that may remain; that's _bad news_ for Linux.
        
       | cblconfederate wrote:
       | I guess someone had to do this unethical experiment, but on
       | the other hand, what is the value here? There's a high
       | chance someone would later find these "intentional bugs";
       | it's how open source works anyway. They just proved that
       | OSS is not military-grade, but nobody thought it was anyway.
        
         | goodpoint wrote:
         | > They just proved that OSS is not military-grade, but
         | nobody thought it was anyway
         | 
         | ...and yet FOSS and especially Linux is very widely used in
         | military devices including weapons.
         | 
         | Because it's known to be less insecure than most alternatives.
        
           | cblconfederate wrote:
           | I assume they don't use the bleeding edge though
        
             | goodpoint wrote:
             | As in most industrial, military, transportation and
             | banking environments, people tend to prefer very
             | stable and thoroughly tested platforms.
             | 
             | What HN would call "ancient".
        
         | Avamander wrote:
         | > but nobody thought so anyway
         | 
         | A lot of people claim that there are a lot of eyes on the
         | code and thus introducing vulnerabilities is unlikely.
         | This research has clearly bruised some egos badly.
        
           | geocar wrote:
           | > A lot of people claim that there's a lot of eyes on the
           | code.
           | 
           | Eric Raymond claimed so, and a lot of people repeated his
           | claim, but I don't think this is the same thing as "a lot of
           | people claim" -- and even if a lot of people claim something
           | that is obviously stupid, it doesn't make the thing less
           | obviously stupid, it just means it's less obvious to some
           | people for some reasons.
        
             | jodrellblank wrote:
             | Eric Raymond observed it, as a shift in software
             | development to take advantage of the wisdom of crowds.
             | I don't see that he speaks about security directly in
             | the original essay [2]. He's discussing the previously
             | held idea that stable software comes from highly
             | skilled developers doing deep and complex debugging
             | between releases, and proposing instead that if all
             | developers have different skillsets, then with a large
             | enough number of developers any bug will meet someone
             | who finds it an easy fix. Raymond is observing that
             | the Linux kernel development and contribution process
             | was designed as if Linus Torvalds believed this,
             | preferring ease of contribution and low-friction patch
             | commits to tempt more developers.
             | 
             | Raymond doesn't seem to claim anything like "there _are_
             | sufficient eyes to swat all bugs in the kernel ", or "there
             | are eyes on all parts of the code", or "'bugs' covers all
             | possible security flaws", or etc. He particularly mentions
             | uptime and crashing, so less charitably the statement is
             | "there are no crashing or corruption bugs so deep that a
             | large enough quantity of volunteers can't bodge some way
             | past them". Which leaves plenty of room for less used
             | subsystems to have nobody touching them if they don't cause
             | problems, patches that fix stability at the expense of
             | security, absence of careful design in some areas, the
             | amount of eyes needed being substantially larger than the
             | amount of eyes involved or available, that maliciously
             | submitted patches are different from traditional bugs, and
             | more.
             | 
             | [1] https://en.wikipedia.org/wiki/Linus%27s_law
             | 
             | [2] http://www.unterstein.net/su/docs/CathBaz.pdf
        
           | goodpoint wrote:
           | > A lot of people claim that there's a lot of eyes on the
           | code
           | 
           | And they are correct. Unfortunately, sometimes the
           | number of eyes is not enough.
           | 
           | The alternative is closed source, which has proven to be
           | orders of magnitude worse on many occasions.
        
           | varispeed wrote:
           | Nothing is perfect, but is it better than not having
           | any eyes? If anything, this shows that more eyes are
           | needed.
        
             | oefrha wrote:
             | The argument isn't having no eyes is better than some eyes.
             | Rather, it's commonly argued that open source is better for
             | security because there are more eyes on it.
             | 
             | What this research demonstrates is that you can quite
             | easily slip back doors into an open contribution
             | project (open contribution is often but not always
             | associated with open source) with supposedly the most
             | eyes on it. That's not true for any closed source
             | project, which is definitely not open contribution.
             | (You can go for an open source supply-chain attack,
             | but that's again a problem for open source.)
        
               | goodpoint wrote:
               | > it's commonly argued that open source is better for
               | security because there are more eyes on it.
               | 
               | > What this research demonstrates is that you can quite
               | easily slip back doors into an open contribution
               | 
               | To make a fair comparison, you should contrast it
               | with companies or employees placing backdoors into
               | their own closed source software.
               | 
               | It's extremely easy to do and equally difficult for
               | end users to spot.
        
               | dataflow wrote:
               | To make it a fair comparison you should contrast... an
               | inside job with an outside job?
        
               | goodpoint wrote:
               | This is an arbitrary definition of inside vs.
               | outside. You are implying that employees are trusted
               | and benign while other contributors are high-risk,
               | ignoring that an "outside" contributor might be
               | improving security with bug reports and patches.
               | 
               | For the end user, the threat model is about the presence
               | of a malicious function in some binary.
               | 
               | Regardless if the developers are an informal community, a
               | company, a group of companies, an NGO. They are all
               | "outside" to the end user.
               | 
               | Closed source software (e.g. phone apps) breach user's
               | trust constantly, e.g. with privacy breaching
               | telemetries, weak security and so on.
               | 
               | If Microsoft weakens encryption under pressure from
               | the NSA, is that "inside" or "outside"? What matters
               | to end users is the end result.
        
               | dataflow wrote:
               | The insiders are the maintainers. The outsiders are
               | everyone else. If this is an arbitrary definition to you
               | I... don't know what to tell you.
               | 
               | There's absolutely no reason everyone's threat model
               | _has_ to equate insiders with outsiders. If a stranger on
               | the street gives you candy, you'll probably check it
               | twice or toss it away out of caution. If a friend or
               | family member does the same thing, you'll probably trust
               | them and eat it. Obviously at the end of the day, your
               | concern is the same: you not getting poisoned. That
               | doesn't mean you can (or should...) treat your loved ones
               | like they're strangers. It's outright insane for most
               | people to live in that manner.
               | 
               | Same thing applies to other things in life, including
               | computers. Most people have some root of trust, and that
               | usually includes their vendors. There's no reason they
               | have to trust you and (say) Microsoft employees/Apple
               | employees/Linux maintainers equally. Most people, in
               | fact, should not do so. (And this should not be a
               | controversial position...)
        
               | goodpoint wrote:
               | The candy comparison is wrong on two levels.
               | 
               | 1) Unless you exclusively run software written by
               | close friends, both Linux and $ClosedOSCompany are
               | equally "outsiders".
               | 
               | 2) I regularly trust strangers to make the medicines
               | I ingest and to fly the airplanes I'm on. I would
               | not trust any person I know to fly the plane,
               | because they don't have the required training.
               | 
               | So, trust is not so simple, and that's why risk analysis
               | takes time.
               | 
               | > There's no reason they have to trust you and (say)
               | Microsoft employees/Apple employees/Linux maintainers
               | equally
               | 
               | ...and that's why plenty of critical systems around
               | the world, including weapons, run on Linux and BSD,
               | especially in countries that don't have the best
               | relations with the US.
        
               | oefrha wrote:
               | Recruiting a rogue employee is orders of magnitude harder
               | than receiving ostensibly benign patches in emails from
               | Internet randos.
               | 
               | A rogue company or employee is really a different
               | security problem that's not directly comparable to
               | drive-by patches (the closest comparison is a rogue
               | open source maintainer).
        
               | goodpoint wrote:
               | The reward for implanting a rogue employee is orders of
               | magnitude higher, with the ability to plant backdoors or
               | weaken security for decades.
               | 
               | And that's why nation-state attackers do it routinely.
        
               | oefrha wrote:
               | Yes, it's a different problem that's way less likely to
               | happen and potentially more impactful, hence not
               | comparable. And entities with enough resources can do the
               | same to open source, except with more risk; how much more
               | is very hard to say.
        
               | goodpoint wrote:
               | Despite everything, even NSA is an avid user of Linux for
               | their critical systems. That says a lot.
        
               | corty wrote:
               | Maybe for employees, but usually it is a contractor of a
               | contractor in some outsourced department replacing your
               | employees. I'd argue that in such common situations, you
               | are worse off than with randos on the internet sending
               | patches, because no-one will ever review what those
               | contractors commit.
               | 
               | Or you have a closed-source component you bought from
               | someone who pinky-swears to be following secure coding
               | practices and that their code is of course bug-free...
        
           | jeltz wrote:
           | They were only banned after accusing Greg of slander
           | when he called them out on their experiment and asked
           | them to stop. They were banned for being dishonest and
           | rude.
        
         | rlpb wrote:
         | > They just proved that OSS is not military-grade...
         | 
         | As if there is some other software that is "military-grade" by
         | the same measure? What definition are you using for that term,
         | anyway?
        
       | eecc wrote:
       | > I will not be sending any more patches due to the attitude that
       | is not only unwelcome but also intimidating to newbies and non
       | experts.
       | 
       | Woah, this attempt to incite wokes and cancellers is particularly
       | pernicious.
        
         | person101x wrote:
         | Cancel Linux! Anyone?
        
           | laurent92 wrote:
             | One may wonder whether the repeated attacks on Linus
             | over the tone he used, until he had to take a break,
             | aren't a way to cut down Linux's ability to perform by
             | cutting off its head, which would be absolutely
             | excellent for closed-source companies and Amazon.
             | 
             | Imagine: if Linux loses its agility, we may have to
             | either "use Windows because it has continued upgrades"
             | or "purchase Amazon's version of Linux", which would
             | be the only ones properly maintained and thus
             | certified for, say, government or GDPR purposes.
             | 
             | (I'm paying Debian, but I'm afraid that might not be
             | enough.)
        
             | bluGill wrote:
             | There are always the BSDs if something happens. Not
             | quite as popular, but the major ones are good enough
             | to take over completely (as in: if you thought someone
             | would kill you for using Linux, you could replace all
             | your Linux with some BSD by the end of the day and in
             | a month forget about it). Don't take that as "better"
             | - that is a different discussion - but they are good
             | enough to substitute and move on for the most part.
        
       | dsr12 wrote:
       | Plonk is a Usenet jargon term for adding a particular poster to
       | one's kill file so that poster's future postings are completely
       | ignored.
       | 
       | Link: https://en.wikipedia.org/wiki/Plonk_(Usenet)
        
       | uglygoblin wrote:
       | If the researchers' desired outcome is more vigilance over
       | patches and contributions, I guess they might achieve that
       | outcome?
        
       | andi999 wrote:
       | Somebody should have told them that, since Microsoft is now
       | pro-open-source, this wouldn't land any of them a cushy
       | position after the blowup at the uni.
        
       | davidkuhta wrote:
       | I think the root of the problem can be traced back to the
       | researcher's erroneous claim that "This was _not_ human research
       | ".
        
       | 1970-01-01 wrote:
       | So be it. Greg is a very trusted member, and has overwhelming
       | support from the community for swinging the banhammer. We have a
       | living kernel to maintain. Minnesota is free to fork the kernel,
       | build their own, recreate the patch process, and send suggestions
       | from there.
        
       | DonHopkins wrote:
       | Shouldn't the university researchers compensate their human
       | guinea pigs with some nice lettuce?
        
       | gnfargbl wrote:
       | From https://lore.kernel.org/linux-
       | nfs/CADVatmNgU7t-Co84tSS6VW=3N...,
       | 
       |  _> A lot of these have already reached the stable trees._
       | 
       | If the researchers were trying to prove that it is possible to
       | get malicious patches into the kernel, it seems like they
       | succeeded -- at least for an (insignificant?) period of time.
        
         | op00to wrote:
         | I wonder whether they broke any laws intentionally putting bugs
         | in software that is critical to national security.
        
         | testplzignore wrote:
         | It may be unethical from an academic perspective, but I like
         | that they did this. It shows there is a problem with the review
         | process if it is not catching 100% of this garbage. Actual
         | malicious actors are certainly already doing worse and maybe
         | succeeding.
         | 
         | In a roundabout way, this researcher has achieved their goal,
         | and I hope they publish their results. Certainly more
         | meaningful than most of the drivel in the academic paper mill.
        
           | johnvaluk wrote:
           | The paper indicates that the goal is to prove that OSS in
           | particular is vulnerable to this attack, but it seems that
           | any software development ecosystem shares the same
           | weaknesses. The choice of an OSS target seems to be one of
           | convenience as the results can be publicly reviewed and this
           | approach probably avoids serious consequences like arrests or
           | lawsuits. In that light, their conclusions are misleading,
           | even if the attack is technically feasible. They might get
           | more credibility if they back off the OSS angle.
        
             | bluGill wrote:
             | Not really. You can't introduce bugs like this into my
             | company's code base, because the code is protected from
             | random people on the internet accessing it. So your first
             | step would be to find an exploitable bug in GitHub, but
             | then you are bypassing peer review as well to get in.
             | (Actually, I think we would notice that, but that is more
             | because of a process we happen to have that most don't.)
        
               | symlinkk wrote:
               | Actually you can, just get hired first.
        
               | bluGill wrote:
               | Exactly my point.
        
           | scbrg wrote:
           | I'm not sure what we learned. Were we under the impression
           | that it's impossible to introduce new (security) bugs in
           | Linux?
        
             | Avamander wrote:
             | > Were we under the impression that it's impossible to
             | introduce new (security) bugs in Linux?
             | 
             | I've heard it many times that they're thoroughly reviewed
             | and back doors are very unlikely. So yes, some people were
             | under the impression.
        
               | pertymcpert wrote:
               | uhhh...from who?
        
               | yjftsjthsd-h wrote:
               | And this _was_ caught, albeit after some delay, so that
               | impression won't change.
        
           | jnxx wrote:
           | > It shows there is a problem with the review process if it
           | is not catching 100% of this garbage.
           | 
           | Does that add anything new to what we know since the creation
           | of the "obfuscated C contest" in 1984?
        
           | mratsim wrote:
           | By your logic, you would allow recording people without
           | their consent, experimenting on PTSD by inducing PTSD
           | without people's consent, or medical experimentation
           | without the subject's consent.
           | 
           | Try sneaking into the White House and, when you get caught,
           | telling them "I was just testing your security procedures".
        
           | chrisjc wrote:
           | Unable to follow the kernel thread (stuck in an age between
           | Twitter and newsgroups, sorry), but...
           | 
           | did these "researchers" in any way demonstrate that they
           | were going to come clean about what they had done before
           | their "research" made it anywhere close to release/GA?
        
           | s_dev wrote:
           | >It shows there is a problem with the review process if it is
           | not catching 100% of this garbage
           | 
           | What review process catches 100% of garbage? It's a
           | mechanism to catch 99% of garbage -- otherwise the Linux
           | kernel would have no bugs.
        
             | cameronh90 wrote:
             | It does raise questions though. Should there be a more
             | formal scrutiny process for less trusted developers? Some
             | kind of background check process?
             | 
             | Runs counter to how open source is ideally written, but for
             | such a core project, perhaps stronger checks are needed.
        
               | burnished wrote:
               | These researchers were in part playing on the reputation
               | of their university, right? Now people at that university
               | are no longer trusted. I'm not sure a more formal
               | scrutiny process will bring about better results, I think
               | it would be reasonable to see if the university ban is
               | sufficient to discourage similar behavior in the future.
        
           | happymellon wrote:
           | > It shows there is a problem with the review process if it
           | is not catching 100% of this garbage.
           | 
           | It shows nothing of the sort. No review process is 100%
           | foolproof, and open source means that everything can be
           | audited if it is important to you.
           | 
           | The other option is closed-sourcing everything, and I can
           | guarantee that review processes there let stuff through
           | too, even if it's only "to meet deadlines" -- and you will
           | be unlikely to be able to audit it.
        
           | jnxx wrote:
           | It rather shows a very serious problem with the incentives
           | present in scientific research, and a poisonous culture
           | which evidently rewards malicious behavior. Science enjoys
           | a lot of freedom and trust from citizens, but this trust
           | must not be misused. If some children threw fireworks under
           | your car, or mixed sugar into the gas tank, just to see how
           | you react, that would have negative community effects, too.
           | Adult scientists should be totally aware of that.
           | 
           | In effect, this will mean that even valuable contributions
           | from universities are viewed with more suspicion, which
           | will be very damaging in the long run.
        
         | st_goliath wrote:
         | I have tangentially followed this debacle unfold for a while,
         | and this particular thread has now led to heated debates on
         | some IRC channels I'm on.
         | 
         | While it is _maybe_ "scientifically interesting",
         | intentionally introducing bugs into Linux that could
         | potentially make it into production systems while work on
         | this paper is going on could IMO be described as utterly
         | reckless _at best_.
         | 
         | Two messages down in the same thread, it more or less
         | culminates with the university e-mail suffix being banned from
         | several kernel mailing lists and associated patches being
         | removed[1], which might be an appropriate response to
         | discourage others from similar stunts "for science".
         | 
         | [1] https://lore.kernel.org/linux-
         | nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
        
           | gspr wrote:
           | > While it is maybe "scientifically interesting",
           | intentionally introducing bugs into Linux that could
           | potentially make it into production systems while work on
           | this paper is going on, could IMO be described as utterly
           | reckless at best.
           | 
           | I agree. I would say this is kind of a "human process" analog
           | of your typical computer security research, and that this
           | behavior is akin to black hats exploiting a vulnerability.
           | Totally not OK as research, and totally reckless!
        
             | chrisjc wrote:
             | Out of interest, is there some sort of automated way to
             | test this weak link that is human trust? (I understand
             | how absurd this question is.)
             | 
             | It's awfully scary to think about how vulnerabilities might
             | be purposely introduced into this important code base (as
             | well as many other) only to be exploited at a later date
             | for an intended purpose.
             | 
             | Edit: NM, see st_goliath response below
             | 
             | https://news.ycombinator.com/item?id=26888538
        
             | gnfargbl wrote:
             | Yep. To take a physical-world analogy: Would it be okay to
             | try and prove the vulnerability of a country's water supply
             | by intentionally introducing a "harmless" chemical into the
             | treatment works, without the consent of the works owners?
             | Or would that be a _go directly to jail_ sort of an
             | experiment?
             | 
             | I share the researchers' intellectual curiosity about
             | whether this would work, but I don't see how a properly-
             | informed ethics board could ever have passed it.
        
               | [deleted]
        
               | rcxdude wrote:
               | The US Navy actually did basically this with some
               | pathogens in the 50s:
               | https://en.wikipedia.org/wiki/Operation_Sea-Spray ; the
               | idea of "ethical oversight" was not something a lot of
               | scientists operated under in those days.
        
               | md_ wrote:
               | https://www.theonion.com/reporters-expose-airport-
               | security-l...
        
               | Avamander wrote:
               | > Would it be okay to try and prove the vulnerability of
               | a country's water supply by intentionally introducing a
               | "harmless" chemical into the treatment works, without the
               | consent of the works owners?
               | 
               | The question should also be through whose neglect they
               | gained access to the "water supply", if you truly want
               | to make this comparison.
        
               | corty wrote:
               | The question is also: "Will this research have
               | benefits?" If the conclusion is "well, you can get
               | access to the water supply, and the only means to
               | prevent it is to closely guard every brook, lake and
               | river, needing half the population as guards", then it
               | is useless. And taking risks for useless research is
               | unethical, no matter how minor those risks might be.
        
               | Avamander wrote:
               | > If the conclusion is "well, you can get access to the
               | water supply and the only means to prevent it is to
               | closely guard every brook, lake and river, needing half
               | the population as guards".
               | 
               | I don't think that was the conclusion.
        
               | corty wrote:
               | And what was? I cannot find constructive criticism in the
               | related paper or any of your comments.
        
           | konschubert wrote:
           | Are there any measures being discussed that could make such
           | attacks harder in future?
        
             | PKop wrote:
             | Force the university to take responsibility for screening
             | their researchers; i.e. a blanket ban, a scorched-earth
             | approach punishing the entire university's reputation, is
             | a good start.
             | 
             | People want to claim these are lone rogue researchers and
             | that good people at the university shouldn't be punished,
             | but this is the _only_ way you can rein in these types of
             | rogue individuals: by putting the collective reputation
             | of the whole university on the line to police their own
             | people. Every action of individual researchers _must_ be
             | assumed to be putting the reputation of the university as
             | a whole on the line. This is the cost of letting
             | individuals operate within the sphere of the university.
             | 
             | Harsh, "overreaction" punishment is the only solution.
        
             | st_goliath wrote:
             | Such as? Should we assume that every patch was submitted in
             | bad faith and tries to sneakily introduce bugs?
             | 
             | The whole idea of the mailing list based submission process
             | is that it allows others on the list to review your patch
             | sets and point out obvious problems with your changes and
             | discuss them, before the maintainer picks the patches up
             | from the list (if they don't see any problem either).
             | 
             | As I pointed out elsewhere, there are already test farms
             | and static analysis tools in place. On some MLs you might
             | occasionally see auto generated mails that your patch set
             | does not compile under configuration such-and-such, or that
             | the static analysis bot found an issue. This is already a
             | thing.
             | 
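             | As a concrete flavor of what those bots catch (an
             | illustrative sketch of mine, not actual kernel code): a
             | memory leak on an error path, the sort of "minor issue"
             | many of the UMN patches claimed to fix.
             | 
             |   #include <stdlib.h>
             | 
             |   /* Illustrative only: the error path returns without
             |    * freeing buf -- the kind of leak a static analysis
             |    * bot flags automatically. */
             |   int read_config(int fd, int (*fill)(int, char *))
             |   {
             |       char *buf = malloc(256);
             |       if (!buf)
             |           return -1;
             |       if (fill(fd, buf) < 0)
             |           return -1;   /* BUG: leaks buf */
             |       free(buf);
             |       return 0;
             |   }
             | 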
             | What happened here is basically a _con_ in the patch
             | review process. IRL con men can scam their marks because
             | most people assume, when they leave the house, that the
             | majority of the others outside aren't out to get them.
             | Except sometimes they run into one for whom the
             | assumption doesn't hold, and end up parted from their
             | money.
             | 
             | For the paper, bypassing the review step worked _in some
             | instances of the many patches they submitted_ because a)
             | humans aren't perfect, and b) they have a mindset that,
             | most of the time, people submitting bug fixes do so in
             | good faith.
             | 
             | Do you maintain a software project? On GitHub perhaps? What
             | do you do if somebody opens a pull request and says "I
             | tried such and such and then found that the program crashes
             | here, this pull request fixes that"? When reviewing the
             | changes, do you immediately, by default jump to the
             | assumption that they are evil, lying and trying to sneak a
             | subtle bug into your code?
             | 
             | Yes, I know that this review process isn't _perfect_,
             | that _there are problems_, and I'm not trying to dismiss
             | any concerns.
             | 
             | But what technical measure would you propose that can
             | _effectively_ stop con men?
        
               | dash2 wrote:
               | > Such as? Should we assume that every patch was
               | submitted in bad faith and tries to sneakily introduce
               | bugs?
               | 
               | Do the game theory. If you _do_ assume that, you'll
               | always be wrong. But if you _don't_ assume it, you
               | won't always be right.
        
               | oefrha wrote:
               | > Should we assume that every patch was submitted in bad
               | faith and tries to sneakily introduce bugs?
               | 
               | Yes, especially for critical projects?
               | 
               | > Do you maintain a software project? On GitHub perhaps?
               | What do you do if somebody opens a pull request and says
               | "I tried such and such and then found that the program
               | crashes here, this pull request fixes that"? When
               | reviewing the changes, do you immediately, by default
               | jump to the assumption that they are evil, lying and
               | trying to sneak a subtle bug into your code?
               | 
               | I don't jump to the conclusion that the random
               | contributor is evil. I do however think about the
               | potential impact of the submitted patch, security or not,
               | and I do assume a random contributor can sneak in subtle
               | bugs, usually not intentionally, but simply due to a lack
               | of understanding.
        
               | st_goliath wrote:
               | > > Should we assume that every patch was submitted in
               | bad faith and tries to sneakily introduce bugs?
               | 
               | >
               | 
               | > Yes, especially for critical projects?
               | 
               | People don't act the way I described intentionally, or
               | because they are dumb.
               | 
               | Even if you go in with the greatest paranoia and the
               | best of intentions, most of the time most of the other
               | people _don't act maliciously_, and your paranoia
               | eventually returns to a reasonable level (i.e. assuming
               | that most people might not be malicious, but are also
               | not infallible).
               | 
               | It's a kind of fatigue. It's simply human. No matter how
               | often you say "DUH of course they should".
               | 
               | In my entire life, I have only met a single guy who
               | managed to keep that "everybody else is potentially evil"
               | attitude up over time. IIRC he was eventually prescribed
               | something with Lithium salts in it.
        
               | konschubert wrote:
               | > Such as? Should we assume that every patch was
               | submitted in bad faith and tries to sneakily introduce
               | bugs?
               | 
               | I'm not a maintainer but naively I would have thought
               | that the answer to this is "Yes".
               | 
               | I didn't mean any disrespect. I didn't write "I can't
               | believe they haven't implemented a perfect technical
               | process that fully prevents these attacks".
               | 
               | I just asked if there are any ideas being discussed.
               | 
               | Two things can be true at the same time: 1. What the
               | "researchers" did was unethical. 2. They uncovered
               | security flaws.
        
             | rodgerd wrote:
             | The University and researchers involved are now default-
             | banned from submitting.
             | 
             | So yes.
        
             | coldpie wrote:
             | The only real fix for this is to improve tooling and/or
             | programming language design to make these kinds of exploits
             | more difficult to slip past maintainers. Lots of folks are
             | working in that space (see recent discussion around Rust),
             | but it's only becoming a priority now that we're seeing the
             | impact of decades of zero consideration for security. It'll
               | take a while to steer this ship in the right direction,
               | and in the meantime the world continues to turn.
        
           | md_ wrote:
           | I'm confused. The cited paper contains this prominent
           | section:
           | 
           | Ensuring the safety of the experiment. In the experiment, we
           | aim to demonstrate the practicality of stealthily introducing
           | vulnerabilities through hypocrite commits. Our goal is not to
           | introduce vulnerabilities to harm OSS. Therefore, we safely
           | conduct the experiment to make sure that the introduced UAF
           | bugs will not be merged into the actual Linux code. In
           | addition to the minor patches that introduce UAF conditions,
           | we also prepare the correct patches for fixing the minor
           | issues. We send the minor patches to the Linux community
           | through email to seek their feedback. Fortunately, there is a
           | time window between the confirmation of a patch and the
           | merging of the patch. Once a maintainer confirmed our
           | patches, e.g., an email reply indicating "looks good", we
           | immediately notify the maintainers of the introduced UAF and
           | request them to not go ahead to apply the patch.
           | 
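           | (For readers who don't know the jargon: UAF means
           | use-after-free. A toy sketch of my own -- not one of the
           | actual patches -- of how a plausible-looking "leak fix"
           | can introduce one:)
           | 
           |   #include <stdlib.h>
           | 
           |   struct obj { int refs; };
           | 
           |   /* The "fix" frees obj on the failure path to plug a
           |    * supposed leak... */
           |   int setup(struct obj *o, int fail)
           |   {
           |       if (fail) {
           |           free(o);
           |           return -1;
           |       }
           |       return 0;
           |   }
           | 
           |   /* ...but an existing caller still touches o after a
           |    * failure, which is now a use-after-free. */
           |   int caller(struct obj *o)
           |   {
           |       int err = setup(o, 1);
           |       o->refs--;   /* UAF when setup() failed */
           |       return err;
           |   }
           | 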
           | Are you saying that despite this, these malicious commits
           | made it to production?
           | 
           | Taking the authors at their word, it seems like the biggest
           | ethical consideration here is that of potentially wasting the
           | time of commit reviewers--which isn't nothing by any stretch,
           | but is a far cry from introducing bugs in production.
           | 
           | Are the authors lying?
        
             | DetroitThrow wrote:
             | >Are you saying that despite this, these malicious commits
             | made it to production?
             | 
             | Vulnerable commits reached stable trees as per the
             | maintainers in the above email exchange, though the
             | vulnerabilities may not have been released to users yet.
             | 
             | The researchers themselves acknowledge in the above email
             | exchange that the patches were accepted, so it's hard to
             | believe that they're being honest, that they're fully
             | aware of their ethics violations and the vulnerabilities
             | they introduced, or that they would have prevented the
             | patches from being released without gkh's intervention.
        
               | md_ wrote:
               | Ah, I must've missed that. I do see people saying patches
               | have reached stable trees, but the researchers' own email
               | is missing (I assume removed) from the archive. Where did
               | you find it?
        
               | DetroitThrow wrote:
               | It's deleted, so I was going off of the quoted text in
               | Greg's response showing that their patches were being
               | submitted without any caveat of "don't let this reach
               | stable".
               | 
               | I trust Greg to have not edited or misconstrued their
               | response.
               | 
               | https://lore.kernel.org/linux-
               | nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
        
               | md_ wrote:
               | Yeah, I saw that. But the whole thing is a bit too
               | unclear to me to know what happened.
               | 
               | I'm not saying this is innocent, but it's not at all
               | clear to me that vulnerabilities were deliberately
               | introduced with the goal of allowing them to reach a
               | release.
               | 
               | Anyway, like I said, too unclear for me to have an
               | opinion.
        
               | DetroitThrow wrote:
               | I'm a little confused about what's unclear if you
               | happened to see that comment. As mentioned elsewhere in
               | this thread, the bad actors state in a clarification
               | paper that no faulty commits reached a stable branch,
               | state in the original paper that no patches were
               | applied at all and that the research was essentially
               | all email communication, AND worded it such that they
               | "discovered" bad commits rather than introduced them
               | (seemingly just obtuse enough for a review board
               | exemption on human subject research) - despite
               | submitting patches, acknowledging they submitted
               | commits, and Leon and Greg finding several vulnerable
               | commits that reached stable branches and releases. For
               | example:
               | https://github.com/torvalds/linux/commit/8e949363f017
               | 
               | While I'm sure a room of people might find it useful to
               | psychoanalyze their "unclear" but probably malicious
               | intent, their actions are clearly harmful to
               | researchers, Linux contributors, direct Linux users,
               | and indirect Linux users (such as the billions of
               | people who trust Linux systems to store or process
               | their PII data).
        
               | londons_explore wrote:
               | The linked patch is pointless, but does not introduce a
               | vulnerability.
               | 
               | Perhaps the researchers see no harm in letting that be
               | released.
        
               | DetroitThrow wrote:
               | The linked one is harmless (well, it introduces a race
               | condition, which is inherently harmful to leave in the
               | code, but I suppose for the sake of argument we can
               | pretend that it can't lead to a vulnerability), but the
               | maintainers mention vulnerabilities of various severity
               | in other patches managing to reach stable. If they were
               | not aware of the severity of their patches, then
               | clearly they needed to be working with maintainers
               | experienced with security vulnerabilities in a branch,
               | who would help prevent harmful patches from reaching
               | stable.
               | 
               | It might be less intentionally harmful if we presume they
               | didn't know other patches introduced vulnerabilities, but
               | this is also why this research methodology is extremely
               | reckless and frustrating to read about, when this could
               | have been done with guard rails where they were needed
               | without impacting the integrity of the results.
        
             | p49k wrote:
             | They aren't lying, but their methods are still dangerous,
             | despite their implying the contrary. Their approach
             | requires perfection from both the submitter and the
             | reviewer.
             | 
             | The submitter has to remember to send the "warning, don't
             | apply patch" mail in the short time window between
             | confirmation and merging. What happens if one of the
             | students doing this work gets sick and misses some days of
             | work, withdraws from the program, or just completely
             | forgets to send the mail?
             | 
             | What if the reviewer doesn't see the mail in time or it
             | goes to spam?
        
             | metalliqaz wrote:
             | Even if they didn't, they wasted the community's time.
             | 
             | I think they are saying that it's possible that some code
             | was branched and used elsewhere, or simply compiled into a
             | running system by a user or developer.
        
               | md_ wrote:
               | Agreed on the time issue -- as I noted above. I think
               | it's still of a pretty different cost character from
               | actually allowing malicious code to make it to
               | production, but (as you note) it's hard to be sure that
               | this would not make it to some non-standard branch as
               | well, so there are real risks in this approach.
               | 
               | Anyway, my point wasn't that this is free of ethical
               | concerns, but it seems like they put _some_ thought into
               | how to reduce the potential harm. I'm undecided if that's
               | enough.
        
               | DetroitThrow wrote:
               | > I'm undecided if that's enough.
               | 
               | I don't think it's anywhere close to enough and I think
               | their behavior is rightly considered reckless and
               | unethical.
               | 
               | They should have contacted the leadership of the project
               | to announce to maintainers that anonymous researchers may
               | experiment on the contribution process, allowed
               | maintainers to opt out, and worked with a separate
               | maintainer with knowledge of the project to ensure
               | harmful commits were tracked and reversions were applied
               | before reaching stable branches.
               | 
               | Instead, their lack of ethical consideration throughout
               | this process has been disappointing and harmful to the
               | scientific and open source communities, and it goes
               | beyond the nature of the research itself: they
               | previously received an IRB exemption by classifying
               | this as non-human research, potentially misleading UMN
               | about the subject matter and impact.
        
             | skywhopper wrote:
             | The particular patches being complained about seem to be
             | subsequent work by someone on the team that wrote that
             | paper, submitted since the paper was published -- i.e.,
             | follow-up work.
        
             | gnfargbl wrote:
             | It seems that Greg K-H has now released a patch of "the
             | easy reverts" of umn.edu commits... all 190 of them.
             | https://lore.kernel.org/lkml/20210421130105.1226686-1-gregkh...
             | 
             | The final commit in the reverted list
             | (d656fe49e33df48ee6bc19e871f5862f49895c9e) is originally
             | from 2018-04-30.
             | 
             | EDIT: Not all of the 190 reverted commits are obviously
             | malicious:
             | 
             | https://lore.kernel.org/lkml/20210421092919.2576ce8d@gandal
             | f...
             | 
             | https://lore.kernel.org/lkml/20210421135533.GV8706@quack2.s
             | u...
             | 
             | https://lore.kernel.org/lkml/CAMpxmJXn9E7PfRKok7ZyTx0Y+P_q3
             | b...
             | 
             | https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7
             | c...
             | 
             | What a mess these guys have caused.
        
             | bmcahren wrote:
             | This is one of the commits that went live with a
             | "built-in bug", according to Leon:
             | 
             | https://github.com/torvalds/linux/commit/8e949363f017
        
               | pxx wrote:
               | I'm not convinced. Yes, there's a use after free (since
               | fixed), but it's there before the patch too.
        
             | aglavine wrote:
             | 'race conditions' like this one are inherently dangerous.
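             | 
             | A toy illustration of why (my own sketch, nothing to do
             | with the actual commit): two threads doing an unlocked
             | check-then-act on shared state can interleave badly.
             | 
             |   #include <pthread.h>
             |   #include <stdlib.h>
             | 
             |   /* Both threads can see NULL and both allocate (one
             |    * allocation leaks), and the unsynchronized accesses
             |    * to shared are a data race: undefined behavior. */
             |   char *shared;
             | 
             |   void *worker(void *arg)
             |   {
             |       if (shared == NULL)      /* check...    */
             |           shared = malloc(16); /* ...then act */
             |       shared[0] = 'x';         /* racy write  */
             |       return NULL;
             |   }
             | 
             |   int main(void)
             |   {
             |       pthread_t a, b;
             |       pthread_create(&a, NULL, worker, NULL);
             |       pthread_create(&b, NULL, worker, NULL);
             |       pthread_join(a, NULL);
             |       pthread_join(b, NULL);
             |       return 0;
             |   }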
        
             | azernik wrote:
             | GKH, in that email thread, _did_ find commits that made
             | it to production; most likely the authors just weren't
             | following up very closely.
        
           | alrs wrote:
           | If they're public IRC channels, do you mind mentioning them
           | here? I'm trying to find the remnant. :)
        
           | d0100 wrote:
           | I assume that having these go into production could make
           | the authors "hackers" according to the law, no?
           | 
           | Haven't whitehat hackers doing unsolicited pen-testing been
           | prosecuted in the past?
        
         | philliphaydon wrote:
         | I think that the patches that hit stable were actually OK,
         | based on the apparent intent to "test" the maintainers,
         | notify them of the bug, and submit the valid patch
         | afterwards; but the thought process from the maintainers is:
         | 
         | "if they are attempting to test us by first submitting
         | malicious patches as an experiment, we can't accept what we
         | have accepted as not being malicious and so it's safer to
         | remove them than to keep them".
         | 
         | my 2c.
        
           | jnxx wrote:
           | The earlier patches could in theory be OK, but they also
           | might combine with other or later patches which introduce
           | bugs more stealthily. Bugs can be very subtle.
           | 
           | Obviously, trust should not be the only thing that
           | maintainers rely on, but this is a social endeavor, and
           | trust always matters in such endeavors. Doing business with
           | people you can't trust makes no sense. Without trust, I
           | agree fully that it is not worth the maintainers' time to
           | accept anything from such people, or from that university.
           | 
           | And the fact that one can do damage with malicious code is
           | nothing new at all. It is well known that bad code can
           | ultimately kill people. It is also more than obvious that I
           | _can_ ring my neighbor's doorbell, ask him or her for a cup
           | of sugar, and then hit them over the head with a hammer. Or
           | people can go to a school and shoot children. Does anyone
           | in their right mind have to do such damage in order to
           | prove something? No. Does it prove anything? No. Does the
           | fact that some people do things like that "prove" that
           | society is wrong and that trust and collaboration are
           | wrong? What idiocy - of course not!
        
         | ignoranceprior wrote:
         | It is worrying to consider that, in all likelihood, some
         | people with actually malicious motives, rather than clinical
         | academic curiosity, have introduced serious security bugs
         | into popular FOSS projects such as the Linux kernel.
         | 
         | Before this study came out, I'm pretty sure there were already
         | known examples of this happening, and it would have been
         | reasonable to assume that some such vulnerabilities existed.
         | But now we have even more reason to worry, given that they
         | succeeded in doing this multiple times as a two-person team
         | without real institutional backing. Imagine what a
         | state-level actor could do.
        
         | wang_li wrote:
         | There's no research going on here. Everyone knows buggy patches
         | can get into a project. Submitting intentionally bad patches
         | adds nothing beyond grandstanding. They could perform analysis
         | of review/acceptance by looking at past patches that introduced
         | bugs without being the bad actors that they apparently are.
         | 
         | From FOSDEM 2014: "NSA operation ORCHESTRA annual status
         | report". It's pretty entertaining and illustrates that this
         | is nothing new.
         | 
         | https://archive.fosdem.org/2014/schedule/event/nsa_operation...
         | https://www.youtube.com/watch?v=3jQoAYRKqhg
        
           | jnxx wrote:
           | > They could perform analysis of review/acceptance by looking
           | at past patches that introduced bugs without being the bad
           | actors that they apparently are.
           | 
           | Very good point.
        
       | arua442 wrote:
       | Disgusting.
        
       | whack wrote:
       | Let me play devil's advocate here. Such pen-testing is absolutely
       | essential to the safety of our tech ecosystem. Countries like
       | Russia, China, and the USA are, without a doubt, doing exactly
       | the same thing that this UMN professor is doing. Except that
       | instead of
       | writing a paper about it, they are going to abuse the
       | vulnerabilities for their own nefarious purposes.
       | 
       | Conducting such pen-tests, and then publishing the results
       | openly, helps raise awareness about the need to assume-bad-faith
       | in all OSS contributions. If some random grad student was able to
       | successfully inject 4 vulnerabilities before finally getting
       | caught, I shudder to think how many vulnerabilities were
       | successfully injected, and hidden, by various nation-states. In
       | order to better protect ourselves from cyberwarfare, we need to
       | be far more vigilant in maintaining OSS.
       | 
       | Ideally, such research projects should gain prior approval from
       | the project maintainers. But even though they didn't, this paper
       | is still a net-positive contribution to society, by highlighting
       | the need to take security more seriously when accepting OSS
       | patches.
        
         | 1970-01-01 wrote:
         | An excellent point; however, without prior approval and
         | safety mechanisms, their acts were absolutely malicious.
         | Treating them as anything but malicious, even if "for the
         | greater good of OSS", sets a horrible precedent. "The road to
         | hell is paved with good intentions" is the quote that comes
         | to mind. Minnesota got exactly what they deserve.
        
         | pertymcpert wrote:
         | No, this did not teach anyone anything new except that members
         | of that UMN group are untrustworthy. Nothing else new was
         | learned here at all.
        
         | marcinzm wrote:
         | > this paper is still a net-positive contribution to society
         | 
         | There are claims that one vulnerability got committed and was
         | not reverted by the research group. In fact, the research
         | group didn't even notice that it got committed. So I'd argue
         | that this was a net negative to society, because it
         | introduced a live security vulnerability into Linux.
        
         | paxys wrote:
         | Pen testing is essential, yes, but there are correct and
         | incorrect ways to do it. This was the latter. In fact attempts
         | like this harm the entire industry because it reflects poorly
         | on researchers/white hat hackers who _are_ doing the right
         | thing. For example, making sure your testing is non-destructive
         | is the bare minimum, as is promptly informing the affected
         | party when you find an exploit. These folks did neither.
        
         | dm319 wrote:
         | The world works better without everyone being untrusting of
         | everyone else, and this is especially true of large
         | collaborative projects. The same goes in science - it has been
         | shown over and over again that if researchers submit
         | deliberately fraudulent work, it is unlikely to be picked up by
         | peer review. Instead, it is simply deemed fraud, and
         | researchers who do that face heavy consequences, including
         | jail time.
         | 
         | Without trust, these projects will fail. Research has shown
         | that even in the presence of untrustworthy actors, trusting is
         | usually still beneficial [1][2]. Indeed, "trust until you
         | have reason to believe you shouldn't" has been found to be an
         | optimal strategy [2], so G K-H is responding exactly
         | appropriately here. The Linux community trusted them until
         | they didn't, and now they are unlikely to trust them going
         | forward.
         | 
         | [1] https://www.nature.com/articles/s41598-019-55384-4#Sec13
         | [2] https://medium.com/greater-than-experience-design/game-
         | theor...
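         | 
         | (The strategy in [2] sounds like classic tit-for-tat. A toy
         | sketch of my own -- not from the cited papers -- of why
         | "trust first, withdraw trust after betrayal" holds up even
         | against a pure defector:)
         | 
         |   #include <stdio.h>
         | 
         |   /* Iterated prisoner's dilemma payoffs: mutual
         |    * cooperation 3/3, mutual defection 1/1, lone defector
         |    * 5, sucker 0. (0 = cooperate, 1 = defect) */
         |   static int payoff(int me, int them)
         |   {
         |       if (!me && !them) return 3;
         |       if (me && them)   return 1;
         |       return me ? 5 : 0;
         |   }
         | 
         |   int main(void)
         |   {
         |       int tft = 0, bad = 0;
         |       int last_seen = 0;  /* tit-for-tat trusts first */
         |       for (int round = 0; round < 100; round++) {
         |           int t = last_seen; /* mirror their last move */
         |           tft += payoff(t, 1);
         |           bad += payoff(1, t);
         |           last_seen = 1;     /* they always defect */
         |       }
         |       printf("tit-for-tat %d vs always-defect %d\n",
         |              tft, bad);
         |       /* tit-for-tat gives up only the first round, then
         |        * both settle at the low mutual-defection payoff:
         |        * trusting first is cheap, and trust is withdrawn
         |        * after one betrayal -- much like the ban here. */
         |       return 0;
         |   }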
        
           | whack wrote:
           | If an open-source project adopts a trusting attitude,
           | nation-states can and will take advantage of this to
           | inject dangerous vulnerabilities. Telling university
           | professors not to pen-test OSS does not stop nation-states
           | from doing the same thing secretly. It just sweeps the
           | problem under the rug.
           | 
           | Would I prefer to live in a world where everyone behaved in a
           | trustworthy manner in OSS? Absolutely. But that is not the
           | world we live in. A professor highlighting this fact, and
           | forcing people to realize the dangers in trusting people,
           | does more good than harm.
           | 
           | --------------
           | 
           | On a non-serious and humorous note, this episode reminds me
           | of the Sokal hoax. Most techies/scientists I've met were
           | very appreciative of that hoax, even though it wasn't
           | conducted with pre-approval from the subjects. It is
           | interesting to see the shoe on the other foot.
           | 
           | https://en.wikipedia.org/wiki/Sokal_affair
        
           | colinmhayes wrote:
           | If that's the model Linux uses there's no doubt in my mind
           | that the US, China, and probably Russia have vulnerabilities
           | in the kernel.
        
         | MeinBlutIstBlau wrote:
         | Then do it through pen testing companies. Not official channels
         | masquerading as research.
        
         | EarthLaunch wrote:
         | It's always useful to search for, and upvote, a reasonable
         | alternative opinion. Thank you for posting it.
         | 
         | There are a lot of people reading these discussions who aren't
         | taking 'sides' but trying to think about the subject. Looking
         | at different angles helps with thinking.
        
       | shadowgovt wrote:
       | Academic reputation has always mattered, but I can't recall the
       | last time I've seen an example as stark as "I attend a university
       | that is forbidden from submitting patches to the Linux kernel."
        
       | kerng wrote:
       | Where does such "research" end... sending phishing mails to all
       | US citizens to see how many passwords can be stolen?
        
       | squarefoot wrote:
       | "Yesterday, I took a look on 4 accepted patches from Aditya and 3
       | of them added various severity security "holes"."
       | 
       | Sorry for being the paranoid one here, but reading this raises a
       | lot of warning flags.
        
       | balozi wrote:
       | Uff da! I really do hope the administrators at the University
       | of Minnesota truly understand the gravity of this F* up. I
       | doubt they will though.
        
       | redmattred wrote:
       | I thought there were ethical standards for research, where a
       | good study should not knowingly do harm, or should at the very
       | least make those involved aware of their participation.
        
       | perfunctory wrote:
       | I don't quite understand the outrage. I'm quite sure most HN
       | readers have been doing, or been involved in, similar
       | experiments one way or another. Isn't A/B testing an experiment
       | on consumers (people) without their consent?
        
         | 1_player wrote:
         | There is a sea of difference between A/B testing your own
         | property and maliciously introducing a bug into a critical
         | piece of software that's running on billions of devices.
        
           | InsomniacL wrote:
           | >> https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
           | hc....
           | 
           | "We did not introduce or intend to introduce any bug or
           | vulnerability in the Linux kernel. All the bug-introducing
           | patches stayed only in the email exchanges, without being
           | adopted or merged into any Linux branch, which was explicitly
           | confirmed by maintainers. Therefore, the bug-introducing
           | patches in the email did not even become a Git commit in any
           | Linux branch. None of the Linux users would be affected."
        
             | Koiwai wrote:
             | https://lore.kernel.org/linux-
             | nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
             | 
             | This message contradicted that.
        
             | [deleted]
        
             | _djo_ wrote:
             | That's a false claim, though. There's evidence that at
             | least one of the students involved did not do anything to
             | alert kernel maintainers or prevent their code from
             | reaching stable. https://git.kernel.org/pub/scm/linux/kerne
             | l/git/stable/linux...
        
             | DetroitThrow wrote:
             | That seems to directly contradict gkh and others (including
             | the researchers) in the email exchange in the original post
             | - these vulnerable patches reached stable trees and
             | maintainers had to revert them.
             | 
             | They may not have been included in a release, but had gkh
             | not intervened *this would have reached users*,
             | especially since the researchers apparently weren't aware
             | their commits were reaching stable.
        
         | duxup wrote:
         | Isn't A/B testing usually about things like changing layout,
         | or choosing between two things that... work, as opposed to
         | bugs?
        
       | francoisp wrote:
       | Methinks that if you hold a degree from the University of
       | Minnesota, it would be a good idea to let your university know
       | what you think of this.
        
         | bluGill wrote:
         | I'm trying to figure out how to do that. How can I get my
         | degree changed? Will the university of (anyplace) look at my
         | transcript and let me say I have a degree from them without
         | much effort? I learned a lot, and I generally think my degree
         | is about as good as any other university. (though who knows
         | what has changed since then)
         | 
         | I'm glad I never contributed again as an alumnus...
        
           | francoisp wrote:
           | If it were my university, I'd send a personal email to the
           | dean. https://cse.umn.edu/college/office-
           | dean#:~:text=Dean%20Mosta....
           | 
           | If enough grads do that, I would expect the university to
           | do something about it, and that would send a message. It's
           | about where the money comes from in the end (tuition,
           | grants, research partnerships, etc.); IMO none of these
           | sources would be very happy about what might amount to
           | defacement of public property and a waste of the time of
           | people who are working for the good of mankind by providing
           | free tools (the bicycle of the mind) to future generations.
           | 
           | There is no novelty in this research; bad actors have been
           | trying to introduce bad patches for as long as open source
           | has been open.
        
             | WkndTriathlete wrote:
             | That's my univ, and I just did exactly that. Mos Kaveh
             | happened to be head of the EE department when I was there
             | for EE. He's a good guy and had a good way of managing
             | stressed-out upper-division honors EE students, so I'm
             | hopeful that he will take action on this.
        
         | McGlockenshire wrote:
         | > it would be a good idea to let your university know what you
         | think of this.
         | 
         | Unless there's something particularly different about
         | University of Minnesota compared to other universities,
         | something tells me that they won't give a crap unless you're a
         | donor.
        
         | bredren wrote:
         | Not a great selling point for the CS department.
         | 
         | "Yes, we are banned from submitting patches to Linux due to
         | past academic research and activities of our PhD students.
         | However, we have a world-class program here."
        
           | a-posteriori wrote:
           | I would argue it should be "due to past academic research and
           | activities of our PhD students and professors"
        
             | threatofrain wrote:
             | Stanford continues to employ the famous Philip Zimbardo,
             | and is Stanford not one of the top universities for
             | psychology in the US?
             | 
             | Getting banned from Linux contribution is an ouchy for the
             | university, but the damage has been done.
        
       | javier10e6 wrote:
       | The research yielded unsurprising results: stealthy patches
       | without a proper smokescreen to provide a veil of legitimacy
       | will cause the purveyor of the patches to become
       | blacklisted... DUH!
        
       | ENOTTY wrote:
       | Later down thread from Greg K-H:
       | 
       | > Because of this, I will now have to ban all future
       | contributions from your University.
       | 
       | Understandable from gkh, but I feel sorry for any unrelated
       | research happening at University of Minnesota.
       | 
       | EDIT: Searching through the source code[1] reveals contributions
       | to the kernel from umn.edu emails in the form of an AppleTalk
       | driver and support for the kernel on PowerPC architectures.
       | 
       | In the commit traffic[2], I think all patches have come from
       | people currently being advised by Kangjie Liu[3] or Liu himself
       | dating back to Dec 2018. In 2018, Wenwen Wang was submitting
       | patches; during this time he was a postdoc at UMN and co-authored
       | a paper with Liu[4].
       | 
       | Prior to 2018, commits involving UMN folks appeared in 2014,
       | 2013, and 2008. None of these people appear to be associated with
       | Liu in any significant way.
       | 
       | [1]: https://github.com/torvalds/linux/search?q=%22umn.edu%22
       | 
       | [2]:
       | https://github.com/torvalds/linux/search?q=%22umn.edu%22&typ...
       | 
       | [3]: https://www-users.cs.umn.edu/~kjlu/
       | 
       | [4]: http://cobweb.cs.uga.edu/~wenwen/
        
         | Foxboron wrote:
         | It's important to note that they used temporary emails for the
         | patches in this research. It's detailed in the paper.
         | 
         | The main _problem_ is that they have (so far) refused to
         | explain in detail how and where the patches were reviewed. I
         | have not gotten any links to any lkml posts, even after
         | Kangjie Lu personally emailed me to address my concerns.
        
         | temac wrote:
         | I don't feel sorry at all. If you want to contribute from
         | there, show that the rogue professor and their students have
         | been prevented from making further malicious contributions
         | (which probably means, at a minimum, being barred from making
         | any contribution at all for quite a long period -- and that
         | is fair given the repeated infractions), and I'm sure that
         | you will be able to contribute again under the university
         | umbrella.
         | 
         | If you don't manage to reach that goal, too bad, but you can
         | contribute in a personal capacity, and/or go work elsewhere.
        
           | seoaeu wrote:
           | How could a single student or professor possibly achieve
           | that? Under the banner of "academic freedom" it is very hard
           | to get someone fired because you don't like their research.
           | 
           | It sounds like you're making impossible demands of unrelated
           | people, while doing nothing to solve the actual problem
           | because the perpetrators now know to just create throwaway
           | emails when submitting patches.
        
         | walrus01 wrote:
         | > I think all patches have come from people currently being
         | advised by Kangjie Liu[3] or Liu himself dating back to Dec
         | 2018
         | 
         | New plan: Show up at Liu's house with a lock picking kit while
         | he's away at work, pick the front door and open it, but don't
         | enter. Send him a photo, "hey, just testing, bro! Legitimate
         | security research!"
        
           | aidenn0 wrote:
           | Put a flaming bag of shit on the doorstep, ring the doorbell,
           | and write a paper about the methods Liu uses to extinguish
           | it?
        
           | Cthulhu_ wrote:
           | If they wanted to do security research, they could have done
           | so in the form of asking the reviewers to help; send them a
           | patch and ask 'Is this something you would accept?', instead
           | of intentionally sending malicious commits and causing static
           | on the commit tree and mailing lists.
        
             | TOMDM wrote:
             | Even better
             | 
             | Notify someone up the chain that you want to submit
             | malicious patches, and ask them if they want to
             | collaborate.
             | 
             | If your patches make it through, treat it as though the
             | reviewers just got red-teamed: everyone who reviewed a
             | patch and let it slip gets to have a nervous laugh, the
             | commit gets rejected, and everyone has learned something.
        
               | stingraycharles wrote:
               | Exactly what I was thinking. This should have been set up
               | like a normal pen test, where only seniors very high up
               | the chain are in on it.
        
             | scaladev wrote:
             | Wouldn't that draw more attention to the research patches,
             | compared to a "normal" lkml patch? If you (as a maintainer)
             | expected the patch to be malicious, wouldn't you be extra
             | careful in reviewing it?
        
               | Dobbs wrote:
               | You don't have to say you are studying the security
               | implications; you could say you are studying something
               | else, like turnaround time for patches, or level of
               | critique, or any number of things.
        
               | colechristensen wrote:
               | Yes you do. Under no circumstances is it ethical to do
               | penetration tests without approval.
        
               | joshuamorton wrote:
               | In the thread you're in, the assumption is that the
               | patches are never actually submitted.
        
               | aflag wrote:
               | You probably can learn more and faster about new drugs by
               | testing them in humans rather than rats. However, science
               | is not above ethics. That is a lesson history has taught
               | us in the most unpleasant of ways.
        
             | tapland wrote:
             | Did they keep track of, and submit, a list of additions
             | to revert after they managed to get them added?
             | 
             | From the looks of it, they didn't, even when they were
             | heading out to stable releases?
             | 
             | That's just using the project with no interest in not
             | causing issues.
        
               | Verdex wrote:
               | Yeah, so an analogy would be to put human feces into
               | food and then see if the waiter actually gives it to
               | the dining customer. And then, if they do, just put a
               | checkmark on a piece of paper and leave without warning
               | anyone that they're about to eat poop.
        
           | dataflow wrote:
           | This is funny, but not at all a good analogy. There's
           | obviously not _remotely_ as much public interest or value in
           | testing the security of this professor's private home to
           | justify invading his privacy for the public interest. On the
           | other hand, if he kept dangerous things at home (say, BSL-4
           | material), then his house would need 24/7 security and you'd
           | probably be able to justify testing it regularly for the
           | public's sake. So the argument here comes down to which
           | extreme you believe the Linux kernel is closer to.
        
             | Edman274 wrote:
             | Everyone has been saying "This affects software that runs
             | on billions of machines and could cause untold amounts of
             | damage and even loss of human life! What were the
             | researchers thinking?!", yet the follow-up thought,
             | "Maintainers of software that runs on billions of
             | machines, where bugs could cause untold amounts of damage
             | and even loss of human life, didn't have a robust enough
             | system to prevent this?", never seems to occur to anyone.
             | I don't understand why.
        
               | thatfunkymunki wrote:
               | People are well aware of theoretical risk of bad commits
               | by malicious actors. They are justifiably extremely upset
               | that someone is intentionally changing this from a
               | theoretical attack to a real life issue.
        
               | Edman274 wrote:
               | I'm not confused about why people are upset at the
               | researchers who introduced bugs and did it
               | irresponsibly. I'm confused about why people aren't
               | upset that an organization managing critical
               | infrastructure is so underprepared to deal with risks
               | posed by rank amateurs - risks it should've known about
               | and had a mechanism for dealing with years ago.
               | 
               | What this means is that anyone who could hijack a
               | university email account, enroll at a state university
               | for a semester or so, or work at a FAANG corporation
               | could pretty much insert backdoors without a lot of
               | scrutiny, in a way that no one detects, because there
               | aren't robust safeguards in place to actually verify
               | that commits don't do anything sneaky - only the trust
               | that everyone is acting in good faith, inferred from
               | how they behave in the code review process. I have
               | trouble understanding the thought process that ends up
               | ignoring the maintainers' duty to make sure the code
               | being committed doesn't endanger security or lives,
               | just because they assumed that everything was 'cool'.
               | The security posture of this critical infrastructure is
               | deficient and no one wants to actually address it.
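               | 
               | To make "sneaky" concrete, here is a minimal C sketch of
               | the pattern (invented for illustration - it is not one
               | of the actual UMN patches): a plausible-looking leak
               | "fix" on an error path that quietly leaves a
               | use-after-free behind.
               | 
               |     /* Illustrative only: a "fix" that frees on the
               |      * error path but forgets the early return. */
               |     #include <stdio.h>
               |     #include <stdlib.h>
               |     #include <string.h>
               | 
               |     struct req {
               |         int   fd;
               |         char *buf;
               |     };
               | 
               |     static int send_reply(struct req *r)
               |     {
               |         return r->fd >= 0 ? 0 : -1; /* may fail */
               |     }
               | 
               |     static void handle(struct req *r)
               |     {
               |         if (send_reply(r) < 0) {
               |             free(r->buf); /* the added "leak fix"... */
               |             /* ...without the early return a reviewer
               |              * would expect here */
               |         }
               |         puts(r->buf); /* use-after-free on error path */
               |     }
               | 
               |     int main(void)
               |     {
               |         struct req r = { .fd = -1, .buf = strdup("hi") };
               |         if (!r.buf)
               |             return 1;
               |         handle(&r);
               |         return 0;
               |     }
               | 
               | In a real patch the free() and the later use can sit
               | dozens of lines or even a file apart, which is exactly
               | what makes this pattern easy to miss in review.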
        
               | splistud wrote:
               | After absorbing what the researchers did, I believe it's
               | time to skip right over the second part and just
               | concentrate on why so many critical systems are run on
               | unforked Linux.
        
               | shakna wrote:
               | > I have trouble understanding the thought process that
               | ends up ignoring the maintainers' duty to make sure the
               | code being committed doesn't endanger security or lives,
               | just because they assumed that everything was 'cool'.
               | The security posture of this critical infrastructure is
               | deficient and no one wants to actually address it.
               | 
               | They're banning a group known to be bad actors. And
               | proactively tearing out the history of commits related to
               | those known actors, before reviewing each commit.
               | 
               | That seems like the kernel team are taking a proactive
               | stance on the security side of this. The LKML thread also
               | talks about more stringent requirements that they're
               | going to bring in, which was already going to be brought
               | up at the next kernel conference.
               | 
               | None of these things seem like ignoring any of the
               | security issues.
        
             | walrus01 wrote:
             | It wasn't intended to be serious. But on the other hand, he
             | has now quite openly and publicly declared himself to be
             | part of a group of people who mess around with security
             | related things as a "test".
             | 
             | He shouldn't be surprised if it has some unexpected
             | consequences to his own personal security, like some
             | unknown third parties porting away his phone number(s) as a
             | social engineering test, pen testing his office, or
             | similar.
        
             | dragonwriter wrote:
             | > This is funny, but not at all a good analogy
             | 
             | Yeah, for one thing, to be a good analogy, rather than
             | lockpicking without entering when he's not home and leaving
             | a note, you'd need to be an actual service worker for a
             | trusted home service business and use that trust to enter
             | when he is home, conduct sabotage, and not say anything
             | until the sabotage is detected and traced back to you and
             | cited in his cancelling the contract with the firm for
             | which you work, and _then_ cite the "research" rationale.
             | 
             | Of course, if you did that you would be both unemployed and
             | facing criminal charges in short order.
        
               | dataflow wrote:
               | Your strawman would be more of a steelman if you actually
               | addressed the points I was making.
        
             | UncleMeat wrote:
             | There's also not nearly as much harm as there is in wasting
             | maintainer time and risking getting faulty patches merged.
        
           | remram wrote:
           | The actual equivalent would be to steal his computer, wait a
           | couple days to see his reaction, get a paper published,
           | _then_ offer to return the computer.
        
           | rbanffy wrote:
           | I wouldn't be surprised if the good, conscientious members of
           | the UMN community showed up at his office (or home) door to
           | explain, in vivid detail, the consequences of doing unethical
           | research.
        
         | bionhoward wrote:
         | seems extreme. one unethical researcher blocks work for
         | others just because they happen to work for the same
         | employer? they might not even know the author of the paper...
        
           | vesinisa wrote:
           | Well, the decision can always be reversed, but at the
           | outset I would say banning the entire university and
           | publicly naming them is a good start. I don't think this
           | kind of "research" is ethical, and the issue needs to be
           | raised. Banning them is a good opener to engage the
           | institution in a dialogue.
        
             | soneil wrote:
             | It seems fair enough to me. They were curious to see what
             | happens, this happens. Giving them a free pass because
             | they're a university would be artificially skewing the
             | results of the research.
             | 
             | Low trust and negative trust should be fairly obvious costs
             | to messing with a trust model - you could easily argue this
             | is working as intended.
        
           | hn8788 wrote:
           | The university reviewed the "study" and said it was
           | acceptable. From the email chain, it looks like they've
           | already complained to the university multiple times, and have
           | apparently been ignored. Banning anyone at the university
           | from contributing seems like the only way to handle it,
           | since they can't trust the institution to ensure its
           | students aren't doing unethical experiments.
        
             | rob74 wrote:
             | Plus, it sets a precedent: if your university condones this
             | kind of "research", you will have to face the consequences
             | too...
        
           | op00to wrote:
           | The University approved this research. How can one trust
           | anything from that university now?
        
             | JBorrow wrote:
             | That's not really how it works. Nobody's out there
             | 'approving' research (well, not for seemingly small
             | projects like this), especially at the university level.
             | Professors
             | (all the way down to PhD students!) are usually left to do
             | what they like, unless there are specific ethical concerns
             | that should be put before a review panel. I suppose you
             | could argue that this work should have been brought before
             | the ethics committee, but it probably wasn't, and in CS
             | there isn't a stringent process like there is in e.g.
             | psychology or biology.
        
               | chromatin wrote:
               | Wrong!
               | 
               | If you read the research paper linked in the lkml post,
               | the authors at UMN state that they submitted their
               | research plan to the University of Minnesota
               | Institutional Review Board and received a human
               | subjects exempt waiver.
        
               | gbrown wrote:
               | A human subjects determination isn't really an approval,
               | just a note that the research isn't HSR, which it sounds
               | like this wasn't.
        
               | bluGill wrote:
               | Well, it was human subjects research, but not the type
               | of thing an HSR review would normally worry about.
        
               | pdpi wrote:
               | The emails suggest this work has been reported in the
               | past. A review by the ethics committee after the fact
               | seems appropriate, and it should've stopped a repeat
               | offence.
        
             | kalwz wrote:
             | It approved the research, which I don't find objectionable.
             | 
             | The objectionable part is that the group allegedly
             | continued after having been told to stop by the kernel
             | developers.
        
               | vntok wrote:
               | Why is that objectionable? Do actual bad actors
               | typically stop trying after being told to stop?
        
               | extropy wrote:
               | It's objectionable because of severe consequences beyond
               | just annoying people. If there was a malicious purpose,
               | not just research, you could bring criminal charges
               | against them.
               | 
               | In typical grey hat research you get pre-approval from
               | target company leadership (engineers don't know) to avoid
               | charges once discovered.
        
               | dkersten wrote:
               | Which just demonstrates that these guys _are_ actual bad
               | actors, so blocking everyone at the university seems like
               | a reasonable attempt at stopping them.
        
           | wongarsu wrote:
           | They reported unethical behavior to the university and the
           | university failed to prevent it from happening again.
        
           | [deleted]
        
           | yxhuvud wrote:
           | It is an extreme response to an extreme problem. If the
           | other researchers don't like the situation, they are free
           | to raise the problem with the university and have the
           | university clean up the mess it obviously has.
        
           | jnxx wrote:
           | Well, shit happens. Imagine doctors working in organ
           | transplantation, and one of them damaging people's trust by
           | selling access to organs to rich patients. Of course that
           | damages the field for everyone. And to deal with such
           | issues, doctors have an ethics code, and in many countries
           | associations which will sanction bad eggs. Perhaps
           | scientists need something like that, too?
        
         | [deleted]
        
         | GoblinSlayer wrote:
         | Forking the kernel should be sufficient for research.
        
           | jaywalk wrote:
           | This research is specifically about getting patches accepted
           | into open source projects, so that wouldn't work at all.
        
             | GoblinSlayer wrote:
             | For other research happening in the university. This
             | particular research is trivial anyway, see
             | https://news.ycombinator.com/item?id=26888417
        
           | hjalle wrote:
           | Not if the research involves the reviewing aspects of open
           | source projects.
        
             | hellow0rldz wrote:
             | Apparently they aren't doing human experiments, it's only
             | processes and such. So they can easily emulate the
             | processes in-house too!
        
         | pmiller2 wrote:
         | I find it hard to believe this research passed IRB.
        
           | AjtevhihBudhirv wrote:
           | It didn't. Rather, it wasn't approved until _after_ the
           | research had been conducted.
           | 
           | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
           | hc....
        
           | bobthechef wrote:
           | How thorough is IRB review? My gut feeling is that these are
           | not necessarily the most conscientious or informed bodies.
           | Add into the mix a proposal that conceals the true nature of
           | what's happening.
           | 
           | (All of this ASSUMING that the intent was as described in the
           | thread.)
        
             | bluGill wrote:
             | They are probably more familiar with medical research and
             | the types of things that go wrong there. Bad ethics in
             | medical situations is well understood, including
             | psychology. However it is hard to figure out how a
             | mechanical engineer could violate ethics.
        
               | cedilla wrote:
               | To be fair, the consequences of unethical research in
               | medicine or psychology can be much more dire than what
               | happened here.
        
               | pmiller2 wrote:
               | Perhaps more dire than what actually happened, but, can
               | you imagine the consequences if any of those malicious
               | patches had actually stuck around in the kernel? Keep in
               | mind when you think about this that Android, which has
               | an 87% market share globally in smartphones[0], runs on
               | top of a modified Linux kernel.
               | 
               | --
               | 
               | [0]: https://www.statista.com/statistics/272307/market-
               | share-fore...
        
               | pmiller2 wrote:
               | I had to do human subjects research training in grad
               | school, just to be able to handle _test score data_ for a
               | math education project. I literally never saw an actual
               | student the whole time I was working on it.
        
             | DavidPeiffer wrote:
             | It varies a lot. A professor I worked for was previously at
             | a large company in an R&D setting. He dealt with 15-20
             | different IRBs through various research partnerships, and
             | noted Iowa State (our university) as having the most
             | stringent requirements he had encountered. At other
             | universities, it was pretty simple to submit and get
             | approval without notable changes to the research plan. If
             | they were unsure about something, they would ask a lot of
             | questions.
             | 
             | I worked on a number of studies through undergrad and grad
             | school, mostly involving having people test software. The
             | work to get a study approved was easily 20 hours for a
             | simple "we want to see how well people perform tasks in
             | the custom software we developed. They'll come to the
             | university and use our computer to avoid concerns about
             | security bugs in the software". You needed a script of
             | everything you would say, every question you would ask,
             | and a description of how the data would be collected,
             | analyzed, and stored securely. Data retention and
             | destruction policies had to be noted. The key linking a
             | person's name to their participant ID had to be stored
             | separately. You had to specify how you would recruit
             | participants, down to the exact poster or email you
             | intended to send out. The reading level of the
             | instructions and the aptitude of the audience were
             | considered (so academic mumbo jumbo didn't confuse
             | participants).
             | 
             | If you check the box that you'll be deceiving participants,
             | there was another entire section to fill out detailing how
             | they'd be deceived, why it was needed for the study, etc.
             | Because of past unethical experiments in the academic
             | world, there is a lot of scrutiny and you typically have to
             | reveal the deception in a debriefing after the completion
             | of the study.
             | 
             | Once a study was accepted (in practice, a multi-month
             | process), you could make modifications with an order of
             | magnitude less effort. Adding questions that don't involve
             | personal information of the participant is a quick form and
             | an approval some number of days later.
             | 
             | If you remotely thought you'd need IRB approval, you
             | started a conversation with the office and filled out some
             | preliminary paperwork. If it didn't require approval, you'd
             | get documentation stating such. This protects the
             | participants, university, and professor from issues.
             | 
             | --
             | 
             | They took it really seriously. I'm familiar with one study
             | where participants would operate a robot outside. An IRB
             | committee member asked what would happen if a bee stung
             | the participant. If I remember right, the resolution was
             | that an epipen and someone trained in how to use it had
             | to be present during the session.
        
         | jnxx wrote:
         | > Understandable from gkh, but I feel sorry for any unrelated
         | research happening at University of Minnesota.
         | 
         | That's the university's problem to fix.
        
           | [deleted]
        
           | shawnz wrote:
           | What's the recourse for them though? Just beg to have the
           | decision reversed?
        
             | ttyprintk wrote:
             | The comment about the IRB --- institutional review board
             | --- is clear, I think.
        
               | shawnz wrote:
               | The suggestion about the IRB was made by a third party.
               | Look at the follow up comment from kernel developer Leon
               | Romanovsky.
               | 
               | > ... we don't need to do all the above and waste our
               | time to fill some bureaucratic forms with unclear
               | timelines and results. Our solution to ignore all
               | @umn.edu contributions is much more reliable to us who
               | are suffering from these researchers.
        
               | shawnz wrote:
               | To follow up on my comment here, I think Greg KH's later
               | responses were more reasonable.
               | 
               | > ... we have the ability to easily go back and rip the
               | changes out and we can slowly add them back if they are
               | actually something we want to do.
               | 
               | > I will be working with some other kernel developers to
               | determine if any of these reverts were actually valid
               | changes, and if so, will resubmit them properly later.
               | ... future submissions from anyone with a umn.edu
               | address should be by default-rejected unless otherwise
               | determined to actually be a valid fix
             | bradleyjg wrote:
             | Expel the students and fire the professor. That will
             | demonstrate their commitment to high ethical standards.
        
               | bogwog wrote:
               | Or fire the IRB people who approved it, and the
               | professor(s) who should've known better. Expelling
               | students seems a bit unfair IMO.
        
               | DetroitThrow wrote:
               | I agree that in this case the driver of the behavior
               | seems to be the professor, but graduate researchers are
               | informed about ethical research, and there are many
               | ways students alone can cause harm through research,
               | potentially beyond the supervision of the university
               | and even the professor. It's usually much more
               | negligible than this, but everyone has a responsibility
               | to abide by ethical norms.
        
               | sophacles wrote:
               | The students do need a bit of punishment - they are
               | adults who chose to act this way. In this context though,
               | switching their advisor and requiring a different
               | research track would be sufficient - that's a lot of work
               | down the drain and a hard lesson. I agree that expulsion
               | would be unfair - (assuming good faith scholarship) the
               | advisor/student relationship is set up so that the
               | students can learn to research effectively (which
               | includes ethically) with guidance from a trusted
               | researcher at a trusted institution. If the professor
               | suggests or OKs a plan, it is reasonable for the students
               | to believe it is a acceptable course of action.
        
               | [deleted]
        
               | ramraj07 wrote:
               | If the student blatantly lied about why and how he made
               | those commits then that's grounds for expulsion though.
        
               | sophacles wrote:
               | 1. What the student code at UMN says and what I think
               | the student deserves are vastly different things.
               | 
               | 2. Something being grounds for expulsion and what a
               | reasonable response would be are vastly different things.
               | 
               | 3. The rules saying "up to and including" (aka grounds
               | for) and the full range of punishment are not the same -
               | the max of a range is not the entirety of the range.
               | 
               | 4. So what?
        
               | rodgerd wrote:
               | Dunking on individual maintainers for academic bragging
               | rights seems pretty unfair, too.
        
               | bradleyjg wrote:
               | The student doubled down on his unethical behavior by
               | writing that his victim was "making wild accusations that
               | are bordering on slander."
               | 
               | You can't make a silk purse out of a sow's ear.
        
             | _jal wrote:
             | The main thing you want here is a demonstration that they
             | realize they fucked up, realize the magnitude of the
             | fuckup, and have done something reasonable to lower the
             | risk of it happening again, hopefully to very low.
             | 
             | Given that the professor appears to be a frequent flyer
             | with this, the kernel folks banning him and the university
             | prohibiting him from using Uni resources for anything
             | kernel related seems reasonable and gets the point across.
        
             | dataflow wrote:
             | Probably that, combined with "we informed the professor of
             | {serious consequences} should this happen again".
        
             | burnished wrote:
             | Well, yes? It seems like recourse in their case would be
             | to make a convincing plea, or a plan to rectify the
             | problem, that satisfies decision makers in the Linux
             | project.
        
         | duxup wrote:
         | Seems like a bit of a strong response. Universities are large
         | places with lots of professors and people with different ideas,
         | opinions, views, and they don't work in concert, quite the
         | opposite. They're not some corporation with some unified goal
         | or incentives.
         | 
         | I like that. That's what makes universities interesting to me.
         | 
         | I don't like the standard here of penalizing or lumping
         | everyone there together, regardless of whether they
         | contributed in the past, contribute now, or will in the
         | future.
        
           | bluGill wrote:
           | One way to get everyone in a university on the same page is
           | to punish them all for the bad actions of a few. It appears
           | like this won't work here because nobody else is contributing
           | and so they won't notice.
        
             | fencepost wrote:
             | It's not the number of people directly affected that will
             | matter, it's the reputational problems of "umn.edu's CS
             | department got the entire UMN system banned from submitting
             | to the Linux kernel and probably some other open source
             | projects."
        
             | duxup wrote:
             | And anyone without much power to effect change is SOL.
             | 
             | I know the kernel doesn't need anyone's contributions
             | anyhow, but as a matter of policy this seems like a bad
             | one.
        
           | abought wrote:
           | I'd concur: the university is the wrong unit-of-ban.
           | 
           | For example: what happens when the students graduate? Does
           | the ban follow them to any potential employers? Or what if
           | the professor leaves for another university to continue
           | this research?
           | 
           | Does the ban stay with UMN, even after everyone involved
           | left? Or does it follow the researcher(s) to a new
           | university, even if the new employer had no responsibility
           | for them?
        
             | thelean12 wrote:
             | On the other hand: what obligation do the Linux kernel
             | maintainers have to allow UMN staff and students to
             | contribute to their project?
        
             | dragonwriter wrote:
             | > Does the ban stay with UMN, even after everyone involved
             | left?
             | 
             | It stays with the university until the university provides
             | a good reason to believe they should not be particularly
             | untrusted.
        
             | duxup wrote:
             | If they use a different email but someone knows they work
             | at the university?
             | 
             | It's a chain that gets really unpleasant.
        
           | tinco wrote:
           | The goal is not penalizing or lumping everyone together. The
           | goal is to have the issue fixed in the most effective manner.
           | It's not the Linux team's responsibility to allow
           | contributions from some specific university, it's the
           | university's. This measure enforces that responsibility. If
           | they want access, they should rectify the situation.
        
             | duxup wrote:
             | I would then say that the goal and the choice aren't
             | aligned because "penalizing or lumping everyone together"
             | is exactly the choice made.
        
               | brainwad wrote:
               | They would presumably reconsider the blanket ban if the
               | university says it will prohibit these specific
               | researchers from committing to Linux.
        
               | jeffffff wrote:
               | the university can easily resolve the issue by firing the
               | professors
        
               | burnished wrote:
               | do you know that would resolve the issue? this just seems
               | like idle, retributive speculation.
        
               | duxup wrote:
               | The people who are affected by the rule or discouraged
               | by it cannot do so.
        
               | joshuamorton wrote:
               | The University can presumably not in fact do this.
        
               | dragonwriter wrote:
               | Tenure does not generally prohibit for-cause termination,
               | and there is a whole pile of cause here.
        
           | kevincox wrote:
           | This was approved by the university ethics board, so if
           | trust in the university rests in part on its students'
           | actions having to pass an ethics bar, it makes sense to
           | remove that trust until the ethics committee has shown
           | that it has improved.
        
             | rurban wrote:
             | The ethics board is most likely not at fault here. They
             | were simply lied to, if we take Lu's paper seriously. I
             | would just expel the 3 malicious actors here: the 2
             | students and the professor who approved it. I don't see
             | any fault in Wang yet.
             | 
             | The damage is not that big. Only 4 people from there have
             | committed to Linux in the last decade: 2 of them, the
             | students, with malicious backdoors; the professor with
             | bad ethics rather than bad code; and the 4th, the
             | assistant professor, did good patches and has already
             | left.
        
         | henearkr wrote:
         | Not a big loss: these professors likely hate open source.
         | [edit: they do not. See child comments.]
         | 
         | They are conducting research to demonstrate that it is easy to
         | introduce bugs in open source...
         | 
         | (whereas we know that the strength of open source is its
         | auditability, thus such bugs are quickly discovered and fixed
         | afterwards)
         | 
         | [removed this ranting that does not apply since they are
         | contributing a lot to the kernel in good ways too]
        
           | gspr wrote:
           | > It's likely a university with professors that hate open
           | source.
           | 
           | This is a ridiculous conclusion. I do agree with the kernel
           | maintainers here, but there is no way to conclude that the
           | researchers in question "hate open source", and _certainly_
           | not that such an attitude is shared by the university at
           | large.
        
             | henearkr wrote:
             | [Edit: they seem to truly love OSS. See child comments.
             | Sorry for my erroneous judgement. It reminded me too much
             | of anti-open-source FUD; I'm probably having PTSD from
             | that time...]
             | 
             | I fixed my sentence.
             | 
             | I still think that these professors, either genuinely or
             | through lack of willingness, do not understand the
             | mechanism by which free software achieves its greater
             | quality compared to proprietary software (which is a
             | fact).
             | 
             | They just remind me of the good old days of FUD against
             | open source from Microsoft and its minions...
        
               | meepmorp wrote:
               | What papers or statements has this professor made to
               | support that kind of allegation? Can you provide some
               | links or references, please?
        
               | henearkr wrote:
               | I don't have the name of the professor.
               | 
               | [Edited: it seems like they do love OSS and contribute a
               | lot. See child comments.]
               | 
               | I had based my consideration on the way they are testing
               | the open-source development model.
               | 
               | These professors actually love OSS... but they need to
               | respect kernel maintainers request to stop these
               | "experiments".
        
               | Pyramus wrote:
               | From the researchers:
               | 
               | > In the past several years, we devote most of our time
               | to improving the Linux kernel, and we have found and
               | fixed more than one thousand kernel bugs; the extensive
               | bug finding and fixing experience also allowed us to
               | observe issues with the patching process and motivated us
               | to improve it. Thus, we consider ourselves security
               | researchers as well as OSS contributors. We respect OSS
               | volunteers and honor their efforts.
               | 
               | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-
               | hc....
               | 
               | It feels to me you have jumped to an untenable conclusion
               | without even considering their point of view.
        
               | henearkr wrote:
               | Yes. I removed a lot of my ranting accordingly. Thanks,
               | and sorry.
        
               | Pyramus wrote:
               | Thank you, appreciated.
        
             | theknocker wrote:
             | Seems like a reasonable default assumption to me, until the
             | people repeatedly attempting to sabotage the open source
             | community condescend to -- you know -- _stop doing it_ and
             | then explain wtf they are thinking.
        
           | AnIdiotOnTheNet wrote:
           | > (whereas we know that the strength of open source is its
           | auditability, thus such bugs are quickly discovered and fixed
           | afterwards)
           | 
           | Which is why there have never been multi-year critical
           | security vulnerabilities in FOSS software.... right?
           | 
           | Sarcasm aside, because of how FOSS software is packaged on
           | Linux we've seen critical security bugs introduced by package
           | maintainers into software that didn't have them!
        
             | henearkr wrote:
             | You need to compare what happens with vulnerabilities in
             | OSS vs in proprietary.
             | 
             | A maintainer pakage is just one more open source software
             | (thus also in need of reviews and audits)... which is why
             | some people prefer upstream-source-based distribs, such as
             | Gentoo, Arch when you use git-based AUR packages, or LFS
             | for the hardcore fans.
        
               | acdha wrote:
               | > You need to compare what happens with vulnerabilities
               | in OSS vs in proprietary.
               | 
               | Yes, you do need to make that comparison. Taking it as a
               | given without analysis is the same as trusting the
               | proprietary software vendors who claim to have robust QA
               | on everything.
               | 
               | Security is hard work and different from normal review.
               | The number of people who hypothetically could do it is
               | much greater than the number who actually do, especially
               | if there isn't an active effort to support that type of
               | analysis.
               | 
               | I'm not a huge fan of this professor's research tactic
               | but I would ask what the odds are that, say, an
               | intelligence agency isn't doing the same thing but with
               | better concealment. Thinking about how to catch that
               | without shutting down open-source contributions seems
               | like an important problem.
        
           | throwaway823882 wrote:
           | > the strength of open source is its auditability, thus such
           | bugs are quickly discovered and fixed afterwards
           | 
           | That's not true at all. There are many internet-critical
           | projects with tons of holes that are not found for decades,
           | because nobody except the core team ever looks at the code.
           | You have to actually write tests, do fuzzing, static/memory
           | analysis, etc. to find bugs/security holes. Most open source
           | projects _don't even have tests_.
           | 
           | Assuming people are always looking for bugs in FOSS projects
           | is like assuming people are always looking for code
           | violations in skyscrapers, just because a lot of people walk
           | around them.
        
           | rusticpenn wrote:
           | At least in the university where I did my studies, each
           | professor had their own way of thinking and you could not
           | group them into any one basket.
        
             | henearkr wrote:
             | Fair point.
             | 
             | I'll just leave my comment as it is. The university
             | administration still bears responsibility for the fact
             | that the IRB waived review of this research.
        
               | tehwebguy wrote:
               | From the link, not sure if accurate:
               | 
               | > Those commits are part of the following research:
               | 
               | > https://github.com/QiushiWu/QiushiWu.github.io/blob/mai
               | n/pap...
               | 
               | > They introduce kernel bugs on purpose. Yesterday, I
               | took a look on 4 accepted patches from Aditya and 3 of
               | them added various severity security "holes".
        
               | dash2 wrote:
               | Interestingly, that paper states that they introduced 3
               | patches with bugs, but after acceptance, they immediately
               | notified the maintainers and replaced the patches with
               | correct, bug-free ones. So they claim the bugs never hit
               | any git tree. They also state that their research had
               | passed the university IRB. I don't know if that research
               | relates to what they are doing now, though.
        
           | TeMPOraL wrote:
           | > _Not a big loss: these professors likely hate open source._
           | 
           | > _They are conducting research to demonstrate that it is
           | easy to introduce bugs in open source..._
           | 
           | That's a very dangerous thought pattern. "They try to find
           | flaws in a thing I find precious, therefore they must hate
           | that thing." No, they may just as well be trying to identify
           | flaws to make them visible and therefore easier to fix.
           | Sunlight being the best disinfectant, and all that.
           | 
           | (Conversely, people trying to destroy open source would not
           | publicly identify themselves as researchers and reveal what
           | they're doing.)
           | 
           | > _whereas we know that the strength of open source is its
           | auditability, thus such bugs are quickly discovered and fixed
           | afterwards_
           | 
           | How do we _know_ that? We know things by regularly testing
           | them. That's literally what this research is - checking how
           | likely it is that intentional vulnerabilities are caught
           | during the review process.
        
             | Telemakhos wrote:
             | Ascribing a salutary motive to sabotage is just as
             | dangerous as assuming a pernicious motive. Suggesting that
             | people "would" likely follow one course of action or
             | another is also dangerous: it is the oldest form of
             | sophistry, the eikos argument of Corax and Tisias. After
             | all, if publishing research rules out pernicious motives,
             | academia suddenly becomes the best possible cover for
             | espionage and state-sanctioned sabotage designed to
             | undermine security.
             | 
             | The important thing is not to hunt for motives but to
             | identify and quarantine the saboteurs to prevent further
             | sabotage. Complaining to the University's research ethics
             | board might help, because, regardless of intent, sabotage
             | is still sabotage, and that is unethical.
        
             | dsr_ wrote:
             | The difference between:
             | 
             | "Dear GK-H: I would like to have my students test the
             | security of the kernel development process. Here is my
             | first stab at a protocol, can we work on this?"
             | 
             | and
             | 
             | "We're going to see if we can introduce bugs into the Linux
             | kernel, and probably tell them afterwards"
             | 
             | is the difference between white-hat and black-hat.
        
               | bluGill wrote:
               | It should probably be a private email to Linus Torvalds
               | (or someone near him in the chain of patch acceptance),
               | so that some easy-to-scan-for key can be introduced in
               | all patches. Then the top levels can see what actually
               | made it through review, and in turn figure out who
               | isn't reviewing as well as they should.
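               | 
               | As a rough sketch of how cheap the scanning side could
               | be (the "X-Study-Key:" tag is invented here for
               | illustration - no such convention exists in the kernel
               | workflow), a tiny C tool the few people in on the test
               | could run over each incoming patch file:
               | 
               |     /* Hypothetical: flag patch files carrying an
               |      * agreed-upon study marker. */
               |     #include <stdio.h>
               |     #include <string.h>
               | 
               |     int main(int argc, char **argv)
               |     {
               |         const char *marker = "X-Study-Key:"; /* invented */
               |         char line[4096];
               |         FILE *f;
               | 
               |         if (argc != 2) {
               |             fprintf(stderr, "usage: %s <patch>\n", argv[0]);
               |             return 2;
               |         }
               |         if (!(f = fopen(argv[1], "r"))) {
               |             perror(argv[1]);
               |             return 2;
               |         }
               |         while (fgets(line, sizeof(line), f)) {
               |             if (strstr(line, marker)) {
               |                 printf("study patch: %s", line);
               |                 fclose(f);
               |                 return 0;
               |             }
               |         }
               |         fclose(f);
               |         return 1; /* no marker found */
               |     }
               | 
               | Whoever holds the key can then compare what was
               | submitted against what review let through, without the
               | individual reviewers knowing which patches were part of
               | the exercise.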
        
               | wink wrote:
               | Yes, someone like Greg K-H. I'm not up to date on the
               | details, but he must be one of the 5 most important
               | people caring for the kernel tree; he would've been the
               | exact person to seek approval from.
        
             | henearkr wrote:
             | Auditability is at the core of its advantage over closed
             | development.
             | 
             | Submitting bugs is not really testing auditability, which
             | happens over a longer timeframe and involves an order of
             | magnitude more eyeballs.
             | 
             | To address your first criticism: benevolence, and
             | assuming everyone wants the best for the project, is very
             | important in these models, because the resources are
             | limited and dependent on enthusiasm. Blacklisting bad
             | actors (even if they have "good reasons" to be bad) is
             | very well justified.
        
               | idiotsecant wrote:
               | If the model assumes benevolence how can it possibly be
               | viable long-term?
        
               | henearkr wrote:
               | Like that: malevolent actors are banned as soon as
               | detected.
        
               | idiotsecant wrote:
               | What do you suppose is the ratio of undetected bad actors
               | / detected bad actors? If it is anything other than zero
               | I think the original point holds.
        
               | RangerNS wrote:
               | Most everything boils down to trust at some point. That
               | human society exists is proof that people are, or act,
               | mostly, "good", over the long term.
        
               | TeMPOraL wrote:
               | > _That human society exists is proof that people are, or
               | act, mostly, "good", over the long term._
               | 
               | That's very true. It's worth noting that the various
               | legal and security tools deployed by society help us
               | understand what the real limits of "mostly" are.
               | 
               | So for example, the cryptocurrency crowd is very
               | misguided in their pursuit of replacing trust with math -
               | trust is _the_ trick, _the_ big performance hack, that
               | allowed us to form functioning societies without burning
               | ridiculous amounts of energy to achieve consensus. On the
               | other hand, projects like Linux kernel, which play a core
               | role in modern economy, cannot rely on assumption of
               | benevolence alone - incentives for malicious parties to
               | try and mess with them are too great to ignore.
        
               | TeMPOraL wrote:
               | > _Auditability is at the core of its advantage over
               | closed development._
               | 
               | That's an assertion. A hypothesis is verified through
               | observing the real world. You can do that in many ways,
               | giving you different confidence levels in validity of the
               | hypothesis. Research such as the one we're discussing
               | here is one of the ways to produce evidence for or
               | against this hypothesis.
               | 
               | > _Submitting bugs is not really testing auditability,
               | which happens over a longer timeframe and involves an
               | order of magnitude more eyeballs._
               | 
               | It is if there's a review process. Auditability itself is
               | really most interesting _before_ a patch is accepted.
               | Sure, it's nice if vulnerabilities are found eventually,
               | but the longer that takes, the more likely it is they
               | were already exploited. In the case of an intentionally
               | bad patch in particular, the window for reverting it
               | before it does most of its damage is very small.
               | 
               | In other words, the experiment wasn't testing the entire
               | auditability hypothesis. Just the important part.
               | 
               | > _benevolence, and assuming everyone wants the best for
               | the project, is very important in these models, because
               | the resources are limited and dependent on enthusiasm_
               | 
               | Sure. But the project scope matters. Linux kernel isn't
               | some random OSS library on Github. It's _core
               | infrastructure of the planet_. Assumption of benevolence
               | works as long as the interested community is small and
               | has little interest in being evil. With infrastructure-
               | level OSS projects, the interested community is very
               | large and contains a lot of malicious actors.
               | 
               | > _Blacklisting bad actors (even if they have "good
               | reasons" to be bad) is very well justified._
               | 
               | I agree, and in my books, if a legitimate researcher gets
               | banned for such "undercover" research, it's just the flip
               | side of doing such experiment.
        
               | henearkr wrote:
               | I will not adress everything but only this point:
               | 
               | Before a patch is accepted, "auditability" is the same
               | in OSS as in proprietary software, because both pools
               | of engineers in the review groups have similar
               | qualifications and approximately the same number of
               | people are involved.
               | 
               | So, the real advantage of OSS is on the auditability
               | after the patch is integrated.
        
               | TeMPOraL wrote:
               | > _So, the real advantage of OSS is on the auditability
               | after the patch is integrated._
               | 
               | If that's the claim, then the research work discussed
               | here is indeed not relevant to it.
               | 
               | But also, if that's the claim, then it's easy to point
               | out that the "advantage" here is hypothetical, and not
               | too important in practice. Most people and companies
               | using OSS rely on release versions to be stable and
               | tested, and don't bother doing their own audit. On the
               | other hand, intentional vulnerability submission is a
               | unique threat vector that OSS has, and which proprietary
               | software doesn't.
               | 
               | It is therefore the window between patch submission and
               | its inclusion in a stable release (which may involve
               | accepting the patch to a development/pre-release tree),
               | that's of critical importance for OSS - if
               | vulnerabilities that are already known to some parties
               | (whether the malicious authors or evil onlookers) are not
               | caught in that window, the threat vector here becomes
               | real, and from a risk analysis perspective, negates some
               | of the other benefits of using OSS components.
               | 
               | Nowhere here am I implying OSS is _worse/better_ than
               | proprietary. As a community/industry, we want to have an
               | accurate, multi-dimensional understanding of the risks
               | and benefits of various development models (especially
               | when applied to core infrastructure project that the
               | whole modern economy runs on). That kind of research
               | definitely helps here.
        
               | henearkr wrote:
               | > _On the other hand, intentional vulnerability
               | submission is a unique threat vector that OSS has, and
               | which proprietary software doesn't._
               | 
               | Very fair point. Insider threats also exist in
               | corporations, but they're probably harder to pull off.
        
               | johncalvinyoung wrote:
               | > On the other hand, intentional vulnerability submission
               | is a unique threat vector that OSS has, and which
               | proprietary software doesn't.
               | 
               | On this specific point, it only holds if you restrict the
               | assertion to 'intentional submission of vulnerabilities
               | by outsiders'. I don't work in fintech, but I've read
               | allegations that insider-created vulnerabilities and
               | backdoors are a very real risk.
        
         | xbar wrote:
         | This is not responsible research. This is similar to initiating
         | fluid mechanics experiments on the wings of a Lufthansa A320 in
         | flight to Frankfurt with a load of Austrians.
         | 
         | There are a lot of people to feel bad for, but none is at the
         | University of Minnesota. Think of the Austrians.
        
           | einpoklum wrote:
           | > This is similar to initiating fluid mechanics experiments
           | on the wings of a Lufthansa A320 in flight to Frankfurt with
           | a load of Austrians.
           | 
           | This analogy is invalid, because:
           | 
           | 1. The experiment is not on live, deployed, versions of the
           | kernel.
           | 
           | 2. There are mechanisms in place for preventing actual
           | merging of the faulty patches.
           | 
           | 3. Even if a patch is merged by mistake, it can be easily
           | backed out or replaced with another patch, and the updates
           | pushed anywhere relevant.
           | 
           | All of the above is not true for the in-flight airline.
           | 
           | However - I'm not claiming the experiment was not ethically
           | faulty. Certainly, the U Minnesota IRB needs to issue a
           | report and an explanation on its involvement in this matter.
        
             | inetknght wrote:
             | > _1. The experiment is not on live, deployed, versions of
             | the kernel._
             | 
             | The patches were merged and the email thread discusses that
             | the patches made it to the stable tree. Some (many?)
             | distributions of Linux have and run from stable.
             | 
             | > _2. There are mechanisms in place for preventing actual
             | merging of the faulty patches._
             | 
             | Those mechanisms failed.
             | 
             | > _3. Even if a patch is merged by mistake, it can be
             | easily backed out or replaced with another patch, and the
             | updates pushed anywhere relevant._
             | 
             | Arguably. But I think this is a weak argument.
        
               | einpoklum wrote:
               | > The patches were merged
               | 
               | The approved methodology - described in the linked paper
               | - was that when a patch with the introduced
               | vulnerabilities is accepted by its reviewer, the patch
               | submitter points out that the patch introduces a
               | vulnerability, and sends a vulnerability-free version
               | instead. That's what the paper describes.
               | 
               | If the researchers did something other than what the
               | methodology called for (and what the IRB approved), then
               | perhaps the analogy may be valid.
        
             | tremon wrote:
             | How would you feel about researchers delivering known-
             | faulty-under-some-conditions AoA sensors to Boeing, just to
             | see if Boeing's QA process would catch those errors before
             | final assembly?
        
               | splistud wrote:
               | I would feel that I'm wasting time that I could be using
               | to find out why Boeing makes this possible (or any other
               | corporate or government body with a critical system).
        
               | einpoklum wrote:
               | I would feel that you are switching analogies...
               | 
               | This analogy is pretty valid, the in-flight-experiment
               | analogy is invalid.
        
             | matheusmoreira wrote:
             | You seem to think this experiment was performed on the
             | Linux kernel itself. It was not. This research was
             | performed on _human beings_.
             | 
             | It's irrelevant whether any bugs were ultimately introduced
             | into the kernel. The fact is the researchers deliberately
             | abused the trust of other human beings in order to
             | experiment on them. A ban on further contributions is a
             | very light punishment for such behavior.
        
               | einpoklum wrote:
               | You seem to think I condone the experiment because I
               | described an analogy as invalid.
        
           | mort96 wrote:
           | No, it's totally okay to feel sorry for good, conscientious
           | researchers and students at the University of Minnesota who
           | have been working on the kernel in good faith. It's sad that
           | the actions of irresponsible researchers and associated
           | review boards affect people who had nothing to do with
           | professor Lu's research.
           | 
           | It's not wrong for the kernel community to decide to blanket
           | ban contributions from the university. It obviously makes
           | sense to ban contributions from institutions which are known
           | to send intentionally buggy commits disguised as fixes. That
           | doesn't mean you can't feel bad for the innocent students and
           | professors.
        
             | unanswered wrote:
             | > good, conscientious researchers and students at the
             | University of Minnesota who have been working on the kernel
             | in good faith
             | 
             | All you have to do is look at the reverted patches to see
             | that these are either mythical or at least few and far in
             | between.
        
         | some_random wrote:
         | It definitely would suck to be someone at UMN doing legitimate
         | work, but I don't think it's reasonable to ask maintainers to
         | also do a background check on who the contributor is and who
         | they're advised by.
        
       | Quarrelsome wrote:
        | This is ridiculously unethical research. Despite the positive
        | underlying reasons, treating someone as a lab rat (in this case,
        | maintainers reviewing PRs) feels almost sociopathic.
        
         | jnxx wrote:
         | > Despite the positive underlying reasons
         | 
          | I think that is thinking too kindly of them. Sociopaths are
          | often very well-versed in giving "reasons" for what they do,
          | but at the core it is a power play.
        
         | Quarrelsome wrote:
         | how do I deserve -4 for this?
        
       | TacticalCoder wrote:
       | Or some enemy state pawn(s) trying to add backdoors and then use
       | the excuse of "university research paper" should they get caught?
        
       | dawnbreez wrote:
       | logged into my ancient hn account just to tell all of you that
       | pentesting without permission from higher-ups is a bad idea
       | 
       | yes, this is pentesting
        
       | PHDchump wrote:
        | lol this is also how Russia does their research, as with
        | SolarWinds. Do not attack a supply chain or do security research
        | without permission. They should be investigated by the FBI for
        | doing recon on a supply chain, to make sure they weren't trying
        | to do something worse. Minnesota leads the way in USA
        | embarrassment once again.
        
       | ansible wrote:
       | I still don't get the point of this "research".
       | 
       | You're just testing the review ability of particular Linux kernel
       | maintainers at a particular point in time. How does that
       | generalize to the extent needed for it to be valid research on
       | open source software development in general?
       | 
       | You would need to run this "experiment" hundreds or thousands of
       | times across most major open source projects.
        
         | scoutt wrote:
         | >the point of this "research".
         | 
         | I think it's mostly "finger pointing": you need one exception
         | to break a rule. If the rule is "open source is more secure
         | than closed source because community/auditing/etc.", now with a
         | paper demonstrating that this rule is not always true you can
         | write a nice Medium article for your closed-source product,
         | quoting said paper, claiming that your closed-source product is
         | more secure than the open competitor.
        
           | UncleMeat wrote:
           | I don't think this is correct. The authors have contributed a
           | large number of legitimate bugfixes to the kernel. I think
           | they really did believe that process changes can make the
           | kernel safer and that by doing this research they can
           | encourage that change and make the community better.
           | 
           | They were grossly wrong, of course. The work is extremely
           | unethical. But I don't believe that their other actions are
           | consistent with a "we hate OSS and want to prove it is bad"
           | ethos.
        
             | [deleted]
        
         | fouric wrote:
         | The Linux kernel is one of the largest open-source projects in
         | existence, so my guess is that they were aiming to show that
         | "because the Linux kernel review process doesn't protect
         | against these attacks, most open-source project will also be
         | vulnerable" - "the best can't stop it, so neither will the
         | rest".
        
           | ansible wrote:
           | But we have always known that someone with sufficient
           | cleverness may be able to slip vulnerabilities past reviewers
           | of whatever project.
           | 
           | Exactly how clever? That varies from reviewer to reviewer.
           | 
           | There will be large projects, with many people that review
           | the code, which will not catch sufficiently clever
           | vulnerabilities. There will be small projects with a single
           | maintainer that will catch just about anything.
           | 
           | There is a spectrum. Without conducting a wide-scale (and
           | unethical) survey with a carefully calibrated scale of
           | cleverness for vulnerabilities, I don't see how this is
           | useful research.
        
             | fouric wrote:
             | > But we have always known that someone with sufficient
             | cleverness may be able to slip vulnerabilities past
             | reviewers of whatever project.
             | 
              | ...which is why the interestingness of this project depends
              | on how clever they were - which I'm not able to evaluate,
              | but which someone would need to do before they could
              | possibly invalidate the idea.
             | 
             | > (and unethical)
             | 
             | How is security research unethical, exactly?
        
       | hardsoftnfloppy wrote:
        | Remember, the University of Minnesota was number 8 among top
        | .edu addresses dumped in the Ashley Madison hack.
       | 
       | Scum of the earth.
        
       | GNOMES wrote:
        | Am I missing how these patches were caught/flagged? Was it an
        | automated process, or someone manually reviewing the pull
        | requests?
        
       | kml wrote:
       | Aditya Pakki should be banned from any open source projects. Open
       | source depends on contributors who collectively try to do the
       | right thing. People who purposely try to veer projects off course
       | should face real consequences.
        
       | enz wrote:
       | I wonder if they can be sued (by the Linux Foundation, maybe) for
       | that...
        
       | soheil wrote:
        | First thing that comes to mind is The Underhanded C Contest [0],
        | where contestants try to introduce code that looks harmless but
        | actually is malicious, and even if caught should look like an
        | innocent bug at worst.
       | 
       | [0] http://www.underhanded-c.org
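        | 
        | For a taste of the genre, here's a minimal sketch in that style
        | - my own illustration, not an actual contest entry. The "bounds
        | check" reads as routine, but the signed comparison lets a
        | negative length through, and memcpy then converts it to a huge
        | size_t:
        | 
        |     #include <string.h>
        | 
        |     /* Looks like a harmless sanity check, but len < 0 slips
        |      * past the comparison and becomes ~SIZE_MAX in memcpy:
        |      * a heap overflow that still reads as an innocent bug. */
        |     int copy_packet(char *dst, const char *src, int len)
        |     {
        |         if (len > 512)          /* misses negative lengths */
        |             return -1;
        |         memcpy(dst, src, len);  /* implicit int -> size_t */
        |         return 0;
        |     }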
        
       | LegitShady wrote:
       | They should be reported to the authorities for attempting to
       | introduce security vulnerabilities into software intentionally.
       | This is not ok.
        
         | farisjarrah wrote:
         | What these researchers did was clearly and obviously wrong, but
         | is it actually illegal?
        
           | [deleted]
        
           | LegitShady wrote:
           | It should be reported anyways. This might be only some small
           | part of the malfeasance they're getting up to.
        
         | chews wrote:
          | Maybe it was those very authorities who wanted them there.
          | Lots of things have gotten patched and the backdoors don't
          | work as well as they used to... gotta get clever.
        
         | ActorNightly wrote:
         | The fact that both of the researchers seem to be of Chinese
         | origin should definitely raise some questions. Not the first
         | time things like this have been tried.
        
           | jlduan wrote:
           | this is classic national origin discrimination. racists are
           | coming out.
        
             | ActorNightly wrote:
             | Id have the same suspicion if they were Russian. Nothing to
             | do with race, everything to do with national affiliation.
        
               | jlduan wrote:
               | What you are proposing happened in WW2, it is called
               | Japanese American internment.
        
               | ActorNightly wrote:
                | Afaik, there weren't really any efforts by the Japanese
                | in WW2 to infiltrate and undermine the US.
               | 
               | China and Russia, in modern times, on the other hand....
        
               | jessaustin wrote:
               | Does the name "Aditya Pakki" really seem remotely
               | "Chinese" or "Russian"? You _might_ be a racist if you
               | can 't identify an obviously South Asian name as such.
               | Although, honestly, even racists should be able to figure
               | out the surname.
        
               | ActorNightly wrote:
               | I was talking about the names on the original paper.
        
         | amir734jj wrote:
         | I'm a PhD student myself. What he did is not okay! We study
         | computer science to do good not to harm.
        
         | dylan604 wrote:
          | What authorities would that be? The Department of Justice? The
         | same DoJ that is constantly pushing for backdoors to
         | encryption? Good luck with that! The "researchers" just might
         | receive junior agent badges instead.
        
       | donatj wrote:
       | I wish the title were clearer. Linux bans University of Minnesota
       | for sending buggy patches _on purpose_.
        
         | SAI_Peregrinus wrote:
         | Or just "Linux bans University of Minnesota for sending
         | malicious patches."
        
         | ajross wrote:
         | The term of art for an intentional bug that deliberately
         | introduces a security flaw is a "trojan" (from "Trojan Horse",
         | of course). UMN trojaned the kernel. This is indeed just wildly
         | irresponsible.
        
       | wglb wrote:
        | While it is easy to consider this unsportsmanlike, one might
        | view this as a supply chain attack. I don't particularly support
        | this approach, but consider for a moment that as a defender (in
        | the security team sense), you need to be aware of all possible
        | modes of attack and compromise. While the motives of this group
        | are clear, ascribing any particular motive to attackers in
        | general is likely to miss the mark.
       | 
        | For supply-chain-type attacks, there isn't an easy answer.
        | Classical methods left both the SolarWinds and Codecov attacks in
        | place for far too many days.
        
       | ilamont wrote:
        | Reminded me of a story from more than a decade ago about an
        | academic who conducted a series of "breaching experiments" in
        | City of Heroes/City of Villains to study group behavior, basically
       | breaking the social rules (but not the game rules) without other
       | participants' or the game studio's knowledge. It was discussed on
       | HN in 2009 (https://news.ycombinator.com/item?id=690551)
       | 
       | Here's how the professor (a sociologist) described his
       | methodology:
       | 
       |  _These three sets of behaviors - rigidly competitive pvp tactics
       | (e. g., droning), steadfastly uncooperative social play outside
       | the game context (e. g., refusing to cooperate with zone
       | farmers), and steadfastly uncooperative social play within the
       | game context (e. g., playing solo and refusing team invitations)
       | - marked Twixt's play from the play of all others within RV._
       | 
       | Translation: He killed other players in situations that were
       | allowed by the game's creators but frowned upon by the majority
       | of real-life participants. For instance, "villains" and "heroes"
       | aren't supposed to fraternize, but they do anyway. When "Twixt"
       | happened upon these and other situations -- such as players
       | building points by taking on easy missions against computer-
       | generated enemies -- he would ruin them, often by "teleporting"
       | players into unwinnable killzones. The other players would either
       | die or have their social relations disrupted. Further, "Twixt"
       | would rub it in by posting messages like:
       | 
       |  _Yay, heroes. Go good team. Vills lose again._
       | 
       | The reaction to the experiment and to the paper was what you
       | would expect. The author later said it wasn't an experiment in
       | the academic sense, claiming:
       | 
       |  _... this study is not really an experiment. I label it as a
       | "breaching experiment" in reference to analogous methods of
       | Garfinkel, but, in fact, neither his nor my methods are
       | experimental in any truly scientific sense. This should be
       | obvious in that experimental methods require some sort of control
       | group and there was none in this case. Likewise, experimental
       | methods are characterized by the manipulation of a treatment
       | variable and, likewise, there was none in this case._
       | 
       | Links:
       | 
       | http://www.nola.com/news/index.ssf/2009/07/loyola_university...
       | 
       | https://www.ilamont.com/2009/07/academic-gets-rise-from-brea...
        
       | werber wrote:
        | Could this have just been someone trying to cover up being a
        | mediocre programmer in academia by framing it through a lens that
        | would work in the academy, with some nonsense, vaguely liberal-
        | arts-sounding social-experiment premise?
        
       | motohagiography wrote:
       | This isn't friendly pen-testing in a community, this is an attack
       | on critical infrastructure using a university as cover. The
       | foundation should sue the responsible profs personally and seek
       | criminal prosecution. I remember a bunch of U.S. contractors said
       | they did the same thing to one of the openbsd vpn library
       | projects about 15 years ago as well.
       | 
       | What this professor is proving out is that open source and
       | (likely, other) high trust networks cannot survive really
       | mendacious participants, but perhaps by mistake, he's showing how
       | important it is to make very harsh and public examples of said
       | actors and their mendacity.
       | 
       | I wonder if some of these or other bug contributors have also
       | complained that the culture of the project governance is too
       | aggressive, that project leads can create an unsafe environment,
       | and discourage people from contributing? If counter-intelligence
       | prosecutors pull on this thread, I have no doubt it will lead to
       | unravelling a much broader effort.
        
         | aluminum96 wrote:
         | Not everything can be fixed with the criminal justice system.
         | This should be solved with disciplinary action by the
         | university (and possibly will be [1]).
         | 
         | [1] https://cse.umn.edu/cs/statement-cse-linux-kernel-
         | research-a...
        
         | frombody wrote:
         | I am not knowledgeable enough to know if this intent is
         | provable, but if someone can frame the issue appropriately, it
         | feels like it could be good to report this to the FBI tip line
         | so it is at least on their radar.
        
         | eatbitseveryday wrote:
         | > The foundation should sue the responsible profs personally
         | and seek criminal prosecution.
         | 
         | This is overkill and uncalled for.
        
           | motohagiography wrote:
           | Organizing an effort, with a written mandate, to knowingly
           | introduce kernel vulnerabilities, through deception, that
           | will spread downstream into other Linux distributions, likely
           | including firmware images, which may not be patched or
           | reverted for months or years - does not warrant a criminal
           | investigation?
           | 
           | The foundation should use recourse to the law to signal they
           | are handling it, if only to prevent these profs from being
           | mobbed.
        
           | totalZero wrote:
           | How exactly is a lawsuit overkill? If the researchers are in
           | the right, the court will find in their favor.
        
       | dang wrote:
       | This thread is paginated, so to see the rest of the comments you
       | need to click More at the bottom of the page, or like this:
       | 
       | https://news.ycombinator.com/item?id=26887670&p=2
       | 
       | https://news.ycombinator.com/item?id=26887670&p=3
       | 
       | https://news.ycombinator.com/item?id=26887670&p=4
       | 
       | https://news.ycombinator.com/item?id=26887670&p=5
       | 
       | https://news.ycombinator.com/item?id=26887670&p=6
       | 
       | (Posts like this will go away once we turn off pagination. It's a
       | workaround for performance and we're working on fixing that.)
       | 
       | Also, https://www.neowin.net/news/linux-bans-university-of-
       | minneso... gives a bit of an overview. (It was posted at
       | https://news.ycombinator.com/item?id=26889677, but we've merged
       | that thread hither.)
        
       | wolverine876 wrote:
        | I don't see the difference between these and other 'hackers',
        | white-hat, black-hat, etc. The only difference I see is that the
        | institution tested, Linux, is beloved here.
       | 
       | Usually people are admired here for finding vulnerabilities in
       | all sorts of systems and processes. For example, when someone
       | submits a false paper to a peer-reviewed journal, people around
       | here root for them; I don't see complaints about wasting the time
       | and violating the trust of the journal.
       | 
       | But should one of our beloved institutions be tested - now it's
       | an outrage?
        
         | ahepp wrote:
          | The outrage does seem out of place to me. I think it's fair
          | (even reasonable) for the kernel maintainers to ban those
          | responsible, but I'm not sure why everyone here is getting so
          | offended about fairly abstract harms like "wasting the time of
          | the maintainers".
        
       | robrtsql wrote:
       | Very embarrassed to see my alma mater in the news today. I was
        | hoping these were just some grad students going rogue, but it
        | looks like the IRB even allowed this 'research' to happen.
        
         | deelowe wrote:
          | It's very likely the IRB was misled. Don't feel too bad. I saw
          | in one of the comments that the IRB was told that the
          | researchers would be "sending emails," which seems to be an
          | intentionally obtuse phrasing for submitting malformed
          | kernel patches.
        
       | davidkuhta wrote:
       | Anyone else find the claim that "This was not human research" as
       | erroneous as I do?
        
       | hn3147 wrote:
       | This would have been way more fun if they had a Black trans Womxn
       | submit the bogus patches. The blowback to the White maintainer's
       | reply would have been hilarious * popcorn *
        
       | maccard wrote:
       | Is there a more readable version of this available somewhere? I
       | really struggle to follow the unformatted mailing list format.
        
         | Synaesthesia wrote:
         | Just keep hitting the "next" link to follow the thread.
        
           | maccard wrote:
           | The next link is one hyperlink buried in the middle of the
           | wall of text, and simply appends the new message to the
           | existing one. It also differentiates between prev and parent?
           | 
           | It's super unclear.
        
             | scbrg wrote:
              | The page has four sections, divided by <hr> tags:
             | 
             | 1) The email message, with a few headers included
             | 
             | 2) A thread overview, with all emails in the thread
             | 
             | 3) Instructions on how to reply
             | 
             | 4) Information about how to access the list archives.
             | 
             | You need only care about (1) and (2). The difference
             | between prev and parent is indicated by the tree view in
             | (2). The _previous_ one is the previous one in the tree,
             | which might not necessarily be the parent if the parent has
             | spawned earlier replies.
        
             | azernik wrote:
             | Scroll down a bit farther to see the full comment tree.
             | 
             | "Next" goes approximately down the tree in the order it's
             | displayed on the page, by depth-first search.
             | 
             | "Prev" just reverses the same process as "Next".
             | 
             | "Parent" differs from "prev" in that it goes to the parent
             | e-mail even if this email has earlier siblings.
             | 
             | (Generally, I just scroll down to the tree view and click
             | around manually.)
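              | 
              | As a rough model - my sketch, not lore's actual code -
              | "Next" is just a preorder depth-first walk over a first-
              | child/next-sibling reply tree:
              | 
              |     #include <stdio.h>
              | 
              |     /* Reply tree in first-child / next-sibling form. */
              |     struct msg {
              |         const char *subject;
              |         struct msg *child;    /* first reply */
              |         struct msg *sibling;  /* next reply to parent */
              |     };
              | 
              |     /* A message, then its replies, then its later
              |      * siblings -- preorder depth-first traversal. */
              |     void next_order(const struct msg *m)
              |     {
              |         for (; m; m = m->sibling) {
              |             printf("%s\n", m->subject);
              |             next_order(m->child);
              |         }
              |     }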
        
       | karlding wrote:
       | The University of Minnesota's Department of Computer Science and
       | Engineering released a statement [0] and "suspended this line of
       | research".
       | 
       | [0] https://cse.umn.edu/cs/statement-cse-linux-kernel-
       | research-a...
        
         | bjornsing wrote:
         | They don't seem all that happy about it. :)
        
           | elliekelly wrote:
           | I don't read any emotion in that statement whatsoever.
        
           | kfarr wrote:
            | > We take this situation extremely seriously. We have
            | > immediately suspended this line of research.
            | 
            | Yeah, those department heads seemed pretty pissed.
        
       | up2isomorphism wrote:
        | From an outsider's perspective, the main question is: does this
        | expose an actual weakness in the Linux development model?
        | 
        | From what I understand, the answer seems to be "yes".
       | 
        | Of course, it is understandable that GKH is frustrated, and if
        | his community does not like someone pointing out this issue,
        | that is OK too.
        | 
        | However, one researcher does not represent the whole university,
        | so it seems immature to take this out on other, unrelated people
        | just because you can.
        
         | space_rock wrote:
          | The university has an ethics board to review experiments, so
          | what experiments get approved reflects on the whole university.
        
           | up2isomorphism wrote:
            | If you have actually been in graduate school, you will know
            | it is practically impossible to review details like this;
            | otherwise nobody could do any real work.
            | 
            | Besides, how would you test the idea without doing what they
            | did? Can you show us a way?
        
       | lamp987 wrote:
       | Unethical and harmful.
        
       | dghlsakjg wrote:
       | Like all research institutions, University of Minnesota has an
       | ethics committee.
       | 
       | https://integrity.umn.edu/ethics
       | 
       | Feel free to write to them
        
       | rurban wrote:
        | I'd really like to now review similar patches in FreeRTOS,
        | FreeBSD and such. Their messages and fixes all follow a certain
        | scheme, which should be easy to detect.
        | 
        | At least both of them are free from such @umn.edu commits
        | with fantasy names.
        
       | fellellor wrote:
        | What an effing idiot! And then he turns around and claims
        | bullying! At this point I'm not even surprised. Claiming
        | victimhood is a very effective move in US academia these days.
        
       | autoconfig wrote:
        | A lot of people seem to consider this meaningless and a waste of
        | time. If we disregard the problems with the patches reaching
        | stable branches for a second (which clearly is problematic), what
       | is the difference between this and companies conducting red team
       | exercises? It seems to me a potentially real and dangerous attack
       | vector has been put under the spotlight here. Increasing
       | awareness around this can't be all bad, particularly in a time
       | where state sponsored cyber attacks are getting ever more severe.
        
       | seanieb wrote:
       | Regardless of their methods, I think they just proved the kernel
        | security review process is non-existent, either in the form of
        | static analysis or human review. What's being done to address
       | those issues?
        
         | foobar33333 wrote:
          | > What's being done to address those issues?
         | 
         | Moving to rust to limit the scope of possible bugs.
        
           | metalliqaz wrote:
           | this is a dangerous understanding of Rust. Rust helps to
           | avoid certain kinds of bugs in certain situations. Bugs are
           | very much possible in Rust and the scope of bugs usually
           | depends more on the system than the language used to write
           | it.
        
             | [deleted]
        
             | virgilp wrote:
             | I get where you're coming from, but I disagree. They
             | actually prey on seemingly small changes that have large
             | "unintended"/non-obvious side-effects. I argue that finding
             | such situations is much much harder in Rust than in C. Is
             | it _impossible_? Probably not (especially not in unsafe
             | code), but I do believe it limits the attack surface quite
             | a lot. Rust is not a definitive solution, but it can be a
             | (big) part of the solution.
        
               | metalliqaz wrote:
                | yes it definitely limits the attack surface. remember
                | that in systems programming there are bugs that cause
                | errors in computation, which Rust is pretty good at
                | protecting against; but there are also bugs which cause
                | unintended behavior, usually from incorrect or
                | incomplete requirements, or implementation edge cases.
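                | 
                | to make the two classes concrete, here's an illustrative
                | C sketch (my example, with hypothetical function names):
                | the first bug is one Rust's borrow checker rejects at
                | compile time; the second is perfectly "safe" in any
                | language and wrong in all of them.
                | 
                |     #include <stdlib.h>
                | 
                |     /* Memory-safety bug: compiles cleanly in C;
                |      * Rust rejects the equivalent at compile time. */
                |     int *dangling(void)
                |     {
                |         int *p = malloc(sizeof *p);
                |         if (!p)
                |             return NULL;
                |         *p = 42;
                |         free(p);
                |         return p;   /* caller reads freed memory */
                |     }
                | 
                |     /* Logic bug: memory-safe everywhere, still wrong.
                |      * The discount is subtracted twice. */
                |     long price_cents(long base, int discount_pct)
                |     {
                |         long d = base * discount_pct / 100;
                |         return base - d - d;   /* spec: subtract once */
                |     }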
        
         | arbitrage wrote:
         | The bugs were found. Seems like it works to me.
        
         | st_goliath wrote:
         | > non-existent... static analysis .... Whats being done to
         | address those issues?
         | 
         | Static analysis is being done[1][2], in addition, there are
         | also CI test farms[3][4], fuzzing farms[5], etc. Linux is a
         | project that enough large companies have a stake in that there
         | are some willing to throw resources like this at it.
         | 
          | Human review is supposed to be done through the mailing list
          | submission process. How well this works varies, in my
          | experience, from ML to ML.
         | 
         | [1] https://www.kernel.org/doc/html/v4.15/dev-
         | tools/coccinelle.h...
         | 
         | [2] https://scan.coverity.com/projects/linux
         | 
         | [3] https://cki-project.org/
         | 
         | [4] https://bottest.wiki.kernel.org/
         | 
         | [5] https://syzkaller.appspot.com/upstream
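          | 
          | For a hedged illustration of the kind of shape such tools
          | pattern-match on - my own userland sketch, not a kernel
          | excerpt - consider a buffer freed on an error path and then
          | freed again by the caller's cleanup:
          | 
          |     #include <stdlib.h>
          | 
          |     struct dev { char *buf; };
          | 
          |     /* The error path frees the buffer... */
          |     static int setup(struct dev *d)
          |     {
          |         d->buf = malloc(64);
          |         if (!d->buf)
          |             return -1;
          |         /* pretend a later init step failed */
          |         free(d->buf);
          |         return -1;
          |     }
          | 
          |     int main(void)
          |     {
          |         struct dev d = { 0 };
          |         if (setup(&d) < 0)
          |             free(d.buf);  /* ...freed again: double free */
          |         return 0;
          |     }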
        
         | [deleted]
        
         | viraptor wrote:
         | Not sure why you think they proved that. Human review was done
         | on the same day the patch was submitted and pointed out that
         | it's wrong: https://lore.kernel.org/linux-
         | nfs/20210407153458.GA28924@fie...
        
           | meitham wrote:
           | Human review was done after the patch was merged into stable,
           | hence reverting was necessary. I'm confused why these patches
           | don't get treated as merge requests and get reviewed prior to
           | merging!
        
             | rcxdude wrote:
              | This patch wasn't. Other patches from the university had
              | made it into stable and are likely to be reverted, not
              | because of known problems with the patches, but because of
              | the ban.
        
       | kwdc wrote:
       | It would be fascinating to see the ethics committee exemption. I
       | sense there was none.
       | 
       | Or is this kind of experiment deemed fair game? Red vs blue team
       | kind of thing? Penetration testing.
       | 
        | But if it were me in this situation, I'd ban them for the ethics
        | violation as well. Acting like an evildoer means you might get
        | caught... and punished. I found the cease-and-desist email
        | particularly bad behavior. If that student was lying then the
        | university will have to take real action. Reputation damage and
        | all that. Surely at least an academic reprimand.
       | 
       | I'm sure there's plenty of drama and context we don't know about.
        
         | klodolph wrote:
          | The ethics committee issued a post-hoc exemption after the
          | paper was published.
        
           | tgbugs wrote:
           | Wow. That is a flagrant violation of research ethics by
           | everyone involved. UMN needs to halt anything even close to
           | human subjects research until they get their IRB back under
           | control, who knows what else is going on on campus that has
           | not received prior approval. Utter disaster.
        
             | klodolph wrote:
             | Someone made a good case that the IRB may have just been
             | doing their job, according to their guidelines for what is
             | exempt from review & what is "research on human subjects".
             | 
             | Nevertheless it is clear that UMN does not have sufficient
             | controls in place to prevent this kind of unethical
             | behavior. The ban & patch reversions may force the issue.
        
               | detaro wrote:
               | Individual groups at unis are very independent, little
               | oversight is common.
               | 
               | UMN CS associate department head on this: https://twitter
               | .com/lorenterveen/status/1384965111014641667 (TL;DR: they
               | didn't hear about this before because each group does its
               | thing, leadership doesn't get involved in IRB process, in
               | his opinion IRB failed - situation analogous to cases
               | known to be problematic in other subfields https://twitte
               | r.com/lorenterveen/status/1384955467051454466 )
        
               | [deleted]
        
         | kwdc wrote:
         | I didn't read this bit: "The IRB of University of Minnesota
         | reviewed the procedures of the experiment and determined that
         | this is not human research. We obtained a formal IRB-exempt
         | letter"
         | 
         | Um. Ok.
        
           | elliekelly wrote:
           | How does an IRB usually work? Is it the same group of people
           | reviewing all proposals for the entire university? Or are
           | there subject-matter experts (and hopefully lawyers) tapped
           | to review proposals in their specific domain? Applying
           | "ethics" to a proposal is meaningless without understanding
           | not just how they plan to implement it but how it _could be_
           | implemented.
        
           | steelframe wrote:
           | Some people are questioning whether banning the entire
           | university is an appropriate response. It sounds to me like
           | there are systemic institutional issues that they need to
           | address, and perhaps banning them until they can sort those
           | out wouldn't be an entirely unreasonable thing to do.
        
             | kwdc wrote:
              | I think banning them for now is appropriate. It's a shot
              | across their bow to let them know they have done something
              | wrong. Moving forward, if it were me I'd later re-evaluate
              | such a wide ban because of the collateral damage. But at
              | the same time, there needs to be redress for wrongdoing,
              | since they were actually caught. I'd definitely not re-
              | evaluate until an apology and some kind of "we won't waste
              | time like this again" agreement, or at least an agreed-
              | upon understanding, is in place. Whatever shape that needs
              | to be.
              | 
              | As for systemic issues, I'm not sure. But moving forward
              | they'd want to confirm there aren't glaring omissions that
              | would let this happen again. Giving them the benefit of
              | the doubt might imply these are isolated cases. (But both
              | of them?! Perhaps isolated to a small group of academics.)
             | 
             | Messy situation.
        
             | totalZero wrote:
             | The university should be policing its researchers. Banning
             | the whole university reinforces the incentive to do so.
             | Otherwise the fact that a contribution comes from a
             | university researcher would bear no added trust versus a
             | layperson.
        
             | MeinBlutIstBlau wrote:
              | The ban was 100% political. Greg wanted to shine the
              | spotlight as negatively as possible on the bad-faith actors
              | so enough pressure can be put on them to be dismissed. I
              | guarantee he'll lift it the moment these people are
              | let go.
        
         | jedimastert wrote:
          | I'm gonna guess the committee didn't realize the "patch
          | process" was a manual review of each patch. The way it's
          | worded in the paper, you'd think they were testing some sort
          | of automated integration pipeline or something.
        
         | jimmar wrote:
         | Institutional review boards are notorious for making sure that
         | all of the i's are dotted and the t's are crossed on the myriad
         | of forms they require, but without actually understanding the
         | nature of the research they are approving.
        
       | [deleted]
        
       | InsomniacL wrote:
       | Seems to me they exposed a vulnerability in the way code is
       | contributed.
       | 
        | If this was Facebook and their response was:
        | 
        | > ~"stop wasting our time"
        | 
        | > ~"we'll report you"
        | 
        | the responses here would be very different.
        
       | znpy wrote:
       | I think it's a fair measure, albeit drastic.
       | 
        | What happens if any of those patches ends up in a kernel release?
       | 
       | It's like setting random houses on fire just to test the
       | responsiveness of local firefighters.
        
       | iou wrote:
        | Did Linus comment on any of this yet? :popcorn:
        
       | [deleted]
        
       | causality0 wrote:
       | _I respectfully ask you to cease and desist from making wild
       | accusations that are bordering on slander._
       | 
       | Responding properly to that statement would require someone to
       | step out of the HN community guidelines.
        
       | [deleted]
        
       | theflyinghorse wrote:
       | "It's just a prank, bro!"
       | 
        | Incredible that the university researchers decided this was a
        | good idea. Has no one at the university voiced concern that
        | perhaps this is a bad idea?
        
       | waihtis wrote:
        | Should've at least sought approval from the maintainer side, and
        | perhaps tried to orchestrate it so that the patch approver didn't
        | have information about it, but some part of the org did.
        | 
        | In a network security analogy, this is just unsolicited hacking
        | vs. a sanctioned penetration test, which is what it claims to be.
        
         | wang_li wrote:
         | This is no better. All it does is increase the size of the
         | research team. You're still doing research on non-consenting
         | participants.
        
       | ddingus wrote:
       | _plonk_
       | 
       | Aaaaand into the kill file they go.
       | 
       | Been a while since I last saw a proper plonk.
        
         | burnished wrote:
         | Can you link to any others? Personal curiosity.
        
       | eatbitseveryday wrote:
       | Clarification from their work that was posted on the professor's
       | website:
       | 
       | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
        
       | noxer wrote:
        | Someone does voluntary work and people think that gives them some
        | ethical privilege to be asked before someone puts their work to
        | the test? Sure, it would be nice to ask, but at the same time it
        | renders the testing useless. They wanted to see how the review
        | goes if the reviewers aren't aware that someone is testing them.
        | You can't do this with consent.
        | 
        | The wasting-time argument is nonsense too; it's not like they
        | did this thousands of times, and besides that, reviewing
        | intentionally bad code is not wasting time. It is just as
        | productive as reviewing "good" code, and together with the
        | follow-up fix it should be even more valuable work. It not only
        | adds a patch, it also makes the reviewer better.
        | 
        | Yeah, it ain't fun if people trick you or point out you did not
        | succeed in what you tried to do. But instead of playing the
        | victim and playing the unethical-human-experiment card, maybe
        | focus on improving.
        
         | francasso wrote:
         | Agreed, in fact the review process worked and now they are
         | going to ban all contributions from that university, as it
         | should be. I think it all worked out perfectly
        
           | noxer wrote:
            | Pathetic, it did not work at all; they told them whenever
            | they missed a planted bug.
        
         | rcxdude wrote:
         | Or you could cease to do the voluntary work for them, because
         | they clearly are not contributing to your goals. This is what
         | the kernel maintainers have chosen and they have just as much
         | right to do so. And you can perfectly well do this with
         | consent, there's a wealth of knowledge from psychology and
         | sociology on how you can run tests on people with consent and
         | without invalidating the test.
        
           | noxer wrote:
            | I never said they cannot stop reviewing the code. They can
            | do whatever the heck they want. I'm not gonna tell a
            | volunteer what they can and cannot do. They don't need
            | anyone's consent to ignore submissions, just as those who
            | submit don't need their consent. It's voluntary: if you
            | don't see a benefit you are free to stop, but not free to
            | tell other volunteers what to do and not do.
        
         | mrleinad wrote:
          | A far better approach would be to study patch submissions and
          | see how many bugs were introduced as a result of those patches
          | being accepted and applied, without any interference of
          | any kind.
         | 
         | Problem with that is it's a lot of work and they didn't want to
         | do it in the first place.
        
           | noxer wrote:
            | Exactly, they just seem mad and blame others for
            | "wrongdoing" instead of acknowledging that they need to
            | improve.
        
             | mrleinad wrote:
             | You misunderstood me. I said the ones who tried to "see if
             | the bugs would be detected or not in new submitted patches"
             | are the lazy ones who instead of analyzing the existing
             | code and existing bugs, attempted to submit new ones.
             | Actually working on analyzing existing data would require
             | more work than they were willing to do for their paper.
        
               | noxer wrote:
                | They had no intent to find vulnerabilities in the code;
                | they intended to find/prove vulnerabilities in the
                | review process. Totally different things.
        
               | mrleinad wrote:
               | They could do that by using all the existing patches and
               | reported bugs already in the codebase. But that would've
               | required them to work more than if they submitted new
               | code with new bugs. They chose to effectively waste other
               | people's time instead of putting in the work needed to
               | obtain the analysis they wanted.
        
         | plesiv wrote:
         | > They wanted to see how the review goes if they aren't aware
         | that someone is testing them. You cant do this with consent.
         | 
          | Ridiculous. Does the same apply to pentesting a bank or a
          | government agency? If you wanted to pentest those, of course
          | you'd get approval from an executive with the power to sanction
          | it. Why would Linux development be an exception? Just ask GKH
          | or someone to allow you to do this.
        
           | junippor wrote:
           | It's just a prank bro!
        
           | noxer wrote:
            | Ridiculous comparison indeed. There was no pentesting going
            | on. Submitted code does not attack or harm any running
            | system, and whoever uses it does so completely voluntarily.
            | I don't need anyone's approval for that. The license already
            | states that I'm not liable in any way for what you do with
            | it.
        
         | Chilinot wrote:
         | > Someone does voluntary work and people think that gives them
         | some ethical privilege to be asked before someone puts their
         | work to the test?
         | 
         | Yes. Someone sees the work provided to the community for free
         | and thinks that gives them some ethical privilege to put that
         | work to the test?
        
           | noxer wrote:
           | I have no clue what you try to say, sorry.
        
       | tediousdemise wrote:
       | This not only erodes trust in the University of Minnesota, but
       | also erodes trust in the Linux kernel.
       | 
       | Imagine how downstream consumers of the kernel could be affected.
       | The kernel is used for some extremely serious applications, in
       | environments where updates are nonexistent. These bad patches
       | could remain permanently in situ for mission-critical
       | applications.
       | 
       | The University of Minnesota should be held liable for any damages
       | or loss of life incurred by their reckless decision making.
        
       | kemonocode wrote:
        | I have to question the true motivations behind this. Just a
        | "mere" research paper? Or is there an ulterior motive, such as
        | undermining Linux kernel development, taking advantage of the
        | perceived hostility of the LKML to make a big show of it and
        | castigate and denounce those elitist Linux kernel devs?
       | 
       | So I hear tinfoil is on sale, mayhaps I should stock up.
        
       | pertymcpert wrote:
       | I want to know how TF the PC at the IEEE conference decided this
       | was acceptable?
        
       | a-dub wrote:
       | so basically they demonstrated that the oss security model, as it
       | operates today, is not working as it had been previously hoped.
       | 
       | it's good work and i'm glad they've done it, but that's
       | depressing.
       | 
       | now what?
        
       | rubyn00bie wrote:
       | This is supremely fucked up and I'd say is borderline criminal.
       | It's really lucky asshole researchers like this haven't caused a
       | bug that cost billions of dollars, or killed someone, because
       | eventually shit like this will... and holy shit will "it was just
       | research" do nothing to save them.
        
         | goatinaboat wrote:
         | It's just a shame there is no mechanism in the license to
         | withdraw permission for this so-called university to use Linux
         | at all
        
           | SuchAnonMuchWow wrote:
            | It is by design; not having such a mechanism is one of the
            | goals of free software: free for everyone, no exceptions.
            | 
            | See the JSON.org license, which says it "shall be used for
            | Good, not Evil" and is not considered free software.
        
             | ar_lan wrote:
             | "Free" being the confusing word here, because it has two
             | meanings, and _often_ are used without context in open
             | source software.
             | 
             | Typically, OSS is _both_ definitions at the same time -
             | free monetarily, and  "free" as in "freedom" to use. JSON
             | is an interesting case of "free" monetarily but not totally
             | "free for use".
        
           | malka wrote:
            | A shame today, a godsend another day.
        
           | andrewzah wrote:
           | That is expressly the opposite goal of open source. If you
           | arbitrarily say foo user cannot use your software, then it is
           | NOT open source. That's more like source-available.
           | 
           | Nobody would continue to use linux if they randomly banned
           | people from using it, regardless of the reason.
           | 
           | [side note] This is why I despise the term "open source". It
           | obscures the important part of user freedom. The term
           | "Free/libre software" is not perfect, but it doesn't obscure
           | this.
        
         | pas wrote:
         | How come there's no ethical review for research that interacts
         | with people? (I mean it's there in medicine and psychology, and
         | probably for many economics experiments too.)
         | 
         | edit: oh, it seems they got an exemption, because it's software
         | research - https://news.ycombinator.com/item?id=26890084 :|
        
           | BolexNOLA wrote:
           | I can't imagine it will stay that way forever. As more and
           | more critical tools and infrastructure go digital, allowing
           | people to just whack away at them or introduce malicious/bad
           | code in the name of research is just going to be way too big
           | of a liability.
        
             | yellowyacht wrote:
              | Too bad this stuff does not go on your "permanent record".
        
         | Klwohu wrote:
         | Well it remains to be seen if they're foreign intelligence
         | agents, doesn't it?
        
           | splistud wrote:
           | very insidious foreign actors that publish papers about their
           | op
        
             | Klwohu wrote:
             | They call that a legend in the intelligence community.
        
         | [deleted]
        
         | TheCondor wrote:
         | I agree, I think a more broad ban might be in order. I don't
         | know that I'd want anyone from this "group" contributing to
         | _anything_.
        
         | [deleted]
        
         | ngngngng wrote:
         | This is actually just the elitist version of "it's just a
         | prank, bro!"
         | 
         | And you're right, bugs in the linux kernel could have serious
         | consequences.
        
         | TrackerFF wrote:
         | I agree that it's bad behavior, but if you have billions of
         | dollars resting on open-source infrastructure, you better know
         | the liabilities involved.
        
         | Veserv wrote:
          | Any organization that would deploy software that could kill
          | someone without carefully reviewing it for fitness of purpose
          | is criminally irresponsible, especially when the candidate
          | software waives all liability and any guarantee that it is fit
          | for purpose, as stated in sections 11 and 12 of the GPLv2 [1].
          | Though it is scummy to deliberately introduce defects into an
          | OSS project, any defects that result in a failure to perform
          | are both ethically and legally completely on whoever is using
          | Linux in a capacity that can cost billions of dollars or kill
          | someone.
         | 
         | [1] https://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
        
         | m3kw9 wrote:
         | So aren't there tests and code reviews before pushing them to
         | the Stable code base?
        
           | megous wrote:
            | Yes, there are. Will they find everything? No. Would I be
            | pissed if this caused silent corruption of my filesystem, or
            | some such crap that's hard to test, due to this uni trying
            | to push memory-misuse vulnerabilities into some obscure
            | kernel driver that is not normally tested much, but that I
            | use on my SBC farm? Yes.
            | 
            | Maybe they had some plan for an immediate revert when a
            | bogus patch got into stable, but some people update stable
            | quickly, for good reason, and it's just not good to do this
            | research this way.
        
       | ArcturianDeath wrote:
       | See also, https://boards.4chan.org/pol/thread/317988167
        
       | moron4hire wrote:
       | It's funny. When someone like RMS or ESR or (formerly) Torvalds
       | is "disrespectful" to open source maintainers, this is called
       | "tough love", but when someone else does it, it's screamed about
       | like it's some kind of high crime, with calls to permanently
       | cancel access for all people even loosely related to the original
       | offender.
        
         | mafuy wrote:
         | I don't see how this is related. Being rude in tone, and
         | wasting someone's time, are different things. You make it sound
         | like they are the same.
         | 
         | But the opposite of what you propose is true. The maintainers
         | are annoyed by others wasting their time in other cases as well
         | as in this case - it's coherent behavior. And in my opinion,
         | it's sensible to be annoyed when someone wasted your time - be
         | it by lazily made patches or by intentionally broken patches.
        
           | moron4hire wrote:
           | I'm not the one who is making them sound like the same thing.
           | There are literally people in this thread, saying that
           | "wasting time" is being "disrespectful" to the maintainers.
        
         | ajuc wrote:
         | https://www.youtube.com/watch?v=I7Umw70Yulw
        
       | mabbo wrote:
       | I think Greg KH would have been wise to add a time limit on this
       | ban. Make it a 10-year block, for example, rather than one with
       | no specific end-date.
       | 
       | Imagine what happens 25 years from now as some ground-breaking
       | security research is being done at Minnesota, and they all groan:
       | "Right, shoot, back in 2021 some dumb prof got us banned forever
       | from submitting patches".
       | 
        | Is there a mechanism for the University of Minnesota to appeal,
        | someday? Even murderers have parole hearings, eventually.
        
         | thedanbob wrote:
         | Presumably they could just talk to the maintainers at that time
         | and have a reasonable discussion.
        
         | bluGill wrote:
         | It isn't hard to get a gmail type address and submit from
         | there.
        
       | omar12 wrote:
        | This raises the question: "have there been state-sponsored
        | efforts to overwhelm open source maintainers with the intent of
        | sneaking vulnerabilities into software applications?"
        
       | philsnow wrote:
       | This seems like wanton endangerment. Kernels get baked into
       | medical devices and never, ever updated.
       | 
       | I would be livid if I found that code from these "researchers"
       | was running in a medical device that a family member relied upon.
        
       | CountSessine wrote:
       | I have to wonder what's going to happen to the advisor who
       | oversaw this research. This knee-caps the whole department when
       | conducting OS research and collaboration. If this isn't
       | considered a big deal in the department, it should be. I
       | certainly wouldn't pursue a graduate degree there in OS research
       | now.
        
       | gjvc wrote:
       | see also
       | 
       | https://twitter.com/UMNComputerSci/status/138494868382169497...
        
       | [deleted]
        
       | LordN00b wrote:
       | * plonk * Was a very nice touch.
        
         | barrkel wrote:
         | It's an acronym - Person Leaving Our Newsgroup; Kill-filed.
        
       | darksaints wrote:
       | As a side note to all of the discussion here, it would be really
       | nice if we could find ways to take all of the incredible linux
        | infrastructure, and repurpose it for seL4. It is pretty scary
       | that we've got ~30M lines of code in the kernel and the primary
       | process we have to catch major security bugs is to rely on the
       | experienced eyes of Greg KH or similar. They're awesome, but
       | they're also human. It would be much better to rely on
       | capabilities and process isolation.
        
       | [deleted]
        
       | jedimastert wrote:
       | Out of curiosity, what would be an actually good way to poke at
       | the pipeline like this? Just ask if they'd OK a patch w/o
       | actually submitting it? A survey?
        
         | roca wrote:
         | Ask Linus to approve it.
        
           | throwawaybbq1 wrote:
            | No... Linus can approve it for himself. Linus cannot approve
            | such a thing on behalf of other maintainers.
        
             | mafuy wrote:
             | Agree. Since these researchers did not even ask him, they
             | did not fulfill even the most basic requirement. If, and
             | only if, he approves, then we can talk about who else needs
             | to be in the know, etc.
        
         | throwawaybbq1 wrote:
         | This is a good question. You would recruit actual maintainers,
         | [edit: or whoever is your intended subject pool] (who would
         | provide consent, perhaps be compensated for their time). You
         | could then give them a series of patches to approve (some being
         | bug free and others having vulnerabilities).
         | 
         | [edit: specifying the population of a study is pretty
         | important. Getting random students from the University to
         | approve your security patch doesn't make sense. Picking
         | students who successfully completed a computer security course
         | and got a high grade is better than that but again, may not
         | generalize to the real world. One of the most impressive ways I
         | have seen this being done by grad students was a user study by
         | John Ousterhout and others on Paxos vs. Raft. IIRC, they wanted
         | to claim that Raft was more understandable or led to fewer
         | bugs. Their study design was excellent. See here for an
         | example:
         | https://www.youtube.com/watch?v=YbZ3zDzDnrw&ab_channel=Diego...
         | ]
        
           | ret2plt wrote:
           | This wouldn't really be representative. If people know they
           | are being tested, they will be much more careful and cautious
           | than when they are doing "business as usual".
        
           | dataflow wrote:
           | If an actual maintainer (i.e. an "insider") approves your
           | bug, then you're not testing the same thing (i.e. the impact
           | an _outsider_ can have), are you?
        
             | throwawaybbq1 wrote:
             | I meant the same set of subjects they wanted to focus on.
        
               | dataflow wrote:
               | How is this supposed to work? Do you trust everyone
               | equally? If I mailed you something (you being the
               | "subject" in this case), would you trust it just as much
               | as if someone in your family gave it to you?
        
           | robertlagrant wrote:
           | > This took 1 min of thinking btw.
           | 
           | QFT.
        
         | saagarjha wrote:
         | Probably ask the maintainers to consent and add some blinding
         | so that the patches look otherwise legitimate.
        
         | ajuc wrote:
         | Ask about this upfront, get consent, wait rand()*365 days and
         | do the same thing they did. Inform people immediately after it
         | got accepted.
        
       | jrm4 wrote:
       | Sure. And we are _well past_ the time in which we need to develop
       | real legal action and/or policy -- with consequences against
       | this sort of thing.
       | 
       | We have an established legal framework to do this. It's called
       | "tort law," and we need to learn how to point it at people who
       | negligently or maliciously create and or mess with software.
       | 
       | What makes it difficult, of course, is that it should be pointed
       | not only at jerk researchers, but at anyone who works on
       | software, provably knows the harm their actions can or do cause,
       | and does it anyway. This describes "black hat hackers," but also
       | quite a
       | few "establishment" sources of software production.
        
       | lfc07 wrote:
       | Their research could have been an advisory email or a blogpost
       | for the maintainers without the nasty experiments. If they really
       | cared for OSS they would have collaborated with the maintainers
       | and persuaded them to use their software tools for patch work.
       | There is research for the good of all and there is research for
       | selfish gain. I am convinced this is the latter.
        
       | nickysielicki wrote:
       | UMN has some egg on their face, surely, but I think the IEEE
       | should be equally embarrassed that they accepted this paper.
        
       | wuxb wrote:
       | Sending those patches is just disgraceful. I guess they're using
       | their edu emails, so banning the university is a very effective
       | action: someone there will have to respond to it. Otherwise, the
       | researchers would just quietly switch to other communities such
       | as Apache or GNU. Who wants buggy patches?
        
       | fefe23 wrote:
       | Reminds me of the Tuskegee Syphilis Study.
       | 
       | Sure we infected you with Syphilis without asking for permission
       | first, but we did it for science!
        
       | matheusmoreira wrote:
       | It's okay to run experiments on humans without their explicit
       | informed consent now?
        
       | ogre_codes wrote:
       | If the university was doing research then they should publish
       | their findings on this most recent follow up experiment.
       | 
       | Suggested title:
       | 
       | "Linux Kernel developers found to reject nonsense patches from
       | known bad actors"
        
       | segmondy wrote:
       | Uhhh, I just read the paper, I stopped reading when I read what I
       | pasted below. You attempt to introduce severe security bugs into
       | the kernel and this is your solution?
       | 
       | To mitigate the risks, we make several suggestions. First, OSS
       | projects would be suggested to update the code of conduct by
       | adding a code like "By submitting the patch, I agree to not
       | intend to introduce bugs."
        
       | pushcx wrote:
       | CS researchers at the University of Chicago did a similar
       | experiment on me and other maintainers a couple years ago:
       | https://github.com/lobsters/lobsters/issues/517
       | 
       | And similarly to U Minn, their IRB covered for them:
       | https://lobste.rs/s/3qgyzp/they_introduce_kernel_bugs_on_pur...
       | 
       | My experience felt really shitty, and I'm sorry to see I'm not
       | alone. If anyone is organizing a broad response to redress
       | previous abuses or prevent future abuse, I'd appreciate hearing
       | about it, my email's on my profile.
        
       | unanswered wrote:
       | Presumably the next step is an attempt to cancel the kernel
       | maintainers on account of some politically powerful - oops, I
       | mean, some politically protected characteristics of the
       | researchers.
        
       | bbarnett wrote:
       | I know this is going to be contentious, but a quick Google shows
       | that
       | 
       | * both originated in China (both attended early university there)
       | 
       | * one appears to be on a student visa (undergraduate BA in China,
       | now working on PhD at UoM)
       | 
       | China doesn't allow its brightest and best to leave, without
       | cause.
       | 
       | When I see research like this, it also makes me think of how
       | China sometimes views the West, and the rest of the world, as
       | "foolish". Both for political reasons, eg to keep the masses
       | under
       | control, and due to a legitimate belief we all have in "we are
       | right".
       | 
       | Frankly, whilst I have no personal animosity against someone
       | working on behalf of what they see as right, for example,
       | forwarding what they believe to be in the best interests of their
       | country, and fellow citizens? I must still struggle against goals
       | which are contrary to the same for my country, and my citizens.
       | 
       | Why all of the above?
       | 
       | Well, such things have been known for decades. And while things
       | are heating up:
       | 
       | https://www.cbc.ca/news/politics/china-canada-universities-r...
       | 
       | "including the claim that some of the core technology behind
       | China's surveillance network was developed in Canadian
       | universities."
       | 
       | When one thinks of the concept? That a foreign power, uses your
       | own research funding, research networks, resources, capabilities,
       | to research weaponry and tools to _destroy you_?
       | 
       | Maybe China _should_ scoff at The West.
       | 
       | And this sort of research is like pen testing, without direct
       | political ramifications for China itself.
       | 
       | Yes, 100%, these two could have just been working within their
       | own personal sphere.
       | 
       | They also could be working on research for China. Like how easily
       | one can affect the kernel source code, in plain sight. And even,
       | once caught, how to regain the confidence of those "tricked".
       | 
       | dang: This post does not deserve to be flagged. Downvote? Sure!
       | Flagged? I've seen far more contentious things stated, when
       | referring to the NSA. And all I'm doing here is providing
       | context, and pointing to the possible motivations of those
       | involved.
       | 
       | Others kept stating "Why would they do this?!" and "Why would they
       | be so stupid?".
       | 
       | Further, at the end I explicitly acknowledge that I am
       | postulating, and that it may well not be the case. I am only
       | speculating on a possible motivation.
       | 
       | Are we now not allowed to speculate on motive? If so, I wonder,
       | how many other posts should be flagged.
       | 
       | For I see LOADS of people saying "They did this for reason $x".
       | 
       | Lastly, anyone believing that China is not a major security
       | concern to the West must be living under a rock. There are
       | literally hundreds of thousands of news articles, reports, of the
       | Chinese government doing just this.
       | 
       | Yet to mention it as a potential cause of someone's actions is..
       | to receive a flag?
        
         | ignoranceprior wrote:
         | This is unjustified xenophobia. And besides, if they were
         | really trying to get bugs into the Linux kernel to further some
         | nefarious goal, why would they publish a paper on it?
         | 
         | Simplest explanation is that they just wanted the publication,
         | not to blame it on CCP or the researchers' nationality.
        
           | bbarnett wrote:
           | As I said, the research is the goal. Acknowledging China's
           | past behaviour, and applying it to potential present actions,
           | is not xenophobia.
        
         | 0xdeaddeaf wrote:
         | Talking about flagged posts: why are they so hard to read? If I
         | don't want to read a flagged post, I simply won't read it. Why
         | are you forcing me to not read it by coloring it that way?
        
         | La1n wrote:
         | >This post does not deserve to be flagged.
         | 
         | You start with "I know this is going to be contentious", you
         | know this is flamebait.
        
           | mikewarot wrote:
           | Why would you assume it is flamebait? The person knows they
           | have an opinion that is at the edge of the conversation,
           | which might invoke disagreement, and disclaims it up front?
        
         | nmfisher wrote:
         | > China doesn't allow its brightest and best to leave, without
         | cause.
         | 
         | LOL, this is completely unfounded bollocks.
        
           | bbarnett wrote:
           | Of course, because one doesn't need permission to leave
           | China? Or even a high enough social credit?
        
             | liuw wrote:
             | This is utter bullshit. I didn't need a permission or high
             | enough social credit to leave China.
        
               | bbarnett wrote:
               | You would not have been approved for a passport, if
               | deemed unworthy.
               | 
               | Whilst other countries do this, in the West, refusal to
               | issue a passport is typically predicated upon conviction
               | of extremely serious crimes. Not merely because some
               | hidden agency does not like your social standing.
               | 
               | Further you require a valid passport, or an 'exit
               | permit', to exit China. You may not leave legally without
               | one.
               | 
               | Not so in the West. You cannot be stopped from leaving
               | the country, at all, passport or not. Other countries may
               | refuse you entry, but this is not remotely the same
               | thing.
               | 
               | For example, if I as a Canadian attempt to fly to the US,
               | Canada grants the US CBP the right to setup pre-clearance
               | facilities in Canadian airports. And often airlines
               | handle this for foreign powers as well. However, that is
               | a _foreign power_ denying me entry, not my government
               | denying me the right to exit.
               | 
               | As an example, I can just walk across the border to the
               | US, and have broken not a single Canadian law. US law, if
               | I do not report to CBP, yes.
               | 
               | Meanwhile, one would be breaking China's laws to cross
               | the border from China without a passport or exit visa.
        
               | liuw wrote:
               | > You would not have been approved for a passport, if
               | deemed unworthy.
               | 
               | Do you happen to know me in real life? How do you know if
               | I'm worthy or unworthy to the Chinese state?
        
               | [deleted]
        
               | bbarnett wrote:
               | I did not indicate your worth, or lack of worth, to the
               | Chinese state.
               | 
               | Instead, I stated that people are not granted exit visas,
               | or passports, if not deemed worthy of one. It seems as if
               | you are attempting to twist my words a bit here.
        
               | dang wrote:
               | You took the thread way off topic and into nationalistic
               | flamewar. We don't want that here. Please don't do it
               | again!
               | 
               | https://news.ycombinator.com/newsguidelines.html
        
             | nmfisher wrote:
             | As of 2 years ago (pre-COVID), no. You needed a passport,
             | and that's it. I doubt things have changed materially since
             | then.
             | 
             | Some people require permission to leave (e.g. certain party
             | members/SOE managers/etc), and I'm sure a lot of others are
             | on government watchlists and will be stopped at the
             | airport.
             | 
             | But it's patently absurd to take that and infer that every
             | single overseas Chinese student was only allowed to leave
             | if they agreed to spy on/sabotage the West.
        
       | dynm wrote:
       | I'd be interested to know if there's a more ethical way to do
       | this kind of research, one that wouldn't involve actually
       | shipping bugs to users. There certainly is some value in this
       | kind of "penetration testing", to see how easily bad actors
       | could get away with
       | this kind of stuff. We basically have to assume that more
       | sophisticated actors are doing this without detection...
        
       | AshamedCaptain wrote:
       | Researcher sends bogus papers to journal/conference, gets them
       | reviewed and approved, uses that to point how ridiculous the
       | review process of the journal is => GREAT JOB, PEER REVIEW SUCKS!
       | 
       | Researcher sends bogus patches to bazaar-style project, gets them
       | reviewed and approved, uses that to point how ridiculous the
       | review process of the project is => DON'T DO THAT! BAD
       | RESEARCHER, BAD!
        
         | snazz wrote:
         | One potentially misleads readers of the journal, the other
         | introduces security vulnerabilities into the world's most
         | popular operating system kernel.
        
           | AshamedCaptain wrote:
           | "Misleading readers of a journal" might actually cause more
           | damage to all of humanity (see
           | https://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt) than
           | inserting a security vulnerability (that is likely not even
           | exploitable) in a driver that no one actually enables (which
           | is likely why no one cares about reviewing patches to it,
           | either).
           | 
           | Though to be fair, it is also the case that only the most
           | irrelevant journals are likely to accept the most bogus
           | papers. But in both cases I see no reason not to point it
           | out.
           | 
           | The two situations are much closer than you think.
           | The only difference I see is in the level of bogusness.
        
         | Rantenki wrote:
         | OK? If somebody else does something ethically dubious, does
         | that make all ethically dubious behaviours acceptable somehow?
         | How does a totally separate instance of ethical misconduct
         | impact this situation?
        
       | johncessna wrote:
       | As a user of linux, I want to see this ban go further. Nothing
       | from the University of MN, its teaching staff, or its current
       | or past post-grad students.
       | 
       | Once they clean out the garbage in the Comp Sci department and
       | their research committee that approved this experiment, we can
       | talk.
        
       | [deleted]
        
       | alkonaut wrote:
       | If you really wanted to research how to get malicious code into
       | the highest-profile projects like Linux, the social engineering
       | bit would be the most interesting part.
       | 
       | Whether some unknown contributor can submit a bad patch isn't so
       | interesting for this type of project. Knowing the payouts for
       | exploits, the question is: how much money would it take for one
       | bad reviewer to let one past?
        
       | traveler01 wrote:
       | So, for "research" you're screwing around with the development
       | of one of the most widely used components in the computer world.
       | Worse,
       | introducing security holes that could reach production
       | environments...
       | 
       | That's really stupid behavior...
        
       | LudwigNagasena wrote:
       | I am honestly surprised anything like this can pass the ethics
       | committee. The reputational risk seems huge.
       | 
       | For example, in economics departments there is usually a ban on
       | lying to experiment participants. Many of them even explicitly
       | explain to participants that this is a difference between
       | economics and psychology experiments. The reason is that studying
       | preferences is very important to economists, and if participants
       | don't believe that the experiment conditions are reliable, it
       | will screw up the research.
        
       | mosselman wrote:
       | The tone of Aditya Pakki's message makes me think they would be
       | very well served by reading 'How to Win Friends & Influence
       | People' by Dale Carnegie.
       | 
       | This is obviously the complete opposite of how you should be
       | communicating with someone in most situations let alone when you
       | want something from them.
       | 
       | I have sure been there though so if anything, take this as a book
       | recommendation for 'How to Win Friends & Influence People'.
        
         | runeks wrote:
         | I've seen this book mentioned a couple of times on HN now. I'm
         | curious: did you learn about this book from the fourth season
         | of Fargo? That's where I first encountered it.
        
           | capableweb wrote:
           | I think it's just a common book to recommend to people who
           | seem to be lacking in the "social communication" department. I
           | would know, I got it gifted to me when I was young, angsty
           | and smug.
        
           | ska wrote:
           | It's been a common recommendation for decades now; you aren't
           | going to find any one particular vector.
        
           | randylahey wrote:
           | Not the person you're asking, but the book is over 80 years
           | old and one of the best selling books of all time. Not
           | exactly the same, but it's like asking where they heard about
           | the Bible. It's everywhere.
        
             | ngngngng wrote:
             | I've seen the Bible mentioned a couple times now. I'm
             | curious, did you learn about it from watching the VidAngel
             | original series The Chosen now streaming free from their
             | app?
        
           | jacobsenscott wrote:
           | The book is very famous - it launched the "self help" genra.
           | I've never read it, but I've heard it is a fairly shallow
           | guide on manipulating people to get what you want out of them.
        
             | inimino wrote:
             | "genre"
             | 
             | > I've never read it, but
             | 
             | If you've never read it, maybe just leave it at that.
             | 
             | > manipulating people
             | 
             | You mean "influencing people", like it says right in the
             | title?
             | 
             | It's a book that has helped millions, which is why it
             | continues to be widely recommended.
             | 
             | It's not for everyone. The advice seems obvious to some,
             | which of course is why it can be so valuable for others.
        
             | op00to wrote:
             | It's more like: "ask people about themselves, they like
             | talking about themselves", than secret jedi mind tricks.
             | Not really nefarious.
        
         | dilawar wrote:
         | His email reminds me of the way politicians behave in my
         | country (India): play the victim and start dunking.
        
       | svarog-run wrote:
       | I feel like a lot of people here did not interpret this
       | correctly.
       | 
       | As far as is known, the garbage code was not introduced into the
       | kernel. It was caught in the review process literally on the
       | same day.
       | 
       | However, there is previously merged code from the same people,
       | which is not necessarily vulnerable. As a precaution the older
       | commits are also being reverted, as these people have been
       | identified as bad actors.
        
         | mort96 wrote:
         | Note that the commits which have been merged previously have
         | also been intentionally garbage and misleading code, just
         | without any obvious way to exploit them. For example,
         | https://lore.kernel.org/lkml/20210407000913.2207831-1-pakki0...
         | has been accepted since April 7, and it's obviously a commit
         | meant to _look_ like a bug fix while having no actual effect.
         | (The line `rm = NULL;` and the line `if (was_on_sock && rm)`
         | operate on different variables called `rm`.)
         | 
         | That means that the researchers got bogus code into the kernel,
         | got it accepted, and then said nothing for two weeks as the
         | bogus commit spread through the Linux development process and
         | ended up in the stable tree, and, potentially, in forks.
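         | 
         | A minimal sketch of that shape (hypothetical names, not the
         | actual patched rds code; the "fix" assigns to a shadowing
         | variable, so the later test is unaffected):
         | 
         |     #include <stdbool.h>
         |     #include <stdio.h>
         | 
         |     struct msg { int id; };
         | 
         |     static void msg_put(struct msg *m)
         |     {
         |         printf("releasing msg %d\n", m->id);
         |     }
         | 
         |     static void flush_one(struct msg *m, bool was_on_sock)
         |     {
         |         struct msg *rm = m;         /* outer rm */
         | 
         |         {
         |             struct msg *rm = m;     /* inner rm shadows the outer */
         |             rm = NULL;              /* the "fix" clears the shadow */
         |             (void)rm;               /* and has no other effect */
         |         }
         | 
         |         if (was_on_sock && rm)      /* reads the outer rm, never NULL */
         |             msg_put(rm);            /* so behaviour is unchanged */
         |     }
         | 
         |     int main(void)
         |     {
         |         struct msg m = { 42 };
         |         flush_one(&m, true);        /* still prints: a no-op "fix" */
         |         return 0;
         |     }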
        
       | djhaskin987 wrote:
       | Interesting tidbit from the prof's CV where he lists the paper,
       | interpret from it what you will[1]:
       | 
       | > On the Feasibility of Stealthily Introducing Vulnerabilities in
       | Open-Source Software via Hypocrite Commits
       | 
       | > Qiushi Wu, and Kangjie Lu.
       | 
       | > To appear in Proceedings of the 42nd IEEE Symposium on Security
       | and Privacy (Oakland'21). Virtual conference, May 2021.
       | 
       | > Note: The experiment did not introduce any bug or bug-
       | introducing commit into OSS. It demonstrated weaknesses in the
       | patching process in a safe way. No user was affected, and IRB
       | exempt was issued. The experiment actually fixed three real bugs.
       | Please see the clarifications[2].
       | 
       | 1: https://www-users.cs.umn.edu/~kjlu/
       | 
       | 2: https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....
        
       | random5634 wrote:
       | How does something like this get through IRB - I always felt IRB
       | was over the top - and then they approve something like this?
       | 
       | UMN looks pretty shoddy - the response from the researcher saying
       | these were automated by a tool looks like a potential lie.
        
         | woodruffw wrote:
         | > the response from the researcher saying these were automated
         | by a tool looks like a potential lie.
         | 
         | To be clear, this is unethical research.
         | 
         | But I read the paper, and these patches _were_ probably
         | automatically generated by a tool (or perhaps guided by a tool,
         | and filled in concretely by a human): their analyses boil down
         | to a very simple LLVM pass that just checks for pointer
         | dereferences and inserts calls to functions that are identified
         | as performing frees/deallocations before those dereferences.
         | Page 9 and onwards of the paper[1] explains it in reasonable
         | detail.
         | 
         | [1]:
         | https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
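         | 
         | For illustration, a minimal sketch of the kind of use-after-
         | free such a pass could construct (hypothetical names; the
         | paper's actual patches targeted real kernel code):
         | 
         |     #include <stdio.h>
         |     #include <stdlib.h>
         | 
         |     struct conn { int fd; };
         | 
         |     static void conn_put(struct conn *c)
         |     {
         |         free(c);             /* stands in for a refcount drop */
         |     }
         | 
         |     /* The inserted conn_put() looks like a plausible leak fix
         |      * on the error path, but the function still dereferences
         |      * c afterwards. */
         |     static int handle(struct conn *c, int err)
         |     {
         |         if (err)
         |             conn_put(c);     /* inserted call: frees c early */
         |         return c->fd;        /* existing dereference: UAF if err */
         |     }
         | 
         |     int main(void)
         |     {
         |         struct conn *c = malloc(sizeof *c);
         |         c->fd = 3;
         |         printf("%d\n", handle(c, 0));  /* safe path only */
         |         free(c);
         |         return 0;
         |     }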
        
           | random5634 wrote:
           | Thanks for this, very helpful.
           | 
           | Could they have submitted patches to fix the problems based
           | on the same tooling or was that not possible (I am not close to
           | kernel development flow)?
        
             | woodruffw wrote:
             | > Could they have submitted patches to fix the problems
             | based on the same tooling or was that not possible (I am not
             | close to kernel development flow)?
             | 
             | Depends on what you mean: they knew exactly what they were
             | patching, so they could easily have submitted inverse
             | patches. On the other hand, the obverse research problem
             | (patching existing UAFs rather than inserting new ones) is
             | currently unsolved in the general case.
        
         | [deleted]
        
         | thaeli wrote:
         | They obtained an "IRB-exempt letter" because their IRB found
         | that this was not human research. It's quite likely that the
         | IRB made this finding based on a misrepresentation of the
         | research during that initial stage; once they had an exemption
         | letter the IRB wouldn't be looking any closer.
        
           | devmor wrote:
           | That's what it seemed like to me as well. Based on their
           | research paper, they did not mention the individuals they
           | interacted with at all.
           | 
           | They also lied in the paper about their methodology -
           | claiming that once their code was accepted, they told the
           | maintainers it should not be included. In reality, several of
           | their bad commits made it into the stable branch.
        
             | bogwog wrote:
             | I don't think that's what's happening here. The research
             | paper you're talking about was already published, and
             | supposedly only consisted of 3 patches, not the 200 or so
             | being reverted here.
             | 
             | So it's possible that this situation has nothing to do with
             | that research, and is just another unethical thing that
             | coincidentally comes from the same university. Or it really
             | is a new study by the same people.
             | 
             | Either way, I think we should get the facts straight before
             | the wrong people are attacked.
        
             | rocqua wrote:
             | > In reality, several of their bad commits made it into the
             | stable branch.
             | 
             | Is it known whether these commits were indeed bad? It is
             | certainly worth removing them just in case, but is there
             | any confirmation?
        
               | devmor wrote:
               | I don't think we know if they contain bugs, but from what
               | I gathered reading the mailing list, we do know that they
               | added nothing of value.
        
               | Koliakis wrote:
               | https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
        
               | nightpool wrote:
               | This is a completely separate incident, a year apart from
               | the paper under discussion.
        
               | Koliakis wrote:
               | Then just go through the linked mailing list in the OP.
               | It's in the quoted parts. Honestly, the people around
               | here.
        
           | avs733 wrote:
           | Not necessarily. And the conflation of IRB-exemption and not
           | human subjects research is not exactly correct.[0]
           | 
           | Each institution, and each IRB is made up of people and a set
           | of policies. One does not have to meaningfully misrepresent
           | things to IRBs for them to be misunderstood. Further, exempt
           | from IRB review and 'not human subjects research' are not
           | actually the same thing. I've run into this problem
           | personally - IRB _declines to review the research plan_
           | because it does not meet their definition of human subjects
           | research, however the journal will not accept the article
           | without IRB review. Catch-22.
           | 
           | Further, research that involves deception is also considered
           | a perfectly valid form of research in certain fields (e.g.,
           | Psychology). The IRB may not have responded simply because
           | they see the complaint as invalid. Their mandate is
           | protecting human beings from harm, not responding to random
           | individuals who email them out of annoyance. They don't have
           | in their framework
           | protecting the linux kernel from harm any more than they have
           | protecting a jet engine from harm (Sorry if that sounds
           | callous). Someone not liking a study is not research
           | misconduct and if the IRB determined within their processes
           | that it isn't even human subjects research, there isn't a lot
           | they can do here.
           | 
           | I suspect that this is just one of those disconnects that
           | happens when people talk across disciplines. No
           | misrepresentation was needed; all that was needed was for
           | someone reviewing this, whose background is medicine and not
           | CS, to not understand the organizational and human processes
           | behind submitting a software 'patch'.
           | 
           | The follow-up behavior...not great...but the start of this
           | could be a series of individually rational actions that
           | combine into something problematic because they were not
           | holistically evaluated in context.
           | 
           | [0] https://oprs.usc.edu/irb-review/types-of-irb-review/
        
             | asciident wrote:
             | Yes, your comment is the only one across the two threads
             | which understands the nuance of the definition of human
             | subjects research. This work is not "about" human subjects,
             | and even the word "about" is interpreted a certain way in
             | IRB review. If they interpret the research to be about
             | software artifacts, and not human subjects, then the work
             | is not under IRB purview (it can still be determined to be
             | exempt, but that determination is from the IRB and not the
             | PI).
             | 
             | However, given that, my interpretation of the federal
             | common rule is that this work would indeed fit the
             | definition of human subjects research, as it comprises an
             | intervention, and it is about generalizable human
             | procedures, not the software artifact.
        
               | avs733 wrote:
               | I agree with your last paragraph, although I can totally
               | understand how somebody who doesn't know much about
               | programming or open source would see otherwise.
        
               | avs733 wrote:
               | Other note... different IRBs treat "not human subjects
               | research" vs. "exempt" differently.
               | 
               | One institution I worked with conflated "exempt" and "not
               | human subjects research" and required the same review of
               | both.
               | 
               | Another institution separated them and would first
               | establish if something was human subjects research. If it
               | was, they would then review whether it was exempt from
               | irb review based on certain categories. If they
               | determined it was not human subjects research they would
               | not review whether it met the exempt criteria, because in
               | their mind they could not make such a determination for
               | research that did not involve human subjects
        
             | nightpool wrote:
             | My understanding in this case is not that the IRB declined
             | to review the study plan, but that (quoting the study
             | authors) "The IRB of UMN reviewed the study and determined
             | that this is not human research (a formal IRB exempt letter
             | was obtained)." (more information here:
             | https://www-users.cs.umn.edu/~kjlu/papers/clarifications-hc....)
             | 
             | Do you think that the IRB was correct to make the
             | determination they did? It does sound like a bit of a grey
             | area.
        
               | avs733 wrote:
               | From the letter:
               | 
               | > The IRB of UMN reviewed the study and determined that
               | this is not human research (a formal IRB exempt letter
               | was obtained). Throughout the study, we honestly did not
               | think this is human research, so we did not apply for an
               | IRB approval in the beginning.
               | 
               | So the statement is a bit unclear to me, and I'm hesitant
               | to come to a conclusion because I have not seen what they
               | submitted.
               | 
               | As I read this they are saying:
               | 
               | * we explained the study to irb and asked whether it met
               | their definition of human subjects research - based on
               | our description they said it is not human subjects
               | research
               | 
               | * therefore we did not apply to irb to have the study
               | assessed for the appropriate type of review.
               | 
               | Exempt is a type of irb review, basically it is a low
               | level desk review of a study. It does not mean no one
               | looks at it, it just means the whole irb doesn't have to
               | discuss it.
               | 
               | I can see both sides of this. Irbs focus on protection of
               | the rights of research participants. The assumption in
               | cognitive models is of direct participants. This study
               | ended up having indirect participants. I would argue it
               | is the researchers' job to clarify that and ensure it was
               | reviewed. However, it is almost certain this study would
               | have been approved as exempt.
               | 
               | I think the irb likely did the right thing based on the
               | information provided to them. The harm that HN is
               | identifying does not fall within the normal irb
               | definitions of harm anyways...which is direct harm to
               | people. The causal chain HN is spun up about is very
               | real...just not how irb views research typically
        
             | hannasanarion wrote:
             | > Further, research that involves deception is also
             | considered a perfectly valid form of research in certain
             | fields
             | 
             | The type of deception that is allowable in such cases is
             | lying to participants about what it is that is being
             | studied, such as telling people that they are taking a
             | knowledge quiz when you are actually testing their reaction
             | time.
             | 
             | Allowable deception does not include invading the space of
             | people who did not consent to be studied under false
             | pretenses.
        
             | sampo wrote:
             | Here is a case where one university's (Portland State
             | University) IRB saw that sending satire articles to social
             | science journals "violated ethical guidelines on human-
             | subjects research".
             | 
             | https://en.wikipedia.org/wiki/Peter_Boghossian#Research_misc...
        
               | avs733 wrote:
               | that is actually a useful example for comparison.
               | 
               | * The researcher is a professor in the humanities, which
               | typically does not deal with human subjects research and
               | the (often) vague and confusing boundaries. Often, people
               | from outside the social sciences and medical/biology
               | fields struggle a little bit with IRBs because...things
               | don't seem rational until you understand the history and
               | details. Just like someone from CS.
               | 
               | * The researcher in your example DID NOT seek review by
               | IRB (per my memory of the situation). That was the
               | problem. The kernel bug authors seem to have engaged with
               | their IRB. the difference is not doing it vs. a
               | misunderstanding.
               | 
               | * The comments about seeking consent before submitting
               | the fake papers ignore that it is perfectly possible to
               | have done this WITHOUT a priori informed consent. It is
               | perfectly possible for IRBs to review and approve studies
               | involving deception. In those cases, informed consent is
               | not required to collect data.
               | 
               | * Finally, people on IRBs tend to be academics and are
               | highly likely to have some understanding of how a journal
               | works. That would mean they understand the human role in
               | journal publishing. The exact same IRB may well not have
               | anyone with CS experience and may have looked at the
               | kernel study and seen the human role differently than
               | journal study.
               | 
               | * Lastly, the fact that the IRB in your example looked at
               | 'animal rights' is telling. They were trying to figure
               | out what Peter did. He published papers with data about
               | experiments on animals...that would require IRB review.
               | The fact that that charge was dismissed when they figured
               | out no such experiments occurred is telling about who is
               | acting in good faith.
        
             | ncallaway wrote:
             | > They don't have in their framework protecting the linux
             | kernel from harm any more than they have protecting a jet
             | engine from harm (Sorry if that sounds callous).
             | 
             | It sounds pretty callous if that jet engine gets mounted on
             | a plane that carries humans. In this hypothetical the IRB
             | absolutely should have a hand in stopping research that has
             | a methodology that includes sabotaging a jet engine that
             | could be installed on a passenger airplane.
             | 
             | Waving it off as an inanimate object doesn't feel like it
             | captures the complete problem, given that there are many
             | safety critical systems that can depend on the inanimate
             | object.
        
               | avs733 wrote:
               | Your extrapolation provides clear context about how this
               | can harm people, which is within an irb purview and
               | likely their ability to understand.
               | 
               | I'm not saying it is okay, I'm simply saying how this
               | could happen.
               | 
               | It requires understanding the connection between
               | an inanimate object and personal harm, which in this case
               | is 1) non-obvious and 2) not even something I necessarily
               | accept within a common rule definition of harm.
               | 
               | Annoyance or inconvenience is not a meaningful human harm
               | within the irb framework.
               | 
               | But, fundamentally, the irb did not see this as human
               | research. You and I and the commenters see how that is
               | wrong. That is where their evaluation ended...they did
               | not see human involvement right or wrong.
               | 
               | And irb is part of the discussion of research ethics; it
               | is neither the beginning nor the end of doing ethical
               | research.
        
           | lbarrow wrote:
           | My understanding is that it's pretty common for CS
           | departments to get IRB exemption even when human participants
           | are tangentially involved in studies.
        
             | lmkg wrote:
             | I've seen from a distance one CS department struggle with
             | IRB to get approval for using Amazon Mechanical Turk to
             | label pictures for computer vision datasets. I believe the
             | resolution was creating a specialized approval process for
             | that family of tasks.
        
             | waheoo wrote:
             | That sounds like a disconnect from reality.
        
               | WORLD_ENDS_SOON wrote:
               | I think it is because many labs in CS departments do very
               | little research involving human subjects (e.g. a machine
               | learning lab or a theory lab), so within those labs there
               | isn't really an expectation that everything goes through
               | IRB. Many CS graduate students likely never have to
               | interact with IRB at all, so they probably don't even
               | know when it is necessary to involve IRB. The rules for
               | what requires IRB involvement are also somewhat open to
               | interpretation. For example, surveys are often exempt
               | depending on what the survey is asking about.
        
               | waheoo wrote:
               | Machine learning automatically being exempt is a huge red
               | flag for me. There are immense repercussions for the
               | world on every comp sci topic. It's just less direct, and
               | often "digital", which seems separate but isn't.
        
             | corty wrote:
             | It is also quite easy to pull the wool over an IRB's eyes.
             | An IRB is usually staffed with a few people from the
             | medicine, biology, psychology and maybe (for the good
             | ethical looks) philosophy and theology departments. Usually
             | they aren't really qualified to know what a computer
             | scientist is talking about when describing their research.
             | 
             | And also, given that the stakes are higher e.g. in
             | medicine, and the bar is lower in biology, one often gets a
             | pass: "You don't want to poke anyone with needles, no LSD
             | and no cages? Why are you asking us then?" Or something to
             | that effect. The IRBs are just not used to such "harmless"
             | things not being justified by the research objective.
        
               | avs733 wrote:
               | see my other comment to the GP. pulling the wool suggests
               | agency and intentionality that isn't necessarily present
               | when you have disciplinary differences like you describe.
               | Simple miscommunication, e.g., using totally normal field
               | terminology that does not translate well, is different.
        
               | corty wrote:
               | It is your job as a researcher to make the committee
               | fully understand all the implications of what you are
               | doing. If you fail in that, you failed in your duties.
               | The committee will also tell you this, as well as any
               | ethical guideline. Given that level of responsibility, it
               | isn't easy to ascribe this to negligence on the part of
               | the researchers; intent is far more likely.
        
               | avs733 wrote:
               | It is absolutely my job, but I don't necessarily have
               | actionable information that I created a misunderstanding.
               | 
               | I submit unclear thing
               | 
               | Thing is approved
               | 
               | Thing must have been clear right?
        
               | davidkuhta wrote:
               | 'Not ignorance, but ignorance of ignorance is the death
               | of knowledge.' - Alfred North Whitehead
        
               | bluGill wrote:
               | No, it is your job as a researcher to make sure you never
               | even bother to submit to the IRB something that might
               | fail review.
               | 
               | Sometimes you might need to make the committee understand
               | before a full review when you are asking where a line is
               | for some tricky part, but you ask about those parts long
               | before you have enough of the study designed to actually
               | put it before the review.
               | 
               | Ethics are a personal responsibility. You should be
               | personally embarrassed if you ever have something fail
               | review, and probably should have your tenure removed as
               | well, since if your ethics are so bad that you would put
               | before the board something that fails, you will also do
               | something even worse without any review.
        
         | [deleted]
        
         | tarboreus wrote:
         | IRB is useless. They don't use much context, including if the
         | speediness of IRB approval would save lives. You could make a
         | reasonable argument that IRB has contributed to millions of
         | preventable deaths at this point; with COV alone it's at
         | least tens of thousands, if not far more.
        
           | random5634 wrote:
           | I don't disagree.
        
           | angry_octet wrote:
           | This is the unfortunate attitude that leads to bad research
           | and reduces trust in science. If you think IRB has
           | contributed to deaths you should make a case, because right
           | now you sound like a blowhard.
        
           | admax88q wrote:
           | By COV do you mean Covid? It sounds like you're alluding to
           | the argument that if they'd only let us test potential
           | vaccines on humans right away then we would have had a
           | vaccine faster. I disagree that that's a foregone conclusion,
           | and you certainly need a strong argument or evidence to make
           | such a claim.
        
         | psychometry wrote:
         | I have a feeling that the Linux kernel patching process is a
         | concept most members of IRB boards wouldn't understand at all.
         | It's pretty far outside their wheelhouse.
        
       | shadowgovt wrote:
       | Is banning an entire university's domain from submitting to a
       | project due to the actions of a few of its members an example of
       | cancel culture?
        
         | Minor49er wrote:
         | If the university itself is actively promoting unethical
         | behavior, then no, it isn't "cancel culture". That term is
         | reserved for people or groups who hold unpopular opinions, and
         | this is not that.
        
       | emeraldd wrote:
       | Is there a readable version of the message Greg was replying to
       | https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah... ?
       | Or was there more to it than what Greg quoted?
        
       | angry_octet wrote:
       | Research without ethics is research without value.
       | 
       | Unbelievable that this could have passed ethics review, so I'd
       | bet it was never reviewed. Big black eye for University of
       | Minnesota. Imagine if you are another doctoral student in CS/EE
       | and this tool has ruined your ability to participate in Linux.
        
         | dgritsko wrote:
         | I'm a total neophyte when it comes to the Linux kernel
         | development process, but couldn't they just, y'know, use a
         | Gmail address or something? Couldn't the original researchers
         | have done the same?
        
           | room500 wrote:
           | Yes, they could. This is actually addressed in the original
           | email thread:
           | 
           | > But they can't then use that type of "hiding" to get away
           | with claiming it was done for a University research project
           | as that's even more unethical than what they are doing now.
        
             | mort96 wrote:
             | I was also thinking that commits from e-mails ending in
             | ".edu" are probably more likely to be assumed to be good-
             | faith; they are from real students/professors/researchers
             | at real universities using their real identities. There's
             | probably going to be way more scrutiny on commits from
             | some random gmail address.
        
               | bombcar wrote:
               | Exactly - the kernel maintainers already "prejudge"
               | submissions and part of that judgement is evaluating the
               | "story" behind why a submission is forthcoming. A
               | submission from Linus, ok, he's employed to work on the
               | kernel, but even that would be suspect if it's an area he
               | never touches or appears to be going around the
               | established maintainer.
               | 
               | And one of the most reasonable "stories" behind a patch
               | is "I'm working at a university and found a bug",
               | probably right behind "I'm working at a company and we
               | found a bug".
               | 
               | Banning U of M won't solve everything, but it is dropping
               | a source of _known bad_ patches.
        
         | gruez wrote:
         | > Research without ethics is research without value.
         | 
         | didn't we learn a lot from nazi/japanese experiments from ww2?
        
           | impalallama wrote:
           | By and large, no. The Nazi experiments were based on faulty
           | race science and were indistinguishable from brutal torture
           | and what remains is either useless or impossible to reproduce
           | for ethical reasons.
        
           | gambiting wrote:
           | From my understanding - no, actually. We learnt a bit, on the
           | very extreme scale of things, but most of the "experiments"
           | were not conducted in any kind of way that would yield usable
           | data.
        
             | db48x wrote:
             | Yes and no. It's my understanding that the Germans
             | pioneered the field of implanted medical prostheses (like
             | titanium pins to stabilize broken bones). A lot of that
             | research was done on prisoners, and they were even kind
             | enough to extend the benefits of the medical treatments
             | that they developed to prisoners of war (no sarcasm
             | intended).
        
           | xjlin0 wrote:
           | Learn how to torture? Maybe. Learn real knowledge? No. Most
           | of that info is not just sick but also impractical.
           | 
           | The goal of the military is to protect or conquer. The goal
           | of science is to find the truth, and the goal of engineering
           | is to offer solutions. Any of the true leaders in those
           | fields knows there are more efficient means/systems to
           | achieve those goals, even in the ww2 era.
        
           | kevingadd wrote:
           | Experiments producing lots of data doesn't necessarily mean
           | they were useful. If the experiment was run improperly the
           | data is untrustworthy, and if the experiment was designed to
           | achieve things that aren't useful they may not have
           | controlled for the right variables.
           | 
           | And ultimately, we know what their priorities were and what
           | kind of worldview they were operating under, so the odds are
           | bad that any given experiment they ran would have been
           | rigorous enough to produce results that could be reproduced
           | in other studies and applied elsewhere. I'm not personally
           | aware of any major breakthroughs that would have been
           | impossible without the "aid" of eugenicist war criminals,
           | though it's possible there's some major example I'm missing.
           | 
           | We certainly did bring over lots of German scientists to work
           | on nukes and rockets, so your question is not entirely off-
           | base - but I suspect almost everyone involved in those
           | choices would argue that rocketry research isn't unethical.
        
           | bluGill wrote:
           | We did. Often we wish they could have got more decimal points
           | in a measurement, or had known how to check for some factor.
           | Despite all the gains and potential breakthroughs lost, nobody
           | is willing to repeat them or anything like them. I know of
           | just enough people medically given 2 weeks to live who were
           | still around 10 years later that I can't think of any
           | situation where I'd make an exception.
           | 
           | Though what "a lot" is is also open to question. Much of what
           | we learned isn't that useful for real world problems. However,
           | some has been important.
        
         | spikels wrote:
         | Ethics are highly subjective on the margins. In this case they
         | completely missed this issue. However the opposite is more
         | often the case.
         | 
         | A good example is challenge testing Covid vaccines. This was
         | widely deemed to be unethical despite large numbers of
         | volunteers. Perhaps a million lives could have been saved if we
         | had vaccines a few months sooner.
         | 
         | Research without ethics (as currently practiced) can have
         | value.
        
         | dmoy wrote:
         | Some CS labs at UMN take ethics very seriously. Their UXR lab
         | for example.
         | 
         | Other CS labs at UMN, well... apparently not so much.
        
         | b0rsuk wrote:
         | Life support machinery was developed with methods like cutting
         | off dog heads, plugging them in, and seeing how long they
         | showed signs of life.
        
         | threatofrain wrote:
         | Individuals should still be able to contribute, just not under
         | the name University of Minnesota.
        
       | metalliqaz wrote:
       | Wow, shocking and completely unethical by that professor.
        
       | CTDOCodebases wrote:
       | Fair. You are either part of the solution, part of the problem or
       | just part of the landscape.
        
       | kevinventullo wrote:
       | Here's a (perhaps naively) optimistic take: by publishing this
       | research and showing it to lawmakers and industry leaders, it
       | will sound alarms on a serious vulnerability in what is critical
       | infrastructure for much of the tech industry and public sector.
       | This could then lead to investment in mitigations for the
       | vulnerability, e.g. directly funding work to proactively improve
       | security issues in the kernel.
        
       | tsujp wrote:
       | This is categorically unethical behaviour. Attempting to get
       | malicious code into an open source project that powers a large
       | set of the world's infrastructure -- or even a small project --
       | should be punished in my view. The actors are known, and it's
       | been stated by the actors themselves that it was intentional.
       | 
       | I think the Linux Foundation should make an example of this.
        
       ___________________________________________________________________
       (page generated 2021-04-21 23:00 UTC)