[HN Gopher] Timeline of the xz open source attack
       ___________________________________________________________________
        
       Timeline of the xz open source attack
        
       Author : todsacerdoti
       Score  : 728 points
       Date   : 2024-04-02 03:58 UTC (19 hours ago)
        
 (HTM) web link (research.swtch.com)
 (TXT) w3m dump (research.swtch.com)
        
       | goombacloud wrote:
        | This might not be complete, because the statement "More patches
        | that seem (even in retrospect) to be fine follow." lacks backing
        | facts. There were more patches before the SSH backdoor, e.g.:
        | "Lasse Collin has already landed four of Jia Tan's patches,
        | marked by "Thanks to Jia Tan"" and the other stuff before and
        | after the 5.4 release. So far I haven't seen anyone make a list
        | of all patches and gather various opinions on whether the
        | changes could be maliciously leveraged.
        
         | goombacloud wrote:
         | In
         | https://archive.softwareheritage.org/browse/revision/e446ab7...
         | one can open the patches and then click the "Changes" sub-tab.
          | Stuff like this looks like a perf improvement, but who knows
          | whether a tricky bug was introduced that was meant to be
          | exploited:
         | https://archive.softwareheritage.org/browse/revision/e446ab7...
          | There are more patches to be vetted, unless one gives up
          | and says that 5.2 should be used as the last "known-good".
        
         | VonGallifrey wrote:
          | I get that there is a reason not to trust those patches, but I
         | would guess they don't contain anything malicious. This early
         | part of the attack seems to only focus on installing Jia Tan as
         | the maintainer, and they probably didn't want anything there
         | that could tip Lasse Collin off that this "Jia" might be up to
         | something.
        
           | rsc wrote:
           | Yes, exactly. I did look at many of them, and they are
           | innocuous. This is all aimed at setting up Jia as a trusted
           | contributor.
        
       | dhx wrote:
       | I think this analysis is more interesting if you consider these
       | two events in particular:
       | 
       | 2024-02-29: On GitHub, @teknoraver sends pull request to stop
       | linking liblzma into libsystemd.[1]
       | 
        | (not in the article) 2024-03-20: The attacker is now a co-
        | contributor for a patchset proposed to the Linux kernel, with
        | the patchset adding the attacker as a maintainer - mirroring
        | how the attacker gained trust over the development of xz-utils.
       | 
       | A theory is that the attacker saw the sshd/libsystemd/xz-utils
       | vector as closing soon with libsystemd removing its hard
       | dependency on xz-utils. When building a Linux kernel image, the
       | resulting image is compressed by default with gzip [3], but can
       | also be optionally compressed using xz-utils (amongst other
       | compression utilities). There's a lot of distributions of Linux
       | which have chosen xz-utils as the method used to compress kernel
       | images, particularly embedded Linux distributions.[4] xz-utils is
       | even the recommended mode of compression if a small kernel build
       | image is desired.[5]
       | 
       | If the attacker can execute code during the process of building a
       | new kernel image, they can cause even more catastrophic impacts
       | than targeting sshd. Targeting sshd was always going to be
       | limited due to targets not exposing sshd over accessible
       | networks, or implementing passive optical taps and real time
       | behavioural analysis, or receiving real time alerts from servers
       | indicative of unusual activity or data transfers. Targeting the
        | Linux kernel could have far worse consequences, particularly if
        | the attacker intended to target embedded systems (such as
        | military transport vehicles [6]) where the chance of detection
        | is reduced due to the lack of eyeballs looking over it.
       | 
       | [1] https://github.com/systemd/systemd/pull/31550
       | 
       | [2] https://lkml.org/lkml/2024/3/20/1004
       | 
       | [3]
       | https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
       | 
       | [4] https://github.com/search?q=CONFIG_KERNEL_XZ%3Dy&type=code
       | 
       | [5]
       | https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
       | 
       | [6] https://linuxdevices.org/large-military-truck-runs-
       | embedded-...
        
         | daghamm wrote:
         | I don't understand how this could have worked.
         | 
         | If you compile and build your own image, would that be able to
         | trigger the backdoor?
         | 
         | You can of course change an existing image to something that
         | triggers the backdoor, but with that level of access you won't
          | really need a backdoor, would you?
        
           | bandrami wrote:
           | It's Thompson's "Trusting Trust"[1], right? To the extent XZ
           | is part of the standard build chain you could have a source-
           | invisible replicating vulnerability that infects everything.
           | And if it gets into the image used for, say, a popular mobile
           | device or IoT gadget...
           | 
           | 1: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_R
           | ef...
        
           | dhx wrote:
           | An attack would look something like:
           | 
           | 1. A new "test" is added to the xz-utils repository, and when
           | xz is being built by a distribution such as Debian, the
            | backdoor from the "test" is included in the xz binary.
           | 
            | 2. The backdoored xz is distributed widely, including to a
            | target vendor who, from a Debian development environment,
            | compiles a kernel for an embedded device that is usually
            | used in a specific industry and/or set of countries.
           | 
           | 3. When backdoored xz is then asked to compress a file, it
           | checks whether the file is a kernel image and checks whether
           | the kernel image is for the intended target (e.g. includes
           | specific set of drivers).
           | 
           | 4. If the backdoored xz has found its target kernel image,
           | search for and modify random number generation code to
           | effectively make it deterministic. Or add a new module which
           | listens on PF_CAN interfaces for a particular trigger and
           | then sends malicious CAN messages over that interface. Or
           | modify dm_crypt to overwrite the first 80% of any key with a
           | hardcoded value. Plenty of nasty ideas are possible.
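To make step 3 concrete: a compressor sees only raw bytes, but a kernel image is easy to fingerprint from them. Below is a minimal sketch in Python - an illustration, not code from the actual attack - that uses the x86 Linux boot protocol's "HdrS" signature at offset 0x202 as its only heuristic; a real implant would presumably combine many more checks (embedded config strings, driver symbols) before acting.

```python
# "HdrS" signature defined by the x86 Linux boot protocol; it appears
# at offset 0x202 of a bzImage-style kernel file.
HDRS_OFFSET = 0x202
HDRS_MAGIC = b"HdrS"

def looks_like_x86_kernel(data: bytes) -> bool:
    """Crude fingerprint: does this buffer carry the boot-protocol magic?"""
    return data[HDRS_OFFSET:HDRS_OFFSET + 4] == HDRS_MAGIC

# Synthetic demo buffers; no real kernel image is needed.
fake_kernel = bytearray(0x300)
fake_kernel[HDRS_OFFSET:HDRS_OFFSET + 4] = HDRS_MAGIC

print(looks_like_x86_kernel(bytes(fake_kernel)))  # True
print(looks_like_x86_kernel(bytes(0x300)))        # False
```

Step 4's payload logic would then run only when a check like this matches, which is exactly what makes such a backdoor hard to catch with casual testing.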
        
             | Denvercoder9 wrote:
             | Note that none of these steps require the attacker to have
             | any code in the kernel. The kernel patchset is completely
             | orthogonal to the possibility of this attack, and seems to
             | be benign.
        
               | sandstrom wrote:
                | Yeah, but gaining trust with a benign patchset would be
                | the first step.
        
         | rsc wrote:
         | Thanks for this comment. I've added that LKML patch set to the
         | timeline.
        
         | delfinom wrote:
          | Personally, I don't think the attacker saw the systemd change
          | at all.
         | 
          | The way the exploit was set up, they could have pivoted to
          | targeting basically any application server, because there are
          | so many interdependencies: python, php and ruby could be
          | targeted because liblzma is loaded via libxml2, as an example.
         | 
         | Gaining trust for linux kernel commits would have just let them
         | continue maximizing profit on their time investment.
         | 
         | >particularly if the attacker intended to target embedded
         | systems (such as military transport vehicles [6])
         | 
          | Said vehicles aren't networked on the public internet and,
          | from experience in this particular sector, probably haven't
          | been and won't be updated for decades. "Don't break what
          | isn't broken"
         | applies as well as "military doesn't have a budget to pay
         | defense contractors $1 million to run apt-get update per
         | vehicle".
        
         | tamimio wrote:
         | > target embedded systems (such as military transport vehicles
         | 
          | You are giving that a lot of credit. I have seen military
          | ones with ancient software; even the "new" updated ones are
          | still on ubuntu 18.04 because of some drivers/sdks
          | compatibility. But it isn't a major issue since most of the
          | time they are not connected to the public internet.
        
       | mseepgood wrote:
       | Never allow yourself to be bullied or pressured into action. As a
       | maintainer, the more a contributor or user nags, the less likely
       | I am to oblige.
        
         | resource_waste wrote:
         | That sounds nice.
         | 
         | I did an engineering/program manager role for 8 years and
         | people pretty much always did what I asked if I showed up at
         | their desk or bothered their boss.
         | 
         | "Squeaky wheel gets the grease?"
         | 
         | But I too like to think that I prioritize my children on merit
         | rather than fuss level. For some reason they continue to cry
         | despite me claiming I don't react to it.
        
         | kenjackson wrote:
          | But in this case he was getting hit both by someone willing
          | to help and by multiple people complaining that things were
          | taking too long. And when you yourself feel like things are
          | taking too long, you're probably more susceptible to all
          | this.
        
         | pixl97 wrote:
         | The issue here is the attackers will quickly move away from an
         | individual attacking you to the group attacking you. The person
         | writing the infected code will never be a jerk to you at all.
         | You'll just suddenly see a huge portion of your mailing list
         | move against you ever so slightly.
         | 
         | We've complained about bots in social media for a long time,
         | but how many people in open source discussions are shady
         | manipulative entities?
        
           | galleywest200 wrote:
           | These days you can even have these emails automatically taken
           | in by an LLM and have the LLM argue with the maintainer for
           | you, no humans needed!
        
             | snerbles wrote:
             | Maintainers will need LLM sockpuppets of their own to
             | automatically answer these automatic emails.
        
         | kjellsbells wrote:
         | True, but a determined adversary like JiaTan/Jugar has an ace
         | up their sleeve: they are good enough, and patient enough, to
         | be able to fork the base project, spend a year or two making it
         | better than the original (releasing the head of steam built up
         | from legitimate requests that the old, overworked maintainer
            | never got to, building goodwill in the process) and then
         | convincing the distros to pick up their fork instead of the
         | older original. At which point it really is game over.
        
           | cesarb wrote:
           | > and then convincing the distros to pick up their fork
           | instead of the older original.
           | 
           | Given the current situation, I'm slightly worried about
           | Fedora's planned move to zlib-ng instead of zlib in the next
           | release
           | (https://fedoraproject.org/wiki/Changes/ZlibNGTransition).
        
       | XorNot wrote:
       | What stands out to me is this particular justification:
       | 
       | > 2024-02-23: Jia Tan merges hidden backdoor binary code well
       | hidden inside some binary test input files. The associated README
       | claims "This directory contains bunch of files to test handling
       | of .xz, .lzma (LZMA_Alone), and .lz (lzip) files in decoder
       | implementations. Many of the files have been created by hand with
       | a hex editor, thus there is no better "source code" than the
       | files themselves."
       | 
        | This is, perhaps, the real thing we should think about fixing
        | here, because the justification is reasonable on its face and
        | the need is genuine - corrupted test files to test corruption
        | handling.
       | 
        | But there has _got_ to be some way to express this which
        | doesn't depend on, in essence, "trust me bro", since binary
        | files don't appear in diffs (which is to say: I can think of a
        | number of means of doing it, but there are definitely no
        | conventions in the community I'm aware of).
        
         | OJFord wrote:
         | Also that when dynamically linking A against B, A apparently
          | gets free rein to overwrite B.
         | 
         | It sort of makes sense, since at the end of the day it could
         | just be statically linked or implement B's behaviour itself and
          | do whatever it wants, but it's not really what you expect, is
          | it?
        
           | acdha wrote:
            | Yeah, that part struck me as something we should be able to
            | block - the number of times you actually want that must be
            | small enough to make it practical to do something like
            | write-protect pages with a small exception list.
        
         | asvitkine wrote:
         | Well, test files shouldn't be affecting the actual production
         | binary.
         | 
         | But in practice that's not something that can be enforced for
         | arbitrary projects without those projects having set something
         | up specifically.
         | 
            | For example, the project could track the production
            | binary's size after every PR. But then it still requires a
            | human (or I guess an AI bot?) to notice that an increase is
            | unexpected.
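That tracking can be cheap. A sketch of such a per-PR check follows; the file names and the 1% threshold are made up for illustration, not taken from any real project's CI.

```python
import os

def check_binary_size(binary_path: str, baseline_bytes: int,
                      tolerance: float = 0.01) -> bool:
    """Pass if the artifact grew no more than `tolerance` (a fraction)
    past the last accepted size; a False return means a human should
    inspect the diff before merging."""
    current = os.path.getsize(binary_path)
    if current > baseline_bytes * (1 + tolerance):
        print(f"{binary_path} is {current} bytes, "
              f"{current - baseline_bytes} over baseline; review needed")
        return False
    return True

# Demo with a throwaway file standing in for the built library.
with open("demo.so", "wb") as f:
    f.write(b"\x00" * 1100)

print(check_binary_size("demo.so", baseline_bytes=1000))  # False (+10%)
print(check_binary_size("demo.so", baseline_bytes=1100))  # True
```

The hard part, as noted above, is still the human step: deciding whether a flagged increase is a legitimate feature or something that deserves a closer look.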
        
       | thrdbndndn wrote:
       | How do you get "Jigar Kumar"'s email address?
       | 
       | I can't seem to find it in the (web version of) the maillist.
       | 
       | Another question:
       | 
       | What is the typo exactly in this commit? I can't seem to find it.
       | 
       | https://git.tukaani.org/?p=xz.git;a=commitdiff;h=a100f9111c8...
        
         | cced wrote:
          | I think it's the period on the line above void my_sandbox.
        
         | wufocaculura wrote:
          | There's a single dot on a line between #include <sys/prcntl.h>
          | and void my_sandbox(void). It is easy to miss, but it makes
          | the compile fail, thus resulting in HAVE_LINUX_LANDLOCK never
          | being enabled.
        
           | arrowsmith wrote:
           | Can someone explain to n00bs like me: what's "landlock"
           | anyway and why is it significant here?
        
             | Denvercoder9 wrote:
              | It's a Linux Security Module that allows sandboxing
              | processes: https://docs.kernel.org/userspace-
              | api/landlock.html
        
           | Thorrez wrote:
           | prctl, not prcntl
        
         | swsieber wrote:
         | > How do you get "Jigar Kumar"'s email address?
         | 
         | Hit reply
        
       | progressof wrote:
        | I have found JiaT75 - Jia Tan - mentioned in the Microsoft C++,
        | C, and Assembler docs as a community contributor ...
       | https://learn.microsoft.com/en-us/cpp/overview/whats-new-cpp...
       | 
       | Also check this... https://www.abuseipdb.com/check/64.23.252.16
        
         | mseepgood wrote:
         | So all binaries built with a Microsoft compiler must be
         | considered compromised?
        
           | progressof wrote:
           | no
        
             | jhoechtl wrote:
              | Care to enlighten us how you came to such a knee-jerk
              | reaction given a highly critical observation? What obvious
              | thing are we missing?
        
               | yuriks wrote:
               | It's just a documentation change. Likely made to add
               | reputation to the account.
        
               | progressof wrote:
               | I didn't come to any conclusion... and I don't think you
               | missed anything... I'm just posting links... you think
               | it's better if I didn't post anything because this is
               | stupid? if so then ok...
        
         | winkelmann wrote:
         | Completely benign documentation change to fix a typo:
         | https://github.com/MicrosoftDocs/cpp-docs/pull/4716
         | 
         | I have no idea what that IP address is supposed to be about...
        
           | BLKNSLVR wrote:
           | Regarding the AbuseIPDB link: some of the SSH payloads
           | mentioned in the instances of 'attack' contain the username
           | jiat75.
           | 
           | Doesn't necessarily validate anything though. Could be
           | progressof planting misdirection given that the IP address
            | only started being detected basically today (and the VPS
            | was likely only just set up today as well, if the hostname
            | is to be trusted).
           | 
           | ... and that progressof's account is about an hour old.
        
       | jhoechtl wrote:
       | > merges hidden backdoor binary code well hidden inside some
       | binary test input files. [...] Many of the files have been
       | created by hand with a hex editor, thus there is no better
       | "source code" than the files themselves.
       | 
        | So much for the folks advocating for binary (driver) blobs in
        | OSS to support otherwise unsupported hardware.
        | 
        | It's either in source form and reproducible or it's not there.
        
         | mseepgood wrote:
         | Not just for hardware support: https://github.com/serde-
         | rs/serde/issues/2538
        
           | cryptonector wrote:
           | !!
        
         | pixl97 wrote:
          | >It's either in source form and reproducible or it's not there.
         | 
         | Wanna know how I know you haven't read into the discussion
         | much?
         | 
         | There are a whole lot of binary test cases in software.
         | Especially when you're dealing with things like file formats
         | and test cases that should specifically fail on bad data of
         | particular types.
        
           | TacticalCoder wrote:
           | > There are a whole lot of binary test cases in software.
           | 
            | That's not how I read GP's point. If _even_ binary blobs in
            | test cases are a place where backdoors are now, as a matter
            | of fact, hidden, then certainly, among the folks advocating
            | for binary drivers in FOSS, there are some who have already
            | added --or plan to add-- backdoors there.
            | 
            | Binary blobs are _all_ terrible, terrible, terrible ideas.
            | 
            | Builds should be 100% reproducible from source, bit for bit.
            | At this point it's not open for discussion anymore.
        
             | pixl97 wrote:
             | Then you figure out how to build a 'source' test case of a
             | bad zip, or bad jpg, or word document or whatever else
             | exists out there. Also figure out how to test that your
             | bit4bit perfect binary isn't doing the wrong damned thing
             | in your environment with actual real data.
        
               | Hackbraten wrote:
               | In cryptography, there's the concept of a nothing-up-my-
               | sleeve number. [1]
               | 
               | Instead of obscure constants, you use known constants, or
               | at least simple methods to derive your constants.
               | 
               | You can do the same thing to come up with your test
               | cases. Bad zip? Construct a good zip of 10 files, each
               | containing the first 10,000 prime numbers. Then corrupt
                | the zip by seeking to position (100/pi) and writing a
                | thousand zeroes there.
               | 
                | Bad JPEG? Use ImageMagick to render the first 1000 prime
               | numbers as text into a JPEG file, then apply a simple
               | nothing-up-my-sleeve corruption operation.
               | 
               | There are still cases where this approach isn't going to
               | work: that new icon, helpfully proposed by a contributor,
               | meant to be used in production, might contain malicious
               | code, steganographically embedded. I think there's little
               | you can do to prevent that.
               | 
               | [1]: https://en.wikipedia.org/wiki/Nothing-up-my-
               | sleeve_number
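That recipe can be made fully scriptable, so a corrupted test file has a reviewable generator instead of living in the repo as an opaque blob. Here is a sketch under simplified assumptions: gzip instead of the zip-of-ten-files described above, 1,000 primes instead of 10,000 to keep the demo quick, and a 16-byte corruption rather than a thousand zeroes; `mtime=0` makes the archive byte-for-byte reproducible.

```python
import gzip
import math
from itertools import takewhile

def first_primes(n: int) -> list:
    """First n primes by trial division; slow but auditably simple."""
    out, cand = [], 2
    while len(out) < n:
        if all(cand % p for p in takewhile(lambda p: p * p <= cand, out)):
            out.append(cand)
        cand += 1
    return out

# Nothing-up-my-sleeve payload, compressed reproducibly (mtime=0).
payload = "\n".join(map(str, first_primes(1000))).encode()
good = gzip.compress(payload, mtime=0)

# Corruption offset derived from pi, as in the comment above.
offset = int(100 / math.pi)            # 31
bad = bytearray(good)
bad[offset:offset + 16] = b"\x00" * 16  # stomp 16 bytes with zeros

with open("good.gz", "wb") as f:
    f.write(good)
with open("bad.gz", "wb") as f:
    f.write(bytes(bad))

# The good archive round-trips; the bad one should make decoders choke.
assert gzip.decompress(good) == payload
```

A reviewer can rerun the generator and diff its output against the committed files, which removes the "trust me bro" element for corrupted-input tests.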
        
           | cryptonector wrote:
           | GP is talking about executable blobs (drivers) more than
           | anything. Yes, binary protocols will lead to binary test
           | blobs, so what.
        
             | pixl97 wrote:
              | The attack was embedded in a binary test blob, or did you
              | just not happen to read anything about the xz attack?
        
               | cryptonector wrote:
               | You can't avoid having to have binary blobs of data. And
               | again, GP was talking about closed-source drivers, not
               | specifically the xz attack.
        
         | tredre3 wrote:
         | Are you running linux-libre?
        
       | lukaslalinsky wrote:
        | The social side of this is really haunting me these last days.
        | It's surprisingly easy to pressure people into giving up
        | control. I've been there myself. I can't even imagine how
        | devastating this must be for the original author of XZ,
        | especially if he is dealing with other personal issues as well.
        | I hope at least this will serve as a strong example to other
        | open source people to never allow others to pressure them into
        | something they are not comfortable with.
        
         | tommiegannert wrote:
         | The Jigar Kumar nudges are so incredibly rude. I would have
         | banned the account, but perhaps they contributed something
         | positive as well that isn't mentioned.
         | 
         | I wonder if it would be possible to crowdsource FOSS mailing
         | list moderation.
        
           | goku12 wrote:
           | There is a good chance that everyone in that thread except
           | the original maintainer is in on the act. It's likely that
           | all those accounts are managed by a single person or group.
           | Targeting just one account for rudeness isn't going to help,
           | if that's true.
        
             | lukaslalinsky wrote:
             | It does help on the social/psychological side. If you, as
             | an open source project maintainer, have a policy that such
             | rudeness is not acceptable, you are much less likely to
              | fall victim to a social attack like this.
        
               | sigmar wrote:
               | That would be true if you could ban the person from using
               | new emails, but I don't think that's true when the thread
                | is rife with sock puppet accounts. You ban the first rude
               | email account, then there will be 2 new accounts
               | complaining about both the lack of commits and the
               | "heavy-handed mailing-list moderation" stifling differing
               | views.
        
               | geodel wrote:
               | Absolutely right. Considering there is a whole cottage
               | industry about _asshole replies_ from Linus Torvalds on
               | linux mailing lists.
               | 
               | For lesser/individual maintainers there is no way to
               | survive this kind of mob attack. Corporate maintainers
                | may be able to manage, as it could be considered just a
                | paid job, and there are worse ways to make money.
        
               | pixl97 wrote:
               | Yep, as the attacker you bias the entire playing field to
               | your side. If a mailing list has 20 or so users on it,
               | you create 50 accounts over time that are nice, helpful,
               | and set a good tone. Then later you come in with your
               | attack and the pushy assholes. Suddenly those 50 puppets
               | just slightly side with the asshole. Most people break
                | under that kind of social pressure and cave to the
                | jerk's request.
        
               | michaelt wrote:
               | It's entirely possible for an evildoer to make someone
               | feel bad while remaining completely polite.
               | 
               | First send a message to the mailing list as "Alice"
               | providing a detailed bug report for a real bug, and a
               | flawless patch to fix it.
               | 
               | Then you reply to the mailing list as "Bob" agreeing that
               | it's a bug, thanking "Alice" for the patch and the time
               | she spent producing such a detailed bug report, then
               | explaining that unfortunately it won't be merged any time
               | soon, then apologising and saying you know how
               | frustrating it must be for Alice.
               | 
               | Your two characters have been model citizens: Alice has
               | contributed a good quality bug report, and code. Bob has
               | helped the project by confirming bug reports, and has
               | never been rude or critical - merely straightforward,
               | realistic and a bit overly polite.
        
               | lukaslalinsky wrote:
               | As someone else said in this thread, scammers are often
               | rude, because it makes people act fast, polite responses
               | give them time to think. Of course, people are very
               | easily manipulated. But by completely rejecting rudeness
               | and having the mindset to not let others put pressure on
                | you, you will improve the odds by a lot.
        
             | cduzz wrote:
              | Reminds me of the "no soap radio" joke - "joke" being a
              | euphemism for collective gaslighting, typically one
              | played by kids on each other.
              | 
              | Is play just preparing for the same game, but with higher
              | stakes?
             | 
             | https://en.wikipedia.org/wiki/No_soap_radio
        
             | imglorp wrote:
             | The mechanism employed here seems like the good cop, bad
             | cop interrogation/negotiation technique. There is the one
             | person who has taken care to show cultural and mission
             | alignment. Then there are several misaligned actors
             | applying pressure which the first person can relieve.
             | 
             | How to identify and defuse:
             | https://www.pon.harvard.edu/daily/batna/the-good-cop-bad-
             | cop...
        
             | r00fus wrote:
             | The act relies on there being an extreme reluctance to ban.
             | Once the banhammer has been used, the act kind of falls
              | apart. Of course, different pressure campaigns can then be
             | brought to bear.
             | 
             | We live in an adversarial environment, time to stop playing
             | naively nice. Ideally it isn't the maintainer that has to
             | do all this work.
        
           | TheCondor wrote:
           | The xz list traffic was remarkably low. More than a few times
           | over the years, I thought it broke or I was unsubscribed.
           | 
           | Messages like Jigar's are kind of par for the course.
        
           | coldpie wrote:
           | > I would have banned the account
           | 
           | Yeah, same. We should be much more willing to kick jerks out
           | of our work spaces. The work is hard enough as it is without
           | also being shit on while you do it.
        
           | delfinom wrote:
           | Yea people are too accepting of allowing asshats like the
           | Jigar messages.
           | 
           | Simple ban and get the fuck out. Too often I've dealt with
           | people trying to rationalize it as much as "o its just
           | cultural, they don't understand". No, get the fuck out.
           | 
            | But hey, I'm a NYer, and telling people to fuck off is a
            | pastime.
        
             | nindalf wrote:
             | Jigar was the same person/group as Jia. They were the bad
             | cop and Jia was the good cop. Banning wouldn't have changed
             | anything. Even if Jigar had been banned, the maintainer
             | would still have appreciated the good cop's helpful
             | contributions in contrast to the unhelpful bad cop. Jia
             | would have become a maintainer anyway.
        
           | soraminazuki wrote:
           | Not surprising, unfortunately. You'd think malicious actors
           | would be nice to people they're trying to deceive. But after
           | watching a few Kitboga videos, I learned that they more often
           | yell, abuse, and swear at their victims instead.
        
             | pixl97 wrote:
             | Being nice gives people time to think.
             | 
             | Being mean is stressful and stops your brain from working
             | properly. If someone doesn't allow you to be abusive, then
             | they are not a mark. Predators look for prey that falls
             | into certain patterns.
        
           | jeltz wrote:
           | I think that is intentional and that the goal would have been
           | achieved even if Jigar (who probably is the same guy as Jia)
           | had been banned.
        
           | npteljes wrote:
           | >I wonder if it would be possible to crowdsource FOSS mailing
           | list moderation.
           | 
           | I think this could be a genuine use of an AI: to go through
           | all of the shit, and have it summarized in a fashion that the
           | user wants: distant and objective, friendly, etc. It could
           | provide an assessment on the general tone, aggregate the
           | differently phrased requests, many things like that.
           | 
            | Crowdsourcing would work best with the reddit / hacker news
           | model I feel, where discussion happens in tree styled
           | threads, and users can react to messages in ways that are not
           | text, but something meta, like a vote or a reaction
           | indicating tone.
           | 
           | Both of these have significant downsides, but significant
           | upsides too. People pick the mailing list in a similar way.
        
             | johnny22 wrote:
             | A big problem is that people allow this sort of thing as
             | part of the culture. I've followed the Fedora and PHP
             | development mailing lists a few different times over the
              | years and this sort of thing was tolerated across the
             | board. It doesn't matter if you crowdsource the moderation
             | if nobody thinks the behavior is bad in the first place.
             | 
             | Trying to do something about it was called censorship.
        
               | npteljes wrote:
                | I'm sorry, I don't understand your point clearly. Why
                | is it a big problem, and whose problem is it?
        
           | unethical_ban wrote:
            | It seems from reading the article that Jigar is in on the
            | scam. That said, I agree.
        
         | HPsquared wrote:
         | I'm reminded of the short story "The Strange Case of Mr
         | Pelham", in which a man is stalked and eventually replaced by a
         | doppelganger.
         | 
         | https://en.wikipedia.org/wiki/The_Strange_Case_of_Mr_Pelham
        
         | nathell wrote:
          | It makes Rich Hickey's "Open Source Is Not About You" [0]
          | particularly poignant.
         | 
         | As a hobbyist developer/maintainer of open source projects, I
         | strive to remember that this is my gift to the world, and it
         | comes with no strings attached. If people have any expectations
         | about the software, it's for them to manage; if they depend on
         | it somehow, it's their responsibility to ensure timely
         | resolution of issues. None of this translates to obligations on
         | my part, unless I explicitly make promises.
         | 
         | I empathize with Lasse having been slowed down by mental
         | issues. I have, too. And we need to take good care of
         | ourselves, and proactively prevent the burden of maintainership
         | from exacerbating those issues.
         | 
         | [0]:
         | https://gist.github.com/g1eny0ung/9e7d4d0f72547a8d156452e76f...
        
           | raxxorraxor wrote:
            | This is why I find the disclaimers in some open source
            | projects quite superfluous: that the software is provided
            | as is, without any warranty. Of course it is; this should
            | be the obvious default.
            | 
            | If there is a law that would entitle a user to more, that
            | is a bug in legislation that needs urgent fixing.
        
           | pixl97 wrote:
           | >having been slowed down by mental issues
           | 
           | Anyone and everyone in the OSS world should be concerned
           | about this too. You have nation state level actors out there
           | with massive amounts of information on you. How much
           | information have you leaked to data brokers? These groups
           | will know how much debt you're in. The status of your
           | relationships. Your health conditions and medications? It
           | would not take much on their part to make your life worse and
           | increase your stress levels. Just imagine things like fake
           | calls from your bank saying that debt of yours has been put
           | in collections.
        
           | somat wrote:
            | I see this as sort of the pivot on how people choose an
            | open source license. When you feel like you are building
            | the thing for others, use a GPL-ish license; it has all
            | sorts of clauses around getting everyone to play nice.
            | When building the thing for yourself, however, I think the
            | BSD-style license makes more sense: you don't really care
            | what anyone else is doing with it, and you don't want to
            | form a community. However, because it is trivial to share
            | source code, you do so.
        
         | mirekrusin wrote:
          | It's bizarre enough as it is that one starts asking
          | questions to confirm the "mental issue" had a natural cause.
        
           | couchand wrote:
           | Your experiences may differ, but I'd say pretty much anyone
           | who lived through the past few years has reason enough to pay
           | careful attention to their mental health.
        
         | lr1970 wrote:
         | Look how brilliantly they selected their target project:
         | 
         | (1) xz and the lib are widely used in the wild including linux
         | kernel, systemd, openSSH; (2) single maintainer, low rate of
          | maintenance; (3) the original maintainer has other problems
          | in his life distracting him from paying closer attention to
          | the project.
         | 
          | I am wondering how many other OSS projects look similar and
          | could be targeted in similar ways.
        
           | baq wrote:
           | I'm thinking 95% of home automation which is full of obscure
           | devices and half baked solutions which get patched up by
           | enthusiasts and promptly forgotten about.
           | 
           | Controlling someone's lights is probably less important than
           | Debian's build fleet but it's a scary proposition for the
           | impacted individual who happens to use one of those long tail
           | home assistant integrations or whatever.
        
             | davedx wrote:
              | A lot of home automation controls EV charging these days
              | too. Imagine an attack that syncs a country's EV fleet to
              | charge in the minute when demand is at its peak. I bet
              | you could cause some damage at the switchgear, if not
              | worse.
        
           | apantel wrote:
           | Yes it seems a lot like a case of a predator picking off a
           | weak and sick individual.
        
           | lenerdenator wrote:
           | Many.
           | 
           | We're in a tech slowdown right now. There are people who got
           | used to a certain lifestyle who now have "seeking work" on
           | their LinkedIn profiles, and who have property taxes in
           | arrears that are listed in county newspapers-of-record. If
           | you're an intelligence operative in the Silicon Valley area,
           | these guys should be easy pickings. An envelope full of cash
           | to make some financial problems go away in exchange for a few
           | commits on the FOSS projects they contribute to or maintain.
        
           | orthecreedence wrote:
           | A takeaway for me is to be extremely tight with personal
           | information on the internet. People will use this to craft a
           | situation to fool you.
           | 
           | Are you married? Have a house? Pets? Children? Sick parent?
           | Gay? Trans? Mental health issues? Disabled? All of this can
           | be used against you. Be careful where and how you share stuff
           | like this. I know it's not "cool" to be mysterious online
           | anymore, but it creates a much larger attack surface. People
           | can still engage with groups around these things, but better
           | to do it with various personas than to have one trackable
           | identity with everything attached to it.
        
         | oefrha wrote:
         | I've given semi-popular projects that I no longer had the
         | bandwidth to maintain to random people who bothered to email,
          | no pressuring needed. While those projects are probably four
          | to five orders of magnitude less important than xz, still
          | thousands of
         | people would be affected if the random dude who emailed was
         | malicious. What should I have done? Let the projects languish?
         | Guess I'll still take the chance in the future.
        
           | 01HNNWZ0MV43FF wrote:
           | I guess all you can do is not give the brand away.
           | 
            | Put up a link saying "Hey, this guy forked my project. I
            | won't maintain it anymore; he may add malware. Review and
            | use at your own risk."
        
           | patmorgan23 wrote:
            | If it's open source they can just fork it, and if you're
            | no longer maintaining yours you can put up a link to their
            | fork (or any other active forks). It's still on the user
            | to vet new forks.
        
         | lenerdenator wrote:
         | I feel for Lasse.
         | 
         | It's time for more of the big vendors who use these projects in
         | their offerings to step up and give people running these small
         | projects more resources and structure. $20k to have maintainers
         | for each project actually meet twice a year at a conference is
         | chump change for the biggest vendors, especially when compared
         | against the cost of the audits they'll now be doing on
         | everything Jia Tan and Co. touched.
        
           | cryptonector wrote:
           | As an OSS maintainer, $20k wouldn't help me enough unless I
           | was retired. The issue is not money (or not just money), but
           | time. If a maintainer has a full-time job, they may not have
           | time, and developers/maintainers tend to have full-time jobs,
           | so...
           | 
           | Now maybe one could build a career out of OSS
           | maintainerships, with work/time funded by lots of donations
           | much smaller than a salary but amounting to a salary.
        
             | lenerdenator wrote:
             | I was thinking more of a fix to the issue of "who the
             | hell's maintaining this package our distro/service/whatever
             | is based on" than a way to make money. The bigger projects
             | (like the kernel) and vendors (MS, IBM/Red Hat, Canonical,
             | Google, etc.) all have a vested interest in knowing the
             | actual identity and basic personalities of people who
              | maintain the important packages. If maintainers make
              | themselves available for a weekend at a conference twice
              | a year (or
             | maybe even a lighter commitment like a few short meetings
             | with a manager) they get some resources for their efforts.
             | The flip side of this, of course, is that these
             | organizations will prefer to include packages from
             | maintainers who agree to this arrangement over those who
             | don't.
             | 
             | Furthermore, these organizations are in a place to put
             | experienced, trustworthy contributors on projects that need
             | maintainers if need be. If Lasse had been able to go to,
             | idk, the Linux Foundation and say, "Listen, I'm getting
             | burnt out, got anyone?" and they said "Sure, we've got this
             | contributor with an established record who would love to
             | help maintain your project", none of this is happening
             | right now.
        
         | nebulous1 wrote:
          | In "Ghost in the Wires" Kevin Mitnick details how one of the
          | ways he obtained information was via a law enforcement
          | receptionist* whom he managed to trick, over the phone, into
          | believing he was law enforcement. He obtained information
          | this way multiple times over multiple years, and fostered a
          | phone-based friendship with this woman. He seemed to have no
          | qualms about doing this.
         | 
          | He was also turned in by multiple people whom he considered
          | close friends. In the book it did not seem that he had
          | considered that it might not be a "them" problem.
         | 
         | *my details may be off here, I read it some time ago
        
       | kevindamm wrote:
       | Small nit to pick, but in the introductory paragraph it reads
       | "unauthenticated, targeted remote code execution." I recall that
       | there was a special private/public key pair that made this
       | exploit only reproducible by the author (or only possible after
       | re-keying the binary).
       | 
       | I believe this means it was unauthorized, not unauthenticated.
        
         | rsc wrote:
         | "unauthenticated remote code execution" is a fairly standard
         | term for this kind of access.
         | 
         | (https://www.google.com/search?q=%22unauthenticated+remote+co..
         | .)
        
           | kevindamm wrote:
           | Fairly standard term for _a_ kind of access similar to this,
           | but the distinction is important.
           | 
           | Consider the impact if anybody else, not just this attacker,
           | could have exploited it -- on the other hand, it may have
           | been discovered sooner (had it also not been discovered by
           | accident first).
        
         | mcpherrinm wrote:
          | I think it is unauthenticated from the point of view of
          | SSH's own authentication. The backdoor has its own
          | credential, but the RCE is accessible even if you don't have
          | an account on the system.
        
           | kevindamm wrote:
           | That's the basis for my preferring the term relating to
           | authorization. The two terms have distinct and well-defined
           | meanings in the domain. They're both critical aspects of
           | security but for different reasons.
        
             | dist-epoch wrote:
             | Remote code execution already implies unauthorized. There
             | is no such thing as authorized remote code execution.
        
               | kevindamm wrote:
               | What makes you say that? SSH, RDP, even hitting a web
               | service are all valid cases of authorized remote code
               | execution. It's not the remote or execution parts that
               | are bad.
        
               | dist-epoch wrote:
               | Now you're redefining words.
               | 
                | Remote code execution means a single thing; running
                | JavaScript when accessing a web page or using SSH as
                | intended is not RCE.
               | 
               | https://en.wikipedia.org/wiki/Remote_code_execution
               | 
               | https://www.google.com/search?q=Remote+code+execution
        
               | kevindamm wrote:
               | I may not have been clear -- I agree that RCE,
                | unqualified, means unauthorized RCE. But then you said
               | there was no such thing as an "authorized" RCE and that's
               | where I beg to differ. Sure, the term isn't used much but
               | it wasn't the clarifying point I wanted to make above,
               | that there is a difference between authenticated and
               | authorized.
               | 
               | The links you point to there are about "RCE attack" which
               | also implies not authorized.
        
               | gquere wrote:
                | There totally is authenticated RCE, for instance a PHP
                | page that contains an RCE but needs prior
                | authentication to access the resource.
               | 
               | All RCEs are classified in either unauthenticated or
               | authenticated, the former being the worst (or best if
               | you're a researcher/hacker).
        
             | mcpherrinm wrote:
             | The distinction between authentication and authorization is
             | important, but only in the context of what's checking that
             | auth(n/z) is valid.
             | 
             | For something like SSH which has authentication and
             | authorization as features, I would expect to talk about an
             | RCE in that context, and not the backdoor's auth features.
             | 
             | This backdoor bypasses both authentication (not requiring
             | an account password, authorized key, etc on the target
             | system) as well as authorization (as it doesn't check a
             | user against any policy for what commands or users can log
             | in).
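       To make the distinction in this subthread concrete, here is a toy
       Python sketch. All names and logic are invented for illustration
       (this is not how sshd or the actual xz backdoor is implemented);
       the point is that the backdoor path skips both the authentication
       check and the authorization check, which is why the access is
       labeled "unauthenticated" from the host's perspective even though
       it is gated by the attacker's own secret:

```python
# Toy model of authentication vs. authorization, with a backdoor path
# that bypasses both. Purely illustrative; not the real xz mechanism.

USERS = {"alice": "s3cret", "bob": "hunter2"}  # authentication database
SUDOERS = {"alice"}                            # authorization policy
BACKDOOR_MAGIC = "attacker-signed-blob"        # stands in for the attacker's key check

def handle_request(user, password, command, magic=None):
    # Backdoor: if the magic credential matches, skip BOTH checks below.
    # From the host's point of view this is unauthenticated RCE.
    if magic == BACKDOOR_MAGIC:
        return "executed as root: " + command

    # Authentication: is the caller who they claim to be?
    if USERS.get(user) != password:
        return "authentication failed"

    # Authorization: is this (authenticated) identity allowed to do this?
    if user not in SUDOERS:
        return "authorization failed"

    return "executed as %s: %s" % (user, command)
```

       In this model, bob can authenticate but is not authorized, while
       the backdoor caller presents no identity to the host at all:
       hence "unauthenticated RCE" is the standard label, even though
       the backdoor enforces its own secret.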
        
       | vb-8448 wrote:
        | I wonder if Netflix will make a movie out of this story; if
        | you read the timeline, it really sounds like a well-written
        | thriller.
        
         | in3d wrote:
         | It's not compelling enough unless we find out who was behind
         | it, which is probably unlikely.
        
         | publius_0xf3 wrote:
         | It does, but has there ever been a movie that successfully
         | portrayed a compelling drama that takes place entirely on a
         | computer monitor? It's hard to even imagine. It's why I think
         | the novel is still relevant in our age because all the great
         | stories that unfold on a screen can't be acted out on a sound
         | stage.
        
           | bckygldstn wrote:
            | Searching [0] gets you halfway there: it's about a
            | compelling drama which takes place mostly in real life,
            | but is portrayed entirely on a computer monitor.
           | 
           | [0] https://en.wikipedia.org/wiki/Searching_(film)
        
       | hk__2 wrote:
       | See also "Everything I Know About the XZ Backdoor" submitted 4
       | days ago but updated since then:
       | https://news.ycombinator.com/item?id=39868673
        
       | jrochkind1 wrote:
       | This seems very difficult to defend against. What is a project
       | with a single burnt-out committer to do?
        
         | rsc wrote:
         | lcamtuf's two posts argue that this may simply not be an open-
         | source maintainer's job to defend against. ("The maintainers of
         | libcolorpicker.so can't be the only thing that stands between
         | your critical infrastructure and Russian or Chinese
         | intelligence services. Spies are stopped by spies.")
         | 
         | That doesn't mean we shouldn't try to help burnt out
         | committers, but the problem seems very hard. As lcamtuf also
         | says, many things don't need maintenance, and just paying
         | people doesn't address what happens when they just don't want
         | to do it anymore. In an alternate universe with different
         | leadership, an organization like the FSF might use donated
         | funds to pay a maintenance staff and an open-source maintainer
         | might be able to lean on them. Of course, that still doesn't
         | address the problem of Jia Tan getting a job with this
         | organization.
         | 
         | https://lcamtuf.substack.com/p/technologist-vs-spy-the-xz-ba...
         | https://lcamtuf.substack.com/p/oss-backdoors-the-allure-of-t...
        
           | uluyol wrote:
           | There are efforts from industry to try to secure open source,
           | e.g., https://cloud.google.com/security/products/assured-
           | open-sour...
           | 
            | I suspect some variant of this will grow so that some
            | companies, MS/GitHub for example, audit a large body of
            | code and vet it for everyone else.
        
           | intunderflow wrote:
           | Why the assumption this is a Russian or Chinese intelligence
           | service? Western governments aren't above this sort of
           | conduct: https://www.mail-
           | archive.com/cryptography@metzdowd.com/msg12...
        
             | cb321 wrote:
              | Why are people assuming it's _any_ intelligence
              | service/state actor? With cryptocurrency valuations, it
              | would seem
             | like remote rooting gajillions of machines would be highly
             | incentivized for a private person/collective. Not to
             | mention _other_ financial incentives. Our digital
             | infrastructure secures enormous value much of which can be
             | pilfered anonymously.
             | 
             | I admit, the op has a "professional/polished vibe" to me as
             | well, but we seem to know very little except for what work
             | time/zones were preferred by the possibly
             | collective/possibly singular human(s) behind the Jia Tan
             | identity. Does anyone have slick linguistic tools to assess
             | if the writer is a single author? Maybe an opportunity to
             | show off.. It's sort of how they caught Ted Kaczynski.
             | 
             | It also absolutely makes sense to think of all the state
             | actors (I agree including as you say the US/UK) as part of
              | the _ongoing_ threat model. If the KGB/Ministry of State
              | Security/NSA/MI6 were _not_ doing this before, then they
              | surely might in the future. Maybe with more gusto/funding
             | now! They all seem to have an "information dominance at all
             | costs" mentality, at least as agency collectives, whatever
             | individuals inside think.
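       As an aside on the "slick linguistic tools" question above: the
       crudest form of stylometry is easy to sketch. The toy below
       compares function-word frequency profiles of two text samples by
       cosine similarity; real authorship attribution uses far richer
       features and careful statistics, so treat this strictly as an
       illustration of the idea:

```python
# Crude stylometry sketch: compare function-word frequencies of two
# samples via cosine similarity. Illustrative only; real authorship
# analysis uses far richer features.
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def profile(text):
    """Relative frequency of each function word in the sample."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

       The intuition, which underlies established techniques such as
       Burrows' Delta, is that writers diverge on exactly these
       "content-free" words even when consciously varying their style.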
        
               | maerF0x0 wrote:
                | People often assume state actors are the pinnacle of
                | sophistication, and especially of long games. (Also,
                | notably, Chinese culture is very attuned/trained to
                | long games, relative to American impatience.) This was
                | a sophisticated attack; hence the presumption.
        
               | cb321 wrote:
               | Fair enough - I agree that sophistication inspires the
               | presumption, but it's still just that (not that you said
               | otherwise - just emphasizing).
               | 
               | Anyway, I've yet to hear of anything in the xz work
               | beyond the ability of 1-3 skilled black hats with a lot
               | of post-COVID time on their hands. The NSA ensuring the
               | Intel ME/AMT could be turned off seems another level of
               | sophistication entirely, for example { a "higher
               | pinnacle" in your nice phrasing :-) }.
               | 
               | In terms of sheer numbers, my impression is that the vast
               | majority of attacks blocked at almost every
               | sophistication level are more "criminals" than "states".
               | Admittedly, that may partly be states just acquiring
               | ability to act surgically rather than launching big
               | compromise initiatives like botnets (or otherwise states
               | going undetected). I'm sure it's hard to know.
               | 
               | Maybe we're just in a sad era of increasing attack
               | sophistication measured in "Kilo-Sophistic-meters"? The
               | AntiVirus industry has been having a boom lately.
               | 
               | It's probably already been mentioned many times, but
               | besides economic & geopolitical incentives, maybe the
               | attacker(s) was a hard core `systemd` or IBM/RedHat hater
               | or one of the people who supposedly issued death threats
               | to Lennart Poettering now at Microsoft or even an open
               | source hater burnt out or wanting to burn the world down
               | like The Joker in Batman. In light of that, Russ' Setting
               | The Stage Prelude could perhaps profitably add the
               | introduction of that lib dependency into `systemd` and
               | also various distros defaulting to `systemd`.
               | 
               | Anyway, premature conclusions are to be cautioned against
               | over & over. That's all I was trying to do. { And I'm not
               | claiming you were concluding anything - you seem pretty
               | open-minded about it all. I was always just amplifying
               | parent comments. I absolutely agree long games "feel"
               | more State Actor - part of what I meant by "vibe", but to
               | quote the detective in V For Vendetta - "it's just a
               | feeling". To Old People, 2 years famously doesn't seem as
               | long as to a 25- or 15-year old. ;-) It's actually short
               | on the 5+ year Linux distro maintenance time scales. }
        
             | wumeow wrote:
             | Countered in the very same thread: https://www.mail-
             | archive.com/cryptography@metzdowd.com/msg12...
        
         | meowface wrote:
         | As a burnt out creator of open source projects with thousands
         | of GitHub stars who's received numerous questionable
         | maintainership requests: either very carefully vet people or
         | let it stagnate. I chose the latter and just waited until
         | others forked it, in large part because I didn't want to be
         | responsible for someone hijacking the project to spread
         | malware.
         | 
         | If I had ever received a request from a well-known figure with
         | a longstanding reputation, who's appeared in-person at talks
         | and has verifiable employment, I might've been more receptive.
         | But all the requests I got were from random internet identities
         | that easily could've been fabricated, and in any case had no
         | previous reputation. "Jia Tan" and their other sockpuppets very
         | likely are not their real identities.
        
         | coldpie wrote:
         | It's not a perfect solution, but I think Big Companies could
         | have a role to play here. Some kind of tenure/patronage system,
         | where for every $1B a Big Company makes in profit, they employ
         | one critical open source maintainer with a decent salary (like
         | $200k or something). The job would only have two requirements:
         | 1) don't make everyone so mad that everyone on the Internet
         | asks for you to be fired, and 2) name a suitable replacement
         | when you're ready to move on. The replacement would become an
         | employee at Big Company, which means Big Company would need to
         | do whatever vetting they normally do (background checks, real
         | address to send paychecks and taxes, etc).
         | 
         | In this scenario, Jia Tan would not be a suitable replacement,
         | since they don't friggin' exist.
         | 
         | Yes, there's problems with this approach. When money gets
         | involved, incentives can get distorted. It limits the pool of
         | acceptable maintainers to those employable by a Big Company.
         | But I think these are solvable problems, especially if there's
         | a strong culture of maintainer independence. It provides a real
         | improvement over the current situation of putting so much
         | pressure on individuals doing good work for no benefit.
        
           | jodrellblank wrote:
            | > "_It's not a perfect solution_"
           | 
           | Is it a solution at all? Say Oracle offer Lasse Collin $200k
           | for maintaining xz but he doesn't want to work for Oracle so
           | he refuses, then what? Amazon offer Lasse $200k but they
           | require fulltime work on other open source packages which he
           | isn't experienced with or interested in, so he refuses, then
           | what? Google employ someone else for $200k but they can't
           | force Lasse Collin to hand over commit rights to xz to
           | Google, or force him to work with a fulltime Google employee
           | pestering with many constant changes trying to justify their
           | job, and they can't force Debian to accept a new Google fork
           | of xz, so then what? And NetFlix, Microsoft, Facebook, Uber,
           | they can't all employ an xz maintainer, xz doesn't need that
           | many people, but if they just employ 'open source
           | maintainers' scattering their attention over all kinds of
           | random projects they have no prior experience with, how would
           | they catch this kind of subtle multi-pronged long-term attack
           | on some low-attention, slow moving, project?
           | 
           | Google already employ a very capable security team who find
           | issues in all kinds of projects and publicise them, they
           | didn't find this one. Is it likely this attack could have
           | made its way into ChromeOS and Android if it wasn't noticed
           | now, or would Google have noticed it?
           | 
            | > "_1) don't make everyone so mad that everyone on the
            | Internet asks for you to be fired_"
           | 
           | So it's a sinecure position, doing nothing is the safest
           | thing?
           | 
            | > "_and 2) name a suitable replacement when you're ready
            | to move on_"
           | 
           | How could Lasse Collin have named a more suitable replacement
           | than someone who seemed technically capable, interested in
           | the project, and motivated to work on some of the boring
           | details like the build system and tests and didn't seem to be
           | doing it for the hype of saying "I improved compression by
           | 10%" for their resume? Are they needing to be skilled in
           | hiring and recruitment now?
        
             | coldpie wrote:
             | I think you've misunderstood my suggestion. I said the job
             | has two responsibilities. No more. You added a bunch of
             | other responsibilities, I'm saying those wouldn't be
             | allowed. It would be in the employment agreement that is
             | purely payment for doing the maintainership tasks they were
             | already doing. It would be a culture expectation that the
             | company not apply pressure on maintainers.
             | 
             | > but if they just employ 'open source maintainers'
             | scattering their attention over all kinds of random
             | projects they have no prior experience with
             | 
             | They would pay the people who are doing the work now. Under
             | this hypothetical, one of the Big Companies would have
             | hired Lasse as the xz maintainer, for example. His job
             | responsibilities are to maintain xz as he had been doing,
             | and identify a successor when he's ready to move on.
             | Nothing else.
             | 
             | > So it's a sinecure position, doing nothing is the safest
             | thing?
             | 
             | No. Not doing the maintenance tasks would make everyone
             | mad, violating one of the two job responsibilities.
             | 
             | > How could Lasse Collin have named a more suitable
             | replacement than someone who seemed technically capable,
             | interested in the project, and motivated to work on [it]
             | 
             | Lasse would suggest Tan as a suitable replacement. Big
             | Company's hiring pipeline would approach Tan and start the
             | hiring process (in person interviews, tax docs, etc etc).
             | At some point they would realize Tan isn't a real person
             | and not hire him. Or, the adversary would have to put up "a
             | body" behind the profile to keep up the act, which is a
             | much higher bar to clear than what actually happened.
        
               | jodrellblank wrote:
               | Leaving aside issues of how it could work, Lasse Collin
               | wasn't the one who saw this attack and stopped it so how
               | would paying him have helped against this attack?
               | 
               | > " _Lasse would suggest Tan as a suitable replacement.
                | Big Company's hiring pipeline would approach Tan and
               | start the hiring process (in person interviews, tax docs,
               | etc etc). At some point they would realize Tan isn't a
               | real person and not hire him_"
               | 
               | What if they find that Tan _is_ a real person but he
                | doesn't want to work for Amazon or legally can't (This
               | is before knowing of his commits being malicious, we're
               | assuming he's a fake profile but he could be a real
               | person being blackmailed)? Collin can't leave? Collin has
               | to pick someone else out of a candidate pool of people
               | he's never heard of? Same question if they find Tan isn't
               | a real person - what then; is there an obligation to
               | review all of Tan's historical commits? Just committing
               | under a pseudonym or pen name isn't a crime, is it? Would
               | the new maintainer be obliged to review _all_ historic
               | commits or audit the codebase or anything? Would Amazon
               | want their money back from Lasse once it became clear
               | that he had let a bad actor commit changes which opened a
               | serious security hole during his tenure as maintainer?
               | 
               | > " _No. Not doing the maintenance tasks would make
               | everyone mad, violating one of the two job
               | responsibilities._ "
               | 
               | What he was doing already was apparently ignoring issues
               | for months and making people like Jigar Kumar annoyed.
               | Which is fine for a volunteer thing. If "Jigar Kumar" is
               | a sock-puppet, nobody knew that at the time of their
                | posts; Lasse Collin's hypothetical employer wouldn't have
               | known and would surely be on his case about paying him
               | lots of money for maintenance while complaints are
               | flowing on the project's public mailing list, right?
               | Either they're paying him to do what he was doing before
               | (which involved apparently ignoring work and making some
               | people mad) or they're paying him to do more than he was
               | doing before (which is not what you said).
               | 
               | It doesn't seem like it would work - but if it did work
               | it doesn't seem like it would have helped against this
               | attack?
        
               | coldpie wrote:
               | > Lasse Collin wasn't the one who saw this attack and
               | stopped it so how would paying him have helped against
               | this attack?
               | 
               | Well, there's a few issues I'm trying to target. I'm
               | trying to work backwards from "how do we stop bad actor
               | Tan from getting maintainer access to the project?"
                | Creating an identity-verified relationship (employment)
               | is a good fit for that, I think. And it nicely solves
               | some other related issues with the current volunteer
               | maintainership model. Lasse may not have felt the strong
               | pressure/health issues if he was being paid to do the
               | work. Or, if he was feeling burnt out, he may have felt
               | more comfortable passing the torch earlier if there was a
               | clear framework to do so, backed by an entity that can do
               | some of the heavy lifting of naming/validating a
               | successor.
               | 
               | > What if they find that Tan is a real person but he
               | doesn't want to work for Amazon or legally can't
               | 
               | I think this would be a fairly rare occurrence, but it's
               | one I called out as a potential problem in my original
               | post, yeah ("smaller pool of possible maintainers"). If
               | there isn't a clear successor, I think the maintainer
               | could inform the Big Company that they'd like to move on
               | in the next year or two, and Big Company could maybe find
               | an internal engineer who wants to take over the role. Or
               | maybe this more formal sponsored-maintainership
               | arrangement would create incentives for outside
               | contributors to aim for those positions, so there's more
               | often someone waiting to take over (and then be verified
               | by Big Company's hiring process).
               | 
               | > is there an obligation [for the maintainer] to review
               | all of Tan's historical commits? Would the new maintainer
               | be obliged to review all historic commits or audit the
               | codebase or anything? Would Amazon [fire] Lasse once it
               | became clear that he had let a bad actor commit changes
               | which opened a serious security hole during his tenure as
               | maintainer?
               | 
               | (I tweaked your questions a tiny bit to rephrase them as
               | I interpreted them. I think the spirit of your questions
               | was kept, I apologize if not.) If these tasks fall under
               | the "don't make everyone mad" job responsibility, then
               | yes. If not, then no, to all of these. There are no
               | obligations other than the two I mentioned: don't piss
               | off the community and help name a successor. It's up to
               | the project's community to decide if the maintainer is
               | not meeting their obligations, not the sponsoring Big
               | Company.
               | 
               | > What he was doing already was apparently ignoring
               | issues for months and making people like Jigar Kumar
               | annoyed.
               | 
               | I'm not sure. It seems like Kumar was a bad actor. _Was_
               | there actually a real maintenance issue? If so, maybe it
               | could have been avoided in the first place by the
               | sponsorship arrangement, like I mentioned at the top of
               | this reply. Or, the community could raise the problem to
               | Big Company, who can do the work of verifying that there
               | is a problem and working with the maintainer to resolve
                | it. Instead, what happened here was one burned-out guy
                | deciding to hand the keys over to some email address.
        
               | jodrellblank wrote:
                | > " _I'm trying to work backwards from "how do we stop
               | bad actor Tan from getting maintainer access to the
               | project?" Creating an identify-verified relationship
               | (employment) is a good fit for that, I think._"
               | 
                | It would stop a sock puppet, but Jia Tan might be a real
                | person, a real developer paid or blackmailed by a hostile
                | group; Amazon might just have hired him and handed over
                | maintainer access to him thinking it was above board, if
                | a problem hadn't been found yet. I don't know where Jia
                | Tan claimed to be from, but it's quite possible they
                | would say "I don't have a passport", "I can't leave my
                | family to travel to America for an in-person interview",
                | "I'm not in good health to travel", "I don't speak
                | English well enough for an in-person interview", "I live
                | in a poor country without a functioning government and
                | have no tax documents", or other quite plausible excuses.
               | 
               | > " _Or, if he was feeling burnt out, he may have felt
               | more comfortable passing the torch earlier if there was a
               | clear framework to do so, backed by an entity that can do
               | some of the heavy lifting of naming /validating a
               | successor._"
               | 
                | Your suggested $200k is equivalent to £160k in the
               | UK; look at this UK average salary list:
               | https://uk.jobted.com/ no job comes close; not Managing
               | Director, IT director, Finance Director, Aerospace
               | engineer, DevOps engineer, neurosurgeon, nothing on the
                | list is above £110k. Sure there are many people earning
               | that much as a senior devops AI cloud security specialist
               | in a fast paced London based fintech trading house, but
               | the idea that someone would comfortably pass up a salary
               | around the 98th percentile of incomes in the country for
               | like 2 days a month of work because they're "feeling
                | burnt out" is unthinkable. Anyone sensible would hold
                | onto that until it was pried out of their cold dead
                | hands; American tech salaries are almost literally
               | unbelievable. Even moreso if we consider a maintainer in
               | a poorer country.
               | 
               | > " _I tweaked your questions a tiny bit to rephrase them
               | as I interpreted them. I think the spirit of your
               | questions was kept, I apologize if not_ "
               | 
               | I started writing Tan, but then changed it. A lot of your
               | reply is assuming that we know there were malicious
               | patches and suspect Jigar Kumar was a bad actor and that
               | the big company would be somewhat US friendly. But we
               | can't plan to know all that for all situations like this.
               | Some people will be speculating that the previous paid
               | maintainer was complicit and all their work and merges
                | are now suspect. The billion-dollar company who hired
                | Collin as maintainer in this hypothetical could be Baidu
                | or Yandex or Saudi Aramco, and then people would be
                | suspicious. It's one thing to have your task be "don't
               | make people mad" but doesn't that change if people
               | getting mad can give you unbounded retrospective work and
               | responsibility?
               | 
               | > " _If these tasks fall under the "don't make everyone
               | mad" job responsibility, then yes. [...] Was there
               | actually a real maintenance issue? [...] Or, the
               | community could raise the problem to Big Company, who can
               | do the work of verifying that there is a problem and
               | working with the maintainer to resolve it._"
               | 
               | As soon as the internet becomes aware that they can get
               | anything merged ASAP by threatening to get mad, everyone
                | will be mad about everything all the time. Who at the
                | BigCo will do the work of verifying whether there is a
                | problem? I mean, let's put Lasse Collin on a specific
                | team along with other employees who are expected to work
               | 40-80 hour weeks while he isn't. The pressure on the team
               | manager to ditch the maintainer and distribute his salary
               | among the other team members would be constant. If those
               | other team members see him doing less work for similar or
               | more money it would be a morale killer and they would
               | want to leave. If they _also_ have to know his project
               | well enough to follow all the drama and things people are
                | complaining about and tease out what is and isn't a real
               | problem and coerce him to do his job, sorry 'work with
               | him', well, they won't be very motivated to do that.
        
           | sloowm wrote:
            | I think the best solution would be governments forcing
            | companies to secure the entire pipeline and setting up a
            | non-profit that does this for open source packages. Have
            | security researchers work for the non-profit, and force
            | companies that use software from some guy in Nebraska to
            | pay into it (could be in the form of labor) to get the code
            | checked and certified.
           | 
           | The guy in Nebraska is still not getting anything but will
           | also not have the stress of becoming one of the main
           | characters/victims in a huge attack.
        
         | clnhlzmn wrote:
         | This is not Lasse Collin's responsibility. What is a burnt out
         | committer supposed to do? Absolutely nothing would be fine.
         | Doing exactly what Lasse Collin did and turn over partial
         | control of the project to an apparently helpful contributor
         | with apparent community support is also perfectly reasonable.
        
         | caoilte wrote:
         | get the project taken over by a foundation eg the Apache
         | Foundation.
        
         | 2devnull wrote:
         | Check the GitHub profile of anybody that commits. Is there a
         | photo of the person? Can you see a commit history and repos
         | that help validate who they seem to be.
         | 
         | In this instance, noticing the people emailing to pressure you
         | have fake looking names that start with adjacent letters and
         | the same domain name.
         | 
         | Be more paranoid.
        
           | pixl97 wrote:
           | > Is there a photo of the person?
           | 
           | Does that even matter these days?
           | 
           | Especially if we're talking nation state level stuff
           | convincing histories are not hard to create to deflect casual
           | observers.
           | 
           | >Be more paranoid.
           | 
           | Most people in OSS just want to write some code to do
           | something, not defend the world against evil.
        
             | 2devnull wrote:
             | "Does it even matter?"
             | 
              | Yes, it would have prevented this attack. It isn't totally
              | sufficient, but it's quick and easy.
             | 
             | "Most people don't want ..."
             | 
             | I get it. I think the issue is that pushing junk code from
             | malicious contributors into your project causes more hassle
             | in the long run. If you just want to code and make stuff
             | work, you should probably be careful who you pull from.
             | It's not just for the benefit of others, it's first and
             | foremost to protect the code base and the time and sanity
             | of other contributors.
        
               | pixl97 wrote:
               | "Sorry, we had to kill open source software because bad
               | people exist" -Microsoft laughing all the way to the
               | bank.
               | 
               | The more paranoid walls you put up the more actual
               | contributors getting into the movement say "eh, screw
               | this, who wants to code anyway".
               | 
                | This isn't just a problem with OSS; this is a
                | fundamental issue the internet as a whole is
                | experiencing, and no one has good answers that don't
                | have terrible trade-offs of their own.
        
         | INTPenis wrote:
         | That's a great question and instinctively I'd say better to
         | halt development than cave to pressure.
        
         | Vicinity9635 wrote:
         | Until we can patch humans, social engineering will always work.
         | Burnt-out comitter or not. Just be vigilant.
        
       | exacube wrote:
       | Is the real identity of Jia Tan known, even by Lasse Collin?
       | 
       | I would think a "real identity" should be required by linux
       | distros for all /major/ open source projects/library committers
       | which are included in the distro, so that we can hold folks
       | legally accountable
        
         | asvitkine wrote:
         | How would that even work? Are distros expected to code their
         | own alternative versions of open source libraries where they
         | can't get the maintainers to send their IDs? Or what stops from
         | forged IDs being used?
        
         | gquere wrote:
         | This will never be accepted by the community.
        
         | rsc wrote:
         | Open source fundamentally does not work that way. There are
         | many important open source contributors who work
         | pseudonymously.
         | 
         | Google's Know, Prevent, Fix blog post floated the idea of
         | stronger identity for open source in
         | https://security.googleblog.com/2021/02/know-prevent-fix-fra...
         | and there was very significant pushback. We learned a lot from
         | that.
         | 
         | The fundamental problem with stronger identity is that spy
         | agencies can create very convincing ones. How are distros going
         | to detect those?
        
           | kashyapc wrote:
            | While "open source" fundamentally doesn't work that way, the
            | point here is about _maintainers_, not regular contributors.
           | Identity of new maintainers must be vetted (via in-person
           | meetups and whatever other mechanisms) by other "trusted"
           | maintainers whose identities are "verified".
           | 
           | I realize, it's a hard problem. (And, thanks for the link to
           | the "Know, Prevent, Fix" post.)
           | 
           | PS: FWIW, I "win my bread" by working for a company that
           | "does" open source.
           | 
            | Edit: Some projects I know use in-person GPG key signing, or
            | maintainer summits (Linux kernel), etc. None of them are
            | perfect, but they raise the bar for motivated anonymous
            | contributors with malicious intent who want to become
            | maintainers.
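            | 
            | A minimal sketch of the verification flow that in-person
            | key signing ultimately feeds into: a maintainer signs a
            | release tag, and anyone holding the public key can check it
            | before packaging. Everything below (key, repo, tag name) is
            | a throwaway illustration, not xz's real setup:

```shell
set -eu
# Throwaway GPG home and key; the identity is purely illustrative.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-gen-key 'Maintainer <maintainer@example.org>' default default never

# Throwaway repo standing in for the project.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.name Maintainer
git config user.email maintainer@example.org
git config user.signingkey maintainer@example.org

echo hello > file
git add file && git commit -qm 'init'

# The maintainer signs the release tag...
git tag -s v1.0 -m 'release v1.0'
# ...and a packager verifies the signature before shipping.
git verify-tag v1.0
```

            | The hard part, of course, is the step this sketch fakes:
            | binding the key to a trusted person in the first place,
            | which is what key-signing parties and maintainer summits
            | try to address.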
        
             | oefrha wrote:
             | I've worked with a few very talented pseudonymous
             | developers on the Internet over the years. I can't think of
             | any way to vet their identities while maintaining their
             | anonymity (well, it's basically impossible by definition),
             | plus if you're talking about in-person meetups, traveling
             | from, say, Asia to North America isn't cheap and there
             | could be visa issues. The distinction between maintainers
             | and non-maintainers isn't that meaningful because non-
             | maintainers with frequent and high quality contributions
             | will gain a degree of trust anyway. The attack we're
             | discussing isn't about someone ramming obviously malicious
             | code through as a maintainer, they passed or could have
             | passed code review.
        
               | cesarb wrote:
               | > traveling from, say, Asia to North America isn't cheap
               | and there could be visa issues.
               | 
               | And there are other reasons some people might not want to
               | travel outside their nearby region. For instance, they
               | might be taking care of an elderly relative. Or they
               | might be the elderly relative, with travel counter-
               | indicated for health reasons.
        
               | Vegenoid wrote:
               | I'll bet many of them simply wouldn't want to.
        
               | kashyapc wrote:
               | I agree, these are all really valid reasons. FWIW, I
               | myself have worked with "anonymous" maintainers and
               | contributors that I've never met.
        
               | kashyapc wrote:
               | You make excellent points; I agree. Especially, a non-
               | maintainer with a high-quality contribution gaining
               | trust. Many times, (tired) maintainers _are_ forced to
               | "rubber-stamp" and merge such high-quality patches. It
               | could be due to any number of (valid) reasons--a CVE fix,
               | an involved performance fix that will take you _weeks_ to
               | load up on the context, enabling a hardware feature that
               | 's under semi-NDA, you just trust their work too well,
               | maintainer fatigue, etc.
               | 
               | What I'm saying is, in context of _critical-path_
               | software, the identity of maintainers vs non-maintainers
               | matters more. I 'm not naively claiming that it'll
               | "solve" the problem at hand, just that it's another
                | _layer_ in defense. For critical software, you shouldn't
                | be able to simply submit a "patch"[1] such as:
               | tests: Add-binary-blob-with-a-subtle-backdoor.xz
               | Signed-off-by: "Anonymous Rabbit"
               | <LittleBunny123@lolmail.com>
               | 
               | Commit it yourself, brazenly push it into Linux distros,
               | and then anonymously sign off into the sunset with no
               | trace. I'm sure you'll agree that there's a _world_ of
                | difference between a deeply entrenched, critical library
               | and a random user-space application.
               | 
               | It's a messy situation. How much, if at all, "clever
               | tech" can mitigate this human "trust issue" is an open
               | problem for now.
               | 
               | [1] https://git.tukaani.org/?p=xz.git;a=commitdiff;h=cf44
               | e4b7f5d
        
           | nrvn wrote:
            | I was initially thinking that one of the core non-tech
            | causes of this attack was the single-person maintenance
            | mode of the xz project.
           | 
            | But you have a point. As an agency you can seed two Jia
            | Tans to serve diligently for a couple of years, following
            | the strict 2-person code reviews, and then still poison the
            | project. On the other hand, if the xz build process had
            | been automated and transparent and release artifacts had
            | been reproducible and verifiable, then even in this poor
            | condition of xz-utils as a project it would have been much
            | harder to squeeze in a rogue m4/build-to-host.m4.
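            | 
            | That tarball-vs-git gap is mechanically checkable: the
            | malicious build-to-host.m4 existed only in the distributed
            | release tarball, never in the git tree. A minimal sketch of
            | such a check, using stand-in directories rather than the
            | real xz artifacts:

```shell
set -eu
work="$(mktemp -d)"
# "git-tree" stands in for a checkout of the release tag; "tarball-tree"
# for the unpacked release tarball that distros actually build from.
mkdir "$work/git-tree" "$work/tarball-tree"
printf 'AC_INIT([xz], [5.6.1])\n' > "$work/git-tree/configure.ac"
cp "$work/git-tree/configure.ac" "$work/tarball-tree/configure.ac"

# Simulate the attack: a file present only in the shipped tarball.
printf 'rogue macro\n' > "$work/tarball-tree/build-to-host.m4"

# Any file present in one tree but not the other gets flagged for
# review; this prints "Only in .../tarball-tree: build-to-host.m4".
diff -r --brief "$work/git-tree" "$work/tarball-tree" || true
```

            | In a real check the two trees would come from "git archive"
            | at the release tag and from the published tarball; files
            | legitimately generated by autotools would need an
            | allowlist, which is exactly the gap this backdoor hid in.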
        
           | in3d wrote:
           | The blog post clarified it's about maintainers of critical
           | packages, not all contributors. This could be limited to
           | packages with just one or two maintainers, especially newer
           | ones. And they could remain somewhat anonymous, providing
           | their information to trusted third parties only. If some
           | maintainers don't accept even this, their commits could be
           | put into some special queue that requires additional people
           | to sign off on them before they get accepted downstream. It's
           | not a complete fix, but it should help.
        
           | delfinom wrote:
           | My problem with stronger identity is it violates open source
           | licenses.
           | 
           | Source code is provided without warranty and this statement
           | is clear in the license.
           | 
            | Putting a verified identity behind publishing the source
            | code is basically starting to twist said no-warranty. Fuck
            | that.
        
         | mapmeld wrote:
         | What would prevent a known person from accepting a govt payout
         | to sabotage their project, or to merge a plausible-looking
         | patch? Relying on identity just promotes a type of culture of
         | reputation over code review.
        
         | tester457 wrote:
         | If this was done by a state actor then this policy wouldn't
         | help at all. States have no shortage of identities to fake.
        
         | tamimio wrote:
         | Nope, identities won't solve it, you can have people coerced,
         | blackmailed, threatened, or simply just a "front" while there's
         | a whole team of spies in the background. The process should be
         | about what's being pushed and changed in the code, but I would
         | be lying to say I have a concrete concept how it is possible.
        
       | gawa wrote:
       | Excellent summary of the events, with all the links in one place.
        | This is the perfect resource for anyone who wants to catch up,
       | also to learn about how such things (especially social
       | engineering) unfold in the wild, out in the open.
       | 
       | One thing that could be added, for the sake of completeness: in
       | the part "Attack begins", toward the end, when they are pushing
       | for updating xz in the major distros, Ubuntu and Debian are
       | mentioned but not Fedora.
       | 
       | Looks like the social engineering/pressuring for Fedora started
       | at least weeks before 2024 March 04, according to a comment by
       | @rwmj on HN [1]. I also found this thread on Fedora's devel list
       | [2], but didn't dig too much.
       | 
       | [1] https://news.ycombinator.com/item?id=39866275
       | 
       | [2]
       | https://lists.fedoraproject.org/archives/list/devel@lists.fe...
        
         | sfjailbird wrote:
          | It would be interesting if Lasse Collin published his off-list
         | interactions with 'Jia Tan' and any of the other pseudonyms, to
         | get an even better angle on the social engineering parts.
         | Apparently a large part of the campaign was via private
         | channels to Lasse.
        
       | jvanderbot wrote:
       | > It's also good to keep in mind that this is an unpaid hobby
       | project.
       | 
       | That's the root cause. At some point a corp/gov consortium needs
       | to adopt key projects and hire out the maintainers and give them
       | real power and flexibility, perhaps similar to the way a nation
       | might nationalize key infrastructure.
       | 
       | But this is against the core ethos at some level. Freedom and
       | safety can be antagonistic.
        
         | maerF0x0 wrote:
         | > gov consortium
         | 
          | Personally I wouldn't trust a govt to not backdoor everything.
        
           | jvanderbot wrote:
           | A consortium is a great way to get money and power into those
           | maintainers. Never said they should take the power from them
           | or provide code. I think people are hearing their own mind
           | here, not mine.
        
         | none_to_remain wrote:
         | Avoid compromise with one simple trick: surrender to the
         | attackers
        
         | tamimio wrote:
         | You are implying that governments are more technically
         | competent and more trustworthy than open source communities..
        
           | jvanderbot wrote:
           | I said nothing of the sort.
           | 
           | I'm implying they are richer.
        
           | mttpgn wrote:
           | Many open source projects often already do receive US
           | government funding, mostly through an onerous grant-
           | application process. Nationalizing American open source
           | projects could make them operate more like European open
           | source where their EU funding is open and clear. The
            | detrimental trade-off, however, is that the American
            | agencies most capable of supporting and contributing
            | directly to infrastructure security have burned away all
            | trust from the rest of the world. Direct contributions from
            | those USG agencies would reduce global trust in those
            | projects even further.
        
       | rwmj wrote:
       | Missing the whole Fedora timeline. I was emailed by "Jia Tan"
       | between Feb 27 and Mar 27, in a partially successful attempt to
       | get the new xz into Fedora 40 & 41. Edit: I emailed Russ with the
       | details.
        
         | itslennysfault wrote:
         | I wondered about this. I saw the note at the bottom "RedHat
         | announces that the backdoored xz shipped in Fedora Rawhide and
         | Fedora Linux 40 beta" but saw nothing in the timeline
         | explaining when/how it made it into Fedora.
        
           | rwmj wrote:
           | These are the Fedora packages for xz that are vulnerable. If
           | you click through the links you can see when they were added:
           | 
           | https://lists.fedoraproject.org/archives/list/devel@lists.fe.
           | .. (https://archive.ph/diGNB)
           | 
           | This is the rough sequence of events in Fedora:
           | 
           | https://lists.fedoraproject.org/archives/list/devel@lists.fe.
           | .. (https://archive.ph/e0SdX)
        
       | j1elo wrote:
        | A "good" side effect of this whole thing for OSS maintainers is
        | that now any time an entitled user starts being too pushy or
        | too... well, _entitled_, they can be given a canned response:
       | 
       |  _Are you trying to pull an xz attack on me?_
        
         | ceejayoz wrote:
         | Ah, the old "undercover cops can't lie about not being a cop,
         | just ask them" technique.
        
           | Fnoord wrote:
            | If you casually ask this while you can study (and preferably
            | record!) the person's posture and how they react in real
            | time, then you can apply the interrogation techniques which
            | the CIA et al. use.
        
             | ceejayoz wrote:
             | So becoming an open source maintainer will involve an in-
             | person trip to an interrogation?
             | 
             | The xz attack involves, in significant part, a maintainer
             | burned out and happy to accept offered help. I don't think
             | making it _substantially harder_ to receive genuine help is
             | likely to improve the situation.
        
               | Fnoord wrote:
               | I just wanted to bring up that technique (it is not
               | unique to CIA; LE also uses it). I never asserted the
               | technique would've been useful in this very situation.
               | However, in your framing you ignored the option of video
               | conferencing.
               | 
                | Also, you are forgetting that bringing up 'are you
                | trying to pull an xz on me?' is a yellow flag towards
                | the person who said it. It isn't definite; it just puts
                | people on alert and gathers the attention of watchful
                | eyes. We should be careful not to overdo it, though.
        
               | j1elo wrote:
               | I made the original comment and must say that it was only
               | meant as a joke. But, _maybe_ some cases would merit
               | using it.
               | 
                | This is of course based on how the xz attack needed a
                | couple of apparently innocent community members to
                | become too pushy, to the point of almost bullying the
                | author and pointing fingers at his supposed passivity
                | towards maintaining the project.
               | 
                | Faced with some levels of such behavior, after all
                | that has happened, one could reasonably (even if just
                | jokingly) wonder if it's not a similar attempt. IMO,
                | being framed as a possible attacker would probably
                | either calm the shit out of some overly entitled
                | users, or provoke them into even more whining.
               | 
                | Now seriously, there is an attitude that I wish were
                | more popular among FOSS maintainers, for the sake of
                | a mentally healthy relationship with their role: the
                | ability to work on their own terms, and to not forget
                | even for a split second that this is all a hobby
                | activity. FOSS licenses are explicitly written to
                | allow A LOT of freedoms, such as hiring 1st- or 3rd-
                | party support services, forking, or whatever else,
                | but not to demand anything from the author.
        
       | softwaredoug wrote:
       | Open Source is a real tragedy of the commons.
       | 
       | Everyone wants to consume it. Nobody wants to participate.
       | 
       | People are upset when a company like Elastic or Mongo switches to
       | a "non open" license. But at the same time, the market doesn't
       | leave much choice. Companies won't be incentivized to contribute
       | to projects when they can freeload. The market actually wants
       | vendors; it doesn't want to participate in open source. But it
       | doesn't want to _pay_ for vendors either.
       | 
       | So I think it's entirely appropriate for anyone / any entity
       | that creates "open source" to change their license, set
       | limits, say "no", and let users be damned unless they're
       | willing to make it financially appealing. It's literally
       | "Without Warranty" for a reason.
       | 
       | Letting your passion project become hijacked into determining
       | your mental health is really depressing. F' the people who
       | can't get on board with your boundaries etc. around it. They
       | deserve the natural consequences of their lack of support.
        
         | rwmj wrote:
         | Yeah whatever. Closed source software is much easier to
         | subvert, just have your agents join the company and they can
         | push whatever they want without any external (or even internal)
         | review.
        
       | sebstefan wrote:
       | Maybe one of the outcomes of this could be a culture change in
       | FOSS towards systematically banning rude consumers in GitHub
       | issues, or, more generally, a heightened community awareness
       | that makes us come down on them way harder when we see it
       | happen.
        
         | Aurornis wrote:
         | The attackers will leverage any culture that helps them
         | accomplish their goals.
         | 
         | If being rude and pushy doesn't work, the next round will be
         | kind and helpful. Don't read too much into the cultural
         | techniques used, because the cultural techniques will mirror
         | the culture at the time.
        
           | coldpie wrote:
           | Even if the security outcome is the same, I would still count
           | people being kind and helpful online instead of rude as an
           | improvement.
        
             | saghm wrote:
             | As always, there's an xkcd for that https://xkcd.com/810/
        
               | cryptonector wrote:
               | That's amazing.
        
           | tamimio wrote:
           | Spot on. The counter should be sound regardless of any social
           | or cultural context, a process where being polite or rude,
           | pushy or not is irrelevant.
        
           | advaith08 wrote:
           | Agree. I think a more core issue here is that only 1 person
           | needed to be convinced in order to push malware into xz
        
         | apantel wrote:
         | The Jia Tan character was never rude. If you make rudeness the
         | thing that throws a red flag, then 'nice' fake accounts will
         | bubble up to do the pressuring.
        
           | sebstefan wrote:
           | Pressuring the maintainer is already rude in itself and being
           | polite about it won't help them
           | 
           | If they want things done quickly they can do it themselves
        
             | ant6n wrote:
             | > If they want things done quickly they can do it
             | themselves
             | 
             | I mean they kind of did. And that was the problem.
        
           | genter wrote:
           | The assumption is that the group behind this attack had sock
           | puppets that were rude to Lasse Collin, to wear him down, and
           | then Jia Tan swept in as the savior.
        
           | patmorgan23 wrote:
           | Jia Tan wasn't rude, but the original maintainer Lasse
           | Collin probably wouldn't have been as burned out and
           | willing to hand over responsibility if the community
           | hadn't been so rude and demanding of someone doing free
           | work for them.
           | 
           | I think we need to start paying more of these open source
           | maintainers, and to have some staff/volunteers who can
           | help them manage their GitHub issue volume.
        
             | orthecreedence wrote:
             | The article covers that those rude accounts may have been
             | sybils of the attacker to create pressure. It's effectively
             | good cop/bad cop for open source.
        
         | GoblinSlayer wrote:
         | Just don't do anything crazy. There are legitimately crazy
         | people asking for crazy things, not necessarily backdoors.
        
         | ok123456 wrote:
         | People have been bullied out of 'nice' communities. See the
         | 'Actix' debacle in Rust.
        
           | mzs wrote:
           | That was mostly redditors though. Reddit is not a nice
           | community.
        
         | publius_0xf3 wrote:
         | I want to caution against taking a good thing too far.
         | 
         | There's a certain kind of talented person who is all too
         | conscious of their abilities and is arrogant, irascible, and
         | demanding as a result. Linus Torvalds, Steve Jobs, Casey
         | Muratori come to mind. Much as we might want these characters
         | to be kinder, their irascibility is inseparable from their more
         | admirable qualities.
         | 
         | Sometimes good things, even the best things, are made by
         | difficult people, and we would lose a lot by making a community
         | that alienates them.
        
           | djmips wrote:
           | That's a tough one - it's hard to fully disagree, but in
           | my experience you can have all the benefits without the
           | poison. Accepting the poison just because of the benefits
           | is kind of just giving up. I don't feel like your
           | hypothesis that the two are irrevocably linked holds up
           | under examination.
        
           | RyanCavanaugh wrote:
           | There are plenty of hyper-competent technical people in the
           | field who are also kind and patient. Being smart doesn't turn
           | someone into a jerk.
        
           | Tainnor wrote:
           | Linus Torvalds is apparently trying to do better (although I
           | haven't followed up with the progress), but more importantly,
           | while he might be (have been) unnecessarily rude and
           | aggressive, he's not entitled (as far as I know). I don't
           | think he would jump into an issue tracker of some project he
           | doesn't maintain and demand that certain changes be made.
        
         | Vicinity9635 wrote:
         | Being rude is... unimportant. A lot of people think being
         | passive aggressive is being polite when it's actually being
         | rude + deceitful. There's nothing wrong with being direct,
         | which some mistake for rude. I find it refreshing.
        
         | Tainnor wrote:
         | I don't want to excuse rudeness or a sense of entitlement. But
         | I think we can still understand where it comes from. A lot of
         | these people probably work on crappy codebases where "let's
         | just add a random dependency without any vetting" was the norm,
         | they might have to deal with production issues etc. There's
         | probably a systemic issue behind it, that our industry relies
         | too much on unpaid labour and is usually not willing to
         | contribute back.[0]
         | 
         | [0] Funnily enough, just a week or two ago, I fixed an issue in
         | an OS project that we introduced at work. It was an easy
         | frontend fix even for someone like me who doesn't do frontend
         | and barely knows how to spell Vue. And more importantly, in the
         | issue description somebody already wrote exactly what causes
         | the bug and what would need to change - the only thing left was
         | finding the place to make the (one-line) change. Somehow
         | that issue had been open for 2 years, but none of the
         | several people who complained (nor the maintainer) had
         | bothered to fix it. After I made a PR, it was merged within
         | a day.
        
       | edg5000 wrote:
       | I wonder, once the attacker gained commit permissions, were they
       | able to rewrite and force push existing commits? In that case
       | rolling back to older commits may not be a solution.
       | 
       | If my speculation is correct, then the exact date on which
       | access was granted must first be known; after that, a trusted
       | backup of the repo from before that date is needed. Ideally
       | Lasse Collin would have a daily backup of the repo.
       | 
       | Although perhaps the entire repo may have to be completely
       | audited at this point.
        
         | ajross wrote:
         | Force pushes tend to be noticed easily. All it takes is for one
         | external developer to try to pull to see the failure. And it's
         | actually hard to do because you need to comb through the tree
         | to update all the tags that point to the old commits. On top of
         | that it obviously breaks any external references to the commit
         | IDs (e.g. in distro or build configurations), all the way up to
         | cryptographic signatures that might have been made on releases.
         | 
         | I think it's a pretty reasonable assumption that this didn't
         | happen, though it would be nice to see a testimony to that
         | effect from someone trustworthy (e.g. "I restored a xz checkout
         | from a backup taken before 5.6.0 and all the commit IDs
         | match").
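The check described here — comparing commit IDs between a pre-compromise backup and the current repository — can be sketched in shell. This is a minimal, self-contained illustration using stand-in local repositories and a made-up tag name, not the real xz repos:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in "upstream" repo with one tagged release.
git init -q upstream
git -C upstream -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "release"
git -C upstream tag v5.4.6

# "Backup" clone taken before the suspect period.
git clone -q upstream backup

# Later upstream activity: new commits are fine, rewrites are not.
git -C upstream -c user.email=a@b -c user.name=a \
    commit -q --allow-empty -m "later work"

# The check: the tag must resolve to the same commit ID in both
# copies. A force-pushed rewrite would change the hash.
old=$(git -C backup rev-parse v5.4.6)
new=$(git -C upstream rev-parse v5.4.6)
if [ "$old" = "$new" ]; then
    echo "tag v5.4.6 unchanged: commit $old"
else
    echo "HISTORY REWRITTEN for v5.4.6: $old vs $new" >&2
    exit 1
fi
```

Against the real repositories one would run the same `rev-parse` comparison for every tag between an archived clone and the live one.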
        
           | fl7305 wrote:
           | > And it's actually hard to do because you need to comb
           | through the tree to update all the tags that point to the old
           | commits.
           | 
           | Isn't this part just a few pages of code, if that?
           | 
           | I agree that it will be blindingly obvious for the reasons
           | you list.
        
         | sloowm wrote:
         | The way the hack works is incredibly sophisticated and was
         | specifically designed to get past all normal checks. If
         | simply messing with some commits had been possible, this
         | entire Rube Goldberg hack would not have been set up.
        
         | Denvercoder9 wrote:
         | There are trusted copies of historic releases from third-party
         | sources (at least Linux distributions, but there's probably
         | other sources as well), it's pretty easy to check whether the
         | tags in the git repository match those. (This can be done as
         | the tarballs are a superset of the files in the git repository,
         | the other way around doesn't work).
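The tarball-vs-git comparison described above can be sketched in shell. This is a toy, self-contained illustration with stand-in files; a real check would run against a distro's archived xz tarball and the corresponding git tag:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in repo with one tagged release.
git init -q repo
echo 'int main(void){return 0;}' > repo/main.c
git -C repo add main.c
git -C repo -c user.email=a@b -c user.name=a commit -q -m "release 1.0"
git -C repo tag v1.0

# Stand-in "release tarball": the git-tracked files plus a generated
# file, as autotools release tarballs typically contain.
mkdir stage
cp repo/main.c stage/
echo '# generated at release time' > stage/configure
tar -C stage -cf release.tar .
mkdir unpacked
tar -C unpacked -xf release.tar

# The check: every file tracked at the tag must be byte-identical in
# the tarball. Extra tarball-only files are NOT examined here.
mismatch=0
for f in $(git -C repo ls-tree -r --name-only v1.0); do
    if ! git -C repo show "v1.0:$f" | cmp -s - "unpacked/$f"; then
        echo "MISMATCH: $f" >&2
        mismatch=1
    fi
done
echo "mismatched files: $mismatch"
```

Note the one-way limitation the comment mentions: files that exist only in the tarball (such as generated build scripts, which is where the xz backdoor's build-time component actually hid) are never compared, so they still need separate review.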
        
       | benob wrote:
       | What makes you think that the accounts were not compromised only
       | recently?
        
         | wasmitnetzen wrote:
         | Not OP, but the fact that they only appear in this context
         | makes me believe that they were specifically set up for this
         | task. No regular person has that good opsec just to push
         | patches to a random library.
        
         | sloowm wrote:
         | If the account had been compromised only recently, the real
         | co-maintainer would have done everything to warn the
         | maintainer. If you're maintaining some core piece of
         | infrastructure and your account gets compromised, it would
         | be trivial to let at least someone know you're not the one
         | pushing these commits.
        
       | nrawe wrote:
       | "It's also good to keep in mind that this is an unpaid hobby
       | project." ~ Lasse Collin, 2022-06-08.
       | 
       | As someone working in security, the fact that _foundational_
       | pieces of computing/networking infrastructure rely on
       | motivated individuals and essentially goodwill is mind-
       | blowing.
       | 
       | There are great aspects to the FOSS movement, but the risks -
       | particularly the social engineering aspects as demonstrated here
       | - and potential blast radius of supply chains like this... We
       | take it all for granted and that is lining up to bite us hard, as
       | an industry.
        
         | zoeysmithe wrote:
         | Sorta-relevant (but flawed) xkcd: https://xkcd.com/2347/
         | 
         | I don't see how this is strongly different from some
         | unappreciated skilled worker in a corporation. It's
         | interesting, the double standard we have for FOSS.
         | Meanwhile, in the commercial world, supply chain attacks
         | are commonplace and barely elicit headlines.
         | 
         | Yes, FOSS needs to be able to address these kinds of attacks,
         | but the world runs on the efforts of the low-level few,
         | generally. The percent of people who work to build and maintain
         | core infrastructure has always been small in any economic
         | system. The world is held up by the unsung labor of the
         | anonymous working class. Think of all the people working right
         | now to make sure you have clean water, electricity, sanitation,
         | etc. Its a tiny fraction of the people in your city.
         | 
         | Conversely, why aren't all these corporations who depend on
         | this contributing themselves? Or reaching out? There's a real
         | parasitic aspect here that gets swept under the rug too.
         | 
         | I'd even argue this isn't really a hobby for many,
         | especially for higher-profile projects. For many it's done
         | for social capital, to build up one's reputation, which has
         | all sorts of benefits, including corporate advancement,
         | creating connections for startups, etc. It's career-
         | adjacent. And that's ignoring all the companies that
         | contribute to FOSS explicitly with on-the-clock staff.
         | 
         | So there are motivators beyond just "I'm bored and need a
         | hobby." It's a little dismissive to call FOSS development
         | just a hobby. Is what Linus does a hobby? I don't think
         | most people would think so. Things like this have important
         | social and economic motivators. The hypothetical guy in the
         | comic isn't some weirdo acting irrationally; he has
         | rational motivators.
         | 
         | I'd also argue that it's pretty harmful to FOSS adoption if
         | the community takes on a "well, it's a hobby, don't expect
         | quality, security, or professionalism" stance. This is a
         | great way to chase people away from FOSS. We can't say "Oh,
         | FOSS is better than much closed software" when things are
         | good, then immaturely reply "it's just a dumb hobby, you're
         | dumb for trusting me" when things go south. I think it's
         | pretty obvious there's a lot of defensiveness right now,
         | with people being protective of their social capital and
         | projects, but I think this path is just the wrong way to go.
         | 
         | Comms, PR, and image management in FOSS are usually bad
         | (see Linus's rage, high-profile flame wars, dramatic
         | forkings, ideological battles, etc.), so the optics here
         | aren't great, because optics are something FOSS struggles
         | with. The community is, at best, herding cats, with all
         | manner of big personalities and egos, and it's usually a
         | bit of a controlled car crash on the best of days.
        
           | phicoh wrote:
           | I think there is a fundamental difference between how
           | corporations used to work and how open source typically
           | works.
           | 
           | In a traditional corporation, people would come to an
           | office. It would be known where they live. If you then
           | require something like (code) review, it becomes a lot
           | harder to plant something. Obviously not impossible, but
           | hard for all but the most dedicated attackers.
           | 
           | In contrast, with open source and poorly funded projects,
           | people don't always have money to travel, so those
           | working on an open source project may only know each
           | other by some online handles. Nerds typically don't like
           | video conferencing. So it is quite possible to keep
           | almost everything about an identity secret.
           | 
           | And that makes it a lot more attractive to just try
           | something. If something goes wrong, the perpetrator is likely
           | in a safe jurisdiction.
        
             | zoeysmithe wrote:
             | tbf, most security issues aren't from some insider, but
             | from outsiders discovering exploits. The insider
             | scenario here is extremely rare in both commercial and
             | FOSS software.
             | 
             | Corporate insiders do stuff like this too; it's just a
             | question of how often we hear about it. FOSS has high
             | visibility, but closed source doesn't. Think of all the
             | shady backdoors out there, or what Snowden and others
             | revealed.
             | 
             | On average, a 100% FOSS organization is going to be
             | much, much more secure than a 100% commercial closed-
             | source one. Think of all the effort it takes to even
             | moderately secure a Windows/closed-source stack
             | environment. It's an entire massive industry!
             | Crowdstrike alone has a $76bn market cap, and that's
             | just one AV vendor!
             | 
             | Commercial software obeys the dictates of modern
             | capitalism. Projects get rushed, code review and security
             | take a backseat to quarterly reports and launch dates, etc.
             | This makes closed source security issues common.
             | 
             | Usually when the exploit is discovered the attacker is far
             | outside the victim's jurisdiction. See all the crypto gangs
             | operating from non-Western non-extradition states.
        
               | pixl97 wrote:
               | >Corporate insiders do stuff like this too, its just how
               | often do we hear about it?
               | 
               | Pretty much never.
               | 
               | One particularly terrible case I saw was when a
               | developer left a testing flag in a build that got
               | pushed to production and used for years. Had you set
               | the right &whatever flag in the URL, you'd have had
               | unauthenticated access to everything. It was
               | discovered years after the fact, when the software
               | was no longer in supported status, so nothing was
               | ever written up about it or disclosed to the users.
               | "They shouldn't have been using it by now anyway; no
               | use in bad press and getting users worried."
        
               | 01HNNWZ0MV43FF wrote:
               | And I'm guessing there was no Five Whys or equivalent to
               | ask how to prevent this from happening again.
               | 
               | No time to do things right...
        
             | singularity2001 wrote:
             | You may want to read Kevin Mitnick on how (relatively) easy
             | it is to infiltrate physical spaces.
        
               | Fnoord wrote:
                | Mitnick, at this point, is deceased.
               | 
               | Read up on red teaming and social engineering in general.
               | Many more examples of red teaming are available, for
               | example. I thoroughly enjoy these specific stories on
               | Darknet Diaries podcast.
        
             | nradov wrote:
             | True, but we have to assume that nation states are now
             | actively inserting or recruiting intelligence agents in
             | prominent tech companies. US authorities already caught
             | a Saudi spy at Twitter. How many haven't been caught
             | yet? If
             | I was running foreign intelligence for China or Israel or
             | any other major country I would certainly try to place
             | agents into Google, Apple, OpenAI etc.
        
           | asa400 wrote:
           | This is one of the sanest comments I've ever seen describing
           | what FOSS actually is. I think you nailed it when you said:
           | 
           | > We can't just say "Oh FOSS is better than much closed
           | software" when things are good, then immaturely reply "its
           | just a dumb hobby, you're dumb for trusting me," when things
           | go south.
           | 
           | It's weird. There are the explicit expectations of FOSS
           | (mostly just licenses, which say very little), and the
           | implicit expectations (everything else).
           | 
           | It's anarchic and ad hoc in a way that leaves the question of
           | "what are we actually doing with this project(s)" up for all
           | kinds of situational interpretation, as you noted. This is
           | bad, because this ambiguity leads to conflict when the
           | various actors are forced to reveal their expectations, and
           | in doing so show that their expectations are actually quite
           | divergent (i.e., "this is my fun hobby project!" vs. "my
           | company fails without this bugfix!" vs. "I thought this was a
           | community project!" vs. "This project is for me and my
           | company, I call the shots, you're welcome to look at the
           | code, though").
           | 
           | It's a little bit like the companies that are like "we have a
           | flat management hierarchy, no one really reports to anyone
           | else". It's just not true. It's almost always used as a ruse
           | to dupe a certain class of participant that isn't
           | sophisticated enough to know that these kinds of implicit
           | power hierarchies leave them at a disadvantage. There's
           | always a structure, it's just whether that structure is
           | explicit or not. This kind of wishy-washy refusal to codify
           | project roles/importance in FOSS is not doing us any favors.
           | In fact I think it prevents us from actively recognizing the
           | "clean water" role that an enormous number of projects play.
           | 
           | There's real labor power here if we want it, but our
           | continued desire to have FOSS be everything to everyone is
           | choking it.
        
           | nradov wrote:
           | If you're not getting paid then it's just a hobby. And
           | there's nothing wrong with hobbies. As a FOSS contributor
           | myself I feel no obligation to promote FOSS adoption.
           | Quality, security, and professionalism are not my problem;
           | anyone who cares about those things is welcome to fork my
           | code.
        
         | phicoh wrote:
         | I think a problem is that there doesn't seem to be any way
         | to automatically check this. If we assume that anything
         | used during the build can be malicious, then figuring out
         | those dependencies is already hard enough. Mapping that to
         | organizational stability is one step further.
        
           | dylan604 wrote:
           | This is where the "but FOSS is reviewable, so it can be
           | trusted" argument falls down. This situation is a prime
           | example of how that reasoning gets misconstrued. Being
           | FOSS didn't make it trustworthy; it just meant that
           | people had a fighting chance to find out why, once
           | something did happen. That's closing the barn door after
           | the horses have already left.
           | 
           | I'm not knocking FOSS at all. I just think some people
           | have the concept twisted. Just like the meme that being
           | written in Rust means the code is fast/safe from the mere
           | fact it was written in Rust. I don't write Rust, but if I
           | did, I guarantee that sheer not knowing WTF I'm doing
           | would result in bad code. The language will not protect
           | me from myself. FOSS will not protect the world from
           | itself, but it does at least allow for decent
           | investigations and after-action reports.
        
             | jethro_tell wrote:
             | You don't think every nation state has people inside
             | private software shops? Especially big tech?
             | 
             | Look at stuff getting signed with MS keys, hardware vendors
             | with possible backdoors.
             | 
             | Social engineering is social engineering, and it can
             | happen anywhere, no matter the profit motivation or
             | lack thereof.
             | 
             | Money interest in software won't save you any more than
             | FOSS will.
        
               | nrawe wrote:
               | It'd be naive to assume that nation state actors are not
               | trying to penetrate the supply chain at all levels, as it
               | just takes a single weak link in the chain. That weak
               | link could be behind corporate doors or in the open.
               | 
               | The main issue is that this attack shows how a relatively
               | unknown component, as part of a much larger and more
               | critical infrastructure, is susceptible to pressure as a
               | result of "this is a hobby project, lend a hand".
               | 
               | At what point do these components become seen as a
               | utility and in some way adopted into a more mainline,
               | secure, well-funded approach to maintenance? That
               | maintenance can, and probably should, happen in the open,
               | but with the requisite level of scrutiny and oversight
               | worthy of a critical component.
               | 
               | We got _very lucky_ , _this time_.
        
               | dylan604 wrote:
               | > this attack shows how a relatively unknown component
               | 
                | why just this one? do we collectively have the
                | memory of a goldfish? just recently, log4j had a
                | similar blast radius. is it because one was
                | seemingly malicious that the other doesn't count?
        
               | nrawe wrote:
               | While blast radius of both is large, there are major
               | differences between them. Log4J was a largely app-level
               | vulnerability affecting Java-based systems.
               | 
                | This vulnerability, had it all gone to the
                | attacker's plan, would have been present in the
                | major distros' next major releases, through a key
                | infrastructure component that would have been
                | installed far more widely, IMO.
               | 
               | Another major difference is that Log4J is already part of
               | the Apache Foundation, which means it should have greater
               | oversight/security maintenance anyway, while this is an
               | attack against a solo developer.
               | 
               | It's definitely not to downplay the severity of the Log4J
               | incident, by any means. But they are decidedly different.
        
               | jethro_tell wrote:
                | I think Google's program to hire security
                | researchers was a minor step in the right direction,
                | but it would behoove big tech and/or various
                | governments to do the same thing these state
                | intelligence actors are doing: take a look at all of
                | these projects that touch core infra and investigate
                | the maintainers and their vulnerability.
                | 
                | I would bet that some of these projects, like xz,
                | would show enormous benefits from one paid person
                | working on them 1/4 time, leaving room for a couple
                | more projects per dev. Additionally, a couple of
                | places providing relatively minor grants would
                | probably help a dev buy back some of their time, so
                | they can work on their project at some time other
                | than 'after the kids are in bed'.
        
             | phicoh wrote:
             | We should not think in absolutes, but in terms of tools.
             | What risks come with using a certain tool.
             | 
             | In your Rust example, using C is like using a power tool
             | without any safety measures. That doesn't mean that you are
             | going to get hurt, but there is an expectation that a
             | sizable fraction of users of such tools will get hurt.
             | 
             | Rust is then the same tool with safety measures. Of course
             | it is still a power tool, you can get hurt. But the chances
             | of that happening during normal operation is a lot lower.
             | 
             | I think xz is a good example where open source happened to
             | work as intended. Somebody noticed something weird, alerted
             | and other people could quickly identify what other software
             | might have the same problem.
        
             | Vegenoid wrote:
             | > That's closing the barn door after the horses already
             | left.
             | 
             | I don't think that's quite true - maybe a couple of
             | horses got out, but this was caught early and didn't
             | infect very many machines, because someone completely
             | unaffiliated could review it and find it.
        
         | riskable wrote:
         | This event should be a wake-up call to businesses everywhere:
         | It's not just a small number of "core" FOSS projects that need
         | their support (funding _and_ assistance!). Before this event
         | who was thinking about a compression library when considering
         | the security of their FOSS dependencies?
         | 
         | The scope of "what FOSS needs to be supported and well-funded"
         | just increased by an order of magnitude.
        
           | kashyapc wrote:
           | Yeah, as I note elsewhere in this thread, the OpenSSL
           | "Heartbleed" saga should've taught some lessons, but alas,
           | it's "classic" human nature to repeat our mistakes.
        
             | mistrial9 wrote:
             | no - funding is going to places where profit is returned on
             | investment, NOT to the tedious and long-term work that all
              | of this sits on. It is not "human nature" because the
              | tedious, high-skill maintenance is done by humans, who
              | built the infrastructure and continue to be crucial.
             | 
             | There is no accountability and in fact high-five and star
             | shots for those taking piles of money and placing it on
             | more piles of money, instead of doing what appears to be
             | obvious to almost everyone on this thread -- paying long-
             | term engineers.
        
               | kashyapc wrote:
               | No counter-argument; fully agree. FWIW, that's what I was
                | referring to when I said, "fat companies" (and
                | executives) glossing over the important tedious work
                | here[1].
               | 
               | [1] https://news.ycombinator.com/item?id=39905802
        
           | jdsalaro wrote:
           | > This event should be a wake-up call to businesses
           | everywhere
           | 
            | This ought to be a wake-up call not only for businesses,
            | but also for hobbyists and members of the public in
            | general; our code, our projects and our social "code-
            | generating systems" must be hardened or face rampant
            | abuse and ultimately be weaponised against us.
            | 
            | In a way, these issues FOSS is facing, which are now
            | becoming apparent, are no different from those democracy
            | has been subjected to since time immemorial.
        
           | lolinder wrote:
           | Funding all of these deep dependencies may have helped in
           | this case but wouldn't address the root of the problem, which
           | is that every business out there runs _enormous amounts_ of
            | unsandboxed third party code. Funding may have kept xz
            | specifically from succumbing to pressure to switch to a
            | malicious maintainer, but it does nothing about the very real
           | risk that any one of the tens of thousands of projects we
           | depend on has _always_ been a long con.
           | 
           | The solution here has to be some combination of a dramatic
           | cut back on the number of individual projects we rely on and
            | well-audited technical solutions that treat all code--even
            | FOSS dependencies--as potentially malicious and sandbox it
            | accordingly.
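One way to make the sandboxing idea concrete: even a crude, Linux/POSIX-only sketch (mine, not the commenter's, and nowhere near a real sandbox like seccomp, namespaces, or containers) shows the direction, running an untrusted step in a child process with hard resource ceilings.

```python
import resource
import subprocess
import sys

def run_limited(code, cpu_seconds=2, mem_bytes=512 * 1024 * 1024):
    """Run a Python snippet in a child process with hard resource limits.

    A toy illustration only: a real sandbox also needs filesystem and
    network isolation (seccomp, namespaces, containers, pledge, ...).
    """
    def set_limits():
        # Applied in the child between fork and exec.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        [sys.executable, "-c", code],
        preexec_fn=set_limits,
        capture_output=True,
        timeout=cpu_seconds + 10,
    )
```

Real isolation of dependencies needs much more than this (no ambient filesystem or network access), but the principle is the same: don't let third-party code run with your full authority.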
        
           | raxxorraxor wrote:
            | This would of course be nice, since it is sadly still
            | little known outside of software development that so much
            | of our infrastructure is built on the work of people
            | sharing it openly, a practice heavily at odds with
            | industry behavior.
           | 
           | The demand for more assistance here is the angle that was
           | played in social engineering, specifically the demand to
            | acquire more maintainers due to workload. Such "support"
            | can even take the form of providing source archives with
            | manipulated build scripts that are rarely checked by
            | third parties.
           | 
           | There is also a problem of badly behaving industry that tries
           | to take control of "hobby projects". Speaking of which, these
           | "hobby"-projects often have much better code than many, many
           | industry codebases.
           | 
            | I think FOSS overall still lessens the risk. It got
            | riskier once it became entangled with social media, which
            | often allows developers to be shouted down or exploited
            | much more easily.
        
             | consp wrote:
              | > allows developers to be shouted down or exploited
              | much more easily
             | 
              | "No" is an answer. And blocking people should be a
              | thing.
             | 
              | As a result of such demands, I personally no longer
              | publish anything that is not "non-commercial only": you
              | can do it yourself if you want to make money off it (or
              | if you demand things, for that matter). Fortunately my
              | online stuff isn't used much, but even then it's
              | possible to get "requests".
        
           | xhkkffbf wrote:
           | I'm all for funding FOSS, but how would the money have made
            | any difference here? It just would have made Jia Tan a
            | bit richer, right?
        
         | JTbane wrote:
          | That's just the nature of free software -- you have to
          | either trust the maintainer or do it yourself. There is not
          | really a way around it.
         | 
         | - Corporate maintainers are great until they enshittify things
         | in the pursuit of profit (see Oracle)
         | 
         | - Nonprofits are probably the best but can go the same route as
         | corps (see Mozilla)
         | 
         | - Hobbyists are great until they burnout (see xz)
        
           | apantel wrote:
           | I think the level of complexity is the problem. A bad actor
           | can be embedded in any of the above contexts: corp, non-
           | profit, FOSS hobbyist. It doesn't matter. The question is:
           | when software is so complex that no one knows what 99.9% of
           | the code in their own stack, on their own machine, does
           | (which is the truth for everyone here including me), how do
           | you detect 'bad action'?
        
             | Eisenstein wrote:
             | The level of complexity involved in making sure that
             | electrical plants work, that water gets to your home, that
             | planes don't crash into each other, that food gets from the
             | ground to a supermarket shelf, etc, is unfathomable and no
             | single person knows how all of it works. Code is not some
             | unique part of human infrastructure in this aspect. We
             | specialize and rely on the fact that by and large, people
             | want things to work and as long as the incentives align
             | people won't do destructive things. There are millions of
             | people acting in concert to keep the modern world working,
              | every second of every day, and it is amazing that more
              | crap isn't constantly going disastrously wrong, and
              | that when it does, we are surprised.
        
               | 01HNNWZ0MV43FF wrote:
               | > Code is not some unique part of human infrastructure in
               | this aspect
               | 
               | It kinda is. The fact that code costs fractions of a
               | penny to copy and scale endlessly, changes everything.
               | 
               | There's hard limits on power plants, you need staff to
               | run them, it's well-understood.
               | 
               | But software - You can make a startup with 3 people and a
               | venture capitalist who's willing to gamble a couple
               | million on the bet that one really good idea will make
               | hundreds of millions.
               | 
               | Software actually is different. It's the only non-scarce
               | resource. Look at the GPL - Software is the only space
               | where communism / anarchy kinda succeeded, because you
               | really can give away a product with _nearly_ no variable
               | costs.
               | 
               | And it's really just the next step on the scale of "We
               | need things that are dangerous" throughout all history.
               | Observe:
               | 
               | - Fire is needed to cook food, but fire can burn down a
               | whole city if it's not controlled
               | 
               | - Gunpowder is needed for weapons, but it can kill
               | instantly if mishandled
               | 
               | - Nuclear reactors are needed for electricity, but there
               | is no way to generate gigawatts of power in a way that
               | those gigawatts can't theoretically cause a disaster if
               | they escape containment
               | 
               | - Lithium-ion batteries are the densest batteries yet,
               | but again they have no moral compass between "The user
               | needs 10 amps, I'm giving 10 amps" and "This random short
               | circuit needs 10 amps, I'm giving 10 amps"
               | 
               | - Software has resulted in outrageous growth and change,
               | but just like nuclear power, it doesn't have its own
               | morality, someone must contain it.
               | 
               | Even more so than lithium and nuke plants, software is a
               | bigger lever that allows us to do more with less. Doing
               | more with less simply means that a smaller sabotage
               | causes more damage. It's the price of civilization.
               | 
               | So the genie ain't going back in. And private industry is
               | always going to be a tragedy of the commons.
               | 
               | I'm not sure what government regulation can do, but there
               | may come a point where we say, even if it means our
               | political rivals freeload off of us, it's better for the
               | USA to bear the cost of auditing and maintaining FOSS
               | than to ask private corporations to bear that cost
               | duplicating each other's work and keeping it secret.
               | 
               | Is that a handout to Big Tech? 100%. Balance it with UBI
               | and a CO2 tax that coincidentally incentivizes data
               | centers to be efficient. We'll deal with it.
        
           | emn13 wrote:
           | While it's interesting to philosophize about alternatives
           | like this and it's seemingly obviously true that there's no
           | trivial solution to maintainership that solves all problems
           | perfectly, I'm a little wary about presenting these flawed
           | approaches as somehow equivalent; I highly doubt they're even
           | remotely equally bad - nor that they have equally big
           | upsides.
        
         | jddil wrote:
         | Never understood why our industry seems unique in our
         | willingness to do unpaid work for giant corps. Your compression
         | library isn't saving the world, it's making it easier for
         | amazon to save a few bucks.
         | 
         | You have the right to be paid for your time. It's valuable.
         | 
         | I enjoy coding too... but the only free coding I do is for
         | myself.
         | 
         | Use a proper license, charge for your time and stop killing
         | yourself doing unpaid hobby projects that cause nothing but
         | stress.
        
           | dugite-code wrote:
           | > why our industry seems unique in our willingness to do
           | unpaid work for giant corps.
           | 
           | Because it never starts that way. It scratches an itch,
           | solves an interesting puzzle and people thank and praise the
           | work. Deep down we all want to be useful, and it helps that
           | it looks great on a resume.
           | 
              | After it's established, the big corps come along, but
              | the feeling of community usefulness remains. It's also
              | why so many devs burn themselves out: they don't want
              | to disappoint.
        
             | maerF0x0 wrote:
             | IMO this is exactly why. The payout comes later, if the
             | project is successful.
        
           | cesarb wrote:
           | > Never understood why our industry seems unique in our
           | willingness to do unpaid work for giant corps. Your
           | compression library isn't saving the world, it's making it
           | easier for amazon to save a few bucks.
           | 
           | The work was not being done "for giant corps"; it was being
           | done for everyone, and giant corps just happen to be a part
           | of "everyone", together with small corps, individual people,
           | government, and so on.
           | 
           | > You have the right to be paid for your time. It's valuable.
           | 
           | When you think of free software contributions as "volunteer
           | labor" instead of just a hobby, it makes more sense. Yes, my
           | time is valuable; when I'm working on free software, I'm
           | choosing to use this valuable time to contribute to the whole
           | world, without asking for anything in return.
        
             | squigz wrote:
             | Weird that this is a concept people struggle to
             | understand...
        
           | bombcar wrote:
           | Did you get paid for the post you just wrote? People at giant
           | corps are reading it right now, they're getting value from
           | it. You deserve to be paid!
           | 
           | IF you understand why you'd post without being paid, you're
           | 80% of the way to realizing why people program without being
           | paid.
        
         | Inf0s3c wrote:
         | Also in security and 100% tired of the code slingers who get
         | annoyed by security reviews
         | 
         | Here on HN it's been derided as "company just checking a box
         | for compliance but adds no functionality. It slows us down when
         | we want to disrupt!" - developer of yet another todo list or
         | photo editor app...
         | 
          | Buffer overflows and the like are one thing. The notion
          | from this blog that certain normal files won't be well
          | reviewed is a bad smell in software. Innocuous files should
          | be just as rigorously reviewed, as they're all part of the
          | state of the machine.
         | 
         | "This is how it's always worked" is terrible justification
         | 
          | Startup script kiddies git-pulling the internet and single-
          | maintainer open source projects aren't cutting it; if it's
          | that important to the whole, those in charge of the whole
          | need to make sure it's properly vetted.
         | 
         | I'm an EE first; this really just makes me want to see more
         | software flashed into hardware.
        
           | ok123456 wrote:
           | A security review or snake oil AI black box didn't stop this.
           | It was stopped by a 'code slinger' who noticed a performance
           | regression in ssh.
           | 
            | Deloitte coming in with a checklist would have NEVER
            | stopped this one.
        
         | jrochkind1 wrote:
          | What it reminds me of is my recurring thought, not just
          | about open source but including this aspect, that we've
          | built up a society based on software that we could only
          | afford because we've built it unsustainably. The economy
          | could not bear the costs of producing all this software in
          | an actually reliable, sustainable way. So... now what?
        
           | Filligree wrote:
           | Could it really not afford it? I'm not convinced that is the
           | case, so much as we don't have a way to pay people for their
           | effort.
        
           | mschuster91 wrote:
           | It might be a good idea for governments to coordinate with
           | computing advocacy groups/associations (e.g. German CCC,
           | NANOG, popular Linux distributions such as Ubuntu, Debian,
           | Fedora, Arch)... set up a fund of maybe 10 million euros,
           | have the associations identify critical, shared components,
           | and work out funding with their core developers. 10M a year
           | should fund anything from 100-200 developers (assuming
           | European wages), that should be more than enough, and it's
           | pocket change for the G20 nations.
           | 
           | If that's too much bureaucracy or people fear that
           | governments might exert undue influence: hand the money to
           | universities, go back to the roots - many (F)OSS projects
           | started out in universities after all. Only issue there is
           | that projects may end up like OpenStack in the end ;)
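Spelling out the back-of-the-envelope funding math above (the 10M/year figure is the commenter's; the per-developer cost range is an assumption consistent with "European wages"):

```python
# Rough sanity check of the grant math in the comment above.
# Figures are assumptions, not data.
fund_per_year = 10_000_000        # EUR per year
cost_low = 50_000                 # EUR per dev per year (low estimate)
cost_high = 100_000               # EUR per dev per year (high estimate)

devs_if_cheap = fund_per_year // cost_low      # upper bound on headcount
devs_if_costly = fund_per_year // cost_high    # lower bound on headcount
print(devs_if_costly, devs_if_cheap)           # prints: 100 200
```

So the claimed 100-200 developers checks out under those cost assumptions.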
        
         | djvdq wrote:
          | Nothing will really change, sadly. Remember log4j? There
          | was also a lot of talk about why people working on FOSS
          | should be paid. And after one month almost no one
          | remembered those voices except for a small minority of
          | people.
        
         | agumonkey wrote:
          | What's gonna happen now? A team of FOSS sec chaos monkeys
          | trying to run checks on a core set of libs?
        
       | userbinator wrote:
       | I think one of the good things to come out of this may be an
       | increased sense of conservatism around upgrading. Far too many
       | people, including developers, seem to just accept upgrades as
       | always-good instead of carefully considering the risks and
       | benefits. Raising the bar for accepting changes can also reduce
       | the churn that makes so much software unstable.
        
         | sebstefan wrote:
         | All things considered I'm not sure that'd be such a good thing
         | 
         | How many security issues spring from outdated packages vs
         | packages updated too hastily?
        
           | denimnerd42 wrote:
            | And when you do get a security issue and you're using a
            | 10-year-old version, the upgrade is going to be really,
            | really difficult vs. incremental upgrades when they are
            | available. Or are you going to fork and assume
            | responsibility for that library code too?
        
           | Hackbraten wrote:
           | On top of that:
           | 
           | A newly-introduced security issue tends to have very limited
           | exploitability, because it's valuable, not-yet well
           | understood, and public exploits are yet to be developed.
           | 
            | Compare that to a similar vulnerability in an older
            | package: chances are that everything about it has been
            | learned and is publicly known. Exploits have become a
            | commodity and are now part of every offensive security
            | distro on the planet. If you run that vulnerable version,
            | there's a real risk that a non-targeted campaign will
            | randomly bite you.
        
             | maerF0x0 wrote:
             | > valuable, not-yet well understood, and public exploits
             | 
             | Except in the scenario that is this exact case: Supply
             | chain attacks that are developed with the exploit in mind.
        
               | Hackbraten wrote:
               | I agree in principle. But even if the backdoor is
               | deliberate (as is the case here), there's limited risk
               | for the average person. Nobody in their right mind is
               | going to attack Jane Doe and risk burning their multi-
               | million dollar exploit chain.
               | 
               | For an old vulnerability, however, _any_ unpatched system
               | is a target. So the individual risk for the average
               | unpatched system is still orders of magnitude higher than
               | in the former scenario.
        
         | cesarb wrote:
         | > Far too many people, including developers, seem to just
         | accept upgrades as always-good instead of carefully considering
         | the risks and benefits.
         | 
         | Another example of this was log4j: if you were still using the
         | old 1.x log4j versions, you wouldn't have been vulnerable to
         | the log4shell vulnerability, since it was introduced early in
         | the 2.x series. The old 1.x log4j versions had other known
         | vulnerabilities, but only if you were using less common
         | appenders or an uncommon server mode or a built-in GUI log
         | viewer (!); the most common use of log4j (logging into a local
         | file) was not exposed to any of these, and in fact, you could
         | remove the vulnerable classes and still have a functional log4j
         | setup (see for instance
         | https://www.petefreitag.com/blog/log4j-1x-mitigation/ which I
         | just found on a quick web search).
         | 
         | Did log4shell (and a later vulnerability which could only be
         | exploited if you were using Java 9 or later, because it
          | depended on a new method which was introduced in Java 9) lead
         | people to question whether always being on the "latest and
         | greatest" was a good thing? No, AFAIK the opposite happened:
         | people started to push even harder to keep everything on the
         | latest release, "so that when another vulnerability happens,
         | upgrading to a fixed version (which is assumed to be based on
         | the latest release) will be easy".
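The log4j 1.x mitigation described above (stripping the vulnerable classes out of the jar) can be sketched with the stdlib zipfile module. The class names below are commonly cited ones; treat them as illustrative and consult the linked post for the authoritative list.

```python
import zipfile

# Classes commonly cited as the exploitable parts of log4j 1.x;
# illustrative, not authoritative -- check current advisories.
VULNERABLE = frozenset({
    "org/apache/log4j/net/JMSAppender.class",
    "org/apache/log4j/net/SocketServer.class",
})

def strip_classes(src_jar, dst_jar, to_remove=VULNERABLE):
    """Copy src_jar to dst_jar, omitting the named class files."""
    removed = []
    with zipfile.ZipFile(src_jar) as src, \
         zipfile.ZipFile(dst_jar, "w", zipfile.ZIP_DEFLATED) as dst:
        for info in src.infolist():
            if info.filename in to_remove:
                removed.append(info.filename)
                continue
            dst.writestr(info, src.read(info.filename))
    return removed
```

In spirit this is the same as the one-liner `zip -q -d log4j.jar org/apache/log4j/net/JMSAppender.class` from the era's mitigation guides.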
        
           | JohnMakin wrote:
           | > Another example of this was log4j: if you were still using
           | the old 1.x log4j versions, you wouldn't have been vulnerable
           | to the log4shell vulnerability
           | 
           | Lol, this exact thing happened at my last gig. When I first
           | learned of the vulnerability I panicked, until I found out we
           | were so outdated it didn't affect us. We had a sad laugh
           | about it.
           | 
           | > "so that when another vulnerability happens, upgrading to a
           | fixed version (which is assumed to be based on the latest
           | release) will be easy".
           | 
           | I think there is some truth to this motivation though - if
           | you are on an ancient 1.X version and have to jump a major
           | version of two, that almost always causes pain depending on
           | how critical the service or library is. I don't pretend to
           | know the right answer but I always tend to wait several
           | versions before upgrading so any vulnerabilities or fixes can
           | come by the time I get to the upgrade.
        
             | cesarb wrote:
             | A _lot_ of people were in that exact same situation. So
             | many, that the original author of log4j 1.x released a fork
             | to allow these people to keep using the old code while
             | technically being  "up to date" and free of known
             | vulnerabilities: https://reload4j.qos.ch/
        
         | TheKarateKid wrote:
         | About a decade ago, the industry shifted from slow, carefully
         | evaluated infrequent updates (except for security) to frequent,
         | almost daily updates that are mandatory. I'd say this was
         | pioneered by the Chromium team and proved to be beneficial. The
         | rest of the industry followed.
         | 
          | Now we're in a position where most projects update so
          | quickly that you don't really have a choice. If you need
          | to update one component, there's a good chance that many
          | other dependencies in your project will require an update
          | to be compatible.
         | 
         | The industry as a whole sacrificed stability and some aspects
         | of security for faster advancements and features. Overall I'd
         | say the net benefit is positive, but its times like these that
         | remind us that perhaps we need to slow things down just a
         | little and do a bit of a course correction to bring things into
         | a better balance.
        
         | nilsherzig wrote:
          | Wouldn't that just result in exploits written for old
          | versions? A successful exploit for something that everyone
          | is running might be worse than a backdoor on bleeding-edge
          | systems.
         | 
         | Everyone being on different versions results in something like
         | a moving target
        
         | paulmd wrote:
         | well, it's the bazaar vs the cathedral, isn't it? bazaar moves
         | a lot faster. Everyone likes that part, except when it breaks
         | things, and when they have to chase an upstream that's
         | constantly churning, etc. but most people don't consider that a
         | cathedral itself might have some engineering merit too.
         | cathedrals are beautiful and polished and stable.
         | 
          | I highly encourage people to try FreeBSD sometime. Give
          | ports a try (although the modern sense is that poudriere is
          | better even if you want custom-built packages). See how
          | nicely everything works. All the system options you need go
          | into rc.conf (almost uniformly). Everything is documented
          | and you can basically operate the system out of the FreeBSD
          | Handbook (it's not at all comparable to the "how to use a
          | window or a menu" level intro stuff that Linux provides).
          | You can't do that when everything is furiously churning
          | every release. Everything just works, everything is
          | documented; it's an _experience_ when you're coming from
          | Linux.
         | 
         | and that forum post from 2007 on how to tweak a service script
         | is probably still valid, because BSD hasn't had 3 different
         | init systems over that timespan etc.
         | 
         | just like "engineering is knowing how to build a bridge that
         | barely doesn't fall over", engineering here is knowing what not
         | to churn, and fitting your own work and functionality
         | extensions into the existing patterns etc. like it doesn't have
         | to be even "don't make a bigger change than you have to", you
         | just have to present a stable userland and stable kernel
          | interface and stable init/services interface. The fact that
          | Linux _doesn't_ present a stable kernel interface is
          | actually fairly sketchy/poor engineering; it doesn't have
          | to be that way, and a large subset of kernel interfaces
          | probably _should_ be stable.
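For readers who haven't seen it, the "everything goes into rc.conf" point looks roughly like this; a hypothetical fragment where the knob names are the standard FreeBSD ones but the values are illustrative:

```shell
# /etc/rc.conf -- one declarative file enables and configures services
sshd_enable="YES"
ntpd_enable="YES"
ifconfig_em0="DHCP"
hostname="example.local"
```

Each `<service>_enable` knob is read by the corresponding rc.d script, which is the stability contract the comment is praising.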
        
         | Vicinity9635 wrote:
         | I'm kinda the opposite. Way too many times I've seen "upgrades"
         | actively remove things I liked and add things I hate. I hold
         | off on letting mobile apps update because they almost always
         | get worse, not better.
        
         | cryptonector wrote:
         | That's a double-edged sword. What happens when you need to
         | upgrade in order to get vulnerability fixes?
        
         | MySweetHubert wrote:
          | The 2017 WannaCry ransomware attack would be a good
          | counterexample: the worm spread even though MS had already
          | fixed the underlying vulnerability in an update a bit more
          | than a month before.
        
       | kashyapc wrote:
        | Given this disaster, one or another "foundation" will now
        | embrace the `xz` project and start paying a maintainer or two
        | so that they don't accidentally end up dying from burnout.
       | 
       | Rinse, repeat for all critical-path open source software. A bit
       | like the OpenSSL "Heartbleed" disaster[1]. OpenSSL is now part of
       | Linux Foundation's (they do a lot of great work) "Core
       | Infrastructure Initiative".
       | 
       | Many fat companies build their applications on these crucial low-
       | level libraries, and leave the drudgery to a lone maintainer in
       | Nebraska, chugging away in his basement[2].
       | 
       | [1] https://en.wikipedia.org/wiki/Heartbleed
       | 
       | [2] https://xkcd.com/2347/
        
       | chubot wrote:
       | Something to add to the timeline: when did this avenue of attack
       | become available?
       | 
       | It only happened in the last 10 years apparently.
       | 
       | Why do sshd and xz-utils share an address space?
       | 
       | When was the sshd -> systemd dependency introduced?
       | 
       | When was the systemd -> xz-utils dependency introduced?
       | 
       | ---
       | 
       | To me this ARCHITECTURE issue is actually bigger than the social
       | engineering, the details of the shell script, and the details of
       | the payload.
       | 
       | I believe that for most of the life of xz-utils, it was a
       | "harmless" command line tool.
       | 
       | In the last 10 years, a dependency was silently introduced on
       | some distros, like Debian and Fedora.
       | 
       | Now maintainer Lasse Collin becomes a target of Jia Tan.
       | 
       | If the dependency didn't exist, then I don't think anyone would
       | be bothering Collin.
       | 
       | ---
       | 
       | I asked this same question here, and got some good answers:
       | 
       | https://lobste.rs/s/uihyvs/backdoor_upstream_xz_liblzma_lead...
       | 
       | Probably around 2015?
       | 
        | So it took ~9 years for attackers to find this avenue, develop
        | an exploit, and create accounts for social engineering?
       | 
       | If so, we should be proactively removing and locking down other
       | dependencies, because it will likely be an effective and simple
       | mitigation.
        
         | duped wrote:
         | I don't think these questions are that interesting. SSHD shared
         | an address space with xz-utils because xz utils provides a
         | shared library, and that's how dynamic linking works. sshd uses
         | libsystemd on platforms with systemd because systemd is the
         | tool that manages daemons and services like sshd, and
         | libsystemd is the bespoke way for daemons to talk to it (and
         | more importantly, it is already there in the distro - so you're
         | not "adding a million line dependency" so much as linking
         | against a system library from the OS developers that you need).
         | 
          | Linking against libsystemd on Debian is about as suspicious
          | as linking against libSystem on macOS. It's a userspace
          | library that you can hypothetically avoid, but you
          | shouldn't.
         | 
         | As for why systemd links against xz, I don't know, and it's a
         | bit surprising that an init system needs compression utils but
         | not particularly surprising given the kitchen sink architecture
         | of systemd.
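To make the shared-address-space point concrete: on Linux, /proc/&lt;pid&gt;/maps lists every file mapped into a process, which is how people verified that liblzma ends up inside sshd. A small sketch (helper name and sample paths are mine, not from a real capture):

```python
def mapped_libraries(maps_text):
    """Extract the distinct file-backed .so paths from /proc/<pid>/maps text.

    Each maps line has six whitespace-separated fields; the sixth (when
    present) is the backing path. Anonymous and special mappings like
    [stack] are skipped.
    """
    libs = []
    for line in maps_text.splitlines():
        fields = line.split(None, 5)
        path = fields[5] if len(fields) == 6 else ""
        if ".so" in path and path not in libs:
            libs.append(path)
    return libs

# Illustrative use on a running daemon (requires Linux and permissions):
#   mapped_libraries(open("/proc/%d/maps" % sshd_pid).read())
# On an affected Debian-style host this would include a liblzma.so.5 path.
```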
        
           | cesarb wrote:
           | > As for why systemd links against xz, I don't know, and it's
           | a bit surprising that an init system needs compression utils
           | 
           | It's for the journal. It can be optionally compressed with
           | zlib, lzma, or zstd. That library had not only the sd_notify
           | function which sshd needed, but also several functions to
           | manipulate the journal.
        
           | chubot wrote:
           | _and that's how dynamic linking works_ -- really ignorant
           | comment
           | 
           | Read the lobste.rs thread for some quotes on process
           | separation and the Unix philosophy.
           | 
           | There are mechanisms other than dynamic linking -- this is
           | precisely the question.
           | 
           | Also, Unix supported remote logins BEFORE dynamic linking
           | existed.
           | 
           | ---
           | 
           | What about sshd didn't work prior to 2015?
           | 
           | Was the dependency worth it?
           | 
           |  _not particularly surprising given the kitchen sink
           | architecture of systemd_
           | 
           | That's exactly the point -- does systemd need a kitchen sink
           | architecture?
           | 
           | ---
           | 
           | The questions are interesting because they likely lead to
           | simple and effective mitigations.
           | 
           | They are interesting because critical dependencies on poorly
           | maintained projects may cause nation states to attack single
           | maintainers with social engineering.
           | 
           | Solutions like "let's create a Big tech funded consortium for
           | security" already exist (Linux foundation threw some money at
           | bash in 2014 after ShellShock).
           | 
           | That can be part of the solution, but I doubt it's the most
           | effective one.
        
             | duped wrote:
             | I don't think it's acceptable to create a subprocess for
             | what's effectively a library function call because it comes
             | from a dependency.
             | 
             | The problem is the design of rtld and the dynamic linking
             | model, where one shared library can detect and hijack the
             | function calls of another by using the auditing features of
             | rtld. Hardened environments already forbid LD_PRELOAD for
             | injection attacks like this, but miss audit hooks.
             | 
             | My point is that just saying we should use the Unix process
             | model as the defense for supply chain attacks is like using
             | a hammer to fix a problem that needs a scalpel.
             | 
             | > What about sshd didn't work prior to 2015?
             | 
             | systemd notifications, from what it sounds like.
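For concreteness, sd_notify(3) -- the feature the distro patches pulled libsystemd into sshd for -- is a tiny wire protocol: a single datagram written to the Unix socket named in $NOTIFY_SOCKET. A minimal sketch in Python (error handling omitted; a real implementation would also honor sd_notify's unset_environment flag):

```python
import os
import socket

def sd_notify(state: str) -> bool:
    """Send a service-status datagram (e.g. "READY=1") to the socket
    named in $NOTIFY_SOCKET, as sd_notify(3) does. Returns False when
    not running under a notify-aware service manager."""
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return False
    if addr.startswith("@"):          # abstract-namespace socket
        addr = "\0" + addr[1:]
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.sendto(state.encode(), addr)
    return True
```

That a one-datagram protocol ended up importing liblzma into sshd's address space is the crux of the complaint in this subthread.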
        
         | JeremyNT wrote:
         | It's certainly a reasonable question to ask in this specific
         | case. In hindsight Debian and Red Hat both bet badly when
         | patching OpenSSH in a way that introduced the possibility of
         | this specific supply chain attack.
         | 
         | > _If so, we should be proactively removing and locking down
         | other dependencies, because it will likely be an effective and
         | simple mitigation._
         | 
         | I think this has always been important, and it remains so, but
         | incidents like this really drive the point home. For any piece
         | of software that has such a huge attack surface as ssh does,
         | the stakes are even higher, and so the idea of introducing
         | extra dependencies here should be approached with extreme
         | caution indeed.
        
           | chubot wrote:
           | Yup, "trusted computing base" is a classic concept that we've
           | forgotten
           | 
           | Or we pay lip service to, but don't use in practice
        
         | sloowm wrote:
         | Another point relevant on the timeline is when downstream
         | starts using binaries instead of source.
         | 
         | I think people are flying past that important piece of the
         | hack. Without that this would not have been possible. If there
         | is a trusted source in the middle building the binaries instead
         | of the single maintainer and the hacker, this attack becomes
         | extremely hard to slip by people.
        
           | Denvercoder9 wrote:
           | > Another point relevant on the timeline is when downstream
           | starts using binaries instead of source.
           | 
           | No downstream was using binaries instead of source. Debian
           | and Fedora rebuild everything from source, they don't use the
           | binaries supplied by the maintainer. The backdoor was
           | inserted into the build system.
        
           | testplzignore wrote:
           | I'm not familiar with how distros get the source code for
           | upstream dependencies. I'm trying to understand what Andres
           | meant when he said this:
           | 
           | > One portion of the backdoor is solely in the distributed
           | tarballs
           | 
           | Is it that the tarball created and signed by Jia had the
           | backdoor, but this backdoor wasn't present in the repo on
           | github? And the Debian (or any distro) maintainers use the
           | source code from tarball without comparing against what is in
           | the public github repo? And how does that tarball get to
           | Debian?
        
             | Hackbraten wrote:
             | Exactly.
             | 
             | The threat actor had signed and uploaded the compromised
             | source tarball to GitHub as a release artifact. They then
             | applied for an NMU (non-maintainer upload) with Debian,
             | which got accepted, and that's how the tarball ended up on
             | Debian's infrastructure.
        
               | sloowm wrote:
               | Thanks for the extra explanation. I guess this is harder
               | to protect against than I thought, and it's more that
               | some distros got somewhat lucky rather than Debian and
               | Fedora doing something that is out of the ordinary.
        
       | xyst wrote:
       | Timeline reads like a "pig butchering" or romance scam. Except
       | the goal is not money, but control.
       | 
       | Attackers find a vulnerable but critical library in the supply
       | chain. Do a cross check on maintainers or owners of the code
       | (reference social media and other sources to get more
       | information). Execute social engineering attack. Talk to
       | maintainer(s) "off list" to gain rapport. Submit a few non-
       | consequential patches that are easy to grep to gain trust. Use
       | history of repository or mailing list against victim to gaslight
       | ("last release was X years ago!1", "you are letting project
       | rOt!", "community is waiting for you!").
        
         | BuildTheRobots wrote:
         | I'm amazed more people aren't talking about the "off list" part
         | or asking Collin if he's willing to provide those
         | emails/conversations.
        
           | xyst wrote:
           | That's more of a LEO concern. Maybe attackers got sloppy and
           | leaked info in email headers?
        
         | DrammBA wrote:
         | > Timeline reads like a "pig butchering"
         | 
         | To me this is the polar opposite of pig butchering. This was a
         | targeted and unromantic attack, unrelated to investing or
         | cryptocurrency, the original maintainer was not "fattened like
         | a hog" in any way, if anything he was bullied and abused into
         | submission.
        
       | pluc wrote:
       | The most interesting question I have with all this is:
       | 
       | Do you think this was a planned effort or was it opportunistic?
       | Did they know what they were doing and social engineered towards
       | it, or did they figure out what to do based on the day-to-day
       | context they discovered?
        
       | mattxxx wrote:
       | A service that measured "credibility" or "activity" of a
       | username/email could be really useful here. At least, it would be
       | a leading indicator that something _might_ be up. In particular
       | the aside here about how the email addresses are suspect:
       | 
       | https://research.swtch.com/xz-timeline#jia_tan_becomes_maint...
       | 
       | would be useful info for Lasse Collin before taking the pressure
       | campaign seriously.
        
         | iso8859-1 wrote:
         | What about just using 'web of trust', for example with GPG? If
         | the user's key is signed by people that met up with the actual
         | person, it would be much harder to make fake identities.
        
           | ptx wrote:
           | There was an article from 2019 [0] that someone on HN linked
           | recently about how "web of trust is dead", but it seems to
           | concern scalability problems with the keyserver, which
           | resulted in DoS attacks, which made them disable the feature
           | by default. The concept should presumably still be good,
           | assuming the issues specific to the GPG keyserver can be
           | avoided.
           | 
           | [0] https://inversegravity.net/2019/web-of-trust-dead/
        
           | carols10cents wrote:
           | What would prevent the sock puppet accounts from signing each
           | others' keys?
        
       | stockhorn wrote:
       | I feel like release tarballs shouldn't differ from the repo
       | sources. And if they do, there should be a pipeline which
       | generates and uploads the release artifacts....
       | 
       | Can somebody write a script which diffs the release tarballs from
       | the git sources for all debian packages and detects whether there
       | are any differences apart from the files added by autotools :)?
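A rough sketch of such a check in Python. The ignore list of autotools-generated names is illustrative and incomplete -- and, as the xz case shows, a tampered copy of an "expected" generated file would still slip through a presence-only check, so contents would ultimately need comparing against the generator's known output:

```python
import filecmp

# Names autotools typically emits into release tarballs but not git
# (a hypothetical, incomplete list -- a real vetting script would need
# to curate this per package).
GENERATED = ["configure", "Makefile.in", "aclocal.m4", "config.h.in", ".git"]

def unexpected_differences(git_dir: str, tarball_dir: str) -> list[str]:
    """Return file names present in only one tree, or differing between
    a git checkout and an extracted release tarball, ignoring expected
    generated files."""
    diffs: list[str] = []
    def walk(cmp: filecmp.dircmp) -> None:
        diffs.extend(cmp.left_only + cmp.right_only + cmp.diff_files)
        for sub in cmp.subdirs.values():
            walk(sub)
    walk(filecmp.dircmp(git_dir, tarball_dir, ignore=GENERATED))
    return diffs
```

Run against xz 5.6.x this would have flagged the tarball-only m4 file, provided it wasn't on the ignore list.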
        
       | peter_d_sherman wrote:
       | My takeaways:
       | 
       | First from the article itself:
       | 
       | >"At this point Lasse seems to have started working even more
       | closely with Jia Tan. Evan Boehs observes that _Jigar Kumar and
       | Dennis Ens both had nameNNN@mailhost email addresses that never
       | appeared elsewhere on the internet, nor again in xz-devel_."
       | 
       | That is an important observation!
       | 
       | Takeaway: Most non-agenda-driven _actual people_ on the Internet
       | leave a _trail_ -- an actual trail of social media and other
       | posts (and connected friends) that could be independently
       | verified by system/social media/website administrators via voice
       | or video calls (as opposed to CAPTCHA or other computer-based "Is
       | it a human?" tests, which can be gamed) for stronger confidence
       | in the identity/trustworthiness of the remote individual at the
       | other end of a given account...
       | 
       | Next takeaway from the linked article:
       | (https://boehs.org/node/everything-i-know-about-the-xz-
       | backdo...):
       | 
       | >"AndresFreundTec @AndresFreundTec@mastodon.social writes:
       | 
       | Saw sshd processes were using a _surprising amount of CPU_,
       | despite immediately failing because of wrong usernames etc.
       | Profiled sshd,
       | 
       |  _showing lots of cpu time in liblzma, with perf unable to
       | attribute it to a symbol._
       | 
       | Got suspicious. Recalled that I had seen an odd valgrind
       | complaint in automated testing of postgres, a few weeks earlier,
       | after package updates."
       | 
       | Takeaway: We could always use more _observability_ of _where
       | exactly CPU time is spent_ in given programs...
       | 
       | Binaries without symbol tables (which is the majority of programs
       | on computers today) make this task challenging, if not downright
       | impossible, or at least very impractical -- too complex for the
       | average user...
       | 
       |  _Future OS designers should consider including symbol tables for
       | all binaries they ship_ -- as this could open up the capability
       | for flame graphs / _detailed CPU usage profiling_ (and the
       | subsequent ability to set system policies/logging around these)
       | -- for mere mortal average users...
        
       | psanford wrote:
       | One big takeaway for me is that we should stop tolerating
       | inscrutable code in our systems. M4 has got to go! Inscrutable
       | shell scripts have got to go!
       | 
       | It's time to stop accepting that the way we've done this in the
       | past is the way we will continue doing it ad infinitum.
        
         | kibwen wrote:
         | That's a great first step, but ready your pitchforks for this
         | next take, because the next step is to completely eliminate
         | Turing-complete languages and arbitrary I/O access from
         | standard build systems. 99.9% of all projects have the
         | capability to be built with trivial declarative rulesets.
        
           | azemetre wrote:
           | Can you explain more in-depth what you mean? I'm also unaware
           | of how you could have declarative rulesets in a
           | non-Turing-complete language.
           | 
           | Sounds like it would be impossible but maybe my thinking is
           | just enclosed and not free.
        
             | ngruhn wrote:
              | Haven't used it much but Dhall is a non-Turing-complete
             | configuration language: https://dhall-lang.org/
        
           | koito17 wrote:
           | In this case, I think the GP is absolutely right. If you look
           | at the infamous patch with a "hidden" dot, you may think "any
           | C linter should catch that syntax error and immediately draw
           | suspicion." But the thing is, no linter at the moment exists
           | for analyzing strings in a CMakeLists.txt file or M4 macro.
           | Moreover, this isn't something one can reliably do runtime
           | detection for, because there are plenty of legitimate reasons
           | that program could fail to compile, but our tooling does not
           | have a way to clearly communicate the syntax error being the
           | reason for a compilation failure.
        
           | smallmancontrov wrote:
           | What does modern C project management look like? I'm only
           | familiar with Autotools and CMake.
        
             | infamouscow wrote:
             | Redis. Simple flat directory structure and a Makefile. If
             | you need more, treat it as a sign from God you're doing
             | something wrong.
        
           | hyperman1 wrote:
           | Java's Maven is an interesting case study, as it tried to be
           | exactly this: a standard project layout, a standard dependency
           | mechanism, pom.xml as a standard metadata file, and a standard
           | workflow with standard targets (clean/compile/test/deploy).
           | Plugins for what's left.
           | 
           | There might have been a time where it worked, but people
           | started to use plugins for all kinds of reasons, quite good
           | ones in most cases. Findbugs, code coverage, source code
           | generation,...
           | 
           | Today, a maven project without plugins is rare. Maven brought
           | us 95%, but there is a long tail left to cover.
        
             | kibwen wrote:
             | _> Findbugs, code coverage, source code generation,..._
             | 
             | For the purpose of this conversation we mostly just care
             | about the use case of someone grabbing the code and wanting
             | to use it in their own project. For this use case, dev
             | tools like findbugs and code coverage can be ignored, so it
             | would suffice to have a version of the build system with
             | plugins completely disabled.
             | 
             | Code generation is the thornier one, and we can at least be
             | more principled about it than "run some arbitrary code",
             | and at least it should be trivial to say "this codegen
             | process gets absolutely no I/O access whatsoever; you're a
             | dumb text pipeline". But at the end of the day, we have to
             | Just Say No to things like this. Even if it makes the
             | codebase grodier to check in generated code, if I can't
             | inspect and audit the source code, that's a problem, and
             | arbitrary build-time codegen prevents that. Some trade-offs
             | are worth making.
        
               | hyperman1 wrote:
               | The xz debacle happened partially because the generated
               | autoconf code was provided. Checking in generated code is
               | not that much better. It's a bit more visible, but few
               | people will spend their limited time validating it, as
               | it's not worth it for generated code. xz also had checked
               | in inscrutable test files, and nobody could know they
               | contained encrypted malware.
               | 
               | I'm not a fan of generated code. It tends to cause
               | misery, being in a no man's land between code and not-
               | code. But it is useful sometimes, e.g. Rust generating an
               | API from the OpenGL XML specs.
               | 
               | Sandboxing seems the least worst option, but it will
               | still be uninspected half code that one day ends up in
               | production.
        
               | kibwen wrote:
               | _> The xz debacle happened partially because the generated
               | autoconf code was provided._
               | 
               | The code was only provided in a roundabout way that was
               | deliberately done to evade manual inspection, so that's
               | not a failure of checking in generated code, that's a
               | failure of actually building a binary from the artifacts
               | that we expect it to be built from. Suffice to say,
               | cutting out the Turing-complete crap from our build
               | systems is only one of _many_ things that we need to fix.
        
             | ptx wrote:
             | Maybe now that we have things like GitHub Actions,
             | Bitbucket Pipelines, etc., which can run steps in separate
             | containers, maybe most of those things could be moved from
             | the Maven build step to a different pipeline step?
             | 
             | I'm not sure how well isolated the containers are (probably
             | not very - I think GitHub gives access to the Docker
             | socket) and you'd have to make sure they don't share secret
             | tokens etc., but at least it might make things simpler to
             | audit, and isolation could be improved in the future.
        
             | wongarsu wrote:
             | Most of these could still be covered with IO limited to the
             | project files though.
             | 
             | There is a sizable movement in the Rust ecosystem to move
              | all build-time scripts and procedural macros to (be
              | compiled to) WASM. This allows you to write Turing-complete
             | performant code to cover all use-cases people can
             | reasonably think of, while also allowing trivially easy
             | sandboxing.
             | 
             | It's not perfect, for example some build scripts download
             | content from the internet, which can be abused for
             | information extraction. And code generation scripts could
             | generate different code depending on where it thinks it's
             | running. But it's a lot better than the unsandboxed code
             | execution that's present in most current build systems,
             | without introducing the constraints of a pure config file.
        
               | RedShift1 wrote:
               | How does the sandboxing help if the compiler and/or build
               | scripts or whatever modifies its own output?
        
               | comex wrote:
               | But the xz backdoor didn't involve a build script that
               | tried to compromise the machine the build was running on.
               | It involved a build script that compromised the code
               | being built. Sandboxing the build script wouldn't have
               | helped much if at all. Depending on the implementation,
               | it might have prevented it from overwriting .o files that
               | were already compiled, maybe. But there would still be
               | all sorts of shenanigans it could have played to sneakily
               | inject code.
               | 
               | I'd argue that Rust's biggest advantage here is that
               | build scripts and procedural macros are written in Rust
               | themselves, making them easier to read and thus harder to
               | hide things in than autotools' m4-to-shell gunk.
               | 
               | But that means it's important to build those things from
               | source! Part of the movement you cite consists of watt,
               | which ships proc macros as precompiled WebAssembly
               | binaries mainly to save build time. But if watt were more
               | widely adopted, it would actually make an attacker's life
               | much easier since they could backdoor the precompiled
               | binary. Historically I've been sympathetic to dtolnay's
               | various attempts to use precompiled binaries this way
               | (not just watt but also the serde thing that caused a
               | hullabaloo); I hate slow builds. But after the xz
               | backdoor I think this is untenable.
        
               | mjw1007 wrote:
               | Maybe not untenable.
               | 
               | If everything is done carefully enough with reproducible
               | builds, I think using a binary whose hash can be checked
               | shouldn't require a great extension of trust.
               | 
               | You could have multiple independent autobuilders
               | verifying that particular source does indeed generate a
               | binary with the claimed hash.
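The verification step being described is small once builds are reproducible; a sketch in Python, with illustrative names and quorum policy:

```python
import hashlib

def artifact_hash(path: str) -> str:
    """SHA-256 of a build artifact, streamed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_by_quorum(path: str, attested_hashes: list[str],
                       quorum: int = 2) -> bool:
    """Trust the binary only if at least `quorum` independent builders
    published the same hash for it. Reproducible builds are what make
    the independently produced hashes comparable in the first place."""
    return attested_hashes.count(artifact_hash(path)) >= quorum
```

The hard part is not this check but the ecosystem of independent autobuilders publishing the attestations.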
        
               | tadfisher wrote:
               | Ultimately you're going to have to be adept at stuff like
               | the Underhanded C Contest to spot this kind of thing in
               | any Turing-complete language, so the idea of auditing the
               | source is unreliable at worst. So I'd take another page
               | from the Java/Maven ecosystem and require hashed+signed
               | binaries, with the possible addition of requiring builds
               | to be performed on a trusted remote host so that at least
               | we can verify the binary is produced from the same source
               | and results in the same output.
               | 
               | But determined actors with access are always going to try
               | to thwart these schemes, so verification and testing is
               | going to need to step up.
        
               | naikrovek wrote:
               | reproducible builds never made sense to me. if you trust
               | the person giving you the hash, just get the binary from
               | them. you don't need to reproduce the build at all.
               | 
               | if you trust that they're giving you the correct hash,
               | but not the correct binary, then you're not thinking
               | clearly.
        
               | emn13 wrote:
               | One thing a culture of reproducible builds (and thus
               | stable hashes) however does provide is a lack of excuse
               | as to why the build _isn't_ reproducible. Almost nobody
               | will build from source - but when there's bugs to be
               | squashed and weird behavior, some people, sometimes,
               | will. If hashes are the norm, then it's a little harder
               | for attackers to pull off this kind of thing, not because
               | you trust their hash rather than their blob, but rather
               | because they need to publish both at once - thus
               | broadening the discovery window for shenanigans.
               | 
               | To put it another way: if you _don't_ have reproducible
               | builds and you're trying to replicate an upstream
               | artifact then it's very hard to tell what the cause is.
               | It might just be some ephemeral state, a race, or some
               | machine aspect that caused a few fairly meaningless
               | differences. But as soon as you have a reproducible
               | build, then failure to reproduce instantly marks upstream
               | as being suspect.
               | 
               | It's also useful when you're trying to do things like
               | tweak a build - you can ensure you're importing upstream
               | correctly by checking the hash with what you're making,
               | and then even if your downstream version isn't a binary
               | dependent on upstream (e.g. optimized differently, or
               | with some extra plugins somewhere), you can be sure that
               | changes between what you're building and upstream are
               | intentional and not spurious.
               | 
               | It's clearly not a silver bullet, sure. But it's not
               | entirely useless either, and it probably could help as
               | part of a larger ecosystem shift; especially if
               | conventional tooling created and published these by
               | default such that bad actors trying to hide build
               | processes actually stick out more glaringly.
        
             | shawnz wrote:
             | When your programmatic build steps are isolated in plugins,
             | then you can treat them like independent projects and apply
             | your standard development practices like code review and
             | unit tests to those plugins. Whereas when you stuff
             | programmatic build steps into scripts that are bundled into
             | existing projects, it's harder to make sure that your
             | normal processes for assuring code quality get applied to
             | those pieces of accessory code.
        
               | dotancohen wrote:
               | My standard development practices like code review and
               | unit tests do not scale to review and test every
               | dependency of every dependency of my projects. Even at
               | company-wide scale.
        
               | shawnz wrote:
               | I'm not saying that. I'm just saying that improved
               | ability to apply such development practices is one
               | benefit of using a plugin-style architecture for
               | isolating the programmatic steps of your build pipeline.
               | It's not perfect but in many ways it's still a
               | significant improvement upon just allowing arbitrary code
               | right in the pipeline definition.
        
           | klysm wrote:
           | until you have to integrate with the rest of the world sure
        
             | kibwen wrote:
             | Looking at the state of software security in the rest of
             | the world, this may not be much of a disincentive. At some
             | point we need to knuckle down and admit that times have
             | changed, the context of tools built for the tech of the 80s
             | is no longer applicable, and that we can do better. If that
             | means rewriting the world from scratch, then I guess we
             | better get started sooner rather than later.
        
           | eadmund wrote:
           | I think that what we need is good sandboxing. All a sandboxed
           | Turing-complete language can do is perform I/O on some
           | restricted area, burn CPU and attempt to escape the sandbox.
           | 
           | I would like to see this on the language level, not just on
           | the OS level.
        
             | semi-extrinsic wrote:
             | I have been thinking the exact same thing, and specifically
             | I would like to try implementing something that works with
             | the Rye python manager.
             | 
             | Say I have a directory with a virtualenv and some code that
             | needs some packages from PyPI. I would very much like to
             | sandbox anything that runs in this virtualenv to just disk
             | access inside that directory, and network access only to
             | specifically whitelisted URLs. As a user I should only need
             | to add "sandbox = True" to the pyproject.toml file, and
             | optionally "network_whitelist = [...]".
             | 
             | From my cursory looking around, I believe Cloudflare
             | sandbox utils, which are convenience wrappers around
             | systemd Seccomp, might be the best starting point.
             | 
             | Edit: or just use Firejail, interesting...
             | 
             | You mention sandboxing on the language level, but I don't
             | think it is the way. Apparently sandboxing within Python
             | itself is a particularly nasty rabbit hole that is
             | ultimately unfruitful because of Python's introspection
             | capabilities. You will find many dire warnings on that
             | path.
        
           | cryptonector wrote:
           | This is a better take. Though I'm sure people can obfuscate
           | in source code just as much as in build configuration code.
        
           | duped wrote:
           | What would that accomplish? It certainly wouldn't have
           | stopped this attack.
           | 
           | > 99.9% of all projects have the capability to be built with
           | trivial declarative rulesets.
           | 
           | Only if you forbid bootstrapping, which all projects
           | ultimately rely on at some point in their supply chain.
        
             | kibwen wrote:
              |  _> What would that accomplish? It certainly wouldn't have
             | stopped this attack._
             | 
             | We could write an entire PhD thesis on the number of dire
             | technical failings that would need to be addressed to stop
             | this attack, so while this alone wouldn't have stopped it,
             | it would have required the actor to come up with another
             | vector of code injection which would have been easier to
             | find.
             | 
             |  _> Only if you forbid bootstrapping_
             | 
             | Codebases that bootstrap are the 0.1%. Those can be built
             | via `bash build.sh` rather than deceptively hiding a
             | Turing-complete environment behind a declarative one. Even
             | if you need to have these in your trusted computing base
             | somewhere, we can focus auditing resources there,
             | especially once we've reduced the amount of auditing that
             | we need to do on the other 99.9% of codebases now that
             | we've systematically limited the build-time shenanigans
             | that they can get up to.
        
         | Dalewyn wrote:
         | The internet would become a nicer place (again) if we killed
         | off JavaShit and the concept of obfuscating ("minifying") code
         | to make View Source practically unusable.
        
           | Sammi wrote:
           | Javascript is pretty much guaranteed to be permanent. It is
           | the language of the web.
           | 
           | (There's webassembly too, but that doesn't remove js)
        
             | Dalewyn wrote:
             | Thanks for being part of the problem.
        
             | homarp wrote:
             | I don't think anything is "pretty much guaranteed": things
             | evolve
        
           | davedx wrote:
           | Nonsensical argument. JavaScript and TypeScript projects are
           | developed with patches of the original source not some
           | compressed artefact. Take your trolling elsewhere
        
             | Dalewyn wrote:
             | The vast majority of JavaShit executed by the end user in
             | their user agent are "compressed artefacts" that cannot be
             | read by human eyes in any practical way. Likewise most HTML
             | and CSS which are also obfuscated to impracticality.
             | 
             | This is coincidentally drawing a parallel with the xz
             | attack: The source code and the distributed tarballs (the
             | "compressed artefacts") are different.
        
               | mardifoufs wrote:
               | Javascript is almost always executed in sandboxed,
               | unprivileged environments. The issue here is that this
               | type of obfuscation is easy to add in _core os
               | libraries_. The JavaScript ecosystem, for all the hate
               | that it gets, makes it super easy to sandbox any running
               | code.
               | 
               | It doesn't matter if it's minified or obfuscated because
               | you basically have to run unknown, untrusted code
               | everywhere while browsing the web with JavaScript turned
               | on. So the ecosystem and tooling is extremely resilient
               | to most forms of malicious attacks no matter how minified
               | or obfuscated the js you're running is. The complete
                | opposite is true for bash and shell scripting in general.
        
         | rurban wrote:
         | Great, let the cmake dummies wander off into their own little
         | dreamworld, and keep the professionals dealing with the core
         | stuff.
        
         | tomxor wrote:
         | That would be an improvement for sure... but this is not
         | fundamentally a technical problem.
         | 
         | Reading the timeline, the root cause is pure social
         | engineering. The technical details could be swapped out with
         | anything. Sure there are aspects unique to xz that were
         | exploited such as the test directory full of binaries, but
         | that's just because they happened to be the easiest target to
         | implement their changes in an obfuscated way, that misses the
         | point - once an attacker has gained maintainer-ship you are
         | basically screwed - because they _will_ find a way to insert
         | something, somehow, eventually, no matter how many easy targets
         | for obfuscation are removed.
         | 
         | Is the real problem here in handing over substantial trust to
         | an anonymous contributor? If a person has more to lose than
         | maintainership then would this have happened?
         | 
         | That someone can weave their way into a project under a
         | pseudonym and eventually gain maintainership without ever
         | risking their real reputation seems to set up quite a risk free
         | target for attackers.
        
           | flockonus wrote:
            | Agreed it's mainly a social engineering problem, BUT it
            | can also be viewed as a technical problem, if a
            | sufficiently advanced fuzzer could catch issues like this.
            | 
            | It could also be called an industry problem, where we rely
            | on others' code without running proper checks. This seems
            | to be an emerging realization, with services like
            | socket.dev appearing.
        
           | usefulcat wrote:
           | > Is the real problem here handing over substantial trust to
           | an anonymous contributor?
           | 
           | Unless there's some practical way of comprehensively solving
           | The Real Problem Here, it makes a lot of sense to consider
           | all reasonable mitigations, technical, social or otherwise.
           | 
           | > If a person has more to lose than maintainership then would
           | this have happened?
           | 
           | I guess that's one possible mitigation, but what exactly
           | would that look like in practice? Good luck getting open
           | source contributors to accept any kind of liability. Except
           | for the JiaTans of the world, who will be the first to accept
           | it since they will have already planned to be untouchable.
        
             | tomxor wrote:
             | > I guess that's one possible mitigation, but what exactly
             | would that look like in practice? Good luck getting open
             | source contributors to accept any kind of liability. Except
             | for the JiaTans of the world, who will be the first to
             | accept it since they will have already planned to be
             | untouchable.
             | 
             | It's not necessary to accept liability, that's waived in
             | all FOSS licenses. What I'm suggesting would only risk
             | reputation of non-malicious contributors, and from what
             | I've seen, most of the major FOSS contributors and
             | maintainers freely use their real identity or associate it
             | with their pseudonym anyway, since that attribution comes
             | with real life perks.
             | 
             | Disallowing anonymous pseudonyms would raise the bar quite
             | a bit and require more effort from attackers to construct
             | or steal plausible looking online identities for each
             | attack, especially when they need to hold up for a long
             | time as with this attack.
        
             | patmorgan23 wrote:
             | Create an organization of paid professionals who are
             | responsible for maintaining these libraries (and providing
             | support to library users).
             | 
              | It's Heartbleed all over again (though those were honest
              | mistakes, not an intentional attack).
        
         | pera wrote:
          | While I do agree that M4 is not great, I don't believe any
          | alternative would have prevented this attack: you could try
          | translating that build-to-host file to, say, Python, with
          | all the evals and shell one-liners, and it would still not
          | be immediately obvious to a distro package maintainer doing
          | a casual review after work: for them it would look just
          | like your average weekend-project hacky code. Even if you
          | also translated the one-liners, it wouldn't be immediately
          | obvious if you were not suspicious. My point is, you could
          | write similarly obfuscated code in any language.
        
         | ok123456 wrote:
         | With all the CI tooling and containerization, it seems to be
         | going in the opposite direction.
        
         | patmorgan23 wrote:
         | That and we need to pay open source maintainers and find new
         | ways to support them.
         | 
         | And all code that gets linked into security critical
         | applications/libraries needs to be covered by under some sort
         | of security focused code review.
         | 
          | So no patching the compression code that OpenSSL links to
          | with random junk from distribution maintainers.
        
         | orthecreedence wrote:
          | This doesn't read as a technical failure to me. This was
          | 99% social engineering. I understand that the build system
          | was used as a vector, but eliminating that vector doesn't
          | mean all doors are closed. The attacker took advantage of
          | someone having trouble.
        
         | cryptonector wrote:
         | What's inscrutable code? Was it m4 or sh or the combination of
         | the two?
         | 
         | Who will pay for all the rewriting you want done? Or even just
         | for the new frameworks that are "scrutable"? How do we
         | guarantee that the result is not inscrutable to you or others?
         | 
         | There is so much knee-jerking in this xz debacle.
         | 
         | (And I say this / ask these questions with _no_ love for
         | autoconf /m4/sh.)
        
         | intelVISA wrote:
         | Functionality like IFUNC is completely inexcusable outside of
         | dev/debug builds imo. It's rot.
        
       | deathanatos wrote:
       | > _Evan Boehs observes that Jigar Kumar and Dennis Ens both had
       | nameNNN@mailhost email addresses_
       | 
       | This is the second time I've read this "observation", but this
       | observation is just wrong? Jigar's email is
       | "${name}${number}@${host}", yes, but Dennis's is just
        | "${name}@${host}" -- there's not a _suffixed_ number.
        | (There's a 3, but it's just a "leetcode" substitution for the
        | E, i.e., it's semantically a letter.)
       | 
       | (They could still both be sockpuppets, of course. This doesn't
       | prove or disprove anything. But ... let's check our facts?)
        
         | __float wrote:
         | Where are the email addresses visible? I've also seen this a
         | few times, but never the actual addresses.
        
           | matsur wrote:
           | eg "Hans Jansen" is <hansjansen162@outlook.com>
           | 
           | https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1067708
        
           | glitchcrab wrote:
           | I couldn't spot email addresses directly in plaintext for
           | those who weren't submitting patches (e.g. Jigar), however if
           | you look at one of the links to his (?) responses then
           | there's a mailto link with the text 'Reply via email'
        
       | Solvency wrote:
       | Why does it seem like so many open source developers suffer from
       | chronic mental health issues? as shown here, in TempleOS, etc.
       | It's a weird but sad pattern I see all of the time.
        
         | dmitrygr wrote:
         | Not being paid for your work while others profit from it surely
         | doesn't help with one's mental state.
        
         | SalmoShalazar wrote:
         | Chronic mental health issues are extremely common.
        
       | dxxvi wrote:
       | I guess that Lasse Collin will have more mental health issues
       | after all of this :-)
       | 
       | Do we know who really is Jia Tan? Any photo of him? The email
       | addresses he has been using? His location?
        
         | phreeza wrote:
         | What makes you think they are an actual person?
        
       | gregwebs wrote:
        | I think this can be made much more difficult by enforcing a
        | policy of open builds for open source. It shouldn't be
        | possible to inject build files from a local machine. All
        | build assets should come from the source repository.
        | Artifacts should come from GitHub Actions or some other tool
        | that has a clear specification of where all inputs came from.
        | Perhaps GitHub could play a role in smoothing over any
        | inconveniences here.
        
         | maclockard wrote:
         | I think that trust needs to be 'pushed deeper' than that so to
         | speak. While this would be an improvement, what happens if
         | there is a malicious actor at Github? This may be unlikely, but
         | would be even harder to detect since so much of the pipeline
         | would be proprietary.
         | 
         | Ideally, we would have a mechanism to verify that a given build
         | _matches_ the source for a release. Then it wouldn't matter
         | where it was built, we would be able to independently verify
         | nothing funky happened.
        
           | gregwebs wrote:
            | Vendor-independent build provenance is certainly the
            | long-term goal. In the immediate term, moving away from
            | mystery tarballs towards version control gets us a step
            | closer.
           | 
           | One of the best things about Golang is that packages are
           | shared direct via source repositories (Github) rather than a
           | package repository containing mystery tarballs. I understand
           | the appeal of package repositories, but without proper
           | security constraints it's a security disaster waiting to
           | happen.
        
         | dboreham wrote:
         | Yes the whole "let's take a mystery meat tarball from a repo
         | that isn't the project repo" seems suspect.
         | 
         | Github+ even has a scheme for signing artifacts such that you
         | have some level of trust they came from inside their Actions
         | system, derived from some git commit. This would allow the
         | benefits of a modular build for a large product like a distro,
         | while preserving a chain of trust in its component parts.
         | 
         | +Not advocating a dependency on Github per se -- the same sort
         | of artifact attestation scheme could be implemented elsewhere.
        
           | shp0ngle wrote:
            | as I wrote in a different thread, some projects don't
            | have any source control.
            | 
            | Among the big ones, 7z and ncurses are both tarball-only.
        
             | patmorgan23 wrote:
             | They need to join us in the 80s and start using source
             | control.
        
         | qerti wrote:
         | Wasn't the payload in a blob in the tests, which is in the
         | source repo? If you were to clone the repo then build from
         | source, you'd have the backdoor, right? Surely distros aren't
         | using binaries sent by maintainers
        
           | jsnell wrote:
           | No. The payload was in the checked in test files, but the
           | test files were inert. They were only activated by the
           | tarball having different build files than the repository (or
           | rather, different build files than would be generated by
           | autotools for the repository), which extracted the payload
           | from the test files and injected it into the output binary.
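One mitigation this suggests is mechanically diffing a release tarball against the tagged source tree, which would flag build files that exist only in the tarball (as the modified build-to-host.m4 did). A minimal sketch in Python; the function name and the assumption that the tarball has a single top-level directory are mine, not from any existing tool:

```python
import hashlib
import tarfile
from pathlib import Path

def tarball_vs_tree(tarball_path, tree_path):
    """Compare a release tarball against a source tree (e.g. a git
    checkout of the same tag). Returns (only_in_tarball, differing)."""
    tree_path = Path(tree_path)
    # Hash every file in the checked-out tree.
    tree_files = {
        str(p.relative_to(tree_path)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in tree_path.rglob("*") if p.is_file()
    }
    only_in_tarball, differing = [], []
    with tarfile.open(tarball_path) as tar:
        for member in tar.getmembers():
            if not member.isfile():
                continue
            # Strip the leading "project-1.2.3/" directory component.
            rel = member.name.split("/", 1)[-1]
            digest = hashlib.sha256(tar.extractfile(member).read()).hexdigest()
            if rel not in tree_files:
                only_in_tarball.append(rel)  # e.g. an injected m4 file
            elif tree_files[rel] != digest:
                differing.append(rel)
    return only_in_tarball, differing
```

A real check would also need to account for files autotools legitimately generates at release time (configure, Makefile.in, etc.), which is exactly the noise the attacker hid in.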
        
       | fizlebit wrote:
        | What are the chances this is the first such attack, and not
        | just the first one discovered? Presumably every library or
        | service running as root is open to attack, and maybe also
        | some running in userspace. The attack surface is massive.
        | Time for better sandboxing? Once attackers get into the build
        | systems of Debian and others, is it game over?
        
         | 2OEH8eoCRo0 wrote:
         | I wouldn't be surprised if there are others but this specific
         | one seems special. It was caught because on connection it does
         | an extra decryption operation and I'd assume there is no way
         | around this extra work. They'd have to re-architect this to not
         | require that decryption.
         | 
         | I'm not a security expert though.
        
       | dboreham wrote:
       | Something I've wondered about wrt this debacle: presumably the
       | smart part of xz was the compression algorithm. I'm guessing but
       | that's probably less than 500 lines of code. The rest is plumbing
       | to do with running the algorithm in a CLI utility, on various
       | different OSes, on different architectures, as a library, and so
       | on. All that stuff is at some level of abstraction the same for
       | all things that do bulk processing on bytes of data. Therefore
       | perhaps we should think about a scheme where the boilerplate-ish
       | stuff is all in some framework that is well funded with people to
       | ensure it doesn't have cutout maintainers injecting backdoors,
       | and is re-used for hundreds of bulk-data-processing utilities;
       | and the clever part would then be a few hundred lines of code
       | that's easy to review, and actually probably never needs to
       | change. Like...a separation of concerns.
        
         | calvinmorrison wrote:
         | we were discussing this on the IRC. Imagine spinning up a
         | thread, then running a bsd style pledge(2) on it to call
         | liblzma. Kinda janky but it would work. Another option would be
         | to just go out and call the xz util and not rely on a library
         | to do so. That process can be locked down with pledge to only
         | have stdin/stdout. That's all you need.
         | 
         | So, like UNIX does have this plumbing, its just that reaching
         | for libraries and tight integration has been the pursuit of
         | Lennart Poopering and his clan for years.
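The pipe model described above (call the utility over stdin/stdout instead of linking the library) can be sketched portably; pledge(2) itself is OpenBSD-only, so this Python sketch shows only the plumbing, using a tiny Python child as a stand-in for `xz -d` (the helper name and the stand-in filter are assumptions, not an existing API):

```python
import subprocess
import sys

def filter_bytes(data: bytes, argv: list) -> bytes:
    """Run an external filter over stdin/stdout only: no shell, no
    inherited file descriptors. On OpenBSD the child could additionally
    pledge("stdio") to lock itself down; here we just show the pipes."""
    result = subprocess.run(
        argv,
        input=data,              # fed to the child's stdin
        stdout=subprocess.PIPE,  # child's stdout captured as bytes
        check=True,
        close_fds=True,          # child sees only stdin/stdout/stderr
    )
    return result.stdout

# Stand-in child that upper-cases its input; in practice argv would be
# e.g. ["xz", "-d"]. Using the interpreter keeps the example portable.
out = filter_bytes(b"hello", [sys.executable, "-c",
    "import sys; sys.stdout.write(sys.stdin.read().upper())"])
```

The security win is that the compressor's code never runs in the address space of sshd (or whatever the caller is), so an ifunc-style hook in the library has nothing to latch onto.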
        
       | jbdigriz990 wrote:
       | Joe Cooper's take on pressuring project maintainers:
       | https://forum.virtualmin.com/t/dont-panic-ssh-exploit-in-ble...
       | 
       | somewhat ironic, but I'd say effective.
        
         | Longlius wrote:
         | I don't even think it's necessarily moderation so much as
         | maintaining these high-traffic projects is now akin to a full-
         | time job minus the pay.
         | 
         | At a certain point, companies need to step up and provide
         | funding or engineering work or they should just keep expecting
         | to get owned.
        
           | Vegenoid wrote:
           | If there is a larger shift to companies paying for the
           | software they rely on that is currently free, it's not going
           | to be companies paying open-source maintainers, it's going to
           | be companies paying other companies that they can sign
           | contracts with and pass legal responsibility to.
        
           | nmz wrote:
           | Why would they when they can just... not?
        
       | dekhn wrote:
       | Why do I feel like this set of posts by Russ about supply chain
       | security will end up with a proposal/concrete first
       | implementation of a supply-chain-verified-build process that
       | starts at chip making - simple enough to analyze and rebuild
       | independently- to bootstrap a go runtime that provides an
       | environment to do secure builds.
       | 
       | Reflections on Trusting Trust becomes even more interesting if
       | you consider photolithography machines being backdoored.
        
         | RyanShook wrote:
         | I'm of the opinion that there are backdoors in most of our
         | software and a lot of our hardware. xz just happened to be
         | caught because it was hogging resources.
        
       | maerF0x0 wrote:
       | Two points I'd be interested in discussing.
       | 
        | 1. It seems a lot of people are assuming Jia Tan was
        | compromised all along. I haven't seen a reason to believe
        | this was a long play, but rather that they were compromised
        | along the way (and selected for their access). Why play the
        | long game of "let's find a lone package, w/ a tired
        | maintainer, and then try to get in good with them" vs.
        | "let's survey all packages w/ a tired maintainer, and
        | compromise a new entrant (i.e. bribe/beat them)"?
       | 
       | 2. IMO this also is a failing on behalf of many government
       | agencies. NSA, NIST, FBI all should, IMO spend less time
       | compromising their own citizenry, and more time focusing on
       | shoring up the security risks Americans face.
        
         | lmkg wrote:
         | Regarding point 1: The timeline in the linked article describes
         | some communications as "pressure emails." I've heard the
         | theory, but haven't seen solid evidence, that the pressure
         | emails weren't just regular impatience from outside devs, but
         | actually _part of the attack_. To convince the primary
         | maintainer into granting access to another party.
        
           | maerF0x0 wrote:
            | Occam's razor and a variation on Hanlon's razor suggest
            | to me that Jia actually wanted to solve the issues and
            | work on code. They contributed some things to other repos
            | too, correct? (I saw another thread about Microsoft
            | documentation, e.g.)
           | 
           | Here's the thing, if the maintainer was so tired and needed
           | the help, the pressure is more risk than reward. The
           | maintainer would be relieved to have help... But pressure
           | risks estranging...
           | 
           | To be clear I'm aware this is just my own musing though.
        
         | nebulous1 wrote:
         | Completely conclusive proof? No, but it seems unlikely that
         | there ever would be conclusive proof of such.
         | 
         | They don't seem to exist outside of this incident and things
         | related to this.
         | 
         | There were multiple people who also don't seem to exist outside
         | of their posts to the xz mailing list applying pressure for the
         | original maintainer to bring Jia on board. This occurred around
         | the time that Jia was first making contact with the project,
         | not only recently.
         | 
          | Apparently the IP addresses that were logged for Jia were
          | from a VPN based in Singapore.
         | 
         | They have vanished.
         | 
          | Honestly, there's very little evidence that they _weren't_
          | always intending this.
        
           | doakes wrote:
           | What's your source for the IP addresses?
        
             | orkj wrote:
             | It's mentioned in one of the first references in this
             | article:
             | 
             | https://boehs.org/node/everything-i-know-about-the-xz-
             | backdo...
             | 
             | IRC activity
        
             | nebulous1 wrote:
             | I can't remember where I read that but it would likely have
             | been from a HN link or possibly a comment.
             | 
             | I just found this:
             | https://news.ycombinator.com/item?id=39868773 which is
             | definitely not where I originally read it, but libera is
             | probably ultimately the source
        
         | PUSH_AX wrote:
          | > It seems a lot of people are assuming Jia Tan was
          | compromised all along. I haven't seen a reason to believe
          | this was a long play, but rather that they were compromised
          | along the way
         | 
         | So I assume Jia has spoken out since? How do you go this long
         | without realising someone else is making plays as you?
        
         | saulpw wrote:
         | I took a look at Jia Tan's early behavior, and I found it to be
         | consistent with being "compromised" from the beginning. They
         | had months of contributions on private repos before forking a
         | test library and making superficial changes to it, and then
         | diving headlong into archival libraries. It all looks set up
         | and I see no evidence of an actual person at any point.
         | 
          | I also think it is more difficult to get away with
          | bribing/beating an existing contributor than you suggest,
          | especially since failure means likely exposure.
        
       | klysm wrote:
        | What's crazy to me about how we've set up computers to do
        | things is that xz itself is a pure function, but somehow all
        | of this shit has to happen just to use a pure function! It's
        | just bytes in and bytes out, yet we have an astoundingly
        | complex build system and runtime linking system which somehow
        | allows this pure function to run arbitrary commands on a
        | system.
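The bytes-in/bytes-out nature is easy to see through a language binding; for instance, Python's stdlib `lzma` module wraps liblzma as exactly such a pure function:

```python
import lzma

# xz compression as the pure function it conceptually is:
# bytes in, bytes out -- no files, no shell, no dynamic-linker
# tricks required just to use the algorithm.
payload = b"the quick brown fox jumps over the lazy dog" * 100
compressed = lzma.compress(payload)   # produces an .xz container
restored = lzma.decompress(compressed)
```

Everything else in the xz distribution (autotools, ifunc resolvers, shared-library machinery) exists around this core, and that is where the backdoor lived.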
        
       | tamimio wrote:
        | This attack doesn't exploit a technical issue or bug; it
        | exploits the open source philosophy, and unless the community
        | comes up with a systematic process to counter it, expect more
        | sophisticated attacks like it in the future. This time we got
        | lucky that some smart nerd -I am a nerd too, this is praise,
        | not to be taken in a bad way- noticed and notified the
        | community less than 20 days after the second backdoor was
        | implemented; next time the attack may undergo more
        | comprehensive "rehearsals" that will make it nearly
        | impossible to detect.
        
         | Vegenoid wrote:
         | Can you explain what aspects of the open source philosophy were
         | exploited, and what possible mitigations might be?
        
         | luyu_wu wrote:
         | Could've happened just as easily if not more easily with
         | proprietary software.
        
       | 1024core wrote:
       | We should set up a GoFundMe to reward Andres Freund.
        
       | 1024core wrote:
        | I am reminded of the infamous Sendmail worm from 1988.
       | 
       | If this compromised OpenSSHd had become the default across
       | millions of systems all over the world, could a worm-like thing
       | have brought a major chunk of the Internet down? Imagine millions
       | of servers suddenly stuck in a boot-loop, refusing to boot.
       | 
       | And all because one owner of a library had some mental health
       | issues. We should not have such SPOFs.
        
         | dijit wrote:
         | > And all because one owner of a library had some mental health
         | issues
         | 
         | Wrong takeaway.
        
         | juliusdavies wrote:
         | Impossible here because the exploit was carefully engineered to
         | be unreplayable and NOBUS (nobody but us) so it couldn't go
         | viral. Even if you intercepted a complete tcp byte trace of the
         | attack there was nothing you could do with that to attack other
         | systems.
        
       | mrbluecoat wrote:
       | Best overview of the xz saga: https://xkcd.com/2347/
        
       | shp0ngle wrote:
        | Did the Jia Tan character actually commit something of value?
        | Looking at the history, he did (there was some stuff with
        | multithreaded compression/decompression); as he kept it under
        | the original license, could it be used going forward?
        
       | riston wrote:
        | xz most likely wasn't the only library that was targeted;
        | there could be other similar projects we haven't discovered
        | yet. From the timeline you can see that social engineering
        | was quite a big part of it.
        | 
        | Just thinking out loud: would it be possible to go over
        | existing OSS mailing lists and issues with an LLM and
        | classify such sentiment from the users?
        
       ___________________________________________________________________
       (page generated 2024-04-02 23:01 UTC)