[HN Gopher] Backdoor in upstream xz/liblzma leading to SSH serve...
       ___________________________________________________________________
        
       Backdoor in upstream xz/liblzma leading to SSH server compromise
        
       Author : rkta
       Score  : 4251 points
       Date   : 2024-03-29 16:16 UTC (1 day ago)
        
 (HTM) web link (www.openwall.com)
 (TXT) w3m dump (www.openwall.com)
        
       | Rucadi wrote:
        | Saw this on nix, which was using a compromised version in the
        | unstable channel; I hope not too many systems are affected.
        
       | jeffbee wrote:
       | Safety through obscurity and weirdness! If you disable ifunc,
       | like any sensible person, this backdoor disables itself.
        
         | prydt wrote:
         | I'm curious now. What is ifunc? (Had difficulty finding it
         | through a search)
        
           | jeffbee wrote:
           | ifunc is a GNU method of interposing function calls with
           | platform-optimized versions of the function. It is used to
           | detect CPU features at runtime and insert, for example,
           | AVX2-optimized versions of memcmp. It is seen in crypto a
           | lot, because CPUs have many crypto-specific instructions.
           | 
           | However, I don't like it much and I think software should be
           | compiled for the target machine in the first place. My 1
           | hardened system that is reachable from the public network is
           | based on musl, built mostly with llvm, and with ifunc
           | disabled.
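            | 
            | As a rough sketch (GCC/glibc on x86-64; my_memcmp and both
            | implementations are made-up names, the "fast" one standing
            | in for an AVX2-optimized routine), an ifunc looks like:
            | 
            |   #include <string.h>
            | 
            |   typedef int (*memcmp_fn)(const void *, const void *,
            |                            size_t);
            | 
            |   /* portable fallback */
            |   static int memcmp_generic(const void *a, const void *b,
            |                             size_t n)
            |   { return memcmp(a, b, n); }
            | 
            |   /* stand-in for an AVX2-optimized version */
            |   static int memcmp_fast(const void *a, const void *b,
            |                          size_t n)
            |   { return memcmp(a, b, n); }
            | 
            |   /* The resolver runs once, during dynamic linking, and
            |      picks which implementation the my_memcmp symbol gets
            |      bound to for the life of the process. */
            |   static memcmp_fn resolve_my_memcmp(void)
            |   {
            |       __builtin_cpu_init();
            |       return __builtin_cpu_supports("avx2")
            |              ? memcmp_fast : memcmp_generic;
            |   }
            | 
            |   int my_memcmp(const void *a, const void *b, size_t n)
            |       __attribute__((ifunc("resolve_my_memcmp")));
            | 
            | Because the resolver runs during relocation, before main(),
            | it is also a convenient place to hijack execution, which
            | seems to be why disabling ifunc sidesteps this particular
            | backdoor.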
        
             | cesarb wrote:
             | > However, I don't like it much and I think software should
             | be compiled for the target machine in the first place.
             | 
             | That means you either have to compile software locally on
             | each machine, or you have a combinatorial explosion of
             | possible features.
             | 
             | Compiling locally has several drawbacks. It needs the full
             | compilation environment installed on every machine, which
             | uses a lot of disk space, and some security people dislike
             | it (because then attackers can also compile software
             | locally on that machine); compiling needs a lot of memory
             | and disk space, and uses a lot of processor time and
             | electric power. It also means that signature schemes which
             | only allow signed code cannot be used (or you need to have
             | the signing key available on the target machine, making it
             | somewhat pointless).
             | 
             | The combinatorial explosion of features has been somewhat
              | tamed lately, by bundling sets of features into feature
             | levels (x86_64-v1, etc), but that still quadruples the
             | amount of compiled code to be distributed, and newer
             | features still have to be selected at runtime.
        
               | jeffbee wrote:
               | I don't think you can really say it is "combinatorial"
               | because there's not a mainstream machine with AES-NI but
               | not, say, SSSE3. In any case if there were such a machine
               | you don't need to support it. The one guy with that box
               | can do scratch builds.
        
               | myself248 wrote:
               | Compiled _on_ and compiled _for_ are not the same. There
               | must be a way to go to the target machine, get some
               | complete dump of CPU features, copy that to the compile-
               | box, do the build, and copy the resulting binaries back.
        
               | derefr wrote:
               | > That means you either have to compile software locally
               | on each machine, or you have a combinatorial explosion of
               | possible features.
               | 
               | Or you just have to buy a lot of the exact same hardware.
               | Secure installations tend to do that.
        
               | afh1 wrote:
               | I have no issues compiling everything on my Gentoo box.
        
             | eklitzke wrote:
             | Obviously compiling for the target architecture is best,
             | but for most software (things like crypto libraries
             | excluded) 95% of the benefit of AVX2 is going to come from
             | things like vectorized memcpy/memcmp. Building glibc using
             | ifuncs to provide optimized implementations of these
             | routines gives most users most of the benefit of AVX2 (or
             | whatever other ISA extension) while still distributing
             | binaries that work on older CPU microarchitectures.
        
               | jeffbee wrote:
               | ifunc memcpy also makes short copies suck ass on those
               | platforms, since the dispatch cost dominates regardless
               | of the vectorization. It's an open question whether ifunc
               | helps or harms the performance of general use cases.
               | 
               | By "open question" I meant that there is compelling
               | research indicating that GNU memcpy/memcmp is
               | counterproductive, but the general Linux-using public did
               | not get the memo.
               | 
               | https://storage.googleapis.com/gweb-
               | research2023-media/pubto...
               | 
               | "AsmDB: Understanding and Mitigating Front-End Stalls in
               | Warehouse-Scale Computers" Section 4.4 "Memcmp and the
               | perils of micro-optimization"
        
               | wiml wrote:
               | On the other hand, it also means that your distro can
               | supply a microarchitecture-specific libc and every
               | program automatically gets the memcpy improvements.
               | (Well, except for the golang/rust people.)
               | 
               | Wasn't this the point of Gentoo, back in the day? It was
               | more about instruction scheduling and register allocation
               | differences, but your system would be built with
               | everything optimized for your uarch.
        
           | aaronmdjones wrote:
           | https://sourceware.org/glibc/wiki/GNU_IFUNC
        
         | canada_dry wrote:
         | Is there a way to easily/reliably disable ifunc globally on a
         | system (e.g. ubuntu/debian) without breaking a bunch of things?
         | 
         | FYI this looks for pkgs with liblzma:
         | 
         | > dpkg -l |grep liblzma
         | 
         | Versions >= 5.6 are compromised
        
         | selckin wrote:
         | https://github.com/google/oss-fuzz/pull/10667
        
         | travem wrote:
          | Interesting, I used https://ossinsight.io/analyze/JiaT75 to
          | identify contributions from the account used by the author of
          | the backdoor. It looks like the account made other potentially
          | problematic contributions to other projects.
          | 
          | The disabling of ifunc in this PR against Google's oss-fuzz
          | project may be one way they tried to prevent this particular
          | backdoor from being flagged by that tool?
         | https://github.com/google/oss-fuzz/pull/10667
        
           | mxmlnkn wrote:
           | There is a related issue for LLVM/clang by this person:
           | 
           | https://github.com/llvm/llvm-project/issues/63957
        
             | est wrote:
              | I am curious: why didn't this clever hacker create multiple
              | accounts instead of using only this one, "JiaT75"?
        
         | afh1 wrote:
         | Interestingly: https://github.com/tukaani-
         | project/xz/issues/62#issuecomment...
        
       | dlachausse wrote:
       | > openssh does not directly use liblzma. However debian and
       | several other distributions patch openssh to support systemd
       | notification, and libsystemd does depend on lzma.
       | 
       | It looks to be limited to Linux systems that are running certain
       | patches. macOS and BSD seem unaffected?
        
         | delphij wrote:
         | FreeBSD is not affected as the payloads in question were
         | stripped out, however we are looking into improvements to our
         | workflow to further improve the import process.
        
       | rasengan wrote:
       | > One portion of the backdoor is _solely in the distributed
        | tarballs_. For easier reference, here's a link to debian's
       | import of the tarball, but it is also present in the tarballs for
       | 5.6.0 and 5.6.1:
       | 
       | Ubuntu 22.04 version:
       | 
        |   dpkg -l |grep liblzma
        |   ii  liblzma5:amd64  5.2.5-2ubuntu1  amd64  XZ-format
        |       compression library
       | 
       | Whew!
        
       | yogorenapan wrote:
       | Very strange behavior from the upstream developers. Possible
       | government involvement? I have a feeling LANG is checked to
       | target servers from particular countries
        
         | acheong08 wrote:
         | One thing to note is that the person that added the commits
         | only started contributing around late 2022 and appears to have
         | a Chinese name. Might be required by law to plant the backdoor.
         | 
         | That would be quite scary considering they have contributed to
         | a wide variety of projects including C++
         | https://learn.microsoft.com/en-us/cpp/overview/whats-new-cpp...
        
           | yorwba wrote:
           | I don't think you need to worry about the C++ contribution:
           | https://github.com/MicrosoftDocs/cpp-
           | docs/commit/9a96311122a...
        
             | acdha wrote:
             | This does make me wonder how much they made a deliberate
             | effort to build an open source portfolio so they'd look
             | more legitimate when time came to mount an attack. It seems
             | expensive but it's probably not really much at the scale of
             | an intelligence agency.
        
               | bombcar wrote:
               | If I were doing it, I'd have a number of these "burner
               | committers" ready to go when needed.
               | 
               | If I were doing it AND amoral, I'd also be willing to
               | find and compromise committers in various ways.
        
               | kube-system wrote:
               | What's the salary for a software engineer in urban China?
               | 60-80k/yr USD? Two years of that salary is cheaper than a
               | good single shoulder fired missile. Seems like a pretty
               | cheap attack vector to me. A Javelin is a quarter million
               | per pop and they can only hit one target.
        
               | acdha wrote:
               | Yeah, exactly - when your army is measured in the
               | millions, picking n hundred with technical aptitude is
               | basically a rounding error.
        
               | alwayslikethis wrote:
               | They are paid much less than that. However, American
               | weapons are also far overpriced due to high labor costs,
               | among other things. The Chinese probably have cheaper
               | weapons.
        
             | hartator wrote:
              | Until you figure out that there are very subtle Unicode
              | changes in the URL that don't show up in a GitHub diff. :)
        
           | ComputerGuru wrote:
           | No one is being "required by law" to add vulnerabilities,
           | it's more likely they are foreign agents to begin with.
        
             | bombcar wrote:
             | Depends on the law and where you are. Publicly we have
             | https://www.eff.org/issues/national-security-letters/faq
             | and it's likely that other requests have occurred from time
             | to time, even in the USA.
        
             | computerfriend wrote:
             | > No one is being "required by law" to add vulnerabilities
             | 
             | This is absolutely not the case in many parts of the world.
        
           | 8organicbits wrote:
           | > appears to have a Chinese name
           | 
           | Given the complexity of the attack, I'd assume the name is
           | fake.
        
           | bdd8f1df777b wrote:
           | I would think a Chinese state hacker would not assume a
           | Chinese name, just in case he was discovered like now.
        
           | dotty- wrote:
           | The contribution to C++ is just a simple markdown change:
           | https://github.com/MicrosoftDocs/cpp-docs/pull/4716 C++ is
           | fine.
        
         | anarazel wrote:
         | LANG only needs to have _some_ value, the concrete value does
         | not seem to matter.
        
       | cf100clunk wrote:
        | I am *not* a security researcher, nor a reverse engineer.
        | There's lots of stuff I have not analyzed and most of what I
        | observed is purely from observation rather than exhaustively
        | analyzing the backdoor code.
       | 
       | I love this sort of technical writing from contributors outside
       | the mainstream debugging world who might be averse to sharing.
       | What an excellently summarized report of his findings that should
       | be seen as a template.
        
         | anarazel wrote:
         | FWIW, it felt intimidating as hell. And I'm fairly established
         | professionally. Not sure what I'd have done earlier in my
         | career (although I'd probably not have found it in the first
         | place).
        
           | internetter wrote:
           | > Not sure what I'd have done earlier in my career
           | 
           | To anybody in this sorta situation, you should absolutely
           | share whatever you have. It doesn't need to be perfect, good,
           | or 100% accurate, but if there's a risk you could help a lot
           | of people
        
           | RockRobotRock wrote:
           | I hope you've hired a PR person for all the interviews :)
        
           | aerhardt wrote:
           | This story is an incredible testament to how open-source
           | software can self-regulate against threats, and more broadly,
           | it reminds us that we all stand on the shoulders of
           | contributors like you. Thank you!
        
             | ddalex wrote:
             | This is one threat that was discovered, only because the
             | implementer was sloppy.
             | 
             | Think about what various corps and state-level actors have
             | been putting in there.
        
         | bonzini wrote:
         | For what it's worth the author is a PostgreSQL committer, he's
         | not a security researcher but he's a pretty damn good engineer!
        
         | vhiremath4 wrote:
         | Honestly, you only get this kind of humility when you're
         | working with absolute wizards on a consistent basis. That's how
         | I read that whole analysis. Absolutely fascinating.
        
       | agwa wrote:
       | > openssh does not directly use liblzma. However debian and
       | several other distributions patch openssh to support systemd
       | notification, and libsystemd does depend on lzma.
       | 
       | The systemd notification protocol could have been as simple as
       | just writing a newline to a pipe, but instead you have to link to
       | the libsystemd C library, so now security-critical daemons like
       | openssh have additional dependencies like liblzma loaded into
       | their address space (even if you don't use systemd as PID 1),
       | increasing the risks of supply chain attacks. Thanks, systemd.
        
         | capitainenemo wrote:
         | FWIW, I did a quick check on a Devuan system. The sshd in
         | Devuan does link to a libsystemd stub - this is to cut down on
         | their maintenance of upstream packages. However that stub does
         | not link to lzma.
        
           | cf100clunk wrote:
           | On an MX Linux (non-systemd Debian-derived distro) box I ran
           | ldd on /sbin/ssh and also ran:
           | 
           | [EDIT: this string gives cleaner results:]
           | lsof -w -P -T -p $(pgrep sshd)|grep mem
           | 
           | and saw liblzma in the results of both, so there is some sort
           | of similar trickery going on.
        
             | capitainenemo wrote:
             | Huh. That's rather surprising. Do you know how MX Linux
             | handles systemd? Devuan does that shimming of upstream. Do
             | they perhaps just try to leave out certain packages?
             | 
             | Anyway. I did not see lzma in the results on Devuan running
             | a process check (just in case). I did see it on a Debian.
        
               | cf100clunk wrote:
                | It turns out MX uses a package called systemd-shim that
                | seems to be the Debian one:
                | 
                |   $aptitude show systemd-shim
                |   Package: systemd-shim
                |   Version: 10-6
                |   State: installed
                |   Automatically installed: no
                |   Priority: extra
                |   Section: admin
                |   Maintainer: Debian QA Group <packages@qa.debian.org>
                |   Architecture: amd64
                |   Uncompressed Size: 82.9 k
                |   Depends: libc6 (>= 2.34), libglib2.0-0 (>= 2.39.4),
                |            cgmanager (>= 0.32)
                |   Suggests: pm-utils
                |   Conflicts: systemd-shim:i386
                |   Breaks: systemd (< 209), systemd:i386 (< 209)
                |   Description: shim for systemd
                |    This package emulates the systemd function that are
                |    required to run the systemd helpers without using the
                |    init service
        
         | delroth wrote:
         | > The systemd notification protocol could have been as simple
         | as just writing a newline to a pipe
         | 
         | It basically is. libsystemd links to liblzma for other features
         | not related to notifications.
         | 
         | (The protocol is that systemd passes the path to a unix socket
         | in the `NOTIFY_SOCKET` env variable, and the daemon writes
         | "READY=1" into it.)
        
           | agwa wrote:
           | Is that protocol documented/stable? For whatever reason,
           | daemons are choosing to link to libsystemd instead of
           | implementing it themselves.
           | 
           | It doesn't matter that libsystemd links to liblzma for other
           | reasons. It's still in the address space of any daemon that
           | is using libsystemd for the notification protocol.
        
             | wickberg wrote:
             | I know Golang has their own implementation of sd_notify().
             | 
             | For Slurm, I looked at what a PITA pulling libsystemd into
             | our autoconf tooling would be, stumbled on the Golang
             | implementation, and realized it's trivial to implement
             | directly.
        
               | tripflag wrote:
               | indeed; it should be trivial in any language. Here's
               | python: https://github.com/9001/copyparty/blob/a080759a03
               | ef5c0a6b06c...
        
               | cesarb wrote:
               | It should be trivial in any language _which has AF_UNIX_.
                | Last time I looked, Java didn't have it, so the only way
               | was to call into non-Java code.
        
               | reftel wrote:
               | Then I suggest you have another look =)
               | https://inside.java/2021/02/03/jep380-unix-domain-
               | sockets-ch...
        
               | fullstop wrote:
               | Under Limitations: Datagram support
        
               | reftel wrote:
               | It appears you are correct. What an odd limitation!
        
               | fullstop wrote:
               | At first I thought that this surely could not be true as
               | of today, but it looks like it is.
               | 
               | There is AF_UNIX support, but only for streams and not
               | datagrams: https://bugs.openjdk.org/browse/JDK-8297837
               | 
               | What an odd decision. I suppose that you could execute
               | systemd-notify but that's a solution that I would not
               | like.
        
               | cesarb wrote:
               | > I suppose that you could execute systemd-notify but
               | that's a solution that I would not like.
               | 
               | What I did was to use JNA to call sd_notify() in
               | libsystemd.so.0 (when that library exists), which works
               | but obviously does not avoid using libsystemd. I suppose
               | I could have done all the socket calls into glibc by
               | hand, but doing that single call into libsystemd directly
               | was simpler (and it can be expected to exist whenever
               | systemd is being used).
        
               | ptx wrote:
               | It looks like the FFI (Project Panama) finally landed in
               | Java 22, released a few days ago:
               | https://openjdk.org/jeps/454
               | 
               | Unless that feature also has some weird limitation, you
               | could probably use that to call the socket API in libc.
        
               | pkaye wrote:
               | Can me point me to the Golang implementation? Is it a
               | standard package?
        
               | yencabulator wrote:
               | Most likely https://github.com/coreos/go-systemd
        
               | KerrAvon wrote:
               | Caveat is that golang is not a good enough actor to be a
               | reliable indicator of whether this interface is
               | supported, though. They'll go to the metal because they
               | can, not because it's stable.
        
           | tonyg wrote:
           | Strange protocol. Why not pass a path to a _file_ that should
           | be `touch`d and /or written to, I wonder? Would avoid the
           | complexity of sockets.
        
             | Bu9818 wrote:
             | Services may be in a different mount namespace from systemd
             | for sandboxing or other reasons (also means you have to
             | worry about filesystem permissions I suppose). Passing an
             | fd from the parent (systemd) is a nice direct channel
             | between the processes
        
               | NewJazz wrote:
               | But systemd precisely doesn't pass an FD. If it did, you
               | would just need to write() and close().
        
               | Bu9818 wrote:
               | Yeah I was wrong about that, I confused it with socket-
               | activation passing. The systemd-side socket is available
               | from the process.
        
           | wiml wrote:
           | > libsystemd links to liblzma for other features not related
           | to notifications
           | 
           | Which is pretty emblematic of systemd's primary architectural
           | fault!
        
             | IAmNotACellist wrote:
             | systemd getting its tentacles everywhere they can squeeze
             | is a feature, not a bug
        
         | fullstop wrote:
         | Also thanks to Debian for modifying openssh.
        
           | cassianoleal wrote:
           | You're not wrong. Had Debian not patched it in this way, OP
           | might have never found it, leaving all other distros who do
           | the same vulnerable.
           | 
           | Note that OP found this in Debian sid as well, which means
           | it's highly unlikely this issue will find its way into any
           | Debian stable systems.
        
             | fullstop wrote:
             | Right, the systemd notification framework is very simple
             | and I've used it in my projects. I didn't even know that
             | libsystemd provided an implementation.
             | 
             | My Arch system was not vulnerable because openssh was not
             | linked to xz.
             | 
             | IMO every single commit from JiaT75 should be reviewed and
             | maybe even rolled back, as they have obliterated their
             | trust.
             | 
             | edit:
             | 
             | https://github.com/google/oss-fuzz/pull/10667
             | 
             | Even this might be nefarious.
        
               | gopher_space wrote:
               | > the systemd notification framework is very simple and
               | I've used it in my projects
               | 
               | Have you come across an outline or graph of systemd that
               | you really like, or maybe a good example of a minimal
               | setup?
        
             | SAI_Peregrinus wrote:
             | If they hadn't been modifying SSH their users would never
             | have been hit by this backdoor. Of course if it is actually
             | intended to target SSH on Debian systems, the attacker
             | would likely have picked a different dependency. But adding
             | dependencies like Debian did here means that those
             | dependencies aren't getting reviewed by the original
             | authors. For security-critical software like OpenSSH such
             | unaudited dependencies are prime targets for attacks like
             | this.
        
               | sitkack wrote:
               | It takes a village.
        
               | cassianoleal wrote:
               | My point was, this is not "Debian did a thing". Lots of
               | other distros do the same thing. In this particular case,
               | it was in fact fortunate for users of all these other
                | distros that Debian did it; otherwise this vulnerability
                | might never have been found!
               | 
               | Also, only users on sid (unstable) and maybe testing seem
               | to have been affected. I doubt there are many Debian
               | servers out there running sid.
               | 
               | Debian stable (bookworm) has xz-utils version 5.4.1:
               | https://packages.debian.org/bookworm/xz-utils
        
               | fullstop wrote:
               | > Debian stable (bookworm) has xz-utils version 5.4.1:
               | https://packages.debian.org/bookworm/xz-utils
               | 
               | Guess who released 5.4.1? JiaT75!
        
               | cassianoleal wrote:
               | 5.4.1 doesn't even have the `m4/build-to-host.m4` script
               | that pulls the backdoor's tarball.
               | 
               | https://salsa.debian.org/debian/xz-utils/-/tree/v5.4.1/m4
        
               | fullstop wrote:
               | Neither does https://salsa.debian.org/debian/xz-
               | utils/-/tree/v5.6.0/m4
               | 
               | The script was not present in the git tree, only in the
               | released archives.
               | 
               | I'm also suggesting that there could be more than one
               | exploit present. All of their commits should be rolled
               | back, none of it can be trusted.
        
               | sroussey wrote:
               | Not just commits, but all tarballs released with his key.
        
               | cassianoleal wrote:
               | > The script was not present in the git tree, only in the
               | released archives.
               | 
               | I confess I couldn't quite figure out the branching and
               | tagging strategy on that repo. Very weird stuff. That
               | script seems to have been added by Sebastian Andrzej
               | Siewior just ahead of the 5.6.0 release. It's definitely
               | present in the Debian git tree, and probably in many
               | other distros since others seem to be affected.
               | 
               | The commit where the script was added to Debian is tagged
               | `upstream/v5.6.0` despite the script itself not being
               | present on that tag upstream: https://github.com/tukaani-
               | project/xz/tree/v5.6.0/m4
               | 
               | > I'm also suggesting that there could be more than one
               | exploit present. All of their commits should be rolled
               | back, none of it can be trusted.
               | 
               | I agree.
        
               | seba_dos1 wrote:
               | > I confess I couldn't quite figure out the branching and
               | tagging strategy on that repo.
               | 
                | It's just a regular Debian packaging repository, which
                | includes imports of upstream tarballs - nothing out of
                | the ordinary there. Debian packaging is based on
                | tarballs, not on git repos (although in the absence of
                | upstream tarballs, the Debian maintainer may create a
                | tarball out of the VCS repo themselves).
               | 
               | The linked repo just happens to include some tags from
               | upstream repo, but those tags are irrelevant to the
               | packaging. Only "debian/*" and "upstream/*" tags are
               | relevant. Upstream VCS history is only imported for the
               | convenience of the packager, it doesn't have to be there.
               | 
               | Debian's git repositories don't have any forced layout
               | (they don't even have to exist or be up-to-date, the
               | Debian Archive is the only source of truth - note how
               | this repo doesn't contain the latest version of the
               | package), but in practice most of them follow the
               | conventions of DEP-14 implemented by gbp (in this
               | particular case, it looks like `gbp import-orig
               | --upstream-vcs-tag`: https://wiki.debian.org/PackagingWit
               | hGit#Upstream_import_met...).
        
               | cassianoleal wrote:
               | Thanks for the explanation, very helpful!
        
               | saalweachter wrote:
               | I would phrase it as "It's good we have a heterogenous
               | open-source community".
               | 
               | Monocrops are more vulnerable to disease because the same
               | (biological) exploit works on the entire population. In
               | our Linux biosphere where there are dozens of major,
               | varied configurations sharing parts but not all of their
               | code (and hundreds or thousands of minor variations), a
                | given exploit is likely to fail _somewhere_, and that
               | failure is likely to create a bug that someone can
               | notice.
               | 
               | It's not foolproof, but it helps keep the ecosystem
               | healthy.
        
         | Jasper_ wrote:
         | That is all the protocol is. From https://www.freedesktop.org/s
         | oftware/systemd/man/latest/sd_n...:
         | 
         | > These functions send a single datagram with the state string
         | as payload to the socket referenced in the $NOTIFY_SOCKET
         | environment variable.
         | 
          | The simplest implementation (pseudocode, no error handling, not
          | guaranteed to compile) is something like:
          | 
          |   /* needs <stdlib.h>, <string.h>, <unistd.h>,
          |      <sys/socket.h>, <sys/un.h> */
          |   const char *addrstr = getenv("NOTIFY_SOCKET");
          |   if (addrstr) {
          |       int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
          |       struct sockaddr_un addr = { .sun_family = AF_UNIX };
          |       strncpy(addr.sun_path, addrstr,
          |               sizeof(addr.sun_path) - 1);
          |       connect(fd, (struct sockaddr *) &addr, sizeof(addr));
          |       write(fd, "READY=1", 7);
          |       close(fd);
          |   }
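          | 
          | On the unit side this corresponds to a service declared with
          | Type=notify (a minimal sketch; the daemon path is made up).
          | systemd then exports $NOTIFY_SOCKET to the service and waits
          | for READY=1 before treating it as started:
          | 
          |   [Service]
          |   Type=notify
          |   ExecStart=/usr/local/bin/mydaemon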
        
           | throwaway71271 wrote:
           | goddamnit leftpad got us too :)
        
           | iforgotpassword wrote:
           | This is what I did for a daemon I'm maintaining. Type=notify
           | support was requested but I'm really allergic to adding new
           | libs to a project until they really do some heavy lifting and
           | add enough value. I was pleasantly surprised the protocol was
           | that simple and implemented it myself. I think systemd should
           | just provide a simple standalone reference implementation and
           | encourage people to copy it into their project directly. (But
           | maybe they already do, I did that almost a decade ago IIRC
           | when the feature was relatively new.)
        
           | Repulsion9513 wrote:
           | Whoops, you forgot `vsock:`, `@`, `SO_PASSCRED` (I think)...
           | oh and where is that example provided? But yep that's all the
           | protocol is for sure (and forever)!
        
         | bbarnett wrote:
          | One of the objections that many people do not understand is
         | that systemd adds complexity. Unnecessary complexity. Boats
         | full, loads full, mountains full of complexity.
         | 
         | Yes, there are things delivered with that complexity. However,
         | as an example, sysvinit is maybe, oh, 20k lines of code
         | including binaries, heck including all core init scripts.
         | 
         | What's systemd? 2M lines? It was >1M lines 4+ years ago.
         | 
         | For an init system, a thing that is to be the core of
         | stability, security, and most importantly glacial, stable
          | change -- that is absurdly complex. It's exceedingly
          | overengineered.
          | 
          | And so you get cases like this. And cases like that, and that
          | over there, and that case over there too. All of which could
          | not exist if systemd didn't try to overengineer and
          | overcomplicate everything.
         | 
         | Ah well. I'm still waiting for someone to basically fork
         | systemd, remove all the fluff (udev, ntp, dns, timers, restart
         | code, specialized logging, on and on and on), and just end up
         | with systemd compatible service files.
         | 
         | But not yet. So... well, oh well.
        
           | gavinhoward wrote:
           | I have a design in the works to do just this.
           | 
           | The problem? It's on the backburner because I don't think I
           | could find a business model to make money from it.
           | 
           | I don't think offering support for a price would work, for
           | example.
        
             | adr1an wrote:
             | What about sponsors? Actually, now I have the idea of a
             | platform similar to Kickstarter but for software
             | development, and with just sponsors. It wouldn't work,
             | sure... Except in some cases. Like when things like this
             | happen...
        
               | gavinhoward wrote:
               | Sponsors are fickle, unfortunately, and they tend to
               | remove "donations" when money gets tight.
               | 
               | If I am considered a full vendor, though, and a vendor
               | for a critical piece of software, they might keep me
               | around.
        
             | iforgotpassword wrote:
             | What's the point of your implementation? systemd is totally
             | modular, you can use just the init system without networkd,
             | timesyncd, resolved, nspawn, whatever else I forgot about.
             | 
             | If you want you can just use systemd as PID1 for service
             | management and enjoy a sane way to define and manage
             | services - and do everything in archaic ways like 20 years
             | ago.
        
               | gavinhoward wrote:
               | There are two points to the implementation:
               | 
               | * Choice. If I have a separate implementation, my users
               | do not have to be subject to systemd's choices. And I do
               | not either.
               | 
               | * The same implementation will have the same bugs, so in
               | the same way that redundant software has multiple
               | independent implementations, having an independent
               | implementation will avoid the same bugs. It may have
               | different bugs, sure, but my goal would be to test like
               | SQLite and achieve DO-178C certification. Or as close as
               | I could, anyway.
        
               | iforgotpassword wrote:
               | I'd assume chances of monetizing this are incredibly low.
               | There already is an init system that understands systemd
               | unit files, the name escapes my mind unfortunately.
               | DO-178C might be a selling point literally, but whether
               | there's enough potential customers for ROI is
               | questionable.
        
               | gavinhoward wrote:
               | I unfortunately agree with you. Hence why it's on the
               | backburner.
        
               | JdeBP wrote:
               | Ahem!
               | 
               | * https://jdebp.uk/Softwares/nosh/guide/converting-
               | systemd-uni...
        
               | Repulsion9513 wrote:
               | No, you can't. Systemd might be somewhat modular; the
               | things distros ship which depend on it are not.
        
               | iforgotpassword wrote:
                | Well, some distros might force more components upon you,
                | but that's hardly systemd's fault. Same if some software
               | decides to make use of another component of systemd -
               | then that's their choice, but also there are
               | alternatives. The only thing that comes to mind right now
               | would be something like GNOME which requires logind, but
               | all other "typical" software only wants systemd-the-init-
               | system if anything. You can run Debian just fine with
               | just systemd as an init system and nothing else.
        
           | jorvi wrote:
           | > Ah well. I'm still waiting for someone to basically fork
           | systemd, remove all the fluff (udev, ntp, dns, timers,
           | restart code, specialized logging, on and on and on)
           | 
           | Most of the things you named there are modular and can be
           | easily disabled.
           | 
           | Furthermore, udev precedes systemd and systemd has in fact
           | its own replacement for it (though the name escapes me).
           | 
            | Kind of a classic: people love harping on systemd without
            | properly understanding it.
        
             | johnny22 wrote:
             | systemd subsumed udev. Eudev is what folks who don't have
             | systemd use.
        
             | nottorp wrote:
             | > are modular and can be easily disabled.
             | 
             | That's a common defense for any bloatware. If they're
             | modular and easily disabled then why are they all enabled
             | by default?
        
           | matheusmoreira wrote:
            | Systemd is actually pretty damn good _and_ it's GPL-licensed
            | free software.
           | 
           | I understand that people don't like the way it seems to work
           | itself into the rest of Linux user space as a dependency but
           | that's actually our own fault for not investing the man power
           | that Red Hat invests. We have better things to do than make
           | our own Linux user space and so they have occupied that
           | niche. It's free software though, we always have the freedom
           | to do whatever we want.
           | 
           | By the way, all the stuff you mentioned is not really part of
           | the actual init system, namely PID 1. There's an actual
           | service manager for example and it's entirely separate from
           | init. It manages services really well too, it's measurably
           | better than all that "portable" nonsense just by virtue of
           | using cgroups to manage processes which means it can actually
           | supervise poorly written double forking daemons.
        
             | pessimizer wrote:
             | People are complaining that it's too big, labyrinthine, and
             | arcane to audit, not that it doesn't work. They would
             | prefer other things that work, but don't share those
             | characteristics.
             | 
             | Also, the more extensive the remit (of this init), the more
             | complexly interconnected the interactions between the
             | components; the fewer people understand the architecture,
             | the fewer people understand the code, the fewer people read
             | the code. This creates a situation where the codebase is
             | getting larger and larger at a rate faster than the growth
             | of the number of man-hours being put into reading it.
             | 
             | This has to make it easier for people who are systemd
             | specialists to put in (intentionally or unintentionally)
             | backdoors and exploitable bugs that will last for years.
             | 
             | People keep defending systemd by talking about its UI and
             | its features, but that completely misses the point. If
             | systemd were replaced by something comprehensible and less
             | internally codependent, _even if_ the systemd UI and
             | features were preserved, most systemd complainers would be
             | over the moon with happiness. Red Hat invests too much into
             | completely replacing linux subsystems, they should take a
             | break. Maybe fix the bugs in MATE.
        
               | dralley wrote:
               | >the more complexly interconnected the interactions
               | between the components
               | 
               | This is a bit of a rich criticism of systemd, given the
               | init scripts it replaced.
               | 
               | > Red Hat invests too much into completely replacing
               | linux subsystems, they should take a break. Maybe fix the
               | bugs in MATE.
               | 
               | MATE isn't a Red Hat project. And nobody complains about
               | Pipewire.
        
               | Repulsion9513 wrote:
               | A shell script with a few defined arguments is not a
               | complexly interconnected set of components. It's
               | literally the simplest, most core, least-strongly-
                | dependent interconnection that exists in a *nix system.
               | 
               | Tell us you never bothered to understand how init worked
               | before drawing a conclusion on it without telling us.
        
               | dralley wrote:
               | Have you ever seen the init scripts of a reasonably-
               | complex service that required other services to be
               | online?
        
               | _factor wrote:
               | Let's not get started on how large the kernel is. Large
               | code bases increase attack surface, period. The only
               | sensible solution is to micro service out the pieces and
               | only install the bare essentials. Why does the an x86
               | server come with Bluetooth drivers baked in?
               | 
               | The kernel devs are wasting time writing one offs for
               | every vendor known to man, and it ships to desktops too.
        
               | matheusmoreira wrote:
               | > Red Hat invests too much into completely replacing
               | linux subsystems, they should take a break.
               | 
               | They should do whatever they feel is best for them, as
               | should we. They're releasing free as in freedom GPL Linux
               | software, high quality software at that. Thus I have no
               | moral objections to their activities.
               | 
               | You have to realize that this is really a symptom of
               | _others_ not putting in the required time and effort to
               | produce a better alternative. I know because I reinvent
               | things regularly just because I enjoy it. People
               | underestimate by many orders of magnitude the effort
               | required to make something like this.
               | 
               | So I'm really thankful that I got systemd, despite many
               | valid criticisms. It's a pretty good system, and it's not
               | proprietary nonsense. I've learned to appreciate it.
        
             | ongy wrote:
             | How is the service manager different from PID1/init?
        
               | matheusmoreira wrote:
               | They are completely different things.
               | 
                | Init is just a more or less normal program that Linux starts
               | by default and by convention. You can make it boot
               | straight into bash if you want. I created a little
               | programming language with the ultimate goal of booting
               | Linux directly into it and bringing up the entire system
               | from inside it.
               | 
               | It's just a normal process really. Two special cases that
               | I can think of: no default signal handling, and it can't
               | ever exit. Init will not get interrupted by signals
               | unless it explicitly configures the signal dispositions,
               | even SIGKILL will not kill it. Linux will panic if PID 1
               | ever exits so it can't do that.
               | 
               | Traditionally, it's also the orphaned child process
               | reaper. Process descriptors and their IDs hang around in
               | memory until something calls wait on them. Parent
               | processes are supposed to do that but if they don't it's
               | up to init to do it. Well, that's the way it works
               | traditionally on Unix. On Linux though that's
               | customizable with prctl and PR_SET_CHILD_SUBREAPER so you
               | actually can factor that out to a separate process. As
               | far as I know, systemd does just that, making it _more_
               | modular and straight up _better_ than traditional Unix,
                | simply because this separate process won't make Linux
               | panic if it ever crashes.
               | 
               | As for the service manager, this page explains process
               | and service management extremely well:
               | 
               | https://mywiki.wooledge.org/ProcessManagement
               | 
               | Systemd does it right. It does everything that's
               | described in there, does it correctly, uses powerful
               | Linux features like cgroups for even better process
               | management and also solves the double forking problem
               | described in there. It's essentially a solved problem
               | with systemd. Even the people who hate it love the unit
               | files it uses and for good reason.
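                | 
                | To illustrate the PR_SET_CHILD_SUBREAPER point above (a
                | rough sketch of the mechanism, not how systemd itself is
                | structured), a non-PID-1 process can volunteer to reap
                | its orphaned descendants like this:
                | 
                |   #include <sys/prctl.h>
                |   #include <sys/wait.h>
                |   #include <stdio.h>
                | 
                |   int main(void)
                |   {
                |       /* Orphans among our descendants get reparented
                |          to us instead of to PID 1. */
                |       if (prctl(PR_SET_CHILD_SUBREAPER, 1) != 0)
                |           perror("prctl");
                | 
                |       /* ... fork and supervise services here ... */
                | 
                |       /* Reap whatever is reparented to us so the
                |          process table doesn't fill with zombies. */
                |       int status;
                |       pid_t pid;
                |       while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
                |           printf("reaped %d\n", (int) pid);
                |       return 0;
                |   }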
        
               | ongy wrote:
                | I know the differences between them conceptually.
               | 
               | The thing that people usually complain about is systemd
               | forcibly setting its process manager at pid=1. I.e. the
               | thing "discussed" in
               | https://github.com/systemd/systemd/issues/12843
               | 
               | There is a secondary feature to run per-user managers,
                | though I'm unsure whether it runs without systemd as
                | PID 1. It might only rely on logind.
        
               | matheusmoreira wrote:
               | Wow, I remember reading that PID != 1 line years ago. Had
               | no idea they changed it. I stand corrected then. Given
               | the existence of user service managers as well as flags
               | like --system and --user, I inferred that they were all
               | entirely separate processes.
               | 
               | Makes no sense to me why the service manager part would
               | require running as PID 1. The maintainer just says this:
               | 
               | > PID 1 is very different from other processes, and we
               | rely on that.
               | 
               | He doesn't really elaborate on the matter though.
               | 
               | Every time this topic comes up I end up searching for
               | those so called PID 1 differences. I come up short every
               | time aside from the two things I mentioned above. Is this
               | information buried deep somewhere?
               | 
               | Just asked ChatGPT about PID 1 differences. It gave me
               | the aforementioned two differences, completely dismissed
               | Linux's prctl child subreaper feature "because PID 1
               | often assumes this role in practice" as well as some
               | total bullshit about process group leaders and regular
               | processes not being special enough to interact with the
               | kernel which is just absolute nonsense.
               | 
               | So I really have no idea what it is about PID 1 that
               | systemd is supposedly relying on that makes it impossible
               | to split off the service manager from it. Everything I
               | have read up until now suggests that it is not required,
               | _especially_ on Linux where you have even more control
                | and it's not like systemd is shy about using Linux
               | exclusive features.
        
             | Repulsion9513 wrote:
             | > By the way, all the stuff you mentioned is not really
             | part of the actual init system, namely PID 1
             | 
             | Except it literally is. I once had a systemd system
             | suddenly refuse to boot (kernel panic because PID1 crashed
             | or so) after a Debian upgrade, which I was able to resolve
             | by... wait for it... making /etc/localtime not be a
             | symlink.
             | 
             | Why does a failure doing something with the timezone make
             | you unable to boot your system? What is it even doing with
             | the timezone? What is failing about it? Who knows, good
             | luck strace'ing PID1!
        
               | matheusmoreira wrote:
               | Turns out you're right and my knowledge was outdated. I
               | seriously believed the systemd service manager was
               | separate from its PID 1 but at some point they even
               | changed the manuals to say that's not supported.
               | 
               | I was also corrected further down in the thread, with
               | citations from the maintainers even:
               | 
               | https://news.ycombinator.com/item?id=39871735
               | 
               | As it stands I really have no idea why the service
               | manager has not been split off from PID 1. Maintainer
               | said that PID 1 was "different" but didn't really
               | elaborate. Can't find much reliable information about
               | said differences either. Do you know?
        
               | Repulsion9513 wrote:
               | I have no idea, lol. Maybe the signal handling behavior?
               | You can't signal PID1 (unless the process has installed
               | its own signal handler for that signal). Even SIGKILL
               | won't usually work.
               | 
               | That's my entire problem with systemd though: despite the
               | averred modularity, it combines far too many concerns for
               | anyone to understand how or why it works the way it does.
        
               | matheusmoreira wrote:
               | Yeah the signal handling thing is true, PID 1 is the only
               | process that can handle or mask SIGKILL, maybe even
               | SIGSTOP. The systemd manual documents its handling of a
               | ton of signals but there's nothing in there about either
               | of those otherwise unmaskable signals. So I don't really
               | see how systemd is "relying" on anything. It's not
               | handling SIGKILL, is it?
               | 
               | The other difference is PID 1 can't exit because Linux
               | panics if it does. That's actually an argument for moving
               | functionality out of PID 1.
               | 
               | There are other service managers out there which work
               | outside PID 1. Systemd itself literally spawns non-PID 1
               | instances of itself to handle the user services. I
               | suppose only the maintainers can tell us why they did it
               | that way.
               | 
               | Maybe they _are_ relying on the fact PID 1 traditionally
               | reaps zombies even though Linux has a prctl for that:
               | 
               | https://www.man7.org/linux/man-pages/man2/prctl.2.html
               | PR_SET_CHILD_SUBREAPER
               | 
               | What if the issue is just that nobody's bothered to write
               | the code to move the zombie process reaping to a separate
               | process yet? Would they accept patches in that case?
               | 
               | Ludicrously, that manual page straight up says systemd
               | uses this system call to set itself up as the reaper of
               | zombie processes:
               | 
               | > Some init(1) frameworks (e.g., systemd(1)) employ a
               | subreaper process
               | 
               | If that's true then I really have no idea what the hell
               | it is about PID 1 that they're relying on.
               | 
               | Edit: just checked the source code and it's actually
               | true.
               | 
               | https://github.com/systemd/systemd/blob/main/src/core/mai
               | n.c...
               | 
               | https://github.com/systemd/systemd/blob/main/src/basic/pr
               | oce...
               | 
               | https://github.com/systemd/systemd/blob/main/src/basic/pr
               | oce...
               | 
               | So they're not relying on the special signals handling
               | and they even have special support for non-PID 1 child
               | subreapers. Makes no sense to me. Why can't they just
               | drop those PID == 1 checks and make a simpler PID 1
               | program that just spawns the real systemd service
               | manager?
               | 
               | Edit: they _already_ have a simple PID 1 in the code
               | base!
               | 
               | https://github.com/systemd/systemd/blob/main/src/nspawn/n
               | spa...
               | 
               | It's only being used inside namespaces though! Why? No
               | idea.
        
           | dralley wrote:
           | This is a bit like complaining that the Linux kernel has 30
           | million lines of code, while ignoring that 3/4 of that is in
           | hardware support (drivers) or filesystems that nobody is
           | actually required to use at any given time.
           | 
           | systemd is a collection of tools, one of which is an init
           | system. Nobody accused GNU yes of being bloated just because
           | it's in a repository alongside 50 other tools.
        
             | msm_ wrote:
              | GNU yes is actually pretty bloated. It's 130 lines of code
              | for something so trivial [1]! ;)
             | 
             | [1] https://github.com/coreutils/coreutils/blob/master/src/
             | yes.c
        
               | pixelbeat wrote:
               | yes(1) is the standard unix way of generating repeated
               | data. It's good to do this as quickly as possible. I
               | really don't understand why so many get annoyed with this
               | code. 130 lines isn't that complicated in the scheme of
               | things.
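                | 
                | (To illustrate, a rough sketch of the difference; this
                | is not the coreutils source, just the idea that most of
                | the extra lines buy throughput and error handling:)
                | 
                |   #include <string.h>
                |   #include <unistd.h>
                | 
                |   int main(void)
                |   {
                |       /* Naive version: for (;;) puts("y"); does one
                |          tiny write per line. Filling a large buffer
                |          once and writing it repeatedly is far faster,
                |          which is most of what GNU yes spends its extra
                |          lines on. */
                |       static char buf[64 * 1024];
                |       for (size_t i = 0; i + 2 <= sizeof buf; i += 2)
                |           memcpy(buf + i, "y\n", 2);
                | 
                |       for (;;)
                |           if (write(STDOUT_FILENO, buf, sizeof buf) < 0)
                |               return 1;
                |   }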
        
             | Repulsion9513 wrote:
             | > that nobody is actually required to use at any given time
             | 
             | But that's the very problem with systemd! As time goes on
              | you're _required_, whether by systemd itself or by the
             | ecosystem around it, to use more and more of it, until it's
             | doing not only service management but also timezones, RTC,
             | DNS resolution, providing getpwent/getgrent, inetd, VMs and
             | containers, bootloader, udev (without adding literally any
             | benefit over the existing implementations), ... oh and you
             | also have to add significant complexity in other things
             | (like the kernel!) to use it, like namespaces (which have
             | been a frequent source of vulnerabilities)...
        
               | SahAssar wrote:
               | > timezones, RTC, DNS resolution, providing
               | getpwent/getgrent, inetd, VMs and containers, bootloader
               | 
                | How many of those are you _actually required_ to use
                | systemd for? At least for DNS, inetd, containers and
                | the bootloader, I'm pretty sure I run a few different
                | alternatives across my systems. I think major distros
                | (running systemd) still ship with different DNS and
                | inetd implementations, and for containers it's a lot
                | more common to use something docker-like (probably
                | docker or podman) than it is to use systemd-nspawn.
               | 
               | > oh and you also have to add significant complexity in
               | other things (like the kernel!) to use it, like
               | namespaces (which have been a frequent source of
               | vulnerabilities)
               | 
               | Namespaces were implemented before systemd, have been
               | used before systemd in widely used systems (for example
               | LXC and many others). Namespaces and similar kernel
               | features are not tied to systemd.
        
               | Repulsion9513 wrote:
               | > How many of those are you actually required to use
               | systemd for?
               | 
               | That depends on what other software you want to run,
               | because systemd's design heavily encourages other things
               | (distros, libraries, applications) to take dependencies
               | on various bits. See also: every mainstream distro.
               | 
               | > Namespaces were implemented before systemd, have been
               | used before systemd in widely used systems (for example
               | LXC and many others). Namespaces and similar kernel
               | features are not tied to systemd.
               | 
               | Didn't say they were. But I don't have to use LXC or many
               | others in order to use the most popular distros and
               | applications.
               | 
               | I do have to use systemd for that, though.
               | 
               | Which means I have to have namespaces enabled.
        
               | Bu9818 wrote:
               | >namespaces (which have been a frequent source of
               | vulnerabilities)...
               | 
               | Unprivileged user namespaces sure, but I don't think that
               | applies to namespaces in general (which without
               | unprivileged user namespaces can only be created by root,
               | and LPE is the concern with unprivileged userns due to
               | increased attack surface). systemd doesn't need
               | unprivileged userns to run.
        
           | bananapub wrote:
           | > One of the objections that many people do not understand,
           | is that systemd adds complexity. Unnecessary complexity.
           | Boats full, loads full, mountains full of complexity.
           | 
           | this is and always has been such a dumb take.
           | 
           | if you'd like to implement an init (and friends) system that
           | doesn't have "unnecessary complexity" and still provides all
           | the functionality that people currently want, then go and do
           | so and show us? otherwise it's just whinging about things not
           | being like the terrible old days of init being a mass of
           | buggy and racey shell scripts.
        
             | simoncion wrote:
             | > about things not being like the terrible old days of init
             | being a mass of buggy and racey shell scripts.
             | 
             | Zero of the major distros used System V init by default.
             | Probably only distros like Slackware or Linux From Scratch
             | even suggested it.
             | 
             | It's unfortunate that so many folks uncritically swallowed
             | the Systemd Cabal's claims about how they were the first to
             | do this, that, or the other.
             | 
             | (It's also darkly amusing to note that every service that
             | has nontrivial pre-start or post-start configuration and/or
             | verification requirements ends up using systemd to run at
             | least one shell script... which is what would have often
             | been inlined into their init script in other init systems.)
        
               | bananapub wrote:
               | > Zero of the major distros used System V init by
               | default. Probably only distros like Slackware or Linux
               | From Scratch even suggested it.
               | 
               | I have absolutely no idea what you're trying to claim.
               | 
               | Are you suggesting that Debian's "sysvinit" package
               | wasn't a System V init system? That the years I spent
               | editing shell scripts in /etc/init.d/ wasn't System V
               | init?
               | 
               | or are you making some pointless distinction about it not
               | actually being pre-lawsuit AT&T files so it doesn't count
               | or something?
               | 
               | or did you not use Linux before 2010?
               | 
               | if you have some important point to make, please make it
               | more clearly.
               | 
               | > It's unfortunate that so many folks uncritically
               | swallowed the Systemd Cabal's claims about how they were
               | the first to do this, that, or the other.
               | 
               | I feel like you have very strong emotions about init
               | systems that have nothing to do with the comment you're
               | replying to.
        
             | Repulsion9513 wrote:
             | There were plenty of those that existed even before
             | systemd. Systemd's adoption was not a result of providing
             | the functionality that people want but rather was a result
             | of providing functionality that a few important people
             | wanted and promptly took hard dependencies on.
        
           | marcosdumay wrote:
           | As long as Gnome requires bug-compatibility with systemd,
           | nobody will rewrite it.
        
           | quotemstr wrote:
           | > One of the objections that many people do not understand,
           | is that systemd adds complexity. Unnecessary complexity.
           | Boats full, loads full, mountains full of complexity.
           | 
           | Complexity that would otherwise be distributed to a sea of
           | ad-hoc shell scripts? systemd is a win
        
             | Repulsion9513 wrote:
             | The init-scripts that predated systemd were actually pretty
             | damn simple. So was init itself.
        
         | bennyhill wrote:
         | > so now security-critical daemons like openssh have additional
         | dependencies like liblzma
         | 
         | Systemd itself seems security-critical to me. Would removing
         | other dependencies on libsystemd really make a secure system
         | where systemd was compromised through its library?
        
           | agwa wrote:
           | 1. systemd (at least the PID 1 part) does not talk to the
           | network, so a remotely-accessible backdoor would need to be
           | more complex (and thus more likely to be detected) than a
           | backdoor that can be loaded into a listening daemon like
           | openssh.
           | 
           | 2. You can run Debian systems without systemd as PID 1, but
           | you're still stuck with libsystemd because so many daemons
           | now link with it.
        
             | capitainenemo wrote:
             | .. well, you can use a shim package as devuan did.
        
             | chasil wrote:
             | > systemd... does not talk to the network...
             | 
             | Socket activation and the NFS automounter appear to.
             | 
             | If I run "netstat -ap" I see pid 1 listening on enabled
             | units.
             | 
             | Edit: tinysshd is specifically launched this way.
             | 
             | Edit2: there is also substantial criticism of xz on
             | technical grounds.
             | 
             | https://www.nongnu.org/lzip/xz_inadequate.html
        
         | poettering wrote:
         | Uh. systemd documents the protocol at various places and the
         | protocol is trivial: a single text datagram sent to am AF_UNIX
         | socket whose path you get via the NOTIFY_SOCKET. That's trivial
         | to implement for any one with some basic unix programming
         | knowledge. And i tell pretty much anyone who wants to listen
         | that they should just implement the proto on their own if thats
         | rhe only reason for a libsystemd dep otherwise. In particular
         | non-C environments really should do their own native impl and
         | not botjer wrapping libsystemd just for this.
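          | 
          | (For illustration, a minimal sketch of the protocol in C; this
          | is not the reference code and it skips some corner cases of
          | the spec:)
          | 
          |   #include <errno.h>
          |   #include <stddef.h>
          |   #include <stdlib.h>
          |   #include <string.h>
          |   #include <sys/socket.h>
          |   #include <sys/types.h>
          |   #include <sys/un.h>
          |   #include <unistd.h>
          | 
          |   /* Send a notification, e.g. notify("READY=1"), without
          |      linking against libsystemd. */
          |   static int notify(const char *msg)
          |   {
          |       const char *path = getenv("NOTIFY_SOCKET");
          |       if (!path || !*path)
          |           return 0;  /* no notify-aware manager around */
          | 
          |       struct sockaddr_un sa = { .sun_family = AF_UNIX };
          |       size_t len = strlen(path);
          |       if (len >= sizeof(sa.sun_path))
          |           return -EINVAL;
          |       memcpy(sa.sun_path, path, len);
          | 
          |       socklen_t salen = offsetof(struct sockaddr_un, sun_path) + len;
          |       if (sa.sun_path[0] == '@')
          |           sa.sun_path[0] = '\0';  /* abstract namespace socket */
          |       else
          |           salen += 1;  /* include the terminating NUL */
          | 
          |       int fd = socket(AF_UNIX, SOCK_DGRAM | SOCK_CLOEXEC, 0);
          |       if (fd < 0)
          |           return -errno;
          | 
          |       ssize_t n = sendto(fd, msg, strlen(msg), 0,
          |                          (struct sockaddr *)&sa, salen);
          |       close(fd);
          |       return n < 0 ? -errno : 0;
          |   }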
         | 
         | But let me stress two other things:
         | 
         | Libselinux pulls in liblzma too and gets linked into _tons_
         | more programs than libsystemd. And will end up in sshd too (at
         | the very least via libpam /pam_selinux). And most of the really
         | big distros tend do support selinux at least to some level.
         | Hence systemd or not, sshd remains vulnerable by this specific
         | attack.
         | 
          | With that in mind, libsystemd git has actually dropped the
          | dep on liblzma: all compressors are now dlopen deps and thus
          | only pulled in when needed.
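          | 
          | (Very roughly, the dlopen approach looks like this; a sketch
          | only, the real code adds caching, logging and error handling:)
          | 
          |   #include <dlfcn.h>
          |   #include <lzma.h>  /* types/prototypes only, no link dep */
          | 
          |   /* Resolve liblzma lazily, the first time xz decompression
          |      is actually requested, instead of having the dynamic
          |      linker pull it in at process startup. */
          |   static lzma_ret (*p_lzma_code)(lzma_stream *, lzma_action);
          | 
          |   static int ensure_lzma(void)
          |   {
          |       if (p_lzma_code)
          |           return 0;
          | 
          |       void *dl = dlopen("liblzma.so.5", RTLD_NOW);
          |       if (!dl)
          |           return -1;  /* xz support simply unavailable */
          | 
          |       p_lzma_code = (lzma_ret (*)(lzma_stream *, lzma_action))
          |               dlsym(dl, "lzma_code");
          |       return p_lzma_code ? 0 : -1;
          |   }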
        
           | o11c wrote:
           | Deferring the load of the library often just makes things
           | harder to analyze, not necessarily more secure. I imagine
           | many of the comments quoting `ldd` are wrongly forgetting
           | about `dlopen`.
           | 
           | (I really wish there were a way to link such that the library
           | isn't actually loaded but it still shows in the metadata, so
           | you can get the performance benefits of doing less work but
           | can still analyze the dependency DAG easily)
        
             | poettering wrote:
             | It would make things more secure in this specific
             | backdooring case, since sshd only calls a single function
             | of libsystemd (sd_notify) and that one would not trigger
             | the dlopen of liblzma, hence the specific path chosen by
             | the backdoor would not work (unless libselinux fucks it up
             | fter all, see other comments)
             | 
             | Dlopen has drawbacks but also major benefits. We decided
             | the benefits relatively clearly outweigh the drawbacks, but
             | of course people may disagree.
             | 
              | I have proposed a mechanism before that would expose the
              | list of libs we potentially load via dlopen in an ELF
              | section or ELF note. This could be consumed by things such
              | as package managers (for auto-dep generation) and ldd.
              | However, there was no interest in getting this landed from
              | anyone else, so I dropped it.
             | 
              | Note that there are various cases where people use dlopen
              | not on hardcoded lib names, but dynamically configured
              | ones, where this would not help. I.e. things like glibc nss
              | or pam or anything else plugin based. But in particular pam
              | kinda matters since that tends to be loaded into almost any
              | kind of security relevant software, including sshd.
        
               | o11c wrote:
                | The plugin-based case can be covered by the notion of
               | multiple "entry points": every library that is intended
               | to be `dlopen`ed is tagged with the name of the interface
               | it provides, and every library that does such `dlopen`ing
               | mentions the names of such interfaces rather than the
               | names of libraries directly. Of course your `ldd` tool
               | has to scan _all_ the libraries on the system to know
               | what might be loaded, but `ldconfig` already does that
               | for libraries not in a private directory.
               | 
               | This might sound like a lot of work for a package-
               | manager-less-language ecosystem at first, but if you
               | consider "tag" as "exports symbol with name", it is in
               | fact already how most C plugin systems work (a few use an
               | incompatible per-library computed name though, or rely
               | entirely on global constructors). So really only the
               | loading programs need to be modified, just like the
               | fixed-name `dlopen`.
        
           | iforgotpassword wrote:
            | > And I tell pretty much anyone who wants to listen that
            | they should just implement the proto on their own if that's
            | the only reason for a libsystemd dep otherwise.
           | 
           | That's what I think too. Do the relevant docs point this out
           | too? Ages ago they didn't. I think we should try to avoid
           | that people just google "implement systemd notify daemon" and
           | end up on a page that says "link to libsystemd and call
           | sd_notify()".
        
           | agwa wrote:
            | > And I tell pretty much anyone who wants to listen that
            | they should just implement the proto on their own if that's
            | the only reason for a libsystemd dep otherwise
           | 
           | Could you point out where the man page (https://www.freedeskt
           | op.org/software/systemd/man/latest/sd_n...) says this?
        
             | NekkoDroid wrote:
             | If you are talking about the stability of that interface:
             | https://systemd.io/PORTABILITY_AND_STABILITY/
        
             | chadcatlett wrote:
             | The notes section has a brief description of the protocol
             | and the different kinds of sockets involved.
        
           | bbarnett wrote:
           | _And will end up in sshd too (at the very least via libpam
           | /pam_selinux)._
           | 
           | Inaccurate.
           | 
           | It's not pulled in on any sysvinit Debian system I run. It is
           | on stable, oldstable, and oldoldstable systems via systemd.
           | 
           | Not systemd:
           | 
            | # ldd $(which sshd)
            |   linux-vdso.so.1 (0x00007ffcb57f5000)
            |   libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007fbad13c9000)
            |   libwrap.so.0 => /lib/x86_64-linux-gnu/libwrap.so.0 (0x00007fbad13bd000)
            |   libaudit.so.1 => /lib/x86_64-linux-gnu/libaudit.so.1 (0x00007fbad138c000)
            |   libpam.so.0 => /lib/x86_64-linux-gnu/libpam.so.0 (0x00007fbad137a000)
            |   libsystemd.so.0 => /lib/x86_64-linux-gnu/libsystemd.so.0 (0x00007fbad12d5000)
            |   libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007fbad12a5000)
            |   libgssapi_krb5.so.2 => /lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007fbad1253000)
            |   libkrb5.so.3 => /lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007fbad1179000)
            |   libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007fbad1173000)
            |   libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007fbad0c00000)
            |   libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007fbad1154000)
            |   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fbad0a1f000)
            |   libnsl.so.2 => /lib/x86_64-linux-gnu/libnsl.so.2 (0x00007fbad1137000)
            |   libcap-ng.so.0 => /lib/x86_64-linux-gnu/libcap-ng.so.0 (0x00007fbad112f000)
            |   libcap.so.2 => /lib/x86_64-linux-gnu/libcap.so.2 (0x00007fbad1123000)
            |   /lib64/ld-linux-x86-64.so.2 (0x00007fbad156a000)
            |   libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007fbad1089000)
            |   libk5crypto.so.3 => /lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007fbad09f2000)
            |   libkrb5support.so.0 => /lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007fbad09e4000)
            |   libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007fbad09dd000)
            |   libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007fbad09cc000)
            |   libtirpc.so.3 => /lib/x86_64-linux-gnu/libtirpc.so.3 (0x00007fbad099e000)
           | 
           | systemd:
           | 
            | # ldd $(which sshd)
            |   linux-vdso.so.1 (0x00007ffc4d3eb000)
            |   libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007feb8aa35000)
            |   libwrap.so.0 => /lib/x86_64-linux-gnu/libwrap.so.0 (0x00007feb8aa29000)
            |   libaudit.so.1 => /lib/x86_64-linux-gnu/libaudit.so.1 (0x00007feb8a9f8000)
            |   libpam.so.0 => /lib/x86_64-linux-gnu/libpam.so.0 (0x00007feb8a9e6000)
            |   libsystemd.so.0 => /lib/x86_64-linux-gnu/libsystemd.so.0 (0x00007feb8a916000)
            |   libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007feb8a8e6000)
            |   libgssapi_krb5.so.2 => /lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007feb8a894000)
            |   libkrb5.so.3 => /lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007feb8a7ba000)
            |   libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007feb8a7b4000)
            |   libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007feb8a200000)
            |   libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007feb8a795000)
            |   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007feb8a01f000)
            |   libnsl.so.2 => /lib/x86_64-linux-gnu/libnsl.so.2 (0x00007feb8a778000)
            |   libcap-ng.so.0 => /lib/x86_64-linux-gnu/libcap-ng.so.0 (0x00007feb8a770000)
            |   libcap.so.2 => /lib/x86_64-linux-gnu/libcap.so.2 (0x00007feb8a764000)
            |   libgcrypt.so.20 => /lib/x86_64-linux-gnu/libgcrypt.so.20 (0x00007feb89ed8000)
            |   liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007feb8a735000)
            |   libzstd.so.1 => /lib/x86_64-linux-gnu/libzstd.so.1 (0x00007feb89e1c000)
            |   liblz4.so.1 => /lib/x86_64-linux-gnu/liblz4.so.1 (0x00007feb8a70d000)
            |   /lib64/ld-linux-x86-64.so.2 (0x00007feb8abb5000)
            |   libpcre2-8.so.0 => /lib/x86_64-linux-gnu/libpcre2-8.so.0 (0x00007feb89d82000)
            |   libk5crypto.so.3 => /lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007feb8a6e0000)
            |   libkrb5support.so.0 => /lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007feb8a6d2000)
            |   libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007feb8a6c9000)
            |   libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007feb8a6b8000)
            |   libtirpc.so.3 => /lib/x86_64-linux-gnu/libtirpc.so.3 (0x00007feb8a68a000)
            |   libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007feb89d5a000)
           | 
            | E.g.:
            | 
            |   # ldd $(which sshd) | grep liblz
            |   liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007fd1e647a000)
            |   liblz4.so.1 => /lib/x86_64-linux-gnu/liblz4.so.1 (0x00007fd1e6398000)
        
         | mardifoufs wrote:
          | What? I don't get it. Isn't that on Debian, if they modified
          | the package to do something like this? Why would you blame
          | systemd for maintainers doing something that upstream has
          | never required or recommended?
        
         | hnald wrote:
         | It's unfortunate that the anti-systemd party lost the war...
          | years ago. But I don't blame systemd, Lennart Poettering or the
         | fanboys (though it would have been so much better if the guy
         | never worked in open source or wasn't such a prolific
         | programmer). I blame Debian and its community for succumbing to
         | this assault on Unix philosophy (again, years ago).
        
           | 1attice wrote:
           | Sometimes things evolve in ways that make us feel a little
           | obsolete.
           | 
           | I've been learning NixOS for a few years now, and it would
           | have been impossible without systemd. It's one heck of a
           | learning curve, but when you get to the other side, you know
           | something of great power and value. Certain kinds of
           | complexity adds 'land' (eg. systemd) that can become 'real
           | estate' (eg. NixOS), which in turn hopes to become 'land' for
           | the next innovation, and so forth.
           | 
           | Whether this happens or not (whether it's the _right_ kind of
           | complexity) is really hard to assess up-front, and probably
           | impossible without knowing the complex new technology in
           | question very well. (And by then you have the bias of
            | yourself depending, in part, on the success of the new
            | tech, as you've committed significant resources to mastering
            | it, so good luck convincing skeptical newcomers!)
           | 
           | It's almost like a sort of event horizon -- once you know a
           | complex new technology well enough to see whether or not it's
           | useful, the conflict-of-interest makes your opinion
           | unreliable to outsiders!
           | 
           | Nevertheless, the assessment process itself, while difficult
           | to get right, is worth getting better at.
           | 
           | It's easy for impatience and the sensation of what I've taken
           | to calling 'daunt' -- that intrinsic recoil that the mind has
            | from absorbing large amounts of information whose use case
           | is not immediately relevant -- to dissuade one from
           | exploring. But then, one never discovers new 'land', and one
           | never builds new real estate!
           | 
           | [ Aside: This is why I'm a little skeptical of the current
           | rebellion against frontend frameworks. Certainly some of
           | them, like tailwind, are clearly adding fetters to an
           | otherwise powerful browser stack. But others, like Svelte,
           | and to some extent, even React, bring significant benefits.
           | 
           | The rebellion has this vibe like, well, users _should_ prefer
           | more simply-built interfaces, and if they don't, well, they
           | just have bad taste. What would be more humble would be to
           | let the marketplace (e.g. consumers) decide what is
           | preferable, and then build that. ]
        
         | shirro wrote:
         | The notify protocol isn't much more complicated than that. From
         | memory you send a string to a unix socket. I have written both
         | systemd notify and listenfd in a few languages for little
         | experiments and it is hard to imagine how the protocols could
         | be simpler.
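          | 
          | (As a rough sketch of how small these are, the listen-fd side
          | can amount to something like this; it ignores LISTEN_FDNAMES
          | and error handling:)
          | 
          |   #include <stdlib.h>
          |   #include <sys/types.h>
          |   #include <unistd.h>
          | 
          |   #define SD_LISTEN_FDS_START 3  /* first passed fd */
          | 
          |   /* Socket activation: the manager passes already-bound
          |      sockets as fds 3..3+n-1 and describes them via the
          |      LISTEN_PID / LISTEN_FDS environment variables. */
          |   static int listen_fds(void)
          |   {
          |       const char *pid = getenv("LISTEN_PID");
          |       const char *fds = getenv("LISTEN_FDS");
          |       if (!pid || !fds)
          |           return 0;  /* not socket-activated */
          |       if ((pid_t)strtol(pid, NULL, 10) != getpid())
          |           return 0;  /* fds were meant for someone else */
          |       return (int)strtol(fds, NULL, 10);
          |   }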
         | 
         | Looking at most popular projects these days they are a mass of
         | dependencies and I think very few of them can be properly
         | audited and verified by the projects that use them. Rust and Go
         | might be more memory safe than C but look at the number of
         | cargo or go modules in most projects. I have mostly stopped
         | using node/npm on my systems.
        
       | 0x0 wrote:
       | Homebrew is currently shipping 5.6.1 (and was shipping 5.6.0 as
       | well). Hopefully not affected on mac?
        
         | stephenr wrote:
         | The issue is caused by patches to add integration with systemd,
         | so no, this won't affect SSH on a Mac.
        
           | 0x0 wrote:
           | Just because macs don't use systemd, doesn't mean the
           | backdoor won't work. The oss-sec post talks about liblzma
           | having backdoors in crc32_resolve() and crc64_resolve() and
           | that it has not been fully reversed. This could perhaps
           | affect more than just sshd on x86-64 linux?
        
             | anarazel wrote:
             | > Just because macs don't use systemd, doesn't mean the
             | backdoor won't work.
             | 
              | Practically speaking it can't. For one, the script
              | injected into the build process tests that you're running
              | on x86-64 linux; for another, the injected code is ELF
              | code, which wouldn't link on a Mac. It also needs to
              | manipulate dynamic linker data structures, which would
              | also not work the same on a Mac.
             | 
             | > This could perhaps affect more than just sshd on x86-64
             | linux?
             | 
             | This however is true - /usr/sbin/sshd was the only argv[0]
             | value that I found to "work", but it's possible there are
             | others. "/usr/sbin/sshd" isn't a string directly visible in
             | the injected code, so it's hard to tell.
        
             | stephenr wrote:
             | The article explains _numerous_ concurrent conditions that
             | have to be met for the backdoor to even be activated (at
             | build time, not runtime), which combined make it extremely
             | unlikely this will affect SSH on macOS:
             | 
             | - linux
             | 
             | - x86-64
             | 
             | - building with gcc & the GNU linker
             | 
             | - part of a .deb or .rpm build
             | 
             | Add to that, as the article explains: openssh does not
             | directly use liblzma, the only reason SSH is affected at
             | all, is because some Linux Distros patch openssh to link it
             | against systemd, which _does_ depend on liblzma.
             | 
             | Could it affect things _other_ than SSH on a Mac? Unlikely.
             | The compromise was introduced in 5.6.0, but macOS Sonoma
             | has 5.4.4 (from August last year).
        
         | woodruffw wrote:
         | Homebrew reverted to 5.4.6 once the maintainers became aware.
         | The current understanding is that macOS is not affected, but
         | that's not certain.
         | 
         | [1]: https://github.com/Homebrew/homebrew-core/pull/167512
        
       | asveikau wrote:
        | That's completely crazy: the backdoor is introduced through a
        | very cryptic addition to the configure script. Just looking at
        | the diff, it doesn't look malicious at all; it looks like build
        | script gibberish.
        
         | zb3 wrote:
         | Yeah, now imagine they succeeded and it didn't cause any
         | performance issues...
         | 
         | Can we even be sure no such successful attempt has already been
         | made?
        
           | gpvos wrote:
           | No, we can't.
        
           | coldpie wrote:
           | You can be certain it has happened, many times. Now think of
           | all the software we mindlessly consume via docker, language
           | package managers, and the like.
           | 
           | Remember, there is no such thing as computer security. Make
           | your decisions accordingly :)
        
         | agwa wrote:
         | Thanks to autoconf, we're now used to build scripts looking
         | like gibberish. A perfect place to hide a backdoor.
        
           | rwmj wrote:
           | This is my main take-away from this. We must stop using
           | upstream configure and other "binary" scripts. Delete them
           | all and run "autoreconf -fi" to recreate them. (Debian
           | already does something like this I think.)
        
             | anthk wrote:
              | I always run autoreconf -ifv first.
        
               | rwmj wrote:
               | In this case it wouldn't be sufficient. You had to also
               | delete m4/build-to-host.m4 for autoreconf to recreate it.
        
               | anthk wrote:
                | Thanks. At least the distro I use (Hyperbola) is
                | LTS-bound, so it's not affected.
        
             | cesarb wrote:
             | > We must stop using upstream configure and other "binary"
             | scripts. Delete them all and run "autoreconf -fi" to
             | recreate them.
             | 
             | I would go further than that: all files which are in a
             | distributed tarball, but not on the corresponding git
             | repository, should be treated as suspect.
             | 
             | Distributing these generated autotools files is a relic of
             | times when it could not be expected that the target machine
             | would have all the necessary development environment
             | pieces. Nowadays, we should be able to assume that whoever
             | wants to compile the code can also run
             | autoconf/automake/etc to generate the build scripts from
             | their sources.
             | 
             | And other than the autotools output, and perhaps a couple
             | of other tarball build artifacts (like cargo simplifying
             | the Cargo.toml file), there should be no difference between
             | what is distributed and what is on the repository. I recall
             | reading about some project to find the corresponding commit
             | for all Rust crates and compare it with the published
             | crate, though I can't find it right now; I don't know
             | whether there's something similar being done for other
             | ecosystems.
        
               | nolist_policy wrote:
               | One small problem with this is that autoconf is not
               | backwards-compatible. There are projects out there that
               | need older autoconf than distributions ship with.
        
               | rwmj wrote:
               | There are, and they need to be fixed.
        
               | cesarb wrote:
                | The test code generated by older autoconf is not going
                | to work correctly with newer GCC releases due to the
               | deprecation of implicit int and implicit function
               | declarations (see
               | https://fedoraproject.org/wiki/Changes/PortingToModernC),
               | so these projects already have to be updated to work with
               | newer autoconf.
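                | 
                | (A concrete illustration, not taken from any particular
                | configure script: probe programs in roughly this style
                | stop compiling under recent GCC defaults, so the
                | corresponding feature checks silently start reporting
                | "no":)
                | 
                |   /* Old-style probe: implicit int on main() and an
                |      implicitly declared exit(). Recent GCC treats
                |      both as hard errors by default. A modernized
                |      probe has to spell it out, e.g.
                |          #include <stdlib.h>
                |          int main(void) { exit(0); }              */
                |   main()
                |   {
                |       exit(0);
                |   }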
        
               | badsectoracula wrote:
                | Typing `./configure` won't work, but something like
                | `./configure CFLAGS="-Wno-error=implicit-function-declaration"`
                | (or whatever flag) might work (IIRC it is possible to
                | pass flags to the compiler invocations used for checking
                | the existence of features) without needing to recreate
                | it.
                | 
                | Also chances are you can shove that flag in some old
                | `configure.in` and have it work with an old autoconf for
                | years before having to update it :-P.
        
               | Too wrote:
               | Easily solved with Docker.
               | 
               | Yes, it sucks to add yet another wrapper but that's what
               | you get for choosing non backwards compatible tools in
               | the first place. In combination with projects that don't
               | keep up to date on supporting later versions.
        
               | pajko wrote:
               | https://www.gnu.org/software/autoconf-archive/
        
               | londons_explore wrote:
               | Why do we distribute tarballs at all? A git hash should
                | be all that's needed...
        
               | cesarb wrote:
               | > Why do we distribute tarballs at all? A git hash should
                | be all that's needed...
               | 
               | A git hash means nothing without the repository it came
               | from, so you'd need to distribute both. A tarball is a
               | self-contained artifact. If I store a tarball in a CD-
               | ROM, and look at it twenty years later, it will still
               | have the same complete code; if I store a git hash in a
               | CD-ROM, without storing a copy of the repository together
               | with it, twenty years later there's a good chance that
               | the repository is no longer available.
               | 
               | We could distribute the git hash together with a shallow
               | copy of the repository (we don't actually need the
               | history as long as the commit with its trees and blobs is
               | there), but that's just reinventing a tarball with more
               | steps.
               | 
               | (Setting aside that currently git hashes use SHA-1, which
               | is not considered strong enough.)
        
               | londons_explore wrote:
               | except it isn't reinventing the tarball, because the git
               | hash _forces_ verification that every single file in the
               | repo matches that in the release.
               | 
               | And git even has support for "compressed git repo in a
               | file" or "shallow git repo in a file" or even "diffs from
               | the last release, compressed in a file". They're called
               | "git bundle"'s.
               | 
               | They're literally perfect for software distribution and
               | archiving.
        
               | bombcar wrote:
               | People don't know how to use git hashes, and it's not
               | been "done". Whereas downloading tarballs and verifying
               | hashes of the tarball has been "good enough" because the
               | real thing it's been detecting is communication faults,
               | not supply chain attacks.
               | 
               | People also like version numbers like 2.5.1 but that's
               | not a hash, and you can only indirectly make it a hash.
        
               | hypnagogic wrote:
               | > I would go further than that: all files which are in a
               | distributed tarball, but not on the corresponding git
               | repository, should be treated as suspect.
               | 
                | This, plus an automated A/B diff to check the tarball
                | against the repo and flag it if mismatched.
        
             | nicolas_17 wrote:
             | The backdoor is in an .m4 file that gets parsed by autoconf
             | to generate the configure script. Running autoconf yourself
             | won't save you.
        
               | rwmj wrote:
               | That's not entirely true. autoreconf will regenerate
               | m4/build-to-host.m4 but only if you delete it first.
        
             | tobias2014 wrote:
              | It seems like this was the solution for Arch Linux: pull
              | directly from the GitHub tag and run autogen: https://gitla
             | b.archlinux.org/archlinux/packaging/packages/xz...
        
               | 1oooqooq wrote:
               | it's shocking how many packages on distros are just one
               | random tarball from the internet with lipstick
        
             | k8svet wrote:
             | Oh come on, please, let's put autotools out to pasture.
             | I've lost so much of my life fighting autotools crap
             | compared to "just use meson".
        
             | snnn wrote:
              | I don't think it would help much. I work on machine
              | learning frameworks. A lot of them (and math libraries)
              | rely on just-in-time compilation. None of us has the time
              | or expertise to inspect JIT-ed assembly code. Not to
              | mention that much of the code deliberately reads/writes
              | out of bounds, which is not an issue if you always add
              | some extra bytes at the end of each buffer, but which can
              | make most memory sanitizer tools useless. When you run
              | their unit tests, you run the JIT code, and then a lot of
              | things could happen. Maybe we should ask all packaging
              | systems to split their builds into two stages, compile and
              | test, to ensure that test code cannot impact the binaries
              | that are going to be published. I would rather read and
              | analyze the generated code than the code that generates
              | it.
        
           | demizer wrote:
            | Maybe the US Government needs to draw a line in the sand
            | and mandate the end of autotools. :D
        
           | pornel wrote:
           | Maybe it's time to dramatically simplify autoconf?
           | 
           | How long do we need to (pretend to) keep compatibility with
           | pre-ANSI C compilers, broken shells on exotic retro-unixes,
           | and running scripts that check how many bits are in a byte?
        
           | hgs3 wrote:
           | Autoconf is m4 macros and Bourne shell. Most mainstream
           | programming languages have a packaging system that lets you
           | invoke a shell script. This attack is a reminder to keep your
           | shell scripts clean. Don't treat them as an afterthought.
        
           | hypnagogic wrote:
            | I'm wondering: is there no way to add an automated flagging
            | system that `diff`-checks the tarball contents against the
            | repo's files and warns if there's a mismatch? This would be
            | on, e.g., GitHub's end, so that there'd be this sort of
            | automated integrity test and subsequent warning. Just a
            | thought, since tainted tarballs like these might altogether
            | be (and become) a threat vector, regardless of the repo.
        
         | omoikane wrote:
         | It looks like an earlier commit with a binary blob "test data"
         | contained the bulk of the backdoor, then the configure script
         | enabled it, and then later commits patched up valgrind errors
         | caused by the backdoor. See the commit links in the
         | "Compromised Repository" section.
         | 
          | Also, it seems like the same user who made these changes is
          | still submitting changes to various repositories as of a few
          | days ago. Maybe these projects need to temporarily stop
          | accepting commits until further review is done?
        
         | 20after4 wrote:
         | > "Given the activity over several weeks, the committer is
         | either directly involved or there was some quite severe
         | compromise of their system. Unfortunately the latter looks like
         | the less likely explanation, given they communicated on various
         | lists about the "fixes" mentioned above."
         | 
         | Crazy indeed.
        
         | tetromino_ wrote:
         | A big part of the problem is all the tooling around git (like
         | the default github UI) which hides diffs for binary files like
         | these pseudo-"test" files. Makes them an ideal place to hide
         | exploit data since comparatively few people would bother
         | opening a hex editor manually.
        
           | acdha wrote:
            | How many people read autoconf scripts, though? I think
            | those filters are a symptom of the larger problem that many
            | popular C/C++ codebases have these gigantic build files
            | which even experts try to avoid dealing with. I know why we
            | have them, but it does seem like something which might be
            | worth reconsidering now that the toolchain is considerably
            | more stable than it was in the 80s and 90s.
        
             | bonzini wrote:
             | How many people read build.rs files of all the transitive
             | dependencies of a moderately large Rust project?
             | 
             | Autoconf is bad in this respect but it's not like the
             | alternatives are better (maybe Bazel).
        
               | bdd8f1df777b wrote:
               | Bazel has its problems but the readability is definitely
                | better. And Bazel BUILD files are quite constrained in
                | what they can do.
        
               | acdha wrote:
               | The alternatives are _better_ but still not great.
               | build.rs is much easier to read and audit, for example,
               | but it's definitely still the case that people probably
               | skim past it. I know that the Rust community has been
               | working on things like build sandboxing and I'd expect
               | efforts to be a lot easier there than in a mess of m4/sh
               | where everyone is afraid to break 4 decades of prior
               | usage.
        
               | bonzini wrote:
               | build.rs is easier to read, but it's the tip of the
               | iceberg when it comes to auditing.
               | 
               | If I were to sneak in some underhanded code, I'd do it
               | through either _a dependency_ that is used by build.rs
               | (not unlike what was done for xz) or a crate purporting
               | to implement a very useful procedural macro...
        
             | salawat wrote:
             | I mean, autoconf is basically a set of template programs
              | for sniffing out whether a system has X symbol available to
             | the linker. Any replacement for it would end up morphing
             | into it over time.
             | 
             | Some things are just that complex.
        
               | acdha wrote:
               | We have much better tools now and much simpler support
               | matrices, though. When this stuff was created, you had
               | more processor architectures, compilers, operating
               | systems, etc. and they were all much worse in terms of
               | features and compatibility. Any C codebase in the 90s was
               | half #ifdef blocks with comments like "DGUX lies about
               | supporting X" or "SCO implemented Y but without option Z
               | so we use Q instead".
        
           | johnny22 wrote:
           | I don't see how showing the binary diffs would help.
           | 99.99999% of people would just scroll right past them
           | anyways.
        
             | AeroNotix wrote:
             | Even in binary you can see patterns. Not saying it's
             | perfect to show binary diffs (but it is better than showing
             | nothing) but I know even my slow mammalian brain can spot
             | obvious human readable characters in various binary
             | encoding formats. If I see a few in a row which doesn't
             | make sense, why wouldn't I poke it?
        
               | ok123456 wrote:
               | What should I look for? The evil bit set?
        
               | johnny22 wrote:
               | Sure, the same person who's gonna be looking is the same
               | person who'd click "show diff"
        
               | janc_ wrote:
               | This particular file was described as an archive file
               | with corrupted data somewhere in the middle. Assuming you
               | wanted to scroll that far through a hexdump of it, there
               | could be pretty much any data in there without being
               | suspicious.
        
           | londons_explore wrote:
           | testdata should not be on the same machine as the build is
           | done. testdata (and tests generally) aren't as well audited,
           | and therefore shouldn't be allowed to leak into the finished
           | product.
           | 
           | Sure - you want to test stuff, but that can be done with a
            | special "test build" in its own VM.
        
             | Hackbraten wrote:
             | That could easily double build cost. Most open-source
             | package repositories are not exactly in a position to
             | splurge on their build infra.
        
             | hanwenn wrote:
             | In the Bazel build system, you would mark the test data
             | blob as testonly=1. Then the build system guarantees that
             | the blob can only be used in tests.
             | 
             | This incident shows that killing the autoconf goop is long
             | overdue.
        
           | bangoodirro wrote:
            |   00011900: 0000 4883 f804 7416 b85f 5f5f 5f33 010f  ..H...t..____3..
            |   00011910: b651 0483 f25a 09c2 0f84 5903 0000 488d  .Q...Z....Y...H.
            |   00011920: 7c24 40e8 5875 0000 488b 4c24 4848 3b4c  |$@.Xu..H.L$HH;L
            |   00011930: 2440 7516 4885 c074 114d 85ff 0f84 3202  $@u.H..t.M....2.
            |   00011940: 0000 498b 0ee9 2c02 0000 b9fe ffff ff45  ..I...,........E
            |   00011950: 31f6 4885 db74 0289 0b48 8bbc 2470 1300  1.H..t...H..$p..
            |   00011960: 0048 85ff 0f85 c200 0000 0f57 c00f 2945  .H.........W..)E
            |   00011970: 0048 89ac 2470 1300 0048 8bbc 2410 0300  .H..$p...H..$...
            |   00011980: 0048 8d84 2428 0300 0048 39c7 7405 e8ad  .H..$(...H9.t...
            |   00011990: e6ff ff48 8bbc 24d8 0200 0048 8d84 24f0  ...H..$....H..$.
            |   000119a0: 0200 0048 39c7 7405 e893 e6ff ff48 8bbc  ...H9.t......H..
            |   000119b0: 2480 0200 0048 8d84 2498 0200 0048 39c7  $....H..$....H9.
            |   000119c0: 7405 e879 e6ff ff48 8bbc 2468 0100 004c  t..y...H..$h...L
            | 
            | Please tell me what this code does, Sheldon
        
             | tetromino_ wrote:
             | You're right - the two exploit files are lzma-compressed
             | and then deliberately corrupted using `tr`, so a hex dump
             | wouldn't show anything immediately suspicious to a
             | reviewer.
             | 
             | Mea culpa!
        
               | maxcoder4 wrote:
               | Is this lzma compressed? Hard to tell because of the lack
               | of formatting, but this looks like amd64 shellcode to me.
               | 
               | But that's not really important to the point - I'm not
               | looking at a diff of every committed favicon.ico or ttf
               | font or a binary test file to make sure it doesn't
               | contain a shellcode.
        
               | bangoodirro wrote:
               | it's just an arbitrary section of
               | libcxx-18.1.1/lib/libc++abi.so.1.0
        
           | lyu07282 wrote:
            | in this case the backdoor was hidden in a nesting doll of
            | compressed data manipulated with head/tail and tr, even
            | replacing byte ranges in between. It would've been impossible
            | to find if you were just looking at the test fixtures.
        
         | ptx wrote:
         | The use of "eval" stands out, or at least it _should_ stand out
         | - but there are two more instances of it in the same script,
         | which presumably are not used maliciously.
         | 
         | A while back there was a discussion[0] of an arbitrary code
         | execution vulnerability in exiftool which was also the result
         | of "eval".
         | 
         | Avoiding casual use of this overpowered footgun might make it
         | easier to spot malicious backdoors. Usually there is a better
         | way to do it in almost all cases where people feel the need to
         | reach for "eval", unless the feature you're implementing really
         | is "take a piece of arbitrary code from the user and execute
         | it".
         | 
         | [0] https://news.ycombinator.com/item?id=39154825
        
           | bonzini wrote:
            | Unfortunately, eval in a shell script affects the semantics
            | but is not needed just to get some kind of parsing of the
            | contents of a variable, unlike in Python or Perl or
            | JavaScript. A
            | 
            |   $goo
            | 
            | line (without quotes) will already do word splitting, though
            | it won't do another layer of variable expansion and
            | unquoting, for which you'll need
            | 
            |   eval "$goo"
            | 
            | (this time with quotes).
        
           | jwilk wrote:
           | eval in autoconf macros is nothing unusual.
           | 
            | In (pre-backdoor) xz 5.4.5:
            | 
            |   $ grep -wl eval m4/*
            |   m4/gettext.m4
            |   m4/lib-link.m4
            |   m4/lib-prefix.m4
            |   m4/libtool.m4
        
           | lyu07282 wrote:
           | > Usually there is a better way to do it in almost all cases
           | where people feel the need to reach for "eval"
           | 
            | unfortunately that's just standard in configure scripts,
            | for example from Python:
            | 
            |   $ grep eval Python-3.12.2/configure | wc -l
            |   165
            | 
            | and it's 32,958 lines of code, with plenty of binary
            | fixtures in the tarball to hide stuff as well.
            | 
            | who knows, but I have a feeling us finding the backdoor in
            | this case was more of a happy accident.
        
       | youainti wrote:
       | Summary: "The upstream xz repository and the xz tarballs have
       | been backdoored."
       | 
       | It is known to be in version 5.6.0 and 5.6.1, and the obfuscated
       | code is found in the test directory.
        
       | buildbot wrote:
        | This could potentially be a fully automated rootkit-type
        | breach, right? Great - is any system with 5.6.1 possibly
        | vulnerable?
        | 
        | Also, it's super weird that a contributor thought they could
        | slip this in and not have it be noticed at some point. It may
        | point to burning that person (aka, they go to jail) for whatever
        | they achieved with this. (And whoever they are...)
        
       | q3k wrote:
       | NixOS/Pkgs 23.11 unaffected, unstable contains backdoored
       | implementations (5.6.0, 5.6.1) but their OpenSSH sshd does not
       | seem to link against systemd/liblzma, and the backdoor doesn't
       | get configured in (only happens on .deb/.rpm systems).
        
         | jchw wrote:
         | It may not have really mattered much for NixOS:
         | 
         | > b) argv[0] needs to be /usr/sbin/sshd
         | 
         | For once, the lack of FHS interoperability is a benefit, if
         | only on accident.
        
           | q3k wrote:
            | Right, but in this case it's not even compiled in, which is
            | arguably better than compiled in but assumed dormant :) (at
            | least until someone actually does a full analysis of the
            | payload).
        
         | o11c wrote:
         | Note that NixOS has a unique advantage in that `dlopen` is
         | easier to analyze, but you do have to check for it. A lot of
         | people are looking only at `ldd` and missing that they can be
         | vulnerable at runtime.
        
         | dandanua wrote:
          | That's one of the advantages of NixOS - viruses and mass
          | hacks have a lesser chance of functioning due to how different
          | this OS is. Until it gets more popular, of course.
        
           | rany_ wrote:
            | It's actually not an advantage. The reason the exploit
            | wasn't included is that the attacker specifically decided to
            | only inject into x86_64 Debian and RHEL builds, to reduce
            | the chances of this getting detected.
        
             | bmacho wrote:
             | Then it's an actual advantage.
        
           | lambdanil wrote:
           | That's just security by obscurity, not something I'd consider
           | a good security measure.
        
       | AdmiralAsshat wrote:
       | > Red Hat assigned this issue CVE-2024-3094.
       | 
       | Does that mean this affects RHEL and Fedora?
        
         | formerly_proven wrote:
         | RHEL no, Fedora 41 and Rawhide yes.
         | 
         | https://www.redhat.com/en/blog/urgent-security-alert-fedora-...
         | 
         | https://lists.debian.org/debian-security-announce/2024/msg00...
        
           | dralley wrote:
            | Note that Fedora _40_ isn't even released yet (it's in
            | beta), and Fedora 41 / rawhide is basically a development
            | branch used only by a small number of people.
        
             | dTP90pN wrote:
             | A small number of people with likely professional
             | involvement in the Fedora project and possibly RHEL.
             | 
              | A supply chain attack can serve as the basis for another
              | supply chain attack.
        
         | jethro_tell wrote:
         | RHEL won't get this bug for 2 years =)
        
           | fargle wrote:
           | i _knew_ there was an advantage to being 8-10 years out of
           | date at all times...
           | 
            | and when they do finally backport this bug in 2026, they
            | will probably implement the systemd integration with openssh
           | (pbthththt...) via 600 patch files in some nonstandard
           | divergent manner that thwarts the payload anyhow. see? i knew
           | they were super duper secure.
        
         | richardwhiuk wrote:
          | Red Hat helps do the job of making sure OSS has CVEs, so
          | there's a common vernacular for the problem.
        
       | lpapez wrote:
       | So many security companies publishing daily generic blog posts
       | about "serious supply chain compromises" in various distros on
       | packages with 0 downloads, and yet it takes a developer debugging
       | performance issues to find an actual compromise.
       | 
       | I worked in the software supply chain field and cannot resist
       | feeling the entire point of that industry is to make companies
       | pay for a security certificate so you can shift the blame onto
       | someone else when things go wrong.
        
         | r0ckarong wrote:
         | > cannot resist feeling the entire point of that industry is to
         | make companies pay for a security certificate so you can shift
         | the blame onto someone else when things go wrong.
         | 
         | That's the entire point. You did everything you could by
         | getting someone else to look at it and say it's fine.
        
           | numpad0 wrote:
           | This needs a Rust joke. You know, the problem with the whole
           | certification charade is that it slows down jobs and prevents
           | _actual problems_ from getting evaluated. But is it safe?
        
         | bawolff wrote:
         | > the entire point of that industry is to make companies pay
         | for a security certificate so you can shift the blame onto
         | someone else when things go wrong.
         | 
         | That is actually a major point of a lot of corporate security
         | measures (shifting risk)
        
         | keepamovin wrote:
         | If you installed xz on macOS using brew, then you have
         | 
         |     xz (XZ Utils) 5.6.1
         |     liblzma 5.6.1
         | 
         | which are within the release target for the vuln. As noted
         | elsewhere in these comments, the effect on macOS is uncertain.
         | If concerned, you can revert to 5.4.6 with
         | 
         |     brew upgrade xz
        
           | neodypsis wrote:
           | It's been reverted now: https://github.com/Homebrew/homebrew-
           | core/blob/9a0603b474804...
        
             | keepamovin wrote:
             | Yeah it was when I posted the comment too. That's why you
             | could type brew upgrade xz and it went back to 5.4.6 I
             | guess? But it might have been around that time, cutting it
             | fine, not out for everybody. I don't know. Comment race
             | condition haha! :)
        
           | pjl wrote:
           | Similarly if you're using MacPorts, make sure to sync and
           | upgrade xz if you have it installed.
           | 
           | 5.6.1 was available for a few days and just rolled back ~20
           | minutes ago: https://github.com/macports/macports-
           | ports/commit/a1388aee09...
        
           | quinncom wrote:
           | Thank you for this tip. `brew upgrade xz` worked.
           | 
           | I was going to uninstall but it's used by _so many things_...
           | 
           |     brew uninstall xz
           |     Error: Refusing to uninstall /opt/homebrew/Cellar/xz/5.6.1
           |     because it is required by aom, composer, curl, ffmpeg, gcc,
           |     gd, ghostscript, glib, google-cloud-sdk, grc, harfbuzz,
           |     httpie, img2pdf, jbig2enc, jpeg-xl, leptonica, libarchive,
           |     libavif, libheif, libraw, libtiff, libzip, little-cms2,
           |     numpy, ocrmypdf, openblas, openjpeg, openvino, php, pillow,
           |     pipx, pngquant, poppler, python@3.11, python@3.12, rsync,
           |     tesseract, tesseract-lang, unpaper, webp, wp-cli, yt-dlp and
           |     zstd, which are currently installed.
        
             | keepamovin wrote:
             | You're welcome!
        
         | CableNinja wrote:
         | That's basically the whole point, actually... A company pays for
         | insurance for the business. The insurance company says sure we
         | will insure you, but you need to go through audits A B and C,
         | and you need certifications X and Y to be insured by us. Those
         | audits are often industry dependent, mostly for topics like
         | HIPAA, PCI, SOC, etc.
         | 
         | Insurance company hears about supply chain attacks. Declares
         | that insured must have supply chain validation. Company goes
         | and gets a shiny cert.
         | 
         | Now when things go wrong, the company can point to the cert and
         | go "it wasn't us, see, we have the cert you told us to get and
         | it's up to date". And the company gets to wash their hands of
         | liability (most of the time).
        
           | 77pt77 wrote:
           | > And the company gets to wash their hands of liability (most
           | of the time).
           | 
           | Certification theater.
           | 
           | It's completely performative.
        
         | markus_zhang wrote:
         | That's the entire point of certification, and of any
         | certification at all. Certification does not guarantee
         | performance. Actually, I would always cast a suspicious glance
         | at anyone who is FOCUSED on getting certification after
         | certification without any side project.
        
       | fourfour3 wrote:
       | Looks like Arch Linux shipped both compromised versions - and
       | 5.6.1-2 is out to hopefully resolve it.
        
         | tutfbhuf wrote:
         | I upgraded Arch Linux on my server a few hours ago. Arch Linux
         | does not fetch one of the compromised tarballs but builds from
         | source and sshd does not link against liblzma on Arch.
         | 
         |     [root@archlinux ~]# pacman -Qi xz | head -n2
         |     Name            : xz
         |     Version         : 5.6.1-2
         |     [root@archlinux ~]# pacman -Qi openssh | head -n2
         |     Name            : openssh
         |     Version         : 9.7p1-1
         |     [root@archlinux ~]# ldd $(which sshd) | grep liblzma
         |     [root@archlinux ~]#
         | 
         | It seems that Arch Linux is not affected.
        
           | gpm wrote:
           | 5.6.1-1 was built from what I understand to be one of the
           | affected tarballs. This was patched in 5.6.1-2: https://gitla
           | b.archlinux.org/archlinux/packaging/packages/xz...
           | 
           | I agree on the sshd linking part.
        
             | tutfbhuf wrote:
             | Interesting, they just switched from tarballs to source 19
             | hours ago. It seems to me that Frederik Schwan had prior
             | knowledge of the security issue, or it is just a rare
             | coincidence.
        
               | ComputerGuru wrote:
               | Distributions were notified under embargo.
        
         | gpm wrote:
         | On arch, `ldd $(which sshd)` doesn't list lzma or xz, so I
         | think it's unaffected? Obviously still not great to be shipping
         | malicious code that just happens to not trigger.
        
           | fullstop wrote:
           | My Arch setup is the same, they must not patch openssh.
        
           | altairprime wrote:
           | Deleted per below
        
             | gpm wrote:
             | This is what the `detect_sh.bin` attached to the email
                | does. I can only assume that the person who reported the
                | vulnerability checked that this succeeds in detecting it.
             | 
             | Note that I'm not looking for the vulnerable symbols, I'm
             | looking for the library that does the patching in the first
             | place.
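                | 
                | A far cruder check than detect_sh.bin, as a sketch only
                | (this flags the release, not the payload itself, and the
                | paths and version-string handling are assumptions that
                | differ per distro):
                | 
                |   # which liblzma would the dynamic loader hand out?
                |   lib=$(ldconfig -p | awk '/liblzma\.so\.5/{print $NF; exit}')
                |   echo "$lib"
                |   # 5.6.0 and 5.6.1 are the releases shipped with the backdoor
                |   strings "$lib" | grep -Ex '5\.6\.[01]' && echo "bad release"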
        
               | altairprime wrote:
               | Deleted, thanks.
        
         | Macha wrote:
         | 5.6.1-2 is not an attempted fix, it's just some tweaks to
         | Arch's own build script to improve reproducibility. Arch's
         | build script ultimately delegates to the compromised build
         | script unfortunately, but it also appears the payload itself is
         | specifically targeting deb/RPM based distros, so a narrow miss
         | for Arch here.
         | 
         | (EDIT: as others have pointed out, part of the exploit is in
         | the artifact from libxz, which Arch is now avoiding by
         | switching to building from a git checkout)
        
           | gpm wrote:
           | Are you sure about that? The diff moves away from using the
           | compromised tarballs to the not-compromised (by this) git
           | source. The commit message says it's about reproducibility,
           | but especially combined with the timing it looks to me like
           | that was just cover to avoid breaking an embargo.
        
             | tutfbhuf wrote:
             | So, you suggest that Frederik Schwan had prior knowledge of
             | the security issues but hid the real purpose of the commit
             | under "improve reproducibility"?
        
               | gpm wrote:
               | Yes.
               | 
               | I've never had to do it myself but I believe that's
               | common practice with embargos on security
               | vulnerabilities.
        
               | bombcar wrote:
               | It can lead to amusing cases where the intentional vuln
               | comes in "to improve x" and the quiet fix comes in "to
               | improve x".
        
               | jethro_tell wrote:
                | And, if you break the embargo too many times, you just
                | find out with the rest of us, and that's not a great way
                | to run a distro. I believe OpenBSD is or was in that
                | position around the time of the Intel speculative
                | execution bugs.
        
               | Starlevel004 wrote:
               | xz was masked in the Gentoo repositories earlier today
               | with the stated reason of "Investigating serious bug". No
               | mention of security. It's pretty likely.
        
               | donio wrote:
               | 5.6.1 is masked specifically.
               | 
               | Also, https://mastodon.social/@mgorny@treehouse.systems/1
               | 121802382... from a Gentoo dev mentions that Gentoo
               | doesn't use the patch that results in sshd getting linked
               | against liblzma.
               | 
               | As far as I know this is not an official communication
               | channel so don't take it as such.
        
               | NekkoDroid wrote:
                | This is very likely the case. Arch maintainers do get
                | early information on CVEs just like any other major
                | distro.
                | 
                | But with pacman/makepkg 6.1 (which was recently
                | released), git sources can also now be checksummed,
                | IIRC, which is a funny coincidence.
        
         | mook wrote:
         | The writeup indicates that the backdoor only gets applied when
         | building for rpm or deb, so Arch probably would have been okay
         | either way? Same with Nix, Homebrew, etc.
        
         | aquova wrote:
         | The project has made an official post on the subject
         | 
         | https://archlinux.org/news/the-xz-package-has-been-backdoore...
        
       | bawolff wrote:
       | The terrifying part is that this was primarily found because the
       | backdoor was poorly made and causing performance problems.
       | 
       | Makes you wonder what more competent actors can do.
        
         | rwmj wrote:
         | I've analysed the backdoor myself and it's very sophisticated,
         | not poorly made at all. The performance problem is surprising
         | in this context, but I think next time they won't make that
         | mistake.
        
           | Nextgrid wrote:
           | Do you have a writeup or any details as to what it does? The
           | logical thing based on this post is that it hooks the SSH key
           | verification mechanism to silently allow some attacker-
           | controlled keys but I wonder if there's more to it?
        
             | rwmj wrote:
             | I was starting one, but the openwall message linked here is
             | far more detailed and gets much further than I did. It's
             | fiendishly difficult to follow the exploit.
        
             | dhx wrote:
             | sshd starts with root privileges and then proceeds to, in
             | summary:[1]
             | 
             | 1. Parse command line arguments
             | 
             | 2. Setup logging
             | 
             | 3. Load configuration files
             | 
             | 4. Load keys/certificates into memory (notably including
             | private keys)
             | 
             | 5. Listen on a socket/port for incoming connections
             | 
             | 6. Spawn a child process with reduced permissions (on
             | Linux, using seccomp filters [2]) to respond to each
             | incoming connection request
             | 
             | This backdoor executes at order 0 before sshd's main
             | function is invoked, overwriting internal sshd functions
             | with compromised ones. As some ideas of what the backdoor
             | could achieve:
             | 
             | 1. Leak server private keys during handshakes with users
             | (including unauthenticated users) allowing the keys to be
             | passively stolen
             | 
             | 2. Accept backdoor keys as legitimate credentials
             | 
             | 3. Compromise random number generation to disable perfect
             | forward secrecy
             | 
             | 4. Execute code on the host (supplied remotely by a
             | malicious user) with the 'root' permissions available to
             | sshd upon launch. On most Linux distributions, systemd-
             | analyze security sshd.service will give a woeful score of
             | 9.6/10 (10 being the worst).[3] There is essentially NO
             | sandboxing used because an assumption is made that you'd
             | want to login as root with sshd (or sudo/su to root) and
             | thus would not want to be restricted in what filesystem
             | paths and system calls your remote shell can then invoke.
             | 
             | The same attacker has also added code to Linux kernel build
             | scripts which causes xz to be executed (xz at this point
             | has a backdoor compiled into it) during the build of the
             | Linux kernel where xz compression is used for the resulting
             | image. Using this approach, the attacker can selectively
             | choose to modify certain (or all) Linux kernel builds to do
             | some very nasty things:
             | 
             | 1. Leak Wireguard keys allowing them to be passively
             | intercepted.
             | 
             | 2. Compromise random number generation, meaning keys may be
             | generated with minimal entropy (see the Debian OpenSSL
             | weak-key problem from a few years ago).
             | 
             | 3. Write LUKS master keys (the keys used by dm-crypt for
             | actually decrypting disks) to disk in a retrievable format.
             | 
             | 4. Introduce remote root code execution vulnerabilities
             | into basic networking features such as TCP/IP code paths.
             | 
             | [1] 'main' function:
             | https://anongit.mindrot.org/openssh.git/tree/sshd.c
             | 
             | [2] https://anongit.mindrot.org/openssh.git/tree/sandbox-
             | seccomp...
             | 
             | [3] https://github.com/gentoo/gentoo/blob/HEAD/net-
             | misc/openssh/...
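             | 
             | To see that sandboxing score for yourself (the exact output
             | and score vary by distro and systemd version):
             | 
             |     # overall exposure score plus per-setting detail
             |     # (the unit may be called ssh.service on Debian-family
             |     # systems)
             |     systemd-analyze security sshd.service
             |     # compare with a more locked-down unit
             |     systemd-analyze security systemd-logind.service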
        
           | bawolff wrote:
           | I guess it seems like the operational parts are a bit poorly
           | done: Valgrind issues, adding a new version with symbols
           | removed, the aforementioned performance issues. I would
           | assume the type of person who would do this sort of thing,
           | over a 2 year period no less, would test extensively and be
           | sure all their i's are dotted. It's all kind of surprising
           | given how audacious the attack is.
        
             | dist-epoch wrote:
             | There are so many variations of Linux/FreeBSD and weird
             | setups and environments that it's almost guaranteed that
             | you'll hit a snag somewhere if you do any major
             | modification like inserting a backdoor.
        
               | bombcar wrote:
                | It's hard enough to get code to work correctly; getting
                | it to also do something else is even harder.
               | 
               | The way they went around it, however, was brilliant.
               | Completely reduce the variables to directly target
               | whatever it is you're attacking. Reminds me of stuxnet
               | somewhat.
        
               | SkiFire13 wrote:
               | Note that in this case the backdoor was only inserted in
               | some tarballs and enabled itself only when building
               | deb/rpm packages for x86-64 linux and with gcc and the
               | gnu linker. This should already filter out the most
               | exotic setups and makes it harder to reproduce.
        
               | 1oooqooq wrote:
                | We've reached the point where even exploits have to rely
                | on user-agent-string-style sniffing.
                | 
                | Reminds me of the GNU hack discovered because one of the
                | Savannah build hosts was some odd architecture the
                | exploit wasn't expecting.
        
             | rdtsc wrote:
             | But they almost got away with it. We could have found
             | ourselves 5 years later with this code in all stable
             | distribution versions, IoT devices etc.
             | 
             | Also, we only catch the ones that we ... catch. The ones
             | that do everything perfectly, unless they come out and
             | confess eventually, we don't get to "praise" them for their
             | impeccable work.
        
         | pinko wrote:
         | s/can do/have done/
        
         | aidenn0 wrote:
         | So many malicious actors have been caught because they
         | accidentally created a mild annoyance for someone who went on
         | to bird-dog the problem.
        
           | cpach wrote:
           | Case in point: https://news.ycombinator.com/item?id=39843930
        
           | londons_explore wrote:
           | Which is why a really good backdoor is a one line logic bug
           | somewhere which is fiendishly difficult to trigger.
        
             | bombcar wrote:
             | http://underhanded-c.org if people want examples of what
             | could (and probably, somewhere, IS) being done.
        
             | tialaramex wrote:
             | Sure, however the problem that software is really hard
             | _also_ impacts bad actors. So it's probably at least as
             | hard to write that one-line logic bug _and have it do
             | exactly what you intended_ as to write equivalent real code
             | that works precisely as intended.
        
             | Lammy wrote:
             | Like the 2003 Linux kernel attempt
             | https://lwn.net/Articles/57135/
        
           | entropie wrote:
           | Unrelated: as a dog/pointer lover I really like the term "to
           | bird-dog the problem". Never heard of it (I am from Germany
           | though).
        
             | umanwizard wrote:
             | I'm from the U.S. and have never heard it either, and don't
             | understand what it means.
        
               | 65a wrote:
               | It's somewhat regional, and it means to hunt down the
               | target at the expense of everything else, as a dedicated
               | hunting dog might.
        
               | entropie wrote:
                | Pointing dogs (bird dogs) are bred to point in the
                | direction where they have perceived game. Good dogs are
                | then not distracted by anything and stand there
                | motionless, sometimes to the point that they have to be
                | carried away because they cannot break off on their own.
        
         | yard2010 wrote:
         | You must mean, "Makes you wonder what more competent actors are
         | doing"
        
       | rwmj wrote:
       | Very annoying - the apparent author of the backdoor was in
       | communication with me over several weeks trying to get xz 5.6.x
       | added to Fedora 40 & 41 because of its "great new features". We
       | even worked with him to fix the valgrind issue (which it turns
       | out now was caused by the backdoor he had added). We had to race
       | last night to fix the problem after an inadvertent break of the
       | embargo.
       | 
       | He has been part of the xz project for 2 years, adding all sorts
       | of binary test files, and to be honest with this level of
       | sophistication I would be suspicious of even older versions of xz
       | until proven otherwise.
        
         | gigatexal wrote:
         | Name and shame this author. They should never be allowed
         | anywhere near any open projects ever again.
        
           | Lichtso wrote:
           | They might have burnt the reputation built for this
           | particular pseudonym but what is stopping them from doing it
           | again? They were clearly in it for the long run.
        
             | jethro_tell wrote:
             | You're assuming that it's even a single person, it's just a
             | gmail address and an avatar with a j icon from a clip art
             | thing.
        
               | Lichtso wrote:
               | I literally said "they", I know, I know, in English that
               | can also be interpreted as a gender unspecific singular.
               | 
               | Anyways, yes it is an interesting question whether he/she
               | is alone or they are a group. Conway's law probably
               | applies here as well. And my hunch in general is that
               | these criminal mad minds operate individually / alone.
               | Maybe they are hired by an agency but I don't count that
               | as a group effort.
        
           | 0xbadcafebee wrote:
           | Please don't?
           | 
           | 1. You don't actually know what has been done by whom or why.
           | You don't know if the author intended all of this, or if
           | their account was compromised. You don't know if someone is
           | pretending to be someone else. You don't know if this person
           | was being blackmailed, forced against their will, etc. You
           | don't really know much of anything, except a backdoor was
           | introduced by somebody.
           | 
           | 2. Assuming the author did do something maliciously, relying
           | on personal reputation is bad security practice. The majority
           | of successful security attacks come from insiders. You have
           | to trust insiders, because _someone_ has to get work done,
           | and you don 't know who's an insider attacker until they are
           | found out. It's therefore a best security practice to limit
           | access, provide audit logs, sign artifacts, etc, so you can
           | trace back where an incursion happened, identify poisoned
           | artifacts, remove them, etc. Just saying "let's ostracize
           | Phil and hope this never happens again" doesn't work.
           | 
           | 3. A lot of today's famous and important security researchers
           | were, at one time or another, absolute dirtbags who did bad
           | things. Human beings are fallible. But human beings can also
           | grow and change. Nobody wants to listen to reason or
           | compassion when their blood is up, so nobody wants to hear
           | this right now. But that's why it needs to be said now. If
           | someone is found guilty beyond a reasonable doubt (that's
           | really the important part...), then name and shame, sure,
           | shame can work wonders. But at some point people need to be
           | given another chance.
        
             | gigatexal wrote:
             | 100% fair -- we don't know if their account was compromised
             | or if they meant to do this intentionally.
             | 
             | If it were me I'd be doing damage control to clear my name
             | if my account was hacked and abused in this manner.
             | 
             | Otherwise if I was doing this knowing full well what would
             | happen then full, complete defederation of me and my
             | ability to contribute to anything ever again should
             | commence -- the open source world is too open to such
             | attacks where things are developed by people who assume
             | good faith actors.
        
               | gigatexal wrote:
               | upon further reflection all 3 of your points are cogent
               | and fair and valid. my original point was a knee-jerk
               | reaction to this. :/
        
               | Biganon wrote:
                | Your being able to reflect upon it and analyze your own
                | reaction is rare, valuable, and appreciated.
        
               | gigatexal wrote:
                | I think I went through all the stages of grief. Now, at
                | the stage of acceptance, here's what I hope: I hope
                | justice is done. Whoever is doing this, be they a
                | misguided current black hat (hopefully a future white
                | hat) hacker, or someone or someones who just want to
                | see the world burn, or something in between, I hope we
                | see justice. And then forgiveness and acceptance and
                | all that can happen later.
                | 
                | Mitnick reformed after he was convicted (whether you
                | think that was warranted or not). Here, whether these
                | folks are Mitnicks or plain bad actors, let's get all
                | the facts on the table and figure this out.
               | 
               | What's clear is that we all need to be ever vigilant:
               | that seemingly innocent patch could be part of a more
               | nefarious thing.
               | 
               | We've seen it before with that university sending patches
               | to the kernel to "test" how well the core team was at
               | security and how well that went over.
               | 
               | Anyways. Yeah. Glad you all allowed me to grow. And I
               | learned that I have an emotional connection to open
               | source for better or worse: so much of my life
               | professional and otherwise is enabled by it and so
               | threats to it I guess I take personally.
        
             | Kwpolska wrote:
             | It is reasonable to consider all commits introduced by the
             | backdoor author untrustworthy. This doesn't mean all of it
             | is backdoored, but if they were capable of introducing this
             | backdoor, their code needs scrutiny. I don't care why they
             | did it, whether it's a state-sponsored attack, a long game
             | that was supposed to end with selling a backdoor for all
             | Linux machines out there for bazillions of dollars, or
             | blackmail -- this is a serious incident that should
             | eliminate them from open-source contributions and the xz
             | project.
             | 
             | There is no requirement to use your real name when
             | contributing to open source projects. The name of the
             | backdoor author ("Jia Tan") might be fake. If it isn't, and
             | if somehow they are found to be innocent (which I doubt,
             | looking at the evidence throughout the thread), they can
             | create a new account with a new fake identity.
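             | 
             | As a starting point for that kind of scrutiny (a sketch
             | only; commits can of course also hide behind other names,
             | and the URL below is the upstream GitHub repository linked
             | elsewhere in the thread):
             | 
             |     git clone https://github.com/tukaani-project/xz.git
             |     cd xz
             |     # everything committed under the "Jia Tan" identity
             |     git log --oneline --author="Jia Tan"
             |     # and everything merely committed (not authored) by it
             |     git log --oneline --committer="Jia Tan"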
        
         | yieldcrv wrote:
         | I wonder who the target was!
        
           | juliusdavies wrote:
           | Every Linux box inside AWS, Azure, and GCP and other cloud
           | providers that retains the default admin sudo-able user
           | (e.g., "ec2") and is running ssh on port 22.
           | 
           | I bet they intended for their back door to eventually be
           | merged into the base Amazon Linux image.
        
             | Bulat_Ziganshin wrote:
             | my understanding is that any Debian/RPM-based Linux running
             | sshd would become vulnerable in a year or two. The best
             | equivalent of this exploit is the One Ring.
             | 
             | So the really strange thing is why they put so little
             | effort into making this undetectable. All they needed was
             | to make it use less time to check each login attempt.
        
               | kevincox wrote:
                | On the other hand, it was very hard to detect. The slow
                | login time was the only thing that gave it away. It
                | seems more like they were very close to being highly
                | successful. In retrospect, improving the performance
                | would have been the smart play. But that is one part
                | that went wrong compared to very many that went right.
        
             | throwaway7356 wrote:
             | You don't need a "ec2" user. A backdoor can just allow root
             | login even when that is disabled for people not using the
             | backdoor.
             | 
             | It just requires the SSH port to be reachable unless there
             | is also a callout function (which is risky as people might
             | see the traffic). And with Debian and Fedora covered and
             | the change eventually making its way into Ubuntu and RHEL
             | pretty much everything would have this backdoor.
        
           | swagmoney1606 wrote:
           | Probably less of an individual and more of an exploit to
           | sell.
        
           | njsg wrote:
           | Distro build hosts and distro package maintainers might not
           | be a bad guess. Depends on whether getting this shipped was
           | the final goal. It might have been just the beginning, part
           | of some bootstrapping.
        
         | formerly_proven wrote:
         | I think this has been in the making for almost a year. The
         | whole ifunc infrastructure was added in June 2023 by Hans
         | Jansen and Jia Tan. The initial patch is "authored by" Lasse
         | Collin in the git metadata, but the code actually came from
         | Hans Jansen: https://github.com/tukaani-
         | project/xz/commit/ee44863ae88e377...
         | 
         | > Thanks to Hans Jansen for the original patch.
         | 
         | https://github.com/tukaani-project/xz/pull/53
         | 
         | There were a ton of patches by these two subsequently because
         | the ifunc code was breaking with all sorts of build options and
         | obviously caused many problems with various sanitizers.
         | Subsequently the configure script was modified multiple times
         | to detect the use of sanitizers and abort the build unless
         | either the sanitizer was disabled or the use of ifuncs was
         | disabled. That would've masked the payload in many testing and
         | debugging environments.
         | 
         | The hansjans162 GitHub account was created in 2023 and the only
         | thing it did was add this code to liblzma. The same name later
         | applied to do an NMU at Debian for the vulnerable version.
         | Another "<name><number>" account (which only appears here,
         | once) then pops up and asks for the vulnerable version to be
         | imported: https://www.mail-archive.com/search?l=debian-bugs-
         | dist@lists...
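         | 
         | Incidentally, that ifunc machinery leaves a visible trace in
         | the built library, so you can at least see whether your liblzma
         | carries it at all (a sketch; the library path and the exact
         | readelf output differ per distro and architecture):
         | 
         |     # ifunc resolvers show up as IRELATIVE relocations and
         |     # IFUNC-typed symbols on x86-64 glibc systems
         |     readelf -W -r /usr/lib/liblzma.so.5 | grep -c IRELATIVE
         |     readelf -W --dyn-syms /usr/lib/liblzma.so.5 | grep -i ifunc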
        
           | zb3 wrote:
           | Also I saw this hans jansen user pushing for merging the
           | 5.6.1 update in debian: https://bugs.debian.org/cgi-
           | bin/bugreport.cgi?bug=1067708
        
             | hxelk1 wrote:
             | From: krygorin4545 <krygorin4545@proton.me>
             | To: "1067708@bugs.debian.org" <1067708@bugs.debian.org>
             | Cc: "sebastian@breakpoint.cc" <sebastian@breakpoint.cc>,
             |     "bage@debian.org" <bage@debian.org>
             | Subject: Re: RFS: xz-utils/5.6.1-0.1 [NMU] -- XZ-format
             |     compression utilities
             | Date: Tue, 26 Mar 2024 19:27:47 +0000
             | 
             | Also seeing this bug. Extra valgrind output causes some
             | failed tests for me. Looks like the new version will
             | resolve it. Would like this new version so I can continue
             | work.
             | 
             | --
             | 
             | Wow.
             | 
             | (Edited for clarity.)
        
           | zb3 wrote:
           | Also I see this PR: https://github.com/tukaani-
           | project/xz/pull/64
        
           | bluecheese33 wrote:
           | > because the ifunc code was breaking with all sorts of build
           | options and obviously caused many problems with various
           | sanitizers
           | 
           | for example, https://github.com/google/oss-fuzz/pull/10667
        
           | bed99 wrote:
           | 1 week ago "Hans Jansen" user "hjansen" was created in debian
           | and opened 8 PRs including the upgrade to 5.6.1 to xz-utils
           | 
           | From https://salsa.debian.org/users/hjansen/activity
           | 
           | Author: Hans Jansen <hansjansen162@outlook.com>
           | 
           | - [Debian Games / empire](https://salsa.debian.org/games-
           | team/empire): opened merge request "!2 New upstream version
           | 1.17" - March 17, 2024
           | 
           | - [Debian Games / empire](https://salsa.debian.org/games-
           | team/empire): opened merge request "!1 Update to upstream
           | 1.17" - March 17, 2024
           | 
           | - [Debian Games / libretro / libretro-core-
           | info](https://salsa.debian.org/games-team/libretro/libretro-
           | core-i...): opened merge request "!2 New upstream version
           | 1.17.0" - March 17, 2024
           | 
           | - [Debian Games / libretro / libretro-core-
           | info](https://salsa.debian.org/games-team/libretro/libretro-
           | core-i...): opened merge request "!1 Update to upstream
           | 1.17.0" - March 17, 2024
           | 
           | - [Debian Games / endless-
           | sky](https://salsa.debian.org/games-team/endless-sky): opened
           | merge request "!6 Update upstream branch to 0.10.6" - March
           | 17, 2024
           | 
           | - [Debian Games / endless-
           | sky](https://salsa.debian.org/games-team/endless-sky): opened
           | merge request "!5 Update to upstream 0.10.6" - March 17, 2024
           | 
           | - [Debian / Xz Utils](https://salsa.debian.org/debian/xz-
           | utils): opened merge request "!1 Update to upstream 5.6.1" -
           | March 17, 2024
        
             | bombcar wrote:
              | That looks exactly like what you'd want to see to disguise
              | the actual request: a number of pointless upstream updates
              | in things that are mostly ignored, and then the one you
              | want.
        
               | bed99 wrote:
               | agree
        
             | detistea wrote:
             | glad I didn't merge it ...
        
           | amluto wrote:
           | Wow, what a big pile of infrastructure for a non-
           | optimization.
           | 
           | An internal call via ifunc is not magic -- it's just a call
           | via the GOT or PLT, which boils down to function pointers. An
           | internal call through a hidden visibility function pointer
           | (the right way to do this) is also a function pointer.
           | 
           | The _even better_ solution is a plain old if statement, which
           | implements the very very fancy "devirtualization"
           | optimization, and the result will be effectively predicted on
           | most CPUs and is not subject to the whole pile of issues that
           | retpolines are needed to work around.
        
           | snvzz wrote:
           | >Hans Jansen and Jia Tan
           | 
           | Are they really two people conspiring?
           | 
           | Unless proven otherwise, it is safe to assume one is just an
           | alias of the other.
        
             | EasyMark wrote:
             | or possibly just one person acting as two, or a group of
             | people?
        
           | formerly_proven wrote:
         | Make it two years.
         | 
         | Jia Tan getting maintainer access looks like it was almost
         | certainly part of the operation. Lasse Collin mentioned
         | multiple times how Jia has helped off-list, and to me it seems
         | like Jia befriended Lasse as well (see how Lasse talks about
         | them in 2023).
           | 
           | Also the pattern of astroturfing dates back to 2022. See for
           | example this thread where Jia, who has helped at this point
           | for a few weeks, posts a patch, and a
           | <name><number>@protonmail (jigarkumar17) user pops up and
           | then bumps the thread three times(!) lamenting the slowness
           | of the project and pushing for Jia to get commit access:
           | https://www.mail-archive.com/xz-
           | devel@tukaani.org/msg00553.h...
           | 
           | Naturally, like in the other instances of this happening,
           | this user only appears once on the internet.
        
           | tootie wrote:
           | Does anybody know anything about Jia Tan? Is it likely just a
         | made-up persona? Or is this a well-known person?
        
         | jonathanspw wrote:
         | Yesterday sure was fun wasn't it :p Thanks for all your
         | help/working with me on getting this cleaned up in Fedora.
        
           | w4ffl35 wrote:
           | Is it normal that when I try to uninstall xz it is trying to
           | install lzma?
        
             | inetknght wrote:
             | It means that `xz` was depended upon by something that
             | depends on eg "xz OR lzma"
        
           | speleding wrote:
           | PSA: I just noticed homebrew installed the compromised
           | version on my Mac as a dependency of some other package. You
           | may want to check this to see what version you get:
           | xz --version
           | 
           | Homebrew has already taken action, a `brew upgrade` will
           | downgrade back to the last known good version.
        
             | mthoms wrote:
             | Thanks for this. I just ran brew upgrade and the result was
             | as you described:
             | 
             |     xz 5.6.1 -> 5.4.6
        
             | jonahx wrote:
             | I also had a homebrew installed affected version.
             | 
             | I understand it's unlikely, but is there anything I can do
             | to check if the backdoor was used? Also any other steps I
             | should take after "brew upgrade"?
        
               | tomputer wrote:
               | Quoting[1] from Homebrew on Github:
               | 
               | >> Looks like that Homebrew users (both macOS and Linux,
               | both Intel and ARM) are unlikely affected?
               | 
               | > Correct. Though we do not appear to be affected, this
               | revert was done out of an abundance of caution.
               | 
               | [1] https://github.com/Homebrew/homebrew-core/pull/167512
        
             | cozzyd wrote:
             | Is it actually compromised on homebrew though? I guess we
             | can't be sure but it seemed to be checking if it was being
             | packaged as .deb or .rpm?
        
             | erhaetherth wrote:
             | Is 5.2.2 safe? Just 5.6.0 and 5.6.1 are bad?
        
         | pfortuny wrote:
         | Unfortunately, this is how _good_ bad actors work: with a very
         | long-term point of view. There is no "harmless" project any
         | more.
        
           | ametrau wrote:
           | Probably a state actor. You can look far into the future when
           | you're working for the party.
        
             | DrewRWx wrote:
             | And that long term perspective could be used constructively
             | instead!
        
             | calvinmorrison wrote:
             | Which like, also wouldn't be totally weird if I found out
             | that the xz or whatever library maintainer worked for the
             | DoE as a researcher? I kind of expect governments to be
             | funding this stuff.
        
               | CanaryLayout wrote:
                | From what I read on Mastodon, the original maintainer
                | had a personal life breakdown, etc. Their interest in
                | staying on as primary maintainer is gone.
                | 
                | This is a very strong argument for FOSS to pick up the
                | good habit of ditching/un-mainlining projects that are
                | just sitting around waiting for state actors to
                | volunteer commits, and of stripping active projects of
                | this cruft as a dependency.
                | 
                | Who wants to keep maintaining a shitty compression
                | format? Someone who is dependency-hunting, it turns out.
                | 
                | Okay, so your pirate-torrent person needs liblzma.so.
                | Offer it in the scary/oldware section of the package
                | library that you need to hunt down the instructions to
                | turn on. Let the users see that it's marked as obsolete;
                | enterprises will see that it should go on the banlist.
        
               | soraminazuki wrote:
               | Um, what? This incident is turning into such a big deal
               | because xz is deeply ingrained as a core dependency in
               | the software ecosystem. It's not an obscure tool for
               | "pirates."
        
               | Bulat_Ziganshin wrote:
                | Collin worked on XZ and its predecessor for ~15 years.
                | It seems that he did that for free, at least in recent
                | times. Anyone will lose motivation to work for free over
                | that period of time.
                | 
                | At the same time, XZ became a cornerstone of major Linux
                | distributions, being a systemd dependency and loaded, in
                | particular, as part of sshd. What could go wrong?
                | 
                | In hindsight, the commercial idea of Red Hat, utilizing
                | the free work of thousands of developers working "just
                | for fun", turned out to be not so brilliant.
        
               | wannacboatmovie wrote:
                | On the contrary, this is a good example of why
                | 'vulnerable' OSS projects that have become critical
                | components, and which the original developer has
                | abandoned or lost interest in, should be turned over to
                | an entity like Red Hat that can assign a paid developer.
                | It's important to do this before some cloak-and-dagger
                | rando, who oh by the way happens to be a cryptography
                | and compression expert, steps out of the shadows to
                | offer friendly help.
               | 
               | A lot of comments in this thread seem to be missing the
               | forest for the trees: this was a multiyear long operation
               | that targeted a vulnerable developer of a heavily-used
               | project.
               | 
               | This was not the work of some lone wolf. The amount of
               | expertise needed and the amount of research and
               | coordination needed to execute this required hundreds of
               | man-hours. The culprits likely had a project manager....
               | 
                | Someone had to scope out OSS developers to find out who
                | was vulnerable (the xz maintainer had publicly disclosed
                | burnout/mental health issues); then the elaborate trap
                | was set.
                | 
                | The few usernames visible on GitHub are like a stubborn
                | weed that pops up in the yard... until you start pulling
                | on it you don't realize how extensive the roots beneath
                | the surface are.
               | 
               | The implied goal here was to add a backdoor into
               | production Debian and Red Hat EL. Something that would
               | take years to execute. This was NOT the work of one
               | person.
        
           | hangonhn wrote:
           | I imagine it might be easier to just compromise a weakly
           | protected account than to actually put in a two-year-long
           | effort with real contributions. If we mandated MFA for all
           | contributors to these really important projects, then we
           | could know with greater certainty whether it was really a
           | long con vs. a recently compromised account.
        
             | __s wrote:
             | This seems like a great way to invest in supporting open
             | source projects in the meantime, if these projects are
             | being used by these actors. Just have to maintain an
             | internal fork without the backdoors.
             | 
             | Maybe someone can disrupt the open source funding problem
             | by brokering exploit bounties /s
        
             | the8472 wrote:
             | github already mandates MFA for members of important
             | projects
        
               | rvense wrote:
               | Doesn't it mandate it for everyone? I don't use it
               | anymore and haven't logged in since forever, but I think
               | I got a series of e-mails that it was being made
               | mandatory.
        
               | ryukoposting wrote:
               | It will soon. I think I have to sort it out before April
               | 4. My passwords are already >20 random characters, so I
               | wasn't going to do it until they told me to.
        
               | pabs3 wrote:
               | If you are using pass to store those, check out pass-otp
               | and browserpass, since GitHub still allows TOTP for MFA.
               | pass-otp is based on oathtool, so you can do it more
               | manually too if you don't use pass.
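                | 
                | For example (a sketch; the base32 seed below is a
                | made-up placeholder, not a real secret):
                | 
                |     # generate the current 6-digit TOTP code from a seed
                |     oathtool --totp --base32 "JBSWY3DPEHPK3PXP"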
        
               | loeg wrote:
               | It mandates it for everyone. I'm locked out of Github
               | because fuck that.
        
               | illusive4080 wrote:
               | Why opposed to MFA? Source code is one of the most
               | important assets in our realm.
        
               | userbinator wrote:
               | Freedom is far more important.
        
               | illusive4080 wrote:
               | But you can use any totp authenticator. The protocol is
               | free and open.
        
               | userbinator wrote:
               | It's more to make the point that "no means no." An act of
               | protest.
               | 
               | (I have written a TOTP implementation myself. I do not
               | have a GH account, and likely never will.)
        
               | Dylan16807 wrote:
               | It's ridiculous to say "no means no" about not wanting to
               | use a password to get an account, right?
               | 
               | What makes TOTP different from a password in terms of use
               | or refusal?
        
               | ndriscoll wrote:
                | For one, browsers don't save the TOTP seed and autofill
                | it for you, making it much less user friendly than a
                | password in practice.
               | 
               | The main problem I have with MFA is that it gets used too
               | frequently for things that don't need that much
               | protection, which from my perspective is basically
               | anything other than making a transfer or trade in my
               | bank/brokerage. Just user-hostile requiring of manual
               | action, including finding my phone that I don't always
               | keep on me.
               | 
               | It's also often used as a way to justify collecting a
               | phone number, which I wouldn't even _have_ if not for
               | MFA.
        
               | saagarjha wrote:
               | Mine does. Yours doesn't?
        
               | jeromegv wrote:
               | We are talking of non-SMS MFA
        
               | creatonez wrote:
               | Should you have the freedom to put in a blank password,
               | too?
        
               | voidz wrote:
               | My password is a single 'a'. Nobody will ever guess that
               | one.
        
               | Freedom2 wrote:
               | They have the freedom to request whatever authentication
               | method they want.
        
               | hobobaggins wrote:
                | Most people will have to sync their passwords (generally
                | strong and unique, given that it's for GitHub) to the
                | same device where their MFA token is stored, rendering
                | it (almost) completely moot, but at a significantly
                | higher risk of permanent access loss (depending on what
                | they do with the reset codes, which, if compromised,
                | would _also_ make MFA moot; a cookie theft makes it all
                | moot as well).
                | 
                | The worst part is that people _think_ they're more
                | protected, when they're really not.
        
               | Dylan16807 wrote:
               | Bringing everyone up to the level of "strong and unique
               | password" sounds like a huge benefit. Even if your
               | "generally" is true, which I doubt, that leaves a lot of
               | gaps.
        
               | loeg wrote:
               | It's inconvenient, SMS 2FA is arguably security theater,
               | and redundant with a real password manager. Hopefully
               | Passkeys kills 2FA for most services.
        
               | Hawxy wrote:
               | FYI github already supports using passkeys as a combined
               | login/2FA source. I haven't used my 2FA codes for a while
               | now.
        
               | LtWorf wrote:
                | My source code is important to others but not to me. I
                | have backups, but 2FA is annoying to me.
                | 
                | It's very easy to permanently lose accounts when 2FA is
                | in use. If I lose my device my account is gone for good.
                | 
                | Tokens from GitHub never expire and can do everything
                | via the API without ever touching 2FA, so it's not that
                | secure.
        
               | Brian_K_White wrote:
               | "If I lose my device my account is gone for good."
               | 
               | Incorrect, unless you choose not to record your seeds
               | anywhere else, which is not a 2fa problem.
               | 
               | 2fa is in the end nothing more than a 2nd password that
               | just isn't sent over the wire when used.
               | 
               | You can store a totp seed exactly the same as a password,
               | in any form you want, anywhere you want, and use on a
               | brand new device at any time.
        
               | LtWorf wrote:
               | > Incorrect, unless you choose not to record your seeds
               | anywhere else, which is not a 2fa problem.
               | 
               | You know google authenticator app introduced a backup
               | feature less than 1 year ago right?
               | 
               | You know phones break all the time right?
        
               | Brian_K_White wrote:
                | You know google authenticator doesn't matter, right? You
                | know you could always copy your totp seeds, since day
                | one, regardless of which auth app or its features or
                | limits, right? You know that a broken device does not
                | matter at all, because you have other copies of your
                | seeds just like the passwords, right?
               | 
               | When I said they are just another password, I was neither
               | lying nor in error. I presume you can think of all the
               | infinite ways that you would keep copies of a password so
               | that when your phone or laptop with keepassxc on it
               | breaks, you still have other copies you can use. Well
               | when I say just like a password, that's what I mean. It's
               | just another secret you can keep anywhere, copy 50 times
               | in different password managers or encrypted files, print
               | on paper and stick in a safe, whatever.
               | 
                | Even if some particular auth app does not provide any
                | sort of manual export function (I think google auth did
                | have an export function even before the recent cloud
                | backup, but let's assume it didn't), you can still just
                | save the original number the first time you get it from
                | a qr code or a link. You just had to know that that's
                | what those qr codes are doing. They aren't single-use;
                | they are nothing more than a random secret which you can
                | keep and copy and re-use forever, exactly the same as a
                | password. You can copy that number into any password
                | manager or plain file or whatever you want, just like a
                | password, and then use it to set up the same totp on 20
                | different apps on 20 different devices, all working at
                | the same time, all generating valid totp codes at the
                | same time. Destroy them all, buy a new phone, retrieve
                | any one of your backup keepass files or printouts, enter
                | them into a fresh app on a fresh phone, and all your
                | totp is fully working again. You are no more locked out
                | than by having to reinstall a password manager app and
                | access some copy of your password db to regain the
                | ordinary passwords.
               | 
               | The only difference from a password is, the secret is not
               | sent over the wire when you use it, something derived
               | from it is.
               | 
               | Google authenticators particular built in cloud copy, or
               | lack of, doesn't matter at all, and frankly I would not
               | actually use that particular feature or that particular
               | app. There are lots of totp apps on all platforms and
               | they all work the same way, you enter the secret give it
               | a name like your bank or whatever, select which algorithm
               | (it's always the default, you never have to select
               | anything) and instantly the app starts generating valid
               | totp codes for that account the same as your lost device.
               | 
               | Aside from saving the actual seed, let's say you don't
               | have the original qr code any more (you didn't print it
               | or screenshot it or right-click save image?) there is yet
               | another emergency recovery which is the 10 or 12 recovery
               | passwords that every site gives you when you first set up
               | totp. You were told to keep those. They are special
               | single-use passwords that get you in without totp, but
               | each one can only be used once. So, you are a complete
               | space case and somehow don't have any other copies of
               | your seeds in any form, including not even simple
               | printouts or screenshots of the original qr code, STILL
               | no problem. You just burn one of your 12 single-use
               | emergency codes, log in, disable and re-enable totp on
               | that site, get a new qr code and a new set of emergency
               | codes. Your old totp seed and old emergency codes no
               | longer work, so throw those out. This time, not only keep
               | the emergency codes, also keep the qr code, or more
               | practical, just keep the seed value in the qr code. It's
               | right there in the url in the qr code. Sometimes they
               | even display the seed value itself in plain text so that
               | you can cut & paste it somewhere, like into a field in
               | keepass etc.
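               | 
               | To illustrate, the qr code is typically just an
               | otpauth:// link, with the secret sitting right in the
               | query string. A sketch (the account name and secret
               | below are made-up examples):
               | 
               |     # Pulling the seed out of an otpauth:// URI.
               |     from urllib.parse import urlparse, parse_qs
               | 
               |     uri = ("otpauth://totp/ExampleBank:alice"
               |            "?secret=JBSWY3DPEHPK3PXP&issuer=ExampleBank")
               |     secret = parse_qs(urlparse(uri).query)["secret"][0]
               |     print(secret)  # the reusable seed; store it anywhere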
               | 
               | In fact keepass apps on all platforms also will not only
               | store the seed value but display the current totp for it
               | just like a totp app does. But a totp app is more
               | convenient.
               | 
               | And for proper security, you technically shouldn't store
               | both the password and the totp seed for an account in the
               | same place, so that if someone gains access to one, they
               | don't gain access to both. That's inconvenient but has to
               | be said just for full correctness.
               | 
               | I think most sites do a completely terrible job of
               | conveying just what totp is when you're setting it up.
               | They tell you to scan a qr code but they kind of hide
               | what that actually is. They DO all explain about the
               | emergency codes but really those emergency codes are kind
               | of stupid. If you can preserve a copy of the emergency
               | codes, then you can just as easily preserve a copy of the
               | seed value itself exactly the same way, and then, what's
               | the point of a handful of single-use emergency passwords
               | when you can just have your normal fully functional totp
               | seed?
               | 
               | Maybe one use for the emergency passwords is you could
               | give them to different loved ones instead of your actual
               | seed value?
               | 
               | Anyway if they just explained how totp basically works,
               | and told you to keep your seed value instead of some
               | weird emergency passwords, you wouldn't be screwed when a
               | device breaks, and you would know it and not be worried
               | about it.
               | 
               | Now, if, because of that crappy way sites obscure the
               | process, you currently don't have your seeds in any re-
               | usable form, and also don't have your emergency codes,
               | well then you will be F'd when your phone breaks.
               | 
               | But that is fixable. Right now while it works you can log
               | in to each totp-enabled account, and disable & reenable
               | totp to generate new seeds, and take copies of them this
               | time. Set them up on some other device just to see that
               | they work. Then you will no longer have to worry about
               | that.
        
               | LtWorf wrote:
               | > since day one
               | 
               | But if you forgot to do it on day one, you can't do it on
               | day two because there is no way of getting them out other
               | than rooting the phone.
               | 
               | Given that your premise was wrong, I won't bother to read
               | that novel you wrote. I'll just assume it's all derived
               | from the wrong premise.
        
               | jeromegv wrote:
               | There have been other apps doing the same as Google
               | Authenticator for 10 years, and they didn't require you
               | not to lose your phone.
        
               | LtWorf wrote:
               | And why would I invest the time to figure this all out
               | when the only ones being advantaged are others?
        
               | jjav wrote:
               | > Why opposed to MFA? Source code is one of the most
               | important assets in our realm.
               | 
               | Because if you don't use weak passwords MFA doesn't add
               | value. I do recommend MFA for most people because for
               | most people their password is the name of their dog
               | (which I can look up on social media) followed by "1!" to
               | satisfy the silly number and special character rules. So
               | yes please use MFA.
               | 
               | But if your passwords (like mine) are 128+ bits out of
               | /dev/random, MFA isn't adding value.
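               | 
               | (For concreteness, a minimal sketch of what I mean;
               | token_urlsafe(16) draws 16 bytes, i.e. 128 bits, from
               | the OS CSPRNG:)
               | 
               |     # One ~128-bit random password per call.
               |     import secrets
               |     print(secrets.token_urlsafe(16))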
        
               | samatman wrote:
               | The slogan is "something you know and something you
               | have", right?
               | 
               | I don't have strong opinions about making it mandatory,
               | but I turned on 2FA for all accounts of importance years
               | ago. I use a password manager, which means everything I
               | "know" could conceivably get popped with one exploit.
               | 
               | It's not that much friction to pull out (or find) my
               | phone and authenticate. It only gets annoying when I
               | switch phones, but I have a habit of only doing that
               | every four years or so.
               | 
               | You sound like you know what you're doing, that's fine,
               | but I don't think it's true that MFA doesn't add security
               | on average.
        
               | jjav wrote:
               | > It only gets annoying when I switch phones
               | 
               | Right. I don't ever want to tie login to a phone because
               | phones are pretty disposable.
               | 
               | > I don't think it's true that MFA doesn't add security
               | on average
               | 
               | You're right! On average it's better, because most people
               | have bad passwords and/or reuse them in more than one
               | place. So yes MFA is better.
               | 
               | But if your password is already impossible to guess (as
               | 128+ random bits are) then tacking on a few more bytes of
               | entropy (the TOTP seed) doesn't do much.
        
               | satokema wrote:
               | Those few bits are the difference between a keylogged
               | password holder waltzing in and an automated monitor
               | noticing that someone is failing the token check and
               | locking the account before any damage occurs.
        
               | StillBored wrote:
               | I think you're missing the parent's point: both are just
               | preshared keys. One has some additional fuzz around it so
               | that the user in theory isn't themselves typing the same
               | second key in all the time, but much of that security is
               | in keeping the second secret in a little keychain device
               | that cannot itself leak the secret. Once people put the
               | seeds in their password managers/phones/etc it's just more
               | data to steal.
               | 
               | Plus, the server/provider side remains a huge weak point
               | too. And the effort of enrolling/giving the user the
               | initial seed is suspect.
               | 
               | This is why the FIDO/hardware passkeys/etc are so much
               | better: it's basically hardware-enforced two-way
               | public key auth, and done correctly there isn't any way to
               | leak the private keys and it's hard as hell to MITM.
               | Which is why loss of the hw is so catastrophic. Most
               | every other MFA scheme is just a bit of extra theater.
        
               | jjav wrote:
               | > both are just preshared keys
               | 
               | Exactly, that's it. Two parties have a shared secret of,
               | say 16 bytes total, upon which authentication depends.
               | 
               | They could have a one byte long password but a 15 byte
               | long shared secret used to compute the MFA code. The
               | password is useless but the MFA seed is unguessable.
               | Maybe have no password at all (zero length) and 16 byte
               | seed. Or go the other way and a 16 byte password and zero
               | seed. In terms of an attacker brute forcing the keyspace,
               | it's always the same, 16 bytes.
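               | 
               | A toy sketch of that arithmetic (the 16-byte budget is
               | just an assumed example): however the bytes are split
               | between "password" and "seed", the brute-force space is
               | the same.
               | 
               |     # Splitting 16 secret bytes never changes the keyspace.
               |     for pw_bytes in (0, 1, 8, 15, 16):
               |         seed_bytes = 16 - pw_bytes
               |         total = 2**(8*pw_bytes) * 2**(8*seed_bytes)
               |         print(pw_bytes, seed_bytes, total == 2**128)  # True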
               | 
               | We're basically saying (and as a generalization, this is
               | true) that the password part is useless since people will
               | just keep using their pet's name, so let's put the
               | strength on the seed side. Fair enough, that's true.
               | 
               | But if you're willing to use a strong unique password
               | then there's no real need.
               | 
               | (As to keyloggers, that's true, but not very interesting.
               | If my machine is already compromised to the level that it
               | has malicious code running logging all my input, it can
               | steal both the passwords and the TOTP seeds and all the
               | website content and filesystem content and so on. Game's
               | over already.)
               | 
               | > This is why the FIDO/hardware passkeys/etc are so much
               | better
               | 
               | Technically that's true. But in practice, we now have a
               | few megacorporations trying to own your authentication
               | flow in a way that introduces denial of service
               | possibilities. I must control my authentication access,
               | not cede control of it to a faceless corporation with no
               | reachable support. I'd rather go back to using
               | password123 everywhere.
        
               | admax88qqq wrote:
               | Sure it is. If your system ever gets keylogged and
               | somebody gets your password, you are compromised.
               | 
               | With MFA, even if somebody has your password, if they
               | don't have your physical authenticator too then you're
               | relatively safe.
        
               | ndriscoll wrote:
               | If you have a keylogger, they can also just take your
               | session cookie/auth tokens or run arbitrary commands
               | while you're logged in. MFA does nothing if you're
               | logging into a service on a compromised device.
        
               | heyoni wrote:
               | Session keys expire and can be scoped to do anything
               | except reset password, export data, etc. - that's why
               | you'll sometimes be asked to log in again on some
               | websites.
        
               | ndriscoll wrote:
               | If you're on a service on a compromised device, you have
               | effectively logged into a phishing site. They can pop up
               | that same re-login page on you to authorize whatever
               | action they're doing behind the scenes whenever they need
               | to. They can pretend to be acting wonky with a "your
               | session expired, log in again" page, etc.
               | 
               | This is part of why MFA just to log in is a bad idea.
               | It's much more sensible if you use it _only_ for
               | sensitive actions (e.g. changing password, authorizing a
               | large transaction, etc.) that the user almost never does.
               | But you need _everyone_ to treat it that way, or users
               | will think it's just normal to be asked to approve all
               | the time.
        
               | snnn wrote:
               | Some USB keys have an LCD screen on them to prevent that.
               | You can compromise the computer that the key was inserted
               | into, but you cannot compromise the key. If the message
               | shown on your computer screen differs from the message on
               | the key, you reject the auth request.
        
               | mr_mitm wrote:
               | Keyloggers can be physically attached to your keyboard.
               | There could also be a vulnerability in the encryption of
               | wireless keyboards. Certificate-based MFA is also phishing
               | resistant, unlike long, random, unique passwords.
               | 
               | There are plenty of scenarios where MFA is more secure
               | than just a strong password.
        
               | RaisingSpear wrote:
               | But password managers typically don't send keyboard
               | commands to fill in a password, so a physical device
               | would be useless.
               | 
               | > There are plenty of scenarios where MFA is more secure
               | than just a strong password.
               | 
               | And how realistic are they? Or are they just highly
               | specific scenarios where all the stars must align, and
               | are almost never going to happen?
        
               | mr_mitm wrote:
               | I don't think phishing is such an obscure scenario.
               | 
               | The point is also that you as an individual can make
               | choices and assess risk. As a large service provider, you
               | will always have people who reuse passwords, store them
               | unencrypted, fall for phishing, etc. There is a
               | percentage of users that will get their account
               | compromised because of bad password handling which will
               | cost you, and by enforcing MFA you can decrease that
               | percentage, and if you mandate yubikeys or something
               | similar the percentage will go to zero.
        
               | RaisingSpear wrote:
               | > I don't think phishing is such an obscure scenario.
               | 
               | For a typical person, maybe, but for a tech-minded
               | individual who understands security, data entropy and
               | what /dev/random is?
               | 
               | And I don't see how MFA stops phishing - it can get you
               | to enter a token like it can get you to enter a password.
               | 
               | I'm also looking at this from the perspective of an
               | individual, not a service provider, so the activities of
               | the greater percentage of users is of little interest to
               | me.
        
               | mr_mitm wrote:
               | > And I don't see how MFA stops phishing - it can get you
               | to enter a token like it can get you to enter a password.
               | 
               | That's why I qualified it with "certificate-based". The
               | private key never leaves the device, ideally a yubikey-
               | type device.
        
               | RaisingSpear wrote:
               | > That's why I qualified it with "certificate-based". The
               | private key never leaves the device
               | 
               | Except that phishing doesn't require the private key - it
               | just needs to echo back the generated token. And even if
               | that isn't possible, what stops it from obtaining the
               | session token that's sent back?
        
               | mr_mitm wrote:
               | The phisher will not receive a valid token, though,
               | because you sign something that contains the domain you
               | are authenticating to.
        
               | RaisingSpear wrote:
               | The phisher can just pass on whatever you sign, and
               | capture the token the server sends back.
               | 
               | Sure, you can probably come up with some non-HTTPS scheme
               | that can address this, but I don't see any site actually
               | doing this, so you're back to the unrealistic scenario.
        
               | mr_mitm wrote:
               | No, because the phisher will get a token that is
               | designated for, say, mircos0ft.com which microsoft.com
               | will not accept. It is signed with the user's private key
               | and the attacker cannot forge a signature without it.
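               | 
               | A simplified, WebAuthn-style sketch of the check on the
               | relying-party side (the JSON is a made-up example; real
               | verification covers more fields plus the signature
               | itself):
               | 
               |     # The browser records the real origin inside the
               |     # signed clientDataJSON, so an assertion made on a
               |     # lookalike domain is rejected before the signature
               |     # is even checked.
               |     import json
               | 
               |     client_data = json.loads(
               |         '{"type": "webauthn.get",'
               |         ' "challenge": "abc123",'
               |         ' "origin": "https://mircos0ft.com"}'
               |     )
               |     if client_data["origin"] != "https://microsoft.com":
               |         raise ValueError("wrong origin, reject assertion")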
        
               | smw wrote:
               | Doesn't work for FIDO-based tokens, they auth the site as
               | well, so won't send anything to phishing site.
        
               | ndriscoll wrote:
               | These scenarios are getting into some Mission Impossible
               | level threats.
               | 
               | Most people use their phones most of the time now,
               | meaning the MFA device is the same device they're using.
               | 
               | Of the people who aren't using a phone, how many are
               | using a laptop with a built in keyboard? It's pretty
               | obvious if you have a USB dongle hanging off your laptop.
               | 
               | If you're using a desktop, it's going to be in a
               | relatively secure environment. Bluetooth probably doesn't
               | even reach outside. No one's breaking into my house to
               | plant a keylogger. And a wireless keyboard seems kind of
               | niche for a desktop. It's not going to move, so you're
               | just introducing latency, dropouts, and batteries into a
               | place where they're not needed.
               | 
               | Long, random, unique passwords are phishing resistant. I
               | don't know my passwords to most sites. My web browser
               | generates and stores them, and only uses them if it's on
               | the right site. This has been built in functionality for
               | years, and ironically it's sites like _banks_ that are
               | most likely to disable auto fill and require weak, manual
               | passwords.
        
               | mr_mitm wrote:
               | I mean, both can be true at the same time. I have to
               | admit that I only use MFA when I'm forced to, because I
               | also believe my strong passwords are good enough. Yet I
               | can still acknowledge that MFA improves security further
               | and in particular I can see why certain services make it
               | a requirement, because they don't control how their users
               | choose and use their passwords and any user compromise is
               | associated with a real cost, either for them like in the
               | case of credit card companies or banks, or a cost for
               | society, like PyPI, Github, etc.
        
               | martinflack wrote:
               | These days I wonder about all the cameras in a modern
               | environment and "keylogging" from another device filming
               | the user typing.
        
               | throwaway2990 wrote:
               | Haha yes they do. Everyone stores their 2fa in 1Password
               | so once that's stolen by a keylogger they're fucked.
        
               | 12_throw_away wrote:
               | > But if your (like my) passwords are 128+bits out of
               | /dev/random, MFA isn't adding value.
               | 
               | No. A second factor of authentication is completely
               | orthogonal to password complexity.
        
               | White_Wolf wrote:
               | Your password is useless when it comes to hardware
               | keyloggers. We run yearly tests to see if people check
               | for "extra hardware". Needles to say we have a very high
               | failure rate.
               | 
               | It's hard to get a software keylogger installed on a
               | corp. machine. It's easy to get physical access to the
               | office or even their homes and install keyloggers all
               | over the place and download the data via BT.
        
               | jjav wrote:
               | > Your password is useless when it comes to hardware
               | keyloggers.
               | 
               | You are of course correct.
               | 
               | This is where threat modeling comes in. To really say if
               | something is more secure or less secure or a wash, threat
               | modeling needs to be done, carefully considering which
               | threats you want to cover and not cover.
               | 
               | In this thread I'm talking from the perspective of an
               | average individual with a personal machine and who is not
               | interesting enough to be targeted by corporate espionage
               | or worse.
               | 
               | Thus, the threat of operatives breaking into my house and
               | installing hardware keyloggers on my machines is not part
               | of my threat model. I don't care about that at all, for
               | my personal use.
               | 
               | For sensitive company machines or known CxOs and such,
               | yes, but that's a whole different discussion and threat
               | model exercise.
        
               | rst wrote:
               | Which helps with some kinds of threats, but not all. It
               | keeps someone from pretending to be the maintainer -- but
               | if an actual maintainer is compromised, coerced, or just
               | bad from the start and biding their time, they can still
               | do whatever they want with full access rights.
        
               | the8472 wrote:
               | You probably should have replied that to the GP, not me.
               | I only clarified that what they were suggesting already
               | is the case.
        
             | guinea-unicorn wrote:
             | I find it funny how MFA is treated as if it would make
             | account takeover suddenly impossible. It's just a bit more
             | work, isn't it? And a big loss in convenience.
             | 
             | I'd much rather see passwords entirely replaced by key-
             | based authentication. _That_ would improve security. Adding
             | 2FA to my password is just patching a fundamentally broken
             | system.
        
               | hangonhn wrote:
               | Yeah, someone replied to one of my comments about adding
               | MFA that an attacker can get around all that simply by
               | buying the account from the author. I was way too
               | narrowly focused on the technical aspects and was
               | completely blind to other avenues like social
               | engineering, etc.
               | 
               | All very fair points.
        
               | ryukoposting wrote:
               | Customer service at one of my banks has an official
               | policy of sending me a verification code via email that I
               | then read to them over the phone, and that's not even
               | close to the most "wrong" 2FA implementation I've ever
               | seen. Somehow that institution knows what a YubiKey is,
               | but several major banks don't.
        
               | Liquix wrote:
               | Financial institutions are very slow to adopt new tech.
               | Especially tech that will inevitably cost $$$ in support
               | hours when users start locking themselves out of their
               | accounts. There is little to no advantage to being the
               | first bank to implement YubiKey 2FA. To a risk-averse
               | org, the non-zero chance of a botched rollout or
               | displeased customers outweighs any potential benefit.
        
               | monksy wrote:
               | They're pretty terrible when they do.
               | 
               | For the longest time the max password size was 8
               | characters and the CSR knew what your password was.
               | 
               | Heck, I've had Chase security tell me they'd call me
               | back... dude, that's exactly how people get compromised.
        
               | biglost wrote:
               | A friend's bank, hopefully not the one I use, only allows
               | a password of 6 digits. Yes, you read that right, 6
               | fucking digits to log in. I gave him the advice to run
               | away from that shitty bank.
        
               | foepys wrote:
               | Did this bank start out as a "telephone bank"? One of the
               | largest German consumer banks still does this because
               | they were the first "direct bank" without locations and
               | typing in digits on the telephone pad was the most secure
               | way of authenticating without telling the "bank teller"
               | your password. So it was actually a good security measure
               | but it is apparently too complicated to update their
               | backend to modern standards.
               | 
               | They do require 2FA, though.
        
               | asm0dey wrote:
               | DiBa?
        
               | doubled112 wrote:
               | Exactly. An 8-character password in the 2010s as the only
               | factor was fine. It was only my money we're talking
               | about.
               | 
               | Now I have to wait for an SMS. Great...
        
               | throwaway2990 wrote:
               | SMS is fine in most countries. It's just that America is
               | dumb and allows number transfers to anyone.
        
               | KiwiJohnno wrote:
               | It's also been a problem in Australia. Optus (the 2nd
               | biggest telco) used to allow number porting or activating
               | a SIM against an existing account with a bare minimum of
               | detail - like a name, address and date of birth. If you
               | had those details for a target you could clone their SIM
               | and crack any SMS-based MFA.
        
               | chris_wot wrote:
               | Is that still allowed now?
        
               | throwaway2990 wrote:
               | Apparently changed in 2022 to protect consumers.
        
               | eru wrote:
               | Number transfers in other countries are also mostly just
               | a question of a bit of social engineering.
        
               | throwaway2990 wrote:
               | No. Most require some form of identification or matching
               | identification between mobile providers.
        
               | macrolime wrote:
               | I recently had an issue with a SIM card and went to a
               | phone store that gave me a new one and disabled the old.
               | They're supposed to ask for ID, but often don't bother.
               | This is true for pretty much every country. Phone 2FA is
               | simply completely insecure.
        
               | StillBored wrote:
               | SMS is not E2E encrypted, so for all intents and purposes
               | it's just a plain-text message that can be (and has been)
               | snooped. Might as well just send plaintext email.
        
               | hwertz wrote:
               | Nope, I read The Register (UK based) and they've had
               | scandals from celebrities having their confidential SMS
               | messages leaked; SMS spoofing; I think they even have SIM
               | cloning going on every now and then in the UK and some
               | European countries. (since The Register is a tech site,
               | my recollection is some carriers took technical measures
               | to prevent these issues while quite a few didn't.)
               | 
               | I don't think it's a thing that happens that often in the
               | UK etc., but it doesn't happen that frequently in the US
               | either. It's just a thing that can potentially happen.
        
               | throwaway2990 wrote:
               | The UK has plenty of other problems to solve first with
               | identity theft.
        
               | croemer wrote:
               | ...where identity is proved by utility bills instead of
               | government issued id
        
               | doubled112 wrote:
               | How else do you prove you live some place?
               | 
               | "I pay the bills there" is barely better than nothing,
               | though. We do this in Canada too. It is what I used for
               | one driver's license renewal.
        
               | vladvasiliu wrote:
               | I don't know about other parts, but here in France SMS is
               | a shitshow. I regularly fail to receive them even though
               | I know I have good reception.
               | 
               | This happened the other day while I was on a conference
               | call with perfect audio and video using my phone's mobile
               | data.
               | 
               | A few weeks back, a shop which sends out an SMS to inform
               | you the job's done told me this is usually hit and miss,
               | when I complained about not hearing from them.
        
               | high_priest wrote:
               | Many single-radio phones can either receive SMS/calls or
               | transmit data. My relative owns such a device and cannot
               | use the internet during calls or receive/make calls during
               | streaming like YT video playback.
        
               | vladvasiliu wrote:
               | In my case this is an iPhone 14 pro. I'm pretty sure I
               | can receive calls while using data, since I often look
               | things up on the internet while talking to my parents.
               | 
               | And, by the way, the SMS in question never arrived. I
               | don't know if there's some kind of timeout happening, and
               | the network gives up after a while. Some 15 years ago I
               | remember getting texts after an hour or two if I only had
               | spotty reception. This may of course have changed in the
               | meantime, plus this is a different provider.
        
               | dewey wrote:
               | SS7 is a global issue, and so is social engineering to
               | get a number transferred or SIM card transferred.
               | 
               | https://hitcon.org/2015/CMT/download/day1-d-r0.pdf
        
               | grepfru_it wrote:
               | Uh, many banks provide MFA, and secure it with hardware
               | keys.
               | It's just that your level of assets doesn't warrant that
               | kind of protection.
               | 
               | Source: worked at all the major banks, all the wealthy
               | clients use hardware MFA
        
               | heyoni wrote:
               | They're a bank. If they can secure their portals with
               | hardware keys, at least allow customers to onboard their
               | own keys.
        
               | LtWorf wrote:
               | My bank gave me a hardware token to protect my 5kEUR
               | account.
               | 
               | Get better banks people :)
        
               | grepfru_it wrote:
               | I meant to say in the US. You know how backwards we are
               | here :)
        
               | wsc981 wrote:
               | The bank I used in The Netherlands provides an MFA device
               | as well. The device also requires an ATM card to generate
               | a random number.
               | 
               | This is the default for all their customers, wealthy or
               | not.
               | 
               | https://www.abnamro.nl/en/commercialbanking/internetbanki
               | ng/...
        
               | grepfru_it wrote:
               | I meant to say in the US :)
        
               | darkr wrote:
               | > There is little to no advantage to being the first bank
               | to implement YubiKey 2FA
               | 
               | Ideally they'd just implement passkeys (webauthn/fido).
               | More secure, and it works with iOS, android, 1password,
               | and yubikeys
        
               | krinchan wrote:
               | Just say BofA.
        
               | snnn wrote:
               | Not actually. Even if you enabled a passkey, you can
               | still log in to their phone app via SMS, so it is not
               | more secure. People who know how to do SMS attacks
               | certainly know how to install a mobile app. And BofA gave
               | their customers a false assurance.
        
               | eairy wrote:
               | I'm a security consultant in the financial industry. I've
               | literally been involved in the decision making on this at
               | a bank. Banks are very conservative, and behave like
               | insecure teenagers. They won't do anything bold, they all
               | just copy each other.
               | 
               | I pushed YubiKey as a solution and explained in detail
               | why SMS was an awful choice, but they went with SMS
               | anyway.
               | 
               | It mostly came down to cost. SMS was the cheapest option.
               | YubiKey would involve buying and sending the keys to
               | customers, and then having the pain/cost of supporting
               | them. There was also the feeling that YubiKeys were too
               | confusing for customers. The nail in the coffin was "SMS
               | is the standard solution in the industry" plus "If it's
               | good enough for VISA it's good enough for us".
        
               | heyoni wrote:
               | But why won't banks at least support customer provided
               | yubikeys?
        
               | eru wrote:
               | Because it's extra hassle?
        
               | mulmen wrote:
               | > But why won't banks at least support customer provided
               | yubikeys?
               | 
               | > support
               | 
               | You answered your own question.
        
               | heyoni wrote:
               | And that's the answer isn't it? Banks are behind the
               | times in terms of security and tech.
        
               | intelVISA wrote:
               | Banks loathe anything relating to, or adjacent to, good
               | SWE principles.
        
               | jalk wrote:
               | Bank of America supports user purchased TOTP devices.
               | 
               | https://www.bankofamerica.com/security-center/online-
               | mobile-...
        
               | rainbowzootsuit wrote:
               | Brokerage, not bank, but you can do Yubikey-only at
               | Vanguard.
               | 
               | https://www.bogleheads.org/forum/viewtopic.php?t=349826
        
               | mlrtime wrote:
               | The largest US crypto brokerages/exchanges support
               | yubikey.
               | 
               | Some will provide and require them for top customers to
               | ensure they are safe.
        
               | dalyons wrote:
               | I mean your employer wasn't wrong. Yubikeys ARE way too
               | confusing for the average user, way too easy to lose,
               | etc. Maybe have it as an option for power users, but they
               | were right that it would be a disastrous default.
        
               | ryukoposting wrote:
               | Interesting. I assumed a lot of client software for small
               | banks was vendored - I know that's the case for
               | brokerages. Makes it all the weirder that they all
               | imitate each other.
               | 
               | Here's the thing about SMS: your great aunt who doesn't
               | know what a JPEG is, knows what a text is. Ok, she might
               | not fully "get it" but she knows where to find a text
               | message in her phone. My tech-literate fiancee struggles
               | to get her YubiKey to work with her phone, and I've tried
               | it with no more luck than she's had. YubiKeys should be
               | _supported_ but they're miles away from being usable
               | enough to totally supplant other 2FA flows.
        
               | gonzo41 wrote:
               | Banks are in a tough spot. Remember, banks have you as a
               | customer, but they also have a 100-year-old person who
               | still wants to come to the branch in person as a customer.
               | Not everyone can grapple with the idea of a Yubikey, or
               | why their bank shouldn't be protecting their money like it
               | did in the past.
        
               | codedokode wrote:
               | The problem is that the bank will automatically enable
               | online access and SMS-confirmed transfers for that 100
               | year old person who doesn't even know how to use
               | Internet.
        
               | apitman wrote:
               | https://xkcd.com/538/
        
               | AgentME wrote:
               | Passkeys are being introduced right now in browsers and
               | popular sites as an MFA option, but I think the
               | intention is that they will grow and become the main
               | factor in the future.
        
               | riddley wrote:
               | From what I've seen they're all controlled by huge tech
               | companies. Hard pass.
        
               | doubled112 wrote:
               | I liked the username, password and TOTP combination. I
               | could choose my own password manager, and TOTP generator
               | app, based on my preferences.
               | 
               | I have a feeling this won't hold true forever. Microsoft
               | has their own authenticator now, Steam has another one,
               | Google has their "was this you?" built into the OS.
               | 
               | Monetization comes next? "View this ad before you login!
               | Pay 50c to stay logged in for longer?"
        
               | AgentME wrote:
               | Passkeys are an open standard with multiple
               | implementations. They represent the opposite of the trend
               | you're worried about there.
        
               | thayne wrote:
               | But the way it is designed, you can require a certain
               | provider, and you can bet at least some sites will start
               | requiring attestation from Google and/or Apple.
        
               | nmadden wrote:
               | Do they do attestation by default? I thought for Apple at
               | least that was only a feature for enterprise managed
               | devices (MDM). Attestation is also a registration-time
               | check, so doesn't necessarily constrain where the passkey
               | is synced to later on.
        
               | tux3 wrote:
               | MS Entra ID's (formerly Azure AD) FIDO2 implementation
               | only allows a select list of vendors. You need a
               | certification from
               | FIDO ($,$$$), you need to have an account that can upload
               | to the MDS metadata service, and you need to talk to MS
               | to see if they'll consider adding you to the list
               | 
               | It's not completely closed, but in practice no one on
               | that list is a small independent open source project,
               | those are all the kind of entrenched corporate security
               | companies you'd expect
        
               | endgame wrote:
               | Because that worked so well for OpenID. If you're lucky,
               | you have the choice of which BigTech account you can use.
        
               | wepple wrote:
               | TOTP has substantial enough security gaps to make it a
               | non-starter.
               | 
               | Maybe a pubkey system where you choose your own client
               | would be what you're looking for?
        
               | pabs3 wrote:
               | TLS Client Certs (aka mTLS) is an option for that, but
               | the browser UI stuff for it is terrible and getting
               | worse.
        
               | doubled112 wrote:
               | I couldn't imagine trying to train the general public to
               | use mTLS and deploy that system.
               | 
               | I'm not even sure it is difficult. Most people I've
               | talked to in tech don't even realize it is a possibility.
               | Certificates are "complicated" as they put it.
        
               | LoganDark wrote:
               | > Google has their "was this you?" built into the OS.
               | 
               | Not only that, but it's completely impossible to disable
               | or remove that functionality or even make TOTP the
               | primary option. Every single time I try to sign in,
               | Google prompts my phone first, giving me a useless
               | notification for later, and I have to manually click a
               | couple of buttons to say "no I am not getting up to grab
               | my phone and unlock it for this bullshit, let me enter my
               | TOTP code". Every single time.
        
               | SahAssar wrote:
               | Password managers are adding support (as in they control
               | the keys) and I've used my yubikeys as "passkeys" (with
               | the difference that I can't autofill the username).
               | 
               | It's a good spec. I wish more people who spread FUD about
               | it being a "tech-giant" only thing would instead focus on
               | the productive things like demanding proper import/export
               | between providers.
        
               | AgentME wrote:
               | I don't understand this criticism. What is being
               | controlled? Passkeys are an open standard that a browser
               | can implement with public key crypto.
        
               | ndriscoll wrote:
               | Don't passkeys give the service a signature to prove what
               | type of hardware device you're using? E.g. it provides a
               | way for the server to check whether you are using a
               | software implementation. It's not really open if it
               | essentially has a type of DRM built in.
        
               | LoganDark wrote:
               | You're thinking of hardware-backed attestation, which
               | provides a hardware root of trust. I believe passkeys are
               | just challenge-response (using public key cryptography).
               | You _could_ probably add some sort of root of trust (for
               | example, have the public key signed by the HSM that
               | generated it) but that would be entirely additional to
               | the passkey itself.
        
               | pabs3 wrote:
               | Passkeys do have the option of attestation, but the way
               | Apple at least do them means Apple users won't have
               | attestation, so most services won't require attestation.
        
               | pabs3 wrote:
               | They also require JavaScript to work unfortunately.
        
               | pabs3 wrote:
               | KeepassXC is working on supporting them natively in
               | software, so you would not need to trust big tech
               | companies, unless you are logging into a service that
               | requires attestation to be enabled.
        
               | fsckboy wrote:
               | > _I 'd much rather see passwords entirely replaced by
               | key-based authentication_
               | 
               | I've never understood how key-based systems are
               | considered better. I understand the encryption angle,
               | nobody is compromising that. But now I have a key I need
               | to personally shepherd? Where do I keep it, and my
               | backups, and what is the protection on those places? How
               | many local copies, how many offsite? And I still need a
               | password to access/use it, but with no recourse should I
               | lose or forget it. How am I supposed to remember that? It's
               | all just kicking the same cans down the same roads.
        
             | LtWorf wrote:
             | As I said recently in a talk I gave, 2FA as implemented by
             | PyPI or github is meaningless, when in fact all actions
             | are performed via tokens that never expire and that are
             | saved inside a .txt file on the disk.
        
               | CapstanRoller wrote:
               | Doesn't GH explicitly warn against using non-expiring
               | tokens?
        
               | yrro wrote:
               | I wonder what the point is, I don't remember GitHub
               | warning me that I've used the same SSH key for years...
        
               | LtWorf wrote:
               | And?
        
               | heyoni wrote:
               | Passwords have full scope of permission while session
               | tokens can be limited.
        
             | tw04 wrote:
             | For some random server, sure. For a state sponsored attack?
             | Having an embedded exploit you can use when convenient, or
             | better yet an unknown exploit affecting every linux-based
             | system connected to the internet that you can use when war
             | breaks out - that's invaluable.
        
               | eru wrote:
               | Yes, but even states have only finite resources, so even
               | for them compromising an account would be cheaper.
               | 
               | (But you are right that a sleeper would be affordable for
               | them.)
        
               | treflop wrote:
               | Having one or two people on payroll to occasionally add
               | commits to a project isn't exactly that expensive if it
               | pays off. There are ~29,000,000 US government employees
               | (federal, state and local). Other countries like China
               | and India have tens of millions of government employees.
        
               | danieldk wrote:
               | And they might as well be working on compromising other
               | projects using different handles.
        
               | eru wrote:
               | Not all government employees are equally capable.
        
               | EvmRoot wrote:
               | Even if they contract it out, at $350/hr (which is not a
               | price that would raise any flags), that is less than
               | $750k. Even with a fancy office, couple of laptops and 5'
               | monitors, this is less than a day at the bombing range or
               | a few minutes keeping an aircraft carrier operational.
               | 
               | Even a team of 10 people working on this - the code and
               | social aspect - would be a drop in the bucket for any
               | nation-state.
        
               | dgellow wrote:
               | It's a very cheap investment given the blast radius
        
             | EasyMark wrote:
             | They might not have been playing the long con. Maybe they
             | were approached by actors willing to pay them a lot of
             | money to try and slip in a back door. I'm sure a deep dive
             | into their code contributions would clear that up for
             | anyone familiar with the code base and with some free
             | time.
        
               | bobba27 wrote:
               | They did fuck up quite a bit though. They injected their
               | payload before they checked if oss-fuzz or valgrind or
               | ... would notice something wrong. That is sloppy and
               | should have been anticipated and addressed BEFORE
               | activating the code.
               | 
               | Anyway, this team got caught. What are the odds that this
               | was the only project / team / library the state actor
               | behind this decided to attack?
        
             | gamer191 wrote:
             | This PR from July 8 2023 is suspicious, so it was very
             | likely a long con: https://github.com/google/oss-
             | fuzz/pull/10667
        
             | beginner_ wrote:
             | Not MFA but git commit signing. I don't get why such core
             | low-level projects don't mandate it. MFA doesn't help if
             | a github access token is stolen, and I bet most of us use
             | such a token for pushing from an IDE.
             | 
             | Even if an access token to github is stolen, the sudden
             | lack of signed commits should raise red flags. github should
             | allow projects to force commit signing (if not already
             | possible).
             | 
             | Then the access token plus the signing key would need to be
             | stolen.
             | 
             | But of course all that doesn't help in the more likely
             | scenario here of a long con by a state-sponsored hacker,
             | or in case of duress (which in certain countries seems
             | pretty likely to happen).
        
             | bobba27 wrote:
             | This is a state-sponsored event. Pretty poorly executed,
             | though, as they were tweaking and modifying things in
             | their own and other tools after the fact.
             | 
             | As a state-sponsored project, what makes you think this is
             | their only project and that this is a big setback? I am
             | paranoid enough myself to think yesterday's meeting went
             | like: "Team #25 has failed/been found out. Reallocate
             | resources to the other 49 teams."
        
           | jnxx wrote:
           | And, Joey Hess has counted at least 750 commits to xz from
           | that handle.
           | 
           | https://hachyderm.io/@joeyh/112180715824680521
           | 
           | This does not look trust-inspiring. If the code is complex,
           | there could be many more exploits hiding.
        
             | ebfe1 wrote:
             | ClickHouse has a pretty good github_events dataset on its
             | playground that folks can use to do some research - some
             | info on the dataset: https://ghe.clickhouse.tech/
             | 
             | Example of what this user JiaT75 did so far:
             | 
             | https://play.clickhouse.com/play?user=play#U0VMRUNUICogRlJP
             | T...
             | 
             | pull requests mentioning xz, 5.6 without downgrade, cve
             | being mentioned in the last 60 days:
             | 
             | https://play.clickhouse.com/play?user=play#U0VMRUNUIGNyZWF0
             | Z...
        
             | waynesonfire wrote:
             | 750 commits... is xz able to send e-mails yet?
        
               | wyldfire wrote:
               | No. But if you have any centrifuges they will probably
               | exhibit inconsistent behavior.
        
               | shermantanktop wrote:
               | Maybe it's the centrifuges which will send the mail,
               | making the world's first uranium-enriching spam botnet.
        
               | lovasoa wrote:
               | Yes, it sends an email containing your private key on
               | installation.
        
               | soraminazuki wrote:
               | It's hardly surprising given that parsing is generally
               | considered to be a tricky problem. Plus, it's a 15-year-old
               | project that's widely used. 750 commits is nothing to
               | sneer at. No wonder the original maintainer got burned
               | out.
        
             | indigodaddy wrote:
             | Anyone have any level of confidence that for example EL7/8
             | would not be at risk even if more potential exploits are
             | at play?
        
               | soraminazuki wrote:
               | I wouldn't count on it. RedHat packages contain lots of
               | backported patches.
        
               | indigodaddy wrote:
               | Right, that notion was what was making me nervous
        
               | worthless-trash wrote:
               | At least in my group, the backports are hand picked to
               | solve specific problems, not just random wholesale
               | backports.
        
               | soraminazuki wrote:
               | Sure, they aren't backporting patches wholesale. But the
               | patches that did get backported, if any, need much more
               | scrutiny given the situation.
               | 
               | The thing about enterprise Linux distros is that they
               | have a long support period. Bug fixes and security
               | patches pile up.
               | 
               | Fortunately though, xz doesn't seem to have a lot of
               | backported patches.
               | 
               | https://git.centos.org/rpms/xz/blob/c8s/f/SPECS/xz.spec
               | https://git.centos.org/rpms/xz/blob/c7/f/SPECS/xz.spec
               | 
               | But take glibc for example. The amount of patches makes
               | my head spin.
               | 
               | https://git.centos.org/rpms/glibc/blob/c8s/f/SPECS/glibc.
               | spe...
        
               | bobba27 wrote:
               | For the duration of a major release, up until ~x.4 pretty
               | much everything from upstream gets backported with a
               | delay of 6-12 months, depending on how conservative to
               | change the rhel engineer maintaining this part of the
               | kernel is.
               | 
               | After ~x.4 things slow down and only "important" fixes
               | get backported but no new features.
               | 
               | After ~x.7 or so different processes and approvals come
               | into play and virtually nothing except high severity bugs
               | or something that "important customer" needs will be
               | backported.
        
               | worthless-trash wrote:
               | Sadly, the 8.6 and 9.2 kernels are the exception to
               | this, mainly as they are OpenShift Container Platform and
               | FedRAMP requirements.
               | 
               | The goal is that 8.6, 9.2 and 9.4 will have releases at
               | least every two weeks.
               | 
                | Maybe soon all Z streams will have a similar release
                | cadence to keep up with the security expectations, but
                | will keep expectations very similar to the ones you
                | outlined above.
        
               | rwmj wrote:
               | These changes were not backported to RHEL.
        
               | Zenul_Abidin wrote:
               | I don't think EL7 gets _minor_ version updates anymore
               | though
        
               | Gelob wrote:
               | RedHat blog says no versions of RHEL are affected.
               | 
               | https://www.redhat.com/en/blog/urgent-security-alert-
               | fedora-...
        
             | jnxx wrote:
             | If this is a conspiracy or a state-sponsored attack, they
             | might have gone specifically for embedded devices and the
             | linux kernel. Here archived from tukaani.org:
             | 
             | https://web.archive.org/web/20110831134700/http://tukaani.o
             | r...
             | 
             | > XZ Embedded is a relatively small decompressor for the XZ
             | format. It was developed with the Linux kernel in mind, but
             | is easily usable in other projects too.
             | 
             | > *Features*
             | 
             | > * Compiled code 8-20 KiB
             | 
             | > [...]
             | 
             | > * All the required memory is allocated at initialization
             | time.
             | 
             | This is targeted at embedded and real-time stuff. Could
             | even be part of boot loaders in things like buildroot or
              | RTEMS. And this means potentially millions of devices, from
              | smart toasters or toothbrushes to satellites and missiles,
              | most of which can't be updated with security fixes.
        
               | jnxx wrote:
               | Also, the XZ file format, which was designed by Lasse
                | Collin, was analyzed and seems to have a number of
               | problems in terms of reliability and security:
               | 
               | https://www.nongnu.org/lzip/xz_inadequate.html
        
               | masklinn wrote:
               | That is just technical disagreements and sour grapes by
               | someone involved in a competing format (Lzip).
               | 
               | There's no evidence Lasse did anything "wrong" beyond
               | looking for / accepting co-maintainers, something package
               | authors are taken to task for _not_ doing every time they
               | have life catching up or get fed up and can't  / won't
               | spend as much time on the thing.
        
               | jnxx wrote:
                | You appeal to trusting people and giving them the benefit
                | of the doubt, which is normally a good thing. But is this
                | appropriate here?
               | 
               | If this is a coordinated long-term effort by a state
               | entity, there is no reason to trust the supposed creator
               | of the project, especially given what it was targeting
               | from the start.
        
               | matsemann wrote:
                | > _But is this appropriate here?_
               | 
                | Yes. Nothing points to the inventor of the format, and its
                | maintainer for over a decade, having done anything with
                | the format to make it suspect. If so, the recent backdoor
                | wouldn't have been needed.
                | 
                | It's good to be skeptical, but don't drag people through
                | the mud without anything to back it up.
        
               | jnxx wrote:
               | If a project targets a high-profile, very security
               | sensitive project like the linux kernel from the start,
               | as the archived tukaani web site linked above shows, it
               | is justified to ask questions.
               | 
               | Also, the exploit shows a high effort, and a high level
               | of competence, and a very obvious willingness to play a
               | long game. These are not circumstances for applying
               | Hanlon's razor.
        
               | matsemann wrote:
               | Are you raising the same concerns and targeting
               | individuals behind all other sensitive projects? No,
               | because that would be insane.
               | 
                | It's weird to have one set of standards for a maintainer
                | active since 2009 or so, and different standards for
                | others.
               | This witch hunt is just post-hoc smartassery.
        
               | jnxx wrote:
               | Yes, I think if a project has backdoors and its old
               | maintainers are unable to review them, I am more critical
               | than with normal projects. As said, compression is used
               | everywhere and in embedded systems, it touches a lot of
               | critical stuff. And the project went straight for that
               | since the beginning.
               | 
                | And this is in part because I cannot even tell for sure
                | that he exists. If I had met him a few times in a
               | bar, I would be more inclined to believe he is not
               | involved.
        
               | saagarjha wrote:
               | > As said, compression is used everywhere and in embedded
               | systems, it touches a lot of critical stuff. And the
               | project went straight for that since the beginning.
               | 
               | Uh, because it's a compression library?
        
               | ljahier wrote:
               | From the project readme: > XZ Utils provide a general-
               | purpose data-compression library plus 21 command-line
               | tools.
               | 
               | https://git.tukaani.org/?p=xz.git;a=blob;f=README;h=ac812
               | ff1...
        
               | UncleEntity wrote:
               | I'm inclined to believe that whatever state actor was
               | involved sent a memo to their sockpuppets to do whatever
               | they can to deflect blame away.
               | 
               | See what I did there?
        
               | masklinn wrote:
                | > You appeal to trusting people and giving them the
                | benefit of the doubt, which is normally a good thing. But
                | is this appropriate here?
               | 
               | Yes.
               | 
               | Without evidence to the contrary there is no reason to
               | believe Lasse has been anything other than genuine so all
               | you're doing is insulting and slandering them out of
               | personal satisfaction.
               | 
               | And conspiratorial witch hunts are actively counter-
               | productive, through that mode of thinking it doesn't take
               | much imagination to figure out _you_ are part of the
               | conspiracy for instance.
        
               | jnxx wrote:
               | The thing is there are two possibilities:
               | 
               | 1. An important project has an overburdened / burnt out
               | maintainer, and that project is taken over by a persona
               | who appears to help kindly, but is part of a campaign of
               | a state actor.
               | 
               | 2. A state actor is involved in setting up such a project
               | from the start.
               | 
                | The first possibility not only means being an asshole to
                | the original maintainer, it is also more risky - that
                | original maintainer surely feels responsible for his
                | creation and could ring alarm bells. This is not unlikely
                | because he knows the code. And alarm bells are something
                | that state actors do not like.
               | 
               | The second possibility has the risk of the project not
               | being successful, which would mean a serious investment
               | in resources to fail. But that could be countered by
               | having competent people working on that. And in that
                | case, you don't have any real persons, just account names.
               | 
               | What happened here? I don't know.
        
               | Delk wrote:
               | I don't think state actors would care one bit about being
               | assholes. Organized crime black hats probably wouldn't
               | either.
               | 
               | The original maintainer has said in the past, before Jia
               | Tan's increased involvement and stepping up as a
               | maintainer, that he couldn't put as much into the project
               | due to mental health and other reasons [1]. Seems to fit
               | possibility number one rather well.
               | 
               | If you suspect that Lasse Collin was somehow in it from
               | the start, that'd mean the actor orchestrated the whole
               | thing about mental health and not being able to keep up
               | with sole maintainership. Why would they even do that if
               | they had the project under their control already?
               | 
               | Of course we don't know what's really been happening with
               | the project recently, or who's behind the backdoor and
               | how. But IMO creating suspicions about the original
               | maintainer's motives based _entirely_ on speculation is
               | also a bit assholey.
               | 
               | edit: [1] https://www.mail-archive.com/xz-
               | devel@tukaani.org/msg00567.h...
        
               | jnxx wrote:
               | > Why would they even do that
               | 
               | More layers of obfuscation. For example in order to be
               | able to attribute the backdoor to a different party.
               | 
                | It is of course also possible that Lasse Collin is a
                | nice real person who just has not been able to review
                | this. Maybe he is too ill, or has to care for an ill
               | spouse, or perhaps he is not even alive any more. Who
               | knows him as a person (not just an account name) and
               | knows how he is doing?
        
               | roenxi wrote:
               | That is kinda crazy - state actors don't need to care
               | about that level of obfuscation. From a state's
               | perspective the situation here would be simple - hire a
               | smart & patriotic programmer to spend ~1+ years
               | maintaining an important package, then they slip a
               | backdoor in. There isn't any point in making it more
               | complicated than that.
               | 
               | They don't even need plausible deniability, groups like
               | the NSA have been caught spying on everyone and it
               | doesn't hurt them all that much. The publicity isn't
                | ideal. But it only confirms what we already knew - turns
               | out the spies are spying on people! Who knew.
               | 
               | There are probably dozens if not hundreds of this sort of
               | attempt going on right now. I'd assume most don't get
                | caught, or that they go undetected for many years, which
                | is good enough. If you have government money in the
                | budget, it makes sense to go with large-volume, low-effort
               | attempts rather than try some sort of complex good-cop-
               | bad-cop routine.
        
               | saagarjha wrote:
               | You can imagine all the layers of obfuscation you want,
               | but it doesn't seem necessary to explain what is going on
               | here.
        
               | johnisgood wrote:
               | On https://www.mail-archive.com/xz-
               | devel@tukaani.org/msg00567.h..., Lasse Collin mentions
               | long-term mental health issues among other things.
        
               | beanjuiceII wrote:
               | would be nice if he'd come out with some statements
                | considering he's still committing to xz as of a few hours
                | ago
               | 
               | https://git.tukaani.org/?p=xz.git;a=commit;h=f9cf4c05edd1
               | 4de...
        
               | LigmaBaulls wrote:
               | You mean a statement like this https://tukaani.org/xz-
               | backdoor/
        
               | doug_durham wrote:
               | It makes me wonder. Is it possible to develop a robust
               | Open Source ecosystem without destroying the mental
               | health of the contributors? Reading his posting really
               | made me feel for him. There are exceedingly few people
                | who are willing to dedicate themselves to developing
                | critical systems in the first place. Now there is the
               | burden of extensively vetting every volunteer contributor
               | who helps out. This does not seem sustainable. Perhaps
               | users of open source need to contribute more
               | resources/money to the software that makes their products
               | possible.
        
               | themoonisachees wrote:
               | False dichotomy much? It doesn't have to be a motivated
                | state actor pulling the strings from the beginning. It
               | could also just be some guy, who decided he didn't care
               | anymore and either wanted to burn something or got paid
               | by someone (possibly a state actor) to do this.
        
               | voidz wrote:
               | It argues the topic pretty well: xz is unsuitable for
               | long-term archival. The arguments are in-depth and well
               | worded. Do you have any argument to the contrary beyond
               | "sour grapes"?
        
               | matsemann wrote:
               | It's not relevant to the current issue at hand.
        
               | evrial wrote:
               | If you say "sour grapes", then back down your bold
               | statement or don't say at all.
        
               | masklinn wrote:
               | What are you talking about? Do you understand multiple
               | people use this site?
               | 
               | Also do you mean back up?
               | 
               | Antonio literally used to go around mailing lists asking
               | for lzip support and complaining about xz:
               | 
               | - https://gcc.gnu.org/legacy-ml/gcc/2017-06/msg00044.html
               | 
               | - https://lists.debian.org/debian-
               | devel/2017/06/msg00433.html
               | 
               | Also, https://web.archive.org/web/20190605225651/http://o
               | ctave.159...
               | 
               | I can understand wanting your project to succeed, it's
               | pretty natural and human, but it's flagrant Antonio had a
               | lot of feels about the uptake of xz compared to lzip, as
               | both are container formats around raw lzma data streams
               | and lzip predates xz by 6 months. His complaint article
               | about xz is literally one of the "Introductory links" of
               | lzip.
        
               | goodpoint wrote:
               | > That is just technical disagreements and sour grapes
               | 
               | Care to provide some evidence to back this statement?
        
               | shzhdbi09gv8ioi wrote:
                | This link is an opinion piece about the file format and
                | has nothing to do with today's news.
                | 
                | Also, Lasse has not been accused of any wrongdoing.
        
               | varjag wrote:
                | His GH account was suspended, in what I believe is a very
               | unfortunate case of collateral damage.
        
               | semi-extrinsic wrote:
               | Collateral damage yes, but it seems like he is currently
               | away from the internet for an extended time. So it could
               | be that Github needed to suspend his account in order to
               | bypass things that he would otherwise have to do/approve?
               | Or to preempt the possibility that his account was also
               | compromised and we don't know yet.
        
               | evrial wrote:
                | Except that unnecessary complexity is very convenient for
                | limiting the code audit to only domain experts.
        
               | jart wrote:
               | People don't always reveal the true reason they want to
               | destroy something.
        
               | jnxx wrote:
               | One scenario for malicious code in embedded devices would
               | be a kind of killswitch which listens to a specific byte
               | sequence and crashes when encountering it. For a state
               | actor, having such an exploit would be gold.
        
               | HankB99 wrote:
               | That's an "interesting" thought.
               | 
               | One of my complaints about so many SciFi stories is the
               | use of seemingly conventional weapons. I always thought
               | that with so much advanced technology that weapons would
               | be much more sophisticated. However if the next "great
               | war" is won not by the side with the most destructive
               | weapons but by the side with the best kill switch,
               | subsequent conflicts might be fought with weapons that
               | did not rely on any kind of computer assistance.
               | 
               | This is eerily similar to Einstein's (purported)
               | statement that if World War III was fought with nuclear
               | weapons, World War IV would be fought with sticks and
               | stones. Similar, but for entirely different reasons.
               | 
               | I'm trying to understand why the characters in Dune
               | fought with swords, pikes and knives.
        
               | ethbr1 wrote:
                | > _I'm trying to understand why the characters in Dune
               | fought with swords, pikes and knives._
               | 
               | Because the slow blade penetrates the shield. (And
               | personal shields are omnipresent)
        
               | cblum wrote:
               | > I'm trying to understand why the characters in Dune
               | fought with swords, pikes and knives.
               | 
               | At least part of the reason is that the interaction
               | between a lasgun and a shield would cause a powerful
               | explosion that would kill the shooter too. No one wants
               | that and no one will give up their shield, so they had to
               | go back to melee weapons.
        
               | potro wrote:
               | Were drones unthinkable at the time of Dune creation? Or
               | suicide attacks?
        
               | throwaway7356 wrote:
                | No, there is an in-world reason at least for no drones.
               | Wikipedia:
               | 
               | > However, a great reaction against computers has
               | resulted in a ban on any "thinking machine", with the
               | creation or possession of such punishable by immediate
               | death.
        
               | ethbr1 wrote:
               | For anyone who wants the short version:
               | https://www.youtube.com/watch?v=2YnAs4NpRd8
               | 
               | tl;dr - Machine intelligences existed in Dune history,
               | were discovered to be secretly controlling humanity
               | (through abortion under false pretenses, forced
               | sterilization, emotional/social control, and other ways),
               | then were purged and replaced with a religious
               | commandment: "Thou shalt not make a machine in the
               | likeness of a human mind"
        
               | Muromec wrote:
                | There is a drone attack in the first movie.
        
               | lobocinza wrote:
               | In the book Paul is attacked by an insect drone while in
               | his room. The drone was controlled by a Harkonnen agent
                | placed weeks in advance inside a structure of the palace,
                | so it was also a suicide attack, as the agent had no
                | chance to escape and would die of hunger/thirst if not
                | found.
        
               | cam-o-man wrote:
               | No, and there is a (piloted) drone attack in the first
               | book -- Paul is attacked by a hunter-seeker.
               | 
               | The reason nobody tries to use the lasgun-shield
               | interaction as a weapon is because the resulting
               | explosion is indistinguishable from a nuclear weapon, and
               | the Great Convention prohibits the use of nukes on human
               | targets.
               | 
               | Just the _perception_ of having used a nuclear device
               | would result in the House which did so becoming public
               | enemy #1 and being eradicated by the Landsraad and
               | Sardaukar combined.
        
               | varjag wrote:
               | All this circus makes me happy for never moving from
               | sysvinit on embedded.
        
               | jnxx wrote:
               | It is not just systemd which uses xz. For example,
               | Debian's dpkg links xz-utils.
        
               | ahartmetz wrote:
               | However, this particular attack only works through
                | libsystemd to compromise sshd, and it _is_ related to
                | systemd's kitchen-sink "design".
        
               | JdeBP wrote:
               | It's related to excessive coupling between modules and
               | low coherence.
               | 
               | There _is_ a way for programs to implement the systemd
               | readiness notification protocol without using libsystemd,
               | and thus without pulling in liblzma, which is coupled to
               | libsystemd even though the readiness notification
               | protocol does not require any form of compression.
               | libsystemd provides a wide range of things which have
               | only weak relationships to each other.
               | 
               | There are in fact two ways, as two people independently
               | wrote their own client code for the systemd readiness
               | notification protocol, which really does not require the
               | whole of libsystemd and its dependencies to achieve. (It
               | might be more than 2 people nowadays.)
               | 
               | * https://jdebp.uk/FGA/unix-daemon-readiness-protocol-
               | problems...
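                | 
                | For illustration, here is a minimal sketch of such a
                | client in Python (function name and comments are mine;
                | the protocol itself just sends a datagram to the socket
                | named in $NOTIFY_SOCKET):
                | 
                |     import os, socket
                | 
                |     def notify_ready():
                |         # Speak only the "READY=1" part of the readiness
                |         # protocol, with no libsystemd and therefore no
                |         # liblzma involved.
                |         path = os.environ.get("NOTIFY_SOCKET")
                |         if not path:
                |             return  # no notify-aware supervisor
                |         if path.startswith("@"):
                |             path = "\0" + path[1:]  # abstract socket
                |         with socket.socket(socket.AF_UNIX,
                |                            socket.SOCK_DGRAM) as s:
                |             s.sendto(b"READY=1", path)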
        
               | Matl wrote:
               | It's easy to have your existing biases validated if you
               | already dislike systemd. The reality is that systemd is
               | much more coherently designed than its predecessors from
               | a 'end user interface' point of view, hence why its units
               | are largely portable etc. which was not the case for
               | sysvinit.
               | 
               | The reality is that it is not systemd specifically but
               | our modern approach to software design where we tend to
               | rely on too much third party code and delight in
               | designing extremely flexible, yet ultimately extremely
               | complex pieces of software.
               | 
               | I mean this is even true as far as the various CPU attack
               | vectors have shown in recent years, that yes speculative
               | execution is a neat and 'clever' optimization and that we
                | rely on it for speed, but maybe that was just too clever a
                | path to go down and we should've stuck with simpler
                | designs that would maybe have led to smaller speedups but
                | a more solid foundation to build future CPU generations
                | on.
        
               | EvmRoot wrote:
               | This is only evidence that libsystemd is popular. If you
               | want to 0wn a bunch of systems, or even one particular
               | system but make it non-obvious, you choose a popular
               | package to mess with.
               | 
                | BeOS isn't getting a lot of CVEs attached to it these
                | days. That doesn't mean it's good or secure, though.
        
               | varjag wrote:
               | All that could change if BeOS adopts systemd.
        
             | codedokode wrote:
             | > If the code is complex, there could be many more exploits
             | hiding.
             | 
             | Then the code should not be complex. Low-level hacks and
             | tricks (like pointer juggling) should be not allowed and
             | simplicity and readability should be preferred.
        
               | sgarland wrote:
               | For tools like compression programs, you'd generally
               | prefer performance over everything (except data
               | corruption, of course).
        
               | jononor wrote:
               | Probably you would prefer no backdoors also? Performance
               | without correctness or trustworthiness is useless.
        
               | sgarland wrote:
               | Yes, but my point was that at the level of performance
               | tools like this are expected to operate at, it's highly
               | probable that you'll need to get into incredibly esoteric
               | code. Look at ffmpeg - tons of hand-written Assembly,
               | because they need it.
               | 
               | To be clear, I have no idea how to solve this problem; I
               | just don't think saying that all code must be non-hacky
               | is the right approach.
        
               | jononor wrote:
               | Performance can be bought with better hardware. It gets
               | cheaper and cheaper every year. Trustworthiness cannot be
               | purchased in the same way. I do not understand why
                | performance would ever trump clean code, especially for
                | code that processes user-provided input.
        
               | patchguard wrote:
               | > Performance can be bought with better hardware.
               | 
               | I hate this argument. If current hardware promises you a
               | theoretical throughput of 100 MB/s for an operation,
                | someone will try to hit that limit. Your program that has
                | no hard-to-understand code but gives me 5 MB/s will lose
                | to a faster one, even if that means writing harder-to-
                | understand code.
        
               | jononor wrote:
               | There is no reason that understandable and safe code will
               | hit just 5% of a theoretical max. It may be closer to
               | 95%.
        
               | sgarland wrote:
               | No, but often it is far worse than 95%. A good example is
               | random.randint() vs math.ceil(random.random() * N) in
               | Python. The former is approximately 5x slower than the
               | latter, but they produce effectively the same result with
               | large enough values of N. This isn't immediately apparent
               | from using them or reading docs, and it's only really an
               | issue in hot loops.
               | 
               | Another favorite of mine is bitshifting / bitwise
               | operators. Clear and obvious? Depends on your background.
               | Fast as hell? Yes, always. It isn't always needed, but
               | when it is, it will blow anything else out of the water.
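                | 
                | If anyone wants to reproduce that particular measurement,
                | a quick sketch with timeit (CPython; the exact ratio
                | varies by version, and the iteration count is arbitrary):
                | 
                |     import math, random, timeit
                | 
                |     N = 1_000_000
                |     def a(): return random.randint(1, N)
                |     def b(): return math.ceil(random.random() * N)
                | 
                |     print(timeit.timeit(a, number=200_000))  # randint
                |     print(timeit.timeit(b, number=200_000))  # ceil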
        
               | sgarland wrote:
               | This attitude is how we get streaming music players that
               | consume in excess of 1 GiB of RAM.
               | 
               | Performant code needn't be unclean; it's just often using
               | deeper parts of the language.
               | 
               | I have a small project that became absolute spaghetti. I
               | rewrote it to be modular, using lots of classes,
               | inheritance, etc. It was then slower, but eminently more
               | maintainable and extensible. I'm addressing that by using
               | more advanced features of the language (Python), like
               | MemoryView for IPC between the C libraries it calls. I
               | don't consider this unclean, but it's certainly not
               | something you're likely to find on a Medium article or
               | Twitter take.
               | 
               | I value performant code above nearly everything else. I'm
               | doing this for me, there are no other maintainers, and
               | it's what I enjoy. You're welcome to prioritize something
               | else in your projects, but it doesn't make other
               | viewpoints objectively worse.
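                | 
                | On the memoryview point, for anyone unfamiliar: it is the
                | stdlib wrapper around the buffer protocol, and the gain is
                | that slices don't copy. A tiny sketch (names made up):
                | 
                |     buf = bytearray(16 * 1024 * 1024)  # one big buffer
                |     view = memoryview(buf)             # no copy
                |     chunk = view[4096:8192]            # still no copy
                |     chunk[:5] = b"hello"               # writes into buf
                |     assert bytes(buf[4096:4101]) == b"hello"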
        
               | josefx wrote:
                | I suggest you run your browser's JavaScript engine in
                | interpreter mode to understand how crippling the simple
                | and straightforward solution is to performance.
        
             | pmarreck wrote:
             | I have some questions.
             | 
             | 1) Are there no legit code reviews from contributors like
             | this? How did this get accepted into main repos while
             | flying under the radar? When I do a code review, I try to
             | understand the actual code I'm reviewing. Call me crazy I
             | guess!
             | 
             | 2) Is there no legal recourse to this? We're talking about
             | someone who managed to root any linux server that stays up-
             | to-date.
        
               | WhyNotHugo wrote:
               | > 2) Is there no legal recourse to this? We're talking
               | about someone who managed to root any linux server that
               | stays up-to-date.
               | 
               | Any government which uses GNU/Linux in their
               | infrastructure can pitch this as an attempt to backdoor
               | their servers.
               | 
               | The real question is: will we ever even know who was
               | behind this? If it was some mercenary hacker intending to
               | resell the backdoor, maybe. But if it was someone working
               | with an intelligence agency in
               | US/China/Israel/Russia/etc, I doubt they'll ever be
               | exposed.
        
               | treffer wrote:
               | The actual inclusion code was never in the repo. The
               | blobs were hidden as lzma test files.
               | 
                | So your review would need to guess from 2 new test files
                | that they decompress into a backdoor, injected by build
                | machinery that was never in the git history.
                | 
                | This was explicitly built to evade such reviews.
        
               | xghryro wrote:
               | I suppose you think the maintainers shouldn't have
               | scrutinized those files? Please tell me it's a joke.
        
               | andrepd wrote:
               | >How did this get accepted into main repos while flying
               | under the radar? When I do a code review, I try to
               | understand the actual code I'm reviewing. Call me crazy I
               | guess!
               | 
                | And? You never make any mistakes? Google "underhanded C
               | contest"
        
           | moritonal wrote:
            | Warning, drunk brain talking. But an LLM-driven, email-based
            | "collaborator" could play a very long game, adding basic
            | features to a codebase whilst earning trust, backed by a
            | generated online presence. My money is on a resurgence of the
            | Web of Trust.
        
             | w4ffl35 wrote:
             | Clearly a human is even better at it.
        
             | llmblockchain wrote:
             | State level actor? China?
        
               | logankeenan wrote:
                | You're likely being downvoted because the GitHub profile
                | looking East Asian isn't evidence of where the
                | attacker/attackers are from.
                | 
                | Nation states will go to great lengths to disguise their
                | identity: using broken Russian English when they are not
               | Russian, putting comments in the code of another
               | language, and all sorts of other things to create
               | misdirection.
        
               | llmblockchain wrote:
                | That's certainly true -- at the very least it "seems"
                | Asian, but it could very well be from any nation. If they
               | were patient enough to work up to this point they would
               | likely not be dumb enough to leak such information.
        
             | jnxx wrote:
              | The web of trust is a really nice idea, but it works badly
              | against that kind of attack. Just consider that in the
              | real world, most living people (all eight billion of them)
              | are linked by only six degrees of separation. It really
              | works, for code and for trusted social relations (like "I
              | lend you 100 bucks and you pay them back when you get your
              | salary"), mostly when you know the code author in person.
              | 
              | This is also not a new insight. In the early noughties,
              | there was a web site named kuro5hin.org, which experimented
              | with user ratings and trust networks. It turned out to be
              | impossible to prevent takeovers.
        
               | groby_b wrote:
               | IIRC, kuro5hin and others all left out a crucial step in
               | the web-of-trust approach: There were absolutely no
               | repercussions when you extended trust to somebody who
               | later turned out to be a bad actor.
               | 
               | It considers trust to be an individual metric instead of
               | leaning more into the graph.
               | 
               | (There are other issues, e.g. the fact that "trust" isn't
               | a universal metric either, but context dependent. There
               | are folks whom you'd absolutely trust to e.g. do great &
               | reliable work in a security context, but you'd still not
               | hand them the keys to your car)
               | 
               | At least kuro5hin modeled a degradation of trust over
               | time, which most models _still_ skip.
               | 
               | It'd be a useful thing, but we have a long way to go
               | before there's a working version.
        
               | vintermann wrote:
               | There were experiments back in the day. Slashdot had one
               | system based on randomly assigned moderation duty which
               | worked pretty great actually, except that for the longest
               | time you couldn't sort by it.
               | 
               | Kuro5hin had a system which didn't work at all, as you
               | mentioned.
               | 
               | But the best was probably Raph Levien's Advogato. That
               | had a web of trust system which actually worked. But had
               | a pretty limited scope (open source devs).
               | 
               | Now everyone just slaps an upvote/downvote button on and
               | calls it a day.
        
             | cyanydeez wrote:
              | State actors have equally long horizons for compromise.
        
           | lucasRW wrote:
            | More likely that the account of that dev was breached, don't
            | you think?
        
         | KingOfCoders wrote:
         | Sleeper.
        
         | smeehee wrote:
         | Debian have reverted xz-utils (in unstable) to 5.4.5 - actual
         | version string is "5.6.1+really5.4.5-1". So presumably that
         | version's safe; we shall see...
        
           | xorcist wrote:
           | Is that version truly vetted? "Jia Tan" has been the official
           | maintainer since 5.4.3, could have pushed code under any
           | other pseudonym, and controls the signing keys. I would have
           | felt better about reverting farther back, xz hasn't had any
           | breaking changes for a long time.
        
             | rnmkr wrote:
              | It's not only that account; another maintainer has been
              | pushing the same promotion all over the place.
        
             | tobias2014 wrote:
             | It looks like this is being discussed, with a complication
             | of additional symbols that were introduced
             | https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1068024
        
               | binkHN wrote:
               | Thanks for this! I found this URL in the thread very
               | interesting!
               | 
               | https://www.nongnu.org/lzip/xz_inadequate.html
        
               | mehdix wrote:
                | It is an excellent technical write-up and yet another
                | testament to the importance of keeping things
               | simple.
        
               | userbinator wrote:
               | The other comments here showing that the backdoor was a
               | long-term effort now make me wonder just _how_ long of an
               | effort it was...
        
           | kzrdude wrote:
           | There are suggestions to roll back further
        
           | sgarland wrote:
           | TIL that +really is a canonical string. [0]
           | 
           | [0]: https://www.debian.org/doc/debian-policy/ch-
           | controlfields.ht...
        
         | Jommi wrote:
         | the account was either sold or stolen
        
         | userbinator wrote:
         | _because of it 's "great new features"_
         | 
         | "great" for whom? I've seen enough of the industry to
         | immediately feel suspicious when someone uses that sort of
         | phrasing in an attempt to persuade me. It's no different from
         | claiming a "better experience" or similar.
        
           | SilasX wrote:
            | You can find more examples of that kind of puffery if you go
           | to a website's cookie consent pop-up and find the clause
           | after "we use cookies to...".
        
             | transcriptase wrote:
             | I've long thought that those "this new version fixes bugs
             | and improves user experience" patch notes that Meta et al
             | copy and paste on every release shouldn't be permitted.
        
               | nebula8804 wrote:
               | Tell me about it. I look at all these random updates that
               | get pushed to my mobile phone and they all pretty much
               | have that kind of fluff in the description. Apple/Android
               | should take some steps to improve this or outright ban
               | this practice. In terms of importance to them though I
               | imagine this is pretty low on the list.
               | 
               | I have dreamed about an automated LLM system that can
               | "diff" the changes out of the binary and provide some
               | insight. You know give back a tiny bit of power to the
               | user. I'll keep dreaming.
        
           | LtWorf wrote:
           | I made a library where version 2 is really really much faster
           | than version 1. I'd want everyone to just move to version 2.
        
             | Brian_K_White wrote:
              | But then you are naming a specific great new feature,
              | performance, and backing it not just with the claim and
              | concept of performance, but with numbers.
        
               | LtWorf wrote:
               | I'm sure they actually had new features...
        
               | CanaryLayout wrote:
                | Yeah... a RISC-V routine was put in, then some binary test
               | files were added later that are probably now suspect.
               | 
               | don't miss out on the quality code, like the line that
               | has: i += 4 - 2;
               | 
               | https://git.tukaani.org/?p=xz.git;a=commitdiff;h=50255fee
               | aab...
        
               | gamer191 wrote:
               | > some binary test files were added later that are
               | probably now suspect
               | 
               | That's confirmed
               | 
               | From https://www.openwall.com/lists/oss-
               | security/2024/03/29/4:
               | 
               | > The files containing the bulk of the exploit are in an
               | obfuscated form in
               | 
               | > tests/files/bad-3-corrupt_lzma2.xz
               | 
               | > tests/files/good-large_compressed.lzma
               | 
               | > committed upstream. They were initially added in
               | 
               | > https://github.com/tukaani-
               | project/xz/commit/cf44e4b7f5dfdbf...
        
               | m0dest wrote:
               | It probably makes sense to start isolating build
               | processes from test case resources.
        
               | saagarjha wrote:
               | Sure but then you can smuggle it into basically any other
               | part of the build process...?
        
               | jwilk wrote:
               | FWIW, "4 - 2" is explained earlier in the file:
               | // The "-2" is included because the for-loop will
               | // always increment by 2. In this case, we want to
               | // skip an extra 2 bytes since we used 4 bytes       //
               | of input.       i += 4 - 2;
        
               | Brian_K_White wrote:
               | What are they specifically?
               | 
               | I don't know how you can be missing the essence of the
                | problem here or that comment's point.
               | 
               | Vague claims are meaningless and valueless and are now
               | even worse than that, they are a red flag.
               | 
               | Please don't tell me that you would accept a pr that
               | didn't explain what it did, and why it did it, and how it
               | did it, with code that actually matched up with the
               | claim, and was all actually something you wanted or
               | agreed was a good change to your project.
               | 
               | Updating to the next version of a library is completely
               | unrelated. When you update a library, you don't know what
                | all the changes to the library were, _but the library's
                | maintainers do_, and you essentially trust that library's
                | maintainers to be doing their job and not accepting random
               | patches that might do anything.
               | 
               | Updating a dependency and trusting a project to be sane
               | is entirely a different prospect from accepting a pr and
               | just trusting that the submitter only did things that are
               | both well intentioned and well executed.
               | 
               | If you don't get this then I for sure will not be using
               | or trusting _your_ library.
        
         | kapouer wrote:
         | Github accounts of both xz maintainers have been suspended.
        
           | miduil wrote:
           | Not true, the original author wasn't suspended:
           | https://github.com/Larhzu
           | 
           | https://github.com/JiaT75 was suspended for a moment, but
           | isn't anymore?
        
             | boutique wrote:
             | Both are suspended for me. Check followers on both
             | accounts, both have a suspended pill right next to their
             | names.
        
               | miduil wrote:
               | Ah, thanks for correcting me there - really weird that
               | this isn't visible from the profile itself. Not even from
               | the organization.
               | 
                | The "following" page for each does indeed show both
                | accounts as suspended.
               | 
               | https://github.com/Larhzu?tab=following
               | 
               | https://github.com/JiaT75?tab=following
        
               | fargle wrote:
               | github should add a badge for "inject backdoor into core
               | open source infrastructure"
        
             | FridgeSeal wrote:
             | GitHub's UI has been getting notoriously bad for showing
             | consistent and timely information lately, could be an issue
             | stemming from that.
        
               | justinclift wrote:
               | Yeah. Had a weird problem last week where GitHub was
               | serving old source code from the raw url when using curl,
               | but showing the latest source when coming from a browser.
               | 
               |  _Super_ frustrating when trying to develop automation.
               | :(
        
         | mongol wrote:
          | Interesting that one of the commits updating the test files
          | claimed it was for better reproducibility, the data having been
          | generated from a fixed random seed (although how goes
          | unmentioned). In the future, random test data had better be
          | generated as part of the build, rather than being committed as
          | opaque blobs...
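          | 
          | For purely random fixtures, something like this sketch would be
          | enough (function name and sizes are arbitrary):
          | 
          |     import random
          | 
          |     def make_test_blob(seed: int, size: int) -> bytes:
          |         # Regenerate the "random" fixture at build/test time
          |         # instead of committing an opaque blob to the repo.
          |         rng = random.Random(seed)
          |         return bytes(rng.getrandbits(8) for _ in range(size))
          | 
          |     blob = make_test_blob(seed=42, size=1 << 16)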
        
           | WhyNotHugo wrote:
            | I agree in principle, but sometimes programmatically
            | generating test data is not so easy.
           | 
           | E.g.: I have a specific JPEG committed into a repository
           | because it triggers a specific issue when reading its
           | metadata. It's not just _random_ data, but specific bogus
           | data.
           | 
           | But yeah, if the test blob is purely random, then you can
            | just commit a seed and generate it during tests.
        
         | api wrote:
         | I'm surprised there isn't way more of this stuff. The supply
         | chain is so huge and therefore represents so much surface area.
        
           | SoftTalker wrote:
           | There probably is. Way more than anyone knows. I bet every
           | major project on github is riddled with state actors.
        
           | cozzyd wrote:
           | Imagine if sshd was distributed by PyPI or cargo or npm
           | instead of by a distro.
        
         | junon wrote:
         | GitHub has suspended @JiaT75's account.
         | 
         | EDIT: Lasse Collin's account @Larhzu has also been suspended.
         | 
         | EDIT: Github has disabled all Tukaani repositories, including
         | downloads from the releases page.
         | 
         | --
         | 
         | EDIT: Just did a bit of poking. xz-embedded was touched by Jia
          | as well and it appears to be used in the Linux kernel. I did a
         | quick look and it doesn't appear Jia touched anything of
         | interest in there. I also checked the previous mirror at the
         | tukaani project website, and nothing was out of place other
         | than lagging a few commits behind:
         | 
         | https://gist.github.com/Qix-/f1a1b9a933e8847f56103bc14783ab7...
         | 
         | --
         | 
         | Here's a mailing list message from them ca. 2022.
         | 
         | https://listor.tp-sv.se/pipermail/tp-sv_listor.tp-sv.se/2022...
         | 
         | --
         | 
         | MinGW w64 on AUR was last published by Jia on Feb 29:
         | https://aur.archlinux.org/cgit/aur.git/log/?h=mingw-w64-xz
         | (found by searching their public key:
         | 22D465F2B4C173803B20C6DE59FCF207FEA7F445)
         | 
         | --
         | 
         | pacman-static on AUR still lists their public key as a
         | contributor, xz was last updated to 5.4.5 on 17-11-2023:
         | https://aur.archlinux.org/cgit/aur.git/?h=pacman-static
         | 
         | EDIT: I've emailed the maintainer to have the key removed.
         | 
         | --
         | 
         | Alpine was patched as of 6 hours ago.
         | 
         | https://git.alpinelinux.org/aports/commit/?id=982d2c6bcbbb57...
         | 
         | --
         | 
         | OpenSUSE is still listing Jia's public key:
         | https://sources.suse.com/SUSE:SLE-15-SP6:GA/xz/576e550c49a36...
         | (cross-ref with https://web.archive.org/web/20240329235153/http
         | s://tukaani.o...)
         | 
         | EDIT: Spoke with some folks in the package channel on libera,
         | seems to be a non-issue. It is not used as attestation nor an
         | ACL.
         | 
         | --
         | 
         | Arch appears to still list Jia as an approved publisher, if I'm
         | understanding this page correctly.
         | 
         | https://gitlab.archlinux.org/archlinux/packaging/packages/xz...
         | 
         | EDIT: Just sent an email to the last committer to bring it to
         | their attention.
         | 
         | EDIT: It's been removed.
         | 
         | --
         | 
         | jiatan's Libera info indicates they registered on Dec 12
         | 13:43:12 2022 with no timezone information.
         | -NickServ- Information on jiatan (account jiatan):
          | -NickServ- Registered : Dec 12 13:43:12 2022 +0000 (1y 15w 3d ago)
          | -NickServ- Last seen : (less than two weeks ago)
          | -NickServ- User seen : (less than two weeks ago)
          | -NickServ- Flags : HideMail, Private
          | -NickServ- jiatan has enabled nick protection
          | -NickServ- *** End of Info ***
         | 
         | /whowas expired not too long ago, unfortunately. If anyone has
         | it I'd love to know.
         | 
         | They are not registered on freenode.
         | 
         | EDIT: Libera has stated they have not received any requests for
          | information from any agencies as of yet (Saturday, 30 March
          | 2024, 00:39:31 UTC).
         | 
         | EDIT: Jia Tan was using a VPN to connect; that's all I'll be
         | sharing here.
        
           | junon wrote:
           | Just for posterity since I can no longer edit: Libera staff
           | has been firm and unrelenting in their position not to
           | disclose anything whatsoever about the account. I obtained
           | the last point on my own. Libera has made it clear they will
           | not budge on this topic, which I applaud and respect. They
           | were not involved whatsoever in ascertaining a VPN was used,
           | and since that fact makes anything else about the connection
           | information moot, there's nothing else to say about it.
        
             | Fnoord wrote:
             | You applaud and respect this?
             | 
              | Either way, I do hope LE will look into this. They had
              | better cooperate.
        
               | freeone3000 wrote:
               | Respect not giving out identifying information on
               | individuals whenever someone asks, no matter what company
               | they work for and what job they do? Yes. I respect this.
        
               | junon wrote:
               | I am not LE nor a government official. I did not present
               | a warrant of any kind. I asked in a channel about it.
               | Libera refused to provide information. Libera respecting
               | the privacy of users is of course something I applaud and
               | respect. Why wouldn't I?
        
               | supposemaybe wrote:
               | I hope you aren't in control of any customer data.
        
               | flykespice wrote:
                | It's called keeping your integrity by not disclosing
                | private info about any users of your network, regardless
                | of whether they are bad actors.
               | 
               | I respect them for that.
               | 
               | Violating that code is just as bad as the bad actor
                | slipping in backdoors.
        
           | Phenylacetyl wrote:
              | The Alpine patch includes gettext-dev, which is likely also
              | exploited, as the same authors have been pushing gettext to
              | projects where their changes have been questioned.
        
             | jwilk wrote:
             | What do you mean?
        
               | everybackdoor wrote:
               | Look at the newest commits, do you see anything
               | suspicious:
               | 
               | https://git.alpinelinux.org/aports/log/main/gettext
               | 
               | libunistring could also be affected as that has also been
               | pushed there
        
               | whoopdedo wrote:
               | Seeing so many commits that are "skip failing test" is a
               | very strong code smell.
        
               | jwilk wrote:
               | > do you see anything suspicious
               | 
               | No.
               | 
               | > libunistring could also be affected as that has also
               | been pushed there
               | 
               | What do you mean by "that"?
        
           | mook wrote:
           | FWIW, that's mingw-w64-xz (cross-compiled xz utils) in AUR,
              | not mingw-w64 (which would normally refer to the compiler
           | toolchain itself).
        
             | junon wrote:
             | Good catch, thanks :)
        
           | reisse wrote:
           | > EDIT: Github has disabled all Tukaani repositories,
           | including downloads from the releases page.
           | 
           | Why? Isn't it better to freeze them and let as many people as
           | possible analyze the code?
        
             | Sebb767 wrote:
             | You can still find the source everywhere, if you look for
             | it. Having a fine-looking page distribute vulnerable source
             | code is a much bigger threat.
        
             | junon wrote:
             | Good question, though I can imagine they took this action
             | for two reasons:
             | 
             | 1. They don't have the ability to freeze repos (i.e. would
             | require some engineering effort to implement it), as I've
             | never seen them do that before.
             | 
             | 2. Many distros (and I assume many enterprises) were still
             | linking to the GitHub releases to source the infected
             | tarballs for building. Disabling the repo prevents that.
             | 
             | The infected tarballs and repo are still available
             | elsewhere for researchers to find, too.
        
               | nihilanth wrote:
               | They could always archive it. Theoretically (and I mean
               | theoretically only), there's another reason for Microsoft
                | to prevent access to the repo: if a nation state was
                | involved, there may have been back-channel conversations to
               | obfuscate the trail.
        
               | jarfil wrote:
               | Archiving the repo doesn't stop the downloads. They would
                | need to rename it in order to stop distro CI/CD from
                | continuing to download untrustworthy stuff.
        
             | AtNightWeCode wrote:
             | Maybe one can get the code from here. New commits being
             | added it seems.
             | 
             | https://git.tukaani.org/
        
               | ptx wrote:
               | The latest commit is interesting (f9cf4c05edd14, "Fix
               | sabotaged Landlock sandbox check").
               | 
               | It looks like one of Jia Tan's commits (328c52da8a2)
               | added a stray "." character to a piece of C code that was
               | part of a check for sandboxing support, which I guess
               | would cause the code to fail to compile, causing the
               | check to fail, causing the sandboxing to be disabled.
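               | 
               | To illustrate the mechanism (a minimal sketch, not the
               | actual xz/CMake probe): these feature checks compile a
               | tiny test program, so a single stray character silently
               | turns "supported" into "not supported":
               | 
               |     cat > conftest.c <<'EOF'
               |     /* a stray "." anywhere here would break the build */
               |     #include <linux/landlock.h>
               |     int main(void) { return 0; }
               |     EOF
               |     if cc -o conftest conftest.c 2>/dev/null; then
               |         echo landlock: supported
               |     else
               |         echo landlock: not supported  # sandbox quietly off
               |     fi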
        
               | junon wrote:
               | Lasse has also started his own documentation on the
               | incident.
               | 
               | https://tukaani.org/xz-backdoor/
        
               | josefx wrote:
               | Shouldn't they have tests running to ensure that the
               | check works on at least some systems?
        
             | megous wrote:
             | Yeah, another of the reasons not to host anything on Github
             | but optional git mirrors, if you don't want your FOSS
             | project to be subject to Microsoft playing a "world
             | police".
        
               | junon wrote:
               | Don't agree here. I've only ever seen GitHub do this in
               | extreme circumstances where they were _absolutely_
               | warranted.
        
               | megous wrote:
               | You mean like when they revoked write access for everyone
               | who didn't want to jump on their 2FA bandwagon?
        
             | godelski wrote:
             | You can find it on archive. Someone archived it last night
        
           | hypnagogic wrote:
           | Asking this here too: why isn't there an automated A/B or
           | diff match for the tarball contents to match the repo, auto-
           | flag with a warning if that happens? Am I missing something
           | here?
        
             | nolist_policy wrote:
             | The tarballs mismatching from the git tree is a feature,
             | not a bug. Projects that use submodules may want to include
             | these and projects using autoconf may want to generate and
             | include the configure script.
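             | 
             | You can still compare the two yourself; a rough sketch,
             | using 5.4.5 as an example and the git.tukaani.org mirror
             | linked elsewhere in the thread (generated autoconf output
             | is expected to differ, anything else deserves a closer
             | look):
             | 
             |     git clone --branch v5.4.5 \
             |         https://git.tukaani.org/xz.git xz-git
             |     tar xf xz-5.4.5.tar.gz
             |     diff -ru xz-git xz-5.4.5 | less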
        
               | ano-ther wrote:
               | Here is a longer explainer:
               | https://www.redhat.com/en/blog/what-open-source-upstream
        
           | mikolajw wrote:
           | I've posted an earlier WHOWAS of jiatan here:
           | https://news.ycombinator.com/item?id=39868773
        
           | junon wrote:
           | It appears to be an RCE, not a public key bypass:
           | https://news.ycombinator.com/item?id=39877312
        
         | drazk wrote:
         | After reading the original post by Andres Freund,
         | https://www.openwall.com/lists/oss-security/2024/03/29/4, his
         | analysis indicates that the RSA_public_decrypt function is
         | being redirected to the malware code. Since RSA_public_decrypt
         | is only used in the context of RSA public key - private key
         | authentication, can we reasonably conclude that the backdoor
         | does not affect username-password authentication?
        
           | cbolton wrote:
           | Isn't it rather that the attacker can log in to the
           | compromised server by exploiting the RSA code path?
        
         | thayne wrote:
         | Do you know if it was actually the commit author, or if their
         | commit access was compromised?
        
           | bpye wrote:
           | If it was a compromise it also included the signing keys as
           | the release tarball was modified vs the source available on
           | GitHub.
        
         | LispSporks22 wrote:
         | Nice. I worked on a Linux distro when I was a wee lad and all
         | we did was compute a new MD5 and ship it.
        
         | heresWaldo wrote:
         | Blows a big hole in open source's "more eyeballs == more
         | secure" if an obfuscated backdoor can make it in for months.
         | 
         | git pull git@github:all the deps to make my startup work
         | development practices will never be secured. Too many layers of
         | leaky abstraction.
         | 
         | I anticipate that such obliviousness, and so far indifference,
         | from the "go fast, disrupt" industry will motivate governments
         | to push for more software on chip.
        
       | thesnide wrote:
       | The discussion to upload it to Debian is interesting on its own
       | https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1067708
        
         | 0x0 wrote:
         | Wow, that's a lot of anonymous accounts adding comments there
         | urging for a fast merge!
         | 
         | And this "Hans Jansen" guy is apparently running around
         | salsa.debian.org pushing for more updates in other projects as
         | well: https://salsa.debian.org/users/hjansen/activity
        
           | matthews2 wrote:
           | > running around salsa.debian.org pushing for more updates in
           | other projects as well
           | 
           | This is quite common in most (all?) distributions. People are
           | going through lists of outdated packages, updating them,
           | testing them, and pushing them.
        
             | mardifoufs wrote:
             | That account seems to be a contributor for xz though, you
             | can see him interact a lot with the author of the backdoor
             | on the GitHub repo. Some pull requests seem to be just the
             | two of them discussing and merging stuff (which is normal
             | but looks weird in this context)
        
           | 2OEH8eoCRo0 wrote:
           | And now we see why I don't trust anons, aliases, or anime
           | characters to make contributions.
           | 
           | My GitHub says exactly who I am!
        
             | LtWorf wrote:
             | It has been on the agenda for years to identify FOSS
             | contributors with an id... Wet dream for authoritarians
             | like you.
             | 
             | What would it solve when identity theft happens on a mass
             | scale on a day to day basis?
             | 
             | It'd just ruin the life of some random person whose
             | identity got stolen to create the account...
        
             | kgeist wrote:
             | You can quite easily generate a realistic photo, bio, even
             | entire personal blogs and GitHub projects, using generative
             | AI, to make it look like it's a real person.
        
               | isbvhodnvemrwvn wrote:
               | With close to zero OSS participation rate you can just
               | pick a real living person and just keep in sync with
               | their LinkedIn.
        
             | kunagi7 wrote:
             | Even if they have a "real" picture or a credible
             | description that is not good enough. Instead of using an
             | anime character a malicious actor could use an image
             | generator [0], they could generate a few images, obtain
             | something credible to most folks, and use that to get a few
             | fake identities going. Sadly, trusting people to be the
             | real thing and not a fake identity on the Internet is
             | difficult now and it will get worse.
             | 
             | [0] https://thispersondoesnotexist.com
        
           | ValdikSS wrote:
           | >that's a lot of anonymous accounts
           | 
           | Just FYI, krygorin4545@proton.me (the latest message before
           | the upload) was created Tue Mar 26 18:30:02 UTC 2024, about
           | an hour earlier than the message was posted.
           | 
           | Proton generates PGP key upon creating the account, with the
           | real datetime of the key (but the key does not include the
           | timezone).
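           | 
           | (The creation time can be read straight off the key; a
           | sketch, assuming the exported key is saved locally as
           | krygorin4545.asc, with the epoch below being the timestamp
           | above:)
           | 
           |     gpg --list-packets krygorin4545.asc | grep -m1 created
           |     date -u -d @1711477802 +'%F %T UTC'
           |     # 2024-03-26 18:30:02 UTC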
        
         | nammi wrote:
         | That name jumped out at me: Hans Jansen is the name Dominic
         | Monaghan used when posing as a German interviewer with Elijah
         | Wood. Not that it can't be a real person.
         | 
         | https://youtu.be/IfhMILe8C84
        
           | joeyh wrote:
           | See comments about "Hans Jansen" upthread; he appeared to
           | collaborate on the exploit in other ways as well.
        
           | rsync wrote:
           | Hans Gruber would have Been a much more stylish choice...
        
         | fullstop wrote:
         | I get the feeling that a number of the comments are all the
         | same person / group.
        
         | oasisaimlessly wrote:
         | For anyone else feeling some deja vu about ifunc / Valgrind
         | errors, this Red Hat issue [1] was previously linked from HN 12
         | days ago [2].
         | 
         | [1]: https://bugzilla.redhat.com/show_bug.cgi?id=2267598
         | 
         | [2]: https://news.ycombinator.com/item?id=39733185
        
       | formerly_proven wrote:
       | Quite ironic: The most recent commit in the git repo is "Simplify
       | SECURITY.md", committed by the same Github account which added
       | the backdoor.
       | 
       | https://github.com/tukaani-project/xz/commit/af071ef7702debe...
        
         | rany_ wrote:
         | It's not ironic, this change is really sinister IMO. They want
         | you to waste more time after you've submitted the security
         | report and maximize the amount of back and forth. Basically the
         | hope is that they'd be able to pester you with requests for
         | more info/details in order to "resolve the issue" which would
         | give them more time to exploit their targets.
        
       | arp242 wrote:
       | I've long since said that if you want to hide something nefarious
       | you'd do that in the GNU autoconf soup (and not in "curl | sh"
       | scripts).
       | 
       | Would be interesting to see what's going on here; the person who
       | did the releases has done previous releases too (are they
       | affected?) and has commits going back to 2022 - relatively
       | recent, but not _that_ recent. Many are real commits with real
       | changes, and they have commits on some related projects like
       | libarchive. Seems like a lot of effort just to insert a backdoor.
       | 
       |  _Edit_ : anyone with access can add files to existing releases
       | and it won't show that someone else added it (I just tested).
       | However, the timestamp of the file will be to when you uploaded
       | it, not that of the release. On xz all the timestamps of the
       | files match with the timestamp of the release (usually the
       | .tar.gz is a few minutes earlier, which makes sense). So looks
       | like they were done by the same person who did the release. I
       | suspected someone else might have added/altered the files briefly
       | after the release before anyone noticed, but that doesn't seem to
       | be the case.
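       | 
       | (For reference, the per-asset upload times are what the GitHub
       | releases API exposes; something like this, while the repo was
       | still up:)
       | 
       |     curl -s \
       |       https://api.github.com/repos/tukaani-project/xz/releases \
       |       | jq '.[].assets[] | {name, created_at, updated_at}'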
        
         | sslayer wrote:
         | I would be curious if their commits could be analyzed for
         | patterns that could then be used to detect commits from their
         | other account
        
           | carom wrote:
           | There was a DARPA program on this topic called Social Cyber.
           | [1]
           | 
           | 1. https://www.darpa.mil/program/hybrid-ai-to-protect-
           | integrity...
        
           | bombcar wrote:
           | One thing that is annoying is that many open source projects
           | have been getting "garbage commits" apparently from people
           | looking to "build cred" for resumes or such.
           | 
           | Easier and easier to hide this junk in amongst them.
        
             | bananapub wrote:
             | annoying ... and convenient for some!
        
         | bawolff wrote:
         | I mean, a backdoor at this scale (particularly if it wasn't
         | noticed for a while and got into stable distros) could be worth
         | millions. Maybe hundreds of millions (think of the insider
         | trading possibilities alone, not to mention espionage). 2 years
         | doesn't seem like that much work relative to the potential pay
         | off.
         | 
         | This is the sort of case where America's over-the-top hacking
         | laws make sense.
        
           | jethro_tell wrote:
           | And what law would you use to target someone who wrote some
           | code and posted it for free on the internet that was
           | willingly consumed?
        
             | bawolff wrote:
             | The computer abuse and fraud act? Seems like a pretty easy
             | question to answer.
        
               | jethro_tell wrote:
               | Maybe I'm misunderstanding things, but it seems like
               | anyone can publish an exploit on the internet without it
               | being a crime, in the same way encryption is free speech.
               | 
               | It would seem unlikely this guy would also be logging
               | into people's boxes after this.
               | 
               | It seems a much tougher job to link something like this
               | to an intentional unauthorized access.
               | 
               | At this point, we have no confirmed access via
               | compromise.
               | 
               | Do you know of a specific case where the existence of a
               | backdoor has been prosecuted without a compromise?
               | 
               | Who would have standing to bring this case? Anyone with a
               | vulnerable machine? Someone with known unauthorized
               | access? Other maintainers of the repo?
               | 
               | IANAL but it is unclear that a provable crime has been
               | committed here
        
               | refulgentis wrote:
               | > IANAL
               | 
               | Best to leave it at that.
               | 
               | It's not worth your time or the reader's time trying to
               | come up with a technicality to make it perfectly legal to
               | do something we know little about, other than it's
               | extremely dangerous.
               | 
               | Law isn't code, you gotta violate some pretty bedrock
               | principles to pull off something like this and get away
               | with it.
               | 
               | Yes, if you were just a security researcher experimenting
               | on GitHub, it's common sense you should get away with
               | it*, and yes, it's hard to define a logical proof that
               | ensnares this person, and not the researcher.
               | 
               | * and yes, we can come up with another hypothetical where
               | the security researcher shouldn't get away with it.
               | Hypotheticals all the way down.
        
               | SolarNet wrote:
               | And of course an attacker like this has a high likelihood
               | of being a state actor, comfortably secure in their
               | native jurisdiction.
        
               | amiga386 wrote:
               | I think this thread is talking at cross-purposes.
               | 
               | 1. It should be legal to develop or host pen-
               | testing/cracking/fuzzing/security software that can break
               | other software or break into systems. It should be
               | illegal to _use_ the software to gain _unauthorised_
               | access to others' systems. (e.g. it's legal to create or
               | own lockpicks and use them on your own locks, or locks
               | you've been given permission to pick. It's not legal to
               | gain unauthorised access _using_ lockpicks)
               | 
               | 2. It should be illegal to develop malware that
               | _automatically_ gains unauthorised access to systems
               | (trojans, viruses, etc.). However, it should be legal to
               | maintain an archive of malware, limiting access to vetted
               | researchers, so that it can be studied, reverse-
               | engineered and combatted. (e.g. it's illegal to develop
               | or spread a bioweapon, but it's ok for authorised people
               | to maintain samples of a bioweapon in order to provide
               | antidotes or discover what properties it has)
               | 
               | 3. What happened today: It should be illegal to
               | intentionally undermine the security of a project by
               | making bad-faith contributions to it that misrepresent
               | what they do... even if you're a security researcher. It
                | could only possibly be allowed if an agreement was
               | reached in advance with the project leaders to allow such
               | intentional weakness-probing, with a plan to reveal the
               | deception and treachery.
               | 
               | Remember when university researchers tried to find if
               | LKML submissions could be gamed? They didn't tell the
               | Linux kernel maintainers they were doing that. When the
               | Linux kernel maintainers found out, they banned the
               | entire university from making contributions and removed
               | everything they'd done.
               | 
               | https://lkml.org/lkml/2021/4/21/454
               | 
               | https://arstechnica.com/gadgets/2021/04/linux-kernel-
               | team-re...
        
               | refulgentis wrote:
               | Talking at cross-purposes?
               | 
               | No, people being polite and avoiding the more direct
               | answer that'd make people feel bad.
               | 
               | The rest of us understand that intuitively, and that it
               | is already the case, so pretending there was some need to
               | work through it, at best, validates a misconception for
               | one individual.
               | 
               | Less important, as it's mere annoyance rather than
               | infohazard: it's wildly off-topic. Legal hypotheticals
               | where a security researcher released "rm -rf *" on GitHub
               | and ended up in legal trouble is 5 steps downfield even
               | in this situation, and it is a completely different
               | situation. Doubly so when everyone has to "IANAL" through
               | the hypotheticals.
        
               | bawolff wrote:
               | > but it seems like anyone can publish an exploit on the
               | internet without being a crime
               | 
               | Of course. The mere publishing of the exploit is not the
               | criminal part. Its the manner & intent in which it was
               | published that is the problem.
               | 
               | > At this point, we have no confirmed access via
               | compromise.
               | 
               | While I don't know the specifics for this particular law,
               | generally it doesn't matter what you actually did. What
               | is relevant is what you _tried_ to do. Lack of success
               | doesn't make you innocent.
               | 
               | > Who would have standing to bring this case?
               | 
               | The state obviously. This is a criminal matter not a
               | civil one. You don't even need the victim's consent to
               | bring a case.
               | 
               | [IANAL]
        
               | anticensor wrote:
               | Some types of criminal cases are only pursued on a
               | victim's complaint.
        
               | dec0dedab0de wrote:
               | By this logic you could say that leaving a poisoned can
               | of food in a public pantry is not a crime because poison
               | is legal for academic purposes, and whoever ate it took
               | it willingly.
               | 
               | Also, I think getting malicious code into a repo counts
               | as a compromise in and of itself.
        
             | unethical_ban wrote:
             | Are you suggesting intent is impossible to determine?
        
         | eigenvalue wrote:
         | Every single commit this person ever did should immediately be
         | rolled back in all projects.
        
           | gopher_space wrote:
           | It's weird and disturbing that this isn't the default
           | perspective.
        
             | freedomben wrote:
             | Well, it is much easier said than done. Philosophically I
             | agree, but in the real world where you have later commits
             | that might break and downstream projects, etc, it isn't
             | very practical. It strikes me as in a similar vein to high
             | school students and beauty pageant contestants calling for
             | world peace. Really great goal, not super easy to
             | implement.
             | 
             | I would definitely be looking at every single commit though
             | and if it isn't obviously safe I'd be drilling in.
        
             | maxcoder4 wrote:
             | Imagine someone tried to revert all the commits you ever
             | did. Doesn't sound easy.
        
             | concordDance wrote:
             | Some of those commits might fix genuine vulnerabilities. So
             | you might trade a new backdoor for an old vulnerability
             | that thousands of criminal orgs have bots for exploiting.
             | 
             | Damage wise, most orgs aren't going to be hurt much by NSA
             | or the Chinese equivalent getting access, but a Nigerian
             | criminal gang? They're far more likely to encrypt all your
             | files and demand a ransom.
        
               | mysidia wrote:
               | Still... At this point the default assumption should be
               | that every commit is a vulnerability or is facilitating a
               | potential vulnerability.
               | 
               | For example, a change from safe_fprintf to fprintf. Every
               | commit should be reviewed and either tweaked or rewritten
               | to ensure the task is being done in the safest way and
               | doesn't have anything that is "off" or that introduces a
               | deviation from the way that codebase normally goes about
               | tasks within functions.
        
               | KeplerBoy wrote:
               | Surely this is happening right now.
               | 
               | A lot of eyes are on the code. From all sides. Folks
               | trying to find old unpatched backdoors to exploit or
               | patch.
        
             | bananapub wrote:
             | it's not weird at all?
             | 
             | randomly reverting two years of things across dozens of
             | repositories will break them, almost definitely make them
             | unbuildable, but also make them unreleasable in case any
             | other change needs to happen soon.
             | 
             | all of their code needs to be audited to prove it shouldn't
             | be deleted, of course, but that can't happen in the next
             | ten minutes.
             | 
             | I swear that HN has the least-thought-through hot takes of
             | any media in the world.
        
               | datascienced wrote:
               | Yeah if you tried to revert stuff that was done weeks ago
               | on a relatively small team you know how much painstaking
               | work it can be.
        
               | ryanwaggoner wrote:
                | *I swear that HN has the least-thought-through hot takes
                | of any media in the world.*
               | 
               | The irony is too good.
        
             | kaliqt wrote:
             | You can't just go and rip out old code, it'll break
             | everything else, you have to review each commit and decide
             | what to do with each.
        
               | maerF0x0 wrote:
               | "immediately" could mean have humans swarm on the task
               | and make a choice, as opposed to
               | 
               |     for commit in $author_commits; do
               |         git revert "$commit"
               |     done
        
             | crest wrote:
             | Too much fallout.
        
           | andruby wrote:
           | How will you do that practically, though? That's probably
           | thousands of commits upon which tens or hundreds of thousands
           | of commits from others were built. You can't just roll back
           | everything two years and expect it not to break or bring back
           | older vulnerabilities that were patched in those commits.
        
             | kjs3 wrote:
             | Likely part of what the attacker(s) are counting on. Anyone
             | want to place odds this isn't the only thing that's going
             | to be found?
        
               | umanwizard wrote:
               | I'd bet you at even odds that nothing else malicious by
               | this person is found in 1 month, and at 1:2.5 odds that
               | nothing is found in a year.
        
           | neurostimulant wrote:
           | Rolling back two years worth of commits made by a major
           | contributor is going to be hell. I'm looking forward to see
           | how they'll do this.
        
             | joeyh wrote:
             | Not really. xz worked fine 2 years ago. Roll back to 5.3.1
             | and apply a fix for the 1 security hole that was fixed
             | since that old version. (ZDI-CAN-16587)
             | 
             | Slight oversimplification, see
             | https://bugs.debian.org/1068024 discussion.
        
           | planb wrote:
           | I don't thinks that's necessary: there are enough eyes on
           | _this_ person's work now.
        
             | hcks wrote:
             | No one will do it seriously
        
         | ptx wrote:
         | Couldn't the autoconf soup be generated from simpler inputs by
         | the CI/CD system to avoid this kind of problem?
         | Incomprehensible soup as a build artifact (e.g. executables) is
         | perfectly normal, but it seems to me that such things don't
         | belong in the source code.
         | 
         | (This means you too, gradle-wrapper! And your generated wrapper
         | for your generated wrapper. That junk is not source code and
         | doesn't belong in the repo.)
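          | 
          | In practice that just means regenerating the soup from the
          | checked-in sources rather than trusting what ships in the
          | tarball, e.g. (a sketch):
          | 
          |     autoreconf -fiv   # regenerate configure & friends
          |     ./configure && make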
        
           | bonzini wrote:
           | Yes, it's usually regenerated already. However even the
           | source is often pretty gnarly.
           | 
           | And in general, the build system of a large project is doing
           | a lot of work and is considered pretty uninteresting and
           | obscure. Random CMake macros or shell scripts would be just
           | as likely to host bad code.
           | 
           | This is also why I like meson, because it's much more
           | constrained than the others and the build system tends to be
           | more modular and the complex parts split across multiple
           | smaller, mostly independent scripts (written in Python or
           | bash, 20-30 lines max). It's still complex, but I find it
           | easier to organize.
        
             | jnxx wrote:
             | > And in general, the build system of a large project is
             | doing a lot of work and is considered pretty uninteresting
             | and obscure. Random CMake macros or shell scripts would be
             | just as likely to host bad code.
             | 
             | Build systems can even have undefined behaviour in the C++
             | sense. For example Conan 2 has a whole page on that.
        
           | mysidia wrote:
            | The other thing besides the autoconf soup is that the XZ
            | project contains incomprehensible binaries as "test data" -
            | the "bad-3-corrupt_lzma2.xz" part of the backdoor that they
            | even put in the repo.
           | 
           | It's entirely possible they could have got that injection
            | through review, even if they had that framework and instead
            | put it in source files used to generate the autoconf soup.
        
           | WatchDog wrote:
            | gradle-wrapper is just a convenience; you can always just
            | build the project with an installed version of Gradle.
           | Although I get your point, it's a great place to hide
           | nefarious code.
        
         | AeroNotix wrote:
         | Pure speculation but my guess is a specific state actor _ahem_
         | is looking for developers innocently working with open source
         | to then strongarm them into doing stuff like this.
        
           | dec0dedab0de wrote:
           | Or hiring them to do it for years without telling them why
           | until they need a favor.
        
           | Bulat_Ziganshin wrote:
            | Many people are patriots of their countries. If a state
            | agency approached them offering paid OSS work that also
            | helps their country fight terrorism/dictatorships/
            | capitalists/whatever-they-believe, they would feel like
            | they're killing two birds with one job.
        
         | bodyfour wrote:
         | > I've long since said that if you want to hide something
         | nefarious you'd do that in the GNU autoconf soup (and not in
         | "curl | sh" scripts).
         | 
         | Yeah, I've been banging on that same drum for ages too... for
         | example on this very site a decade ago:
         | https://news.ycombinator.com/item?id=7213563
         | 
         | I'm honestly surprised that this autoconf vector hasn't
         | happened more often... or more often that we know of.
        
           | pretzel5297 wrote:
           | Given that this was discovered by sheer luck, I'd expect way
           | more such exploits in the wild.
        
         | dist-epoch wrote:
         | > they have commits on some related projects like libarchive
         | 
         | Windows started using libarchive to support .rar, .7z, ...
         | 
         | https://arstechnica.com/gadgets/2023/05/cancel-your-winrar-t...
        
         | IshKebab wrote:
         | Yeah this was my first thought too. Though I think the case
         | against autoconf is already so overwhelming I think anyone
         | still using it is just irredeemable; this isn't going to
         | persuade them.
        
         | stabbles wrote:
          | How about wheels in the Python ecosystem?
        
         | WesolyKubeczek wrote:
         | > I've long since said that if you want to hide something
         | nefarious you'd do that in the GNU autoconf soup
         | 
         | If I recall correctly, xz can be built with both autoconf and
         | cmake, are cmake configs similarly affected?
        
           | amiga386 wrote:
           | Yes, there is evidence of sabotage on the CMake configs too.
           | 
           | https://git.tukaani.org/?p=xz.git;a=commit;h=f9cf4c05edd14de.
           | ..
        
       | bonyt wrote:
       | For those panicking, here are some key things to look for, based
       | on the writeup:
       | 
       | - A very recent version of liblzma5 - 5.6.0 or 5.6.1. This was
       | added in the last month or so. If you're not on a rolling release
       | distro, your version is probably older.
       | 
       | - A debian or RPM based distro of Linux on x86_64. In an apparent
       | attempt to make reverse engineering harder, it does not seem to
       | apply when built outside of deb or rpm packaging. It is also
       | specific to Linux.
       | 
       | - Running OpenSSH sshd from systemd. OpenSSH as patched by some
       | distros only pulls in libsystemd for logging functionality, which
       | pulls in the compromised liblzma5.
       | 
       | Debian testing already has a version called '5.6.1+really5.4.5-1'
       | that is really an older version 5.4, repackaged with a newer
       | version to convince apt that it is in fact an upgrade.
       | 
       | It is possible there are other flaws or backdoors in liblzma5,
       | though.
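        | 
        | A quick way to triage a box against the points above (a rough
        | sketch; package names and paths vary by distro):
        | 
        |     xz --version                # 5.6.0 / 5.6.1 are the bad ones
        |     dpkg -l liblzma5 2>/dev/null || rpm -q xz-libs
        |     ldd /usr/sbin/sshd | grep -i lzma   # distro-patched sshd only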
        
         | treffer wrote:
          | Ubuntu still ships 5.4.5 on 24.04 (atm).
         | 
         | I did a quick diff of the source (.orig file from
         | packages.ubuntu.com) and the content mostly matched the 5.4.5
         | github tag except for Changelog and some translation files. It
         | does match the tarball content, though.
         | 
         | So for 5.4.5 the tagged release and download on github differ.
         | 
          | It does change format strings, e.g.
          | 
          |     +#: src/xz/args.c:735
          |     +#, fuzzy
          |     +#| msgid "%s: With --format=raw, --suffix=.SUF is required unless writing to stdout"
          |     +msgid "With --format=raw, --suffix=.SUF is required unless writing to stdout"
          |     +msgstr "%s: amb --format=raw, --suffix=.SUF es necessari si no s'escriu a la sortida estandard"
         | 
          | There is no second argument for that "%s", for example: the
          | translated msgstr keeps it even though the new msgid dropped
          | it. I think there is at least a format string injection in
          | the older tarballs.
         | 
         | [Edit] formatting
        
           | fransje26 wrote:
           | Thanks for the heads up.
        
           | chasil wrote:
           | RHEL9 is shipping 5.2.5; RHEL8 is on 5.2.4.
        
           | mort96 wrote:
           | FYI, your formatting is broken. Hacker News doesn't support
           | backtick code blocks, you have to indent code.
           | 
           | Anyway, so... the xz project has been compromised for a long
           | time, at least since 5.4.5. I see that this JiaT75 guy has
           | been the primary guy in charge of at least the GitHub
           | releases for years. Should we view all releases after he got
           | involved as probably compromised?
        
             | treffer wrote:
             | Thank you, formatting fixed.
             | 
             | My TLDR is that I would regard all commits by JiaT75 as
             | potentially compromised.
             | 
              | Given the ability to manipulate git history, I am not sure
              | if a simple time-based revert is enough.
             | 
             | It would be great to compare old copies of the repo with
             | the current state. There is no guarantee that the history
             | wasn't tampered with.
             | 
              | Overall, the only safe action would IMHO be to establish a
              | new upstream from an assumed-good state, then fully audit
              | it.
             | At that point we should probably just abandon it and use
             | zstd instead.
        
               | tomrod wrote:
               | Not just Jia. There are some other accounts of concern
               | with associated activity or short term/bot-is names.
        
               | jdright wrote:
               | yes, like this one:
               | https://github.com/facebook/folly/pull/2153
        
               | ogurechny wrote:
               | Zstd belongs to the class of speed-optimized compressors
               | providing "tolerable" compression ratios. Their intended
               | use case is wrapping some easily compressible data with
               | negligible (in the grand scale) performance impact. So
               | when you have a server which sends gigabits of text per
               | second, or caches gigabytes of text, or processes a queue
               | with millions of text protocol messages, you can add
               | compression on one side and decompression on the other to
               | shrink them without worrying too much about CPU usage.
               | 
               | Xz is an implant of 7zip's LZMA(2) compression into
                | a traditional Unix archiver skeleton. It trades long
               | compression times and giant dictionaries (that need lots
               | of memory) for better ("much-better-than-deflate")
               | compression ratios. Therefore, zstd, no matter how
               | fashionable that name might be in some circles, is not a
               | replacement for xz.
               | 
               | It should also be noted that those LZMA-based archive
               | formats might not be considered state-of-the-art today.
               | If you worry about data density, there are options for
               | both faster compression at the same size, and better
               | compression in the same amount of time (provided that
               | data is generally compressible). 7zip and xz are
               | widespread and well tested, though, and allow
               | decompression to be fast, which might be important in
                | some cases. Alternatives often decompress much more
                | slowly.
               | This is also a trade-off between total time spent on X
               | nodes compressing data, and Y nodes decompressing data.
               | When X is 1, and Y is in the millions (say, software
               | distribution), you can spend A LOT of time compressing
               | even for relatively minuscule gains without affecting the
               | scales.
               | 
               | It should also be noted that many (or most) decoders of
               | top compressing archivers are implemented as virtual
               | machines executing chains of transform and unpack
               | operations defined in archive file over pieces of data
               | also saved there. Or, looking from a different angle,
               | complex state machines initializing their state using
               | complex data in the archive. Compressor tries to find
               | most suitable combination of basic steps based on input
               | data, and stores the result in the archive. (This is
               | logically completed in neural network compression tools
               | which learn what to do with data from data itself.) As
               | some people may know, implementing all that byte juggling
               | safely and effectively is a herculean task, and
               | compression tools had exploits in the past because of
               | that. Switching to a better solution might introduce a
               | lot more potentially exploited bugs.
        
               | treffer wrote:
                | Arch Linux switched from xz to zstd, with a negligible
                | increase in size (<1%) but a massive speedup on
                | decompression. This is exactly the use case of many
               | people downloading ($$$) and decompressing. It is the
               | software distribution case. Other distributions are
               | following that lead.
               | 
                | You should use ultra settings and >=19 as the compression
                | level. E.g. Arch used 20; higher compression levels do
                | exist, but they were already at a <1% size increase.
               | 
               | It does beat xz for these tasks. It's just not the
               | default settings as those are indeed optimized for the
               | lzo to gzip/bzip2 range.
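                | 
                | For anyone who wants to reproduce the comparison, roughly
                | (a sketch; exact numbers depend heavily on the input):
                | 
                |     xz -9e -T0 -k big.tar            # best ratio, slow
                |     zstd --ultra -20 -T0 big.tar     # near-xz size
                |     zstd -dc big.tar.zst >/dev/null  # unpacks much faster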
        
               | the8472 wrote:
               | Note that the xz CLI does not expose all available
               | compression options of the library. E.g. rust release
               | tarballs are xz'd with custom compression settings. But
               | yeah, zstd is good enough for many uses.
        
               | ogurechny wrote:
               | My bad, I was too focused on that class in general,
               | imagining "lz4 and friends".
               | 
                | Zstd does reach LZMA compression ratios at high levels,
                | but compression times also rise to LZMA levels. Which,
               | obviously, was clearly planned in advance to cover both
               | high speed online applications and slow offline
               | compression (unlike, say, brotli). Official limit on
               | levels can also be explained by absence of gains on most
               | inputs in development tests.
               | 
               | Distribution packages contain binary and mixed data,
               | which might be less compressible. For text and mostly
               | text, I suppose that some old style LZ-based tools can
                | still produce an archive roughly 5 percent smaller (and
               | still unpack fast); other compression algorithms can
               | certainly squeeze it much better, but have symmetric time
               | requirements. I was worried about the latter kind being
               | introduced as a replacement solution.
        
               | shanipribadi wrote:
               | Looking forward to the time when Meta will make
               | https://github.com/facebookincubator/zstrong.git public
               | 
               | found it mentioned in https://github.com/facebook/proxyge
               | n/blob/main/build/fbcode_..., looks like it's going to be
               | cousin of zstd, but maybe for the stronger compression
               | use cases
        
               | joveian wrote:
               | Note that zstd (the utility) currently links to liblzma
               | since it can compress and decompress other formats.
        
           | jwilk wrote:
           | "#, fuzzy" means the translation is out-of-date and it will
           | be discarded at compile time.
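            | 
            | (And gettext's own tooling should flag argument/format
            | mismatches on non-fuzzy entries; a sketch, assuming the
            | Catalan catalog lives at po/ca.po:)
            | 
            |     msgfmt --check-format -o /dev/null po/ca.po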
        
             | treffer wrote:
              | I tried to get the translation to trigger by switching to
              | French and it does not show up. You are right.
             | 
             | So it's just odd that the tags and release tarballs
             | diverge.
        
         | hostyle wrote:
         | $ dpkg-query -W liblzma5
         | 
         | liblzma5:amd64 5.4.1-0.2
        
         | fransje26 wrote:
          | I did notice that my Debian-based system got noticeably slower
          | and unresponsive at times over the last two weeks, without
          | obvious reason. Could it be related?
         | 
         | I read through the report, but what wasn't directly clear to me
         | was: what does the exploit actually do?
         | 
         | My normal internet connection has such an appalling upload that
         | I don't think anything relevant could be uploaded. But I will
         | change my ssh keys asap.
        
           | anarazel wrote:
           | > I did notice that my debian-based system got noticeably
           | slower and unresponsive at times the last two weeks, without
           | obvious reasons. Could it be related?
           | 
           | Possible but unlikely.
           | 
           | > I read through the report, but what wasn't directly clear
           | to me was: what does the exploit actually do?
           | 
           | It injects code that runs early during sshd connection
           | establishment. Likely allowing remote code execution if you
           | know the right magic to send to the server.
        
             | fransje26 wrote:
             | Thank you for the explanation.
        
           | cpach wrote:
           | Are you on stable/testing/unstable?
           | 
           | With our current knowledge, stable shouldn't be affected by
           | this.
        
             | fransje26 wrote:
             | Stable, luckily. Thank you for the information.
        
         | idoubtit wrote:
         | The article gives a link to a simple shell script that detects
         | the signature of the compromised function.
         | 
         | > Running OpenSSH sshd from systemd
         | 
         | I think this is irrelevant.
         | 
         | From the article: "Initially starting sshd outside of systemd
         | did not show the slowdown, despite the backdoor briefly getting
         | invoked." If I understand correctly the whole section, the
         | behavior of OpenSSH may have differed when launched from
         | systemd, but the backdoor was there in both cases.
         | 
         | Maybe some distributions that don't use systemd strip the libxz
         | code from the upstream OpenSSH release, but I wouldn't bet on
         | it if a fix is available.
        
           | bonyt wrote:
           | I think the distributions that _do_ use systemd are the ones
           | that add the libsystemd code, which in turn brings in the
            | liblzma5 code. So, it may not be entirely relevant how it is
            | run, but it needs to be a distro-patched version of OpenSSH.
        
           | anarazel wrote:
           | > From the article: "Initially starting sshd outside of
           | systemd did not show the slowdown, despite the backdoor
           | briefly getting invoked." If I understand correctly the whole
           | section, the behavior of OpenSSH may have differed when
           | launched from systemd, but the backdoor was there in both
           | cases.
           | 
           | It looks like the backdoor "deactivates" itself when it
           | detects being started interactively, as a security researcher
           | might. I was eventually able to circumvent that, but unless
           | you do so, it'll not be active when started interactively.
           | 
           | However, the backdoor would also be active if you started it
            | with a shell script (as the traditional sys-v rc scripts
           | did) outside the context of an interactive shell, as TERM
           | wouldn't be set either in that context.
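            | 
            | Roughly the difference between these two launch environments
            | (a sketch, not the exact check the backdoor performs):
            | 
            |     /usr/sbin/sshd -t           # interactive: TERM is set
            |     env -i /usr/sbin/sshd -t    # rc-script-like: no TERM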
           | 
           | > Maybe some distributions that don't use systemd strip the
           | libxz code from the upstream OpenSSH release, but I wouldn't
           | bet on it if a fix is available.
           | 
           | There's no xz code in openssh.
        
           | nwallin wrote:
           | > Maybe some distributions that don't use systemd strip the
           | libxz code from the upstream OpenSSH release, but I wouldn't
           | bet on it if a fix is available.
           | 
           | OpenSSH is developed by the OpenBSD project, and systemd is
           | not compatible with OpenBSD. The upstream project has no
           | systemd or liblzma code to strip. If your sshd binary links
           | to liblzma, it's because the package maintainers for your
           | distro have gone out of their way to add systemd's patch to
           | your sshd binary.
           | 
           | > From the article: "Initially starting sshd outside of
           | systemd did not show the slowdown, despite the backdoor
           | briefly getting invoked." If I understand correctly the whole
           | section, the behavior of OpenSSH may have differed when
           | launched from systemd, but the backdoor was there in both
           | cases.
           | 
           | From what I understand, the backdoor detects if it's in any
           | of a handful of different debug environments. If it's in a
           | debug environment or not launched by systemd, it won't hook
           | itself up. ("nothing to see here folks...") But if sshd isn't
           | linked to liblzma to begin with, none of the backdoor's code
           | even exists in the processes' page maps.
           | 
           | I'm still downgrading to an unaffected version, of course,
           | but it's nice to know I was never vulnerable just by typing
           | 'ldd `which sshd`' and not seeing liblzma.so.
        
         | blcknight wrote:
         | > Debian testing already has a version called
         | '5.6.1+really5.4.5-1' that is really an older version 5.4,
         | repackaged with a newer version to convince apt that it is in
         | fact an upgrade.
         | 
         | I'm surprised .deb doesn't have a better approach. RPM has
         | epoch for this purpose http://novosial.org/rpm/epoch/index.html
        
           | pja wrote:
           | Debian packages can have epochs too. I'm not sure why the
           | maintainers haven't just bumped the epoch here.
           | 
           | Maybe they're expecting a 5.6.x release shortly that fixes
           | all these issues & don't want to add an epoch for a very
           | short term packaging issue?
        
           | nicolas_17 wrote:
           | .deb has epochs too, but I _think_ Debian developers avoid it
           | where possible because 1:5.4.5 is interpreted as newer than
           | _anything_ without a colon, so it would break eg. packages
           | that depend on liblzma  >= 5.0, < 6. There may be more common
           | cases that aren't coming to mind now.
        
           | stefanor wrote:
           | Debian has epochs, but it's a bad idea to use them for this
           | purpose.
           | 
           | Two reasons:
           | 
            | 1. Once you bump the epoch, you have to use it forever.
            | 
            | 2. The deb filename often doesn't contain the epoch (we use a
            | colon, which isn't valid on many filesystems), so an epoch
            | revert will give the same file name as pre-epoch, which
            | breaks your repository.
           | 
           | So, the current best practice is the +really+ thing.
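            | 
            | You can check that dpkg agrees with both approaches (a quick
            | sketch):
            | 
            |     dpkg --compare-versions 5.6.1+really5.4.5-1 gt 5.6.1-1 \
            |         && echo upgrade
            |     dpkg --compare-versions 1:5.4.5-1 gt 5.6.1-1 \
            |         && echo "epoch works too, but sticks around forever"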
        
             | intel_brain wrote:
             | Got this on OpenSUSE: `5.6.1.revertto5.4-3.2`
        
             | o11c wrote:
             | Honestly, the Gentoo-style global blacklist (package.mask)
             | to force a downgrade is probably a better approach for
             | cases like this. Epochs only make sense if your upstream is
             | insane and does not follow a consistent numbering system.
        
             | kzrdude wrote:
             | Thanks for the info, the filename thing sounds like a
             | problem, one aspect of the epoch system doesn't work for
             | the purpose then.
        
           | 5p4n911 wrote:
           | I really like the XBPS way of the reverts keyword in the
           | package template that forces a downgrade from said software
           | version. It's simple but works without any of the troubles
           | RPM epochs have with resolving dependencies as it's just
           | literally a way to tell xbps-install that "yeah, this is a
           | lower version number in the repository but you should update
           | anyway".
        
         | NotPractical wrote:
         | > If you're not on a rolling release distro, your version is
         | probably older.
         | 
         | Ironic considering security is often advertised as a feature of
         | rolling release distros. I suppose in most instances it does
         | provide better security, but there are some advantages to
         | Debian's approach (stable Debian, that is).
        
           | javajosh wrote:
           | _> Ironic considering security is often advertised as a
           | feature of rolling release distros._
           | 
           | Security _is_ a feature of rolling release. But supply-chain
           | attacks like this are the exception to the rule.
        
             | yreg wrote:
             | Isn't that what security-updates-only is for?
             | 
             | This particular backdoor is not shipped inside of a
             | security patch, right?
        
           | leeoniya wrote:
           | i mean, rolling implies rolling 0-days, too.
        
         | pdw wrote:
         | Focusing on sshd is the wrong approach. The backdoor was in
         | liblzma5. It was discovered to attack sshd, but it very likely
         | had other targets as well. The payload hasn't been analyzed
          | yet, but _almost everything_ links to liblzma5. Firefox and
         | Chromium do. Keepassxc does. And it might have made arbitrary
         | changes to your system, so installing the security update might
         | not remove the backdoor.
        
           | junon wrote:
           | From what I'm understanding it's trying to patch itself into
           | the symbol resolution step of ld.so specifically for
           | libcrypto under systemd on x86_64. Am I misreading the
           | report?
           | 
           | That's a strong indication it's targeting sshd specifically.
        
             | pdw wrote:
             | Lots of software links both liblzma and libcrypto. As I
             | read Andres Freund's report, there is still a lot of
             | uncertainty:
             | 
             | "There's lots of stuff I have not analyzed and most of what
             | I observed is purely from observation rather than
             | exhaustively analyzing the backdoor code."
             | 
             | "There are other checks I have not fully traced."
        
           | saagarjha wrote:
           | It checks for argv[0] == "sshd"
        
       | colanderman wrote:
       | The latest commit from the user who committed those patches is
       | weirdly a simplification of the security reporting process, to
       | not request as much detail:
       | 
       | https://github.com/tukaani-project/xz/commit/af071ef7702debe...
       | 
       | Not sure what to make of this.
        
         | caelum19 wrote:
         | Potentially the purpose is that if someone goes to the effort
         | to get those details together, they are more likely to send the
         | same report to other trusted individuals. Maybe it was
         | originally there to add legitimacy, then they got a report sent
         | in, and removed it to slow the spread of awareness
        
           | londons_explore wrote:
           | > Affected versions of XZ Utils
           | 
           | Most people, to find the affected versions, would either have
           | to bisect or delve deep enough to find the offending commit.
           | Either of which would reveal the attacker.
           | 
           | By not asking for the version, there is a good chance you
           | just report "It's acting oddly, plz investigate".
        
         | rany_ wrote:
         | I think the reason is pretty obvious. They want you to waste
         | more time after you've submitted the security report and
         | maximize the amount of back and forth. Basically the hope is
         | that they'd be able to pester you with requests for more
         | info/details in order to "resolve the issue" which would give
         | them more time to exploit their targets.
        
         | colanderman wrote:
         | That repository is now disabled. But here's a similar change to
         | the .github repository of tukaani-project from @JiaT75 to the
         | bug report template:                   + or create a private
         | Security Advisory instead.
         | 
         | Under a commit titled "Wrap text on Issue template .yaml
         | files."
         | 
         | [1] https://github.com/tukaani-
         | project/.github/commit/44b766adc4...
        
       | AdmiralAsshat wrote:
       | Yikes! Do you have any info on the individual's background or
       | possible motivations?
        
         | bbarnett wrote:
         | Yikes indeed. This fix is being rolled out very fast, but what
         | about the entire rest of the codebase? And scripts? I mean,
         | years of access? I'd trust no aspect of this code until a full
         | audit is done, at least of every patch this author contributed.
         | 
         | (note: not referring to fedora here, a current fix is required.
         | But just generally. As in, everyone is rolling out this fix,
         | but... I mean, this codebase is poison in my eyes without a
         | solid audit)
        
           | bbarnett wrote:
           | This seems to be the account, correct me if wrong (linked
           | from the security email commit link):
           | 
           | https://github.com/JiaT75
           | 
           | I hope authors of all these projects have been alerted.
           | 
           | STest - Unit testing framework for C/C++. Easy to use by
           | simply dropping stest.c and stest.h into your project!
           | 
           | libarchive/libarchive - Multi-format archive and compression
           | library
           | 
           | Seatest - Simple C based Unit Testing
           | 
           | Everything this account has done should be investigated.
           | 
            | Whoa, is this legit or some sort of scam on Google in some
           | way?:
           | 
           | https://github.com/google/oss-fuzz/pull/11587
           | 
           | edit: I have to be missing something, or I'm confused. The
           | above author seems to be primary contact for xz? Have they
           | just taken over?? Or did the bad commit come from another
           | source, and a legit person applied it?
           | 
           | A bit confused here.
        
             | tux3 wrote:
             | The concern about other projects is fine, but let's be
             | careful with attacks directed at the person.
             | 
             | Maybe their account is compromised, maybe the username
             | borrows the identity of an innocent person with the same
             | name.
             | 
             | Focus on the code, not people. No point forming a mob.
             | 
             | (e: post above was edited and is no longer directed at the
             | person. thanks for the edit.)
        
               | simiones wrote:
               | It's important to focus on people, not just code, when
               | suspecting an adversary. Now, I have no idea if this is
               | the right account, and if it has recently been
               | compromised/sold/lost, or if it has always been under the
               | ownership of the person who committed the backdoor. But
               | IF this is indeed the right account, then it's important
               | to block any further commit from it to any project, no
               | matter how innocuous it seems, and to review thoroughly
               | any past commit. For the most security-conscious
               | projects, it would be a good idea to even consider
               | reverting and re-implementing any work coming from this
               | account if it's not fully understood.
               | 
               | An account that has introduced a backdoor is not the same
                | thing as an account that committed a bug.
        
               | tux3 wrote:
                | I agree we should look at the account and its
                | contributions; I make a distinction between the account
                | and the person.
               | 
               | Sometimes the distinction is not meaningful, but better
               | safe than sorry.
        
               | tsimionescu wrote:
               | Oh, agreed then.
        
               | kevin_b_er wrote:
               | They appear to have moved carefully to set this up over
               | the course of weeks by setting up the framework to
               | perform this attack.
               | 
               | I would now presume this person to be a hostile actor and
               | their contributions anywhere and everywhere must be
                | audited. I would not wait for them to cry 'but my brother
                | did it', because an actual malicious actor would say the
                | same thing. The 'mob' should be poring over everything
                | they've touched.
               | 
               | Audit now and audit aggressively.
        
               | bbarnett wrote:
               | My above post shows the primary domain for xz moving from
               | tukaani.org to xz.tukaani.org. While it's hosted on
               | github:
               | 
               | $ host xz.tukaani.org
               | 
               |  _host xz.tukaani.org is an alias for tukaani-
               | project.github.io._
               | 
               | And originally it was not:
               | 
               | $ host tukaani.org
               | 
               |  _tukaani.org has address 5.44.245.25_ (seemingly in
               | Finland)
               | 
               | It was moved there in Jan of this year, as per the commit
               | listed in my prior post. By this same person/account.
                | This means that instead of Lasse Collin's more
                | restrictive hosting, the webpage is now directly under
                | the control of the untrusted account, which can edit it
                | without anyone else's involvement.
               | 
               | For example, to make subtle changes in where to report
               | security issues to, and so on.
               | 
               | So far I don't see anything nefarious, but at the same
               | time, isn't this the domain/page hosting bad tarballs
               | too?
        
               | buildbot wrote:
               | > tukaani.org has address 5.44.245.25 (seemingly in
               | Finland)
               | 
               | Hetzner?
        
               | TimWolla wrote:
                | No:
                | 
                |     route:          5.44.240.0/21
                |     descr:          Zoner Oy
                |     origin:         AS201692
                |     mnt-by:         MNT-ZONER
                |     created:        2014-09-03T08:09:00Z
                |     last-modified:  2014-09-03T08:09:00Z
                |     source:         RIPE
        
               | buildbot wrote:
                | Interesting, seems to be a tiny Finnish hosting company:
               | https://www.zoner.fi/english/
        
               | whizzter wrote:
                | It's Finnish; Oy is short for "osakeyhtiö" (share
                | company, basically an LLC). It seems to be
                | registered/hosted at https://www.zoner.fi/
        
               | yencabulator wrote:
               | For what it's worth, tukaani is how you spell toucan (the
               | bird) in Finnish, and Lasse is a common Finnish name; the
               | site being previously hosted in Finland is very
               | plausible.
        
               | Stagnant wrote:
                | Yeah, according to their website[0] it looks like the
                | majority of the past contributors were Finnish, so nothing odd
               | about the hosting provider. On the same page it says that
               | Jia Tan became co-maintainer of xz in 2022.
               | 
               | 0: https://tukaani.org/about.html
        
               | pja wrote:
                | This account changed the instructions for reporting
                | security issues in the xz github as their very last
                | commit:
                | 
                |     commit af071ef7702debef4f1d324616a0137a5001c14c (HEAD -> master, origin/master, origin/HEAD)
                |     Author: Jia Tan <jiat0218@gmail.com>
                |     Date:   Tue Mar 26 01:50:02 2024 +0800
                | 
                |         Docs: Simplify SECURITY.md.
                | 
                |     diff --git a/.github/SECURITY.md b/.github/SECURITY.md
                |     index e9b3458a..9ddfe8e9 100644
                |     --- a/.github/SECURITY.md
                |     +++ b/.github/SECURITY.md
                |     @@ -16,13 +16,7 @@ the chance that the exploit will be used before a patch is released.
                |      You may submit a report by emailing us at
                |      [xz@tukaani.org](mailto:xz@tukaani.org), or through
                |      [Security Advisories](https://github.com/tukaani-project/xz/security/advisories/new).
                |     -While both options are available, we prefer email. In any case, please
                |     -provide a clear description of the vulnerability including:
                |     -
                |     -- Affected versions of XZ Utils
                |     -- Estimated severity (low, moderate, high, critical)
                |     -- Steps to recreate the vulnerability
                |     -- All relevant files (core dumps, build logs, input files, etc.)
                |     +While both options are available, we prefer email.
                |      
                |      This project is maintained by a team of volunteers on a reasonable-effort
                |      basis. As such, please give us 90 days to work on a fix before
                | 
                | Seems innocuous, but maybe they were planning further
                | changes.
        
               | bombcar wrote:
               | > Seems innocuous, but maybe they were planning further
               | changes.
               | 
               | Seems like an attempt to get 90 days of "use" of this
                | vulnerability after discovery. If only they had checked
                | performance beforehand!
        
               | hackernudes wrote:
               | No, they just removed the bullet points about what to
               | include in a report. The 90 days part was in both
               | versions.
        
               | bombcar wrote:
               | True, but the "talk only to me" part was new, I think.
        
               | meragrin_ wrote:
               | Yes. An incomplete report allows for dragging out
               | "fixing" the issue longer.
        
               | mxmlnkn wrote:
                | The website change reminds me a bit of lbzip2.org:
                | https://github.com/kjn/lbzip2/issues/26#issuecomment-1582645...
               | Although, at the moment, it only seems to be spam. The
               | last commit was 6 years ago, so I guess that's better
               | than a maintainer change...
        
               | mort96 wrote:
               | If the owner of the account is innocent and their account
               | was compromised, it's on them to come out and say that.
               | All signs currently point to the person being a malicious
               | actor, so I'll proceed on that assumption.
        
               | londons_explore wrote:
               | Does the person exist at all? Maybe they're a persona of
               | a team working at some three letter agency...
        
               | bonzini wrote:
               | Or for some three letter party.
        
               | Citizen8396 wrote:
               | Probably not. I did some pattern of life analysis on
               | their email/other identifiers. It looks exactly like when
                | I set up a burner online identity: just enough to get
               | past platform registration, but they didn't care enough
               | to make it look real.
               | 
               | For example, their email is only registered to GitHub and
               | Twitter. They haven't even logged into their Google
               | account for almost a year. There's also no history of it
               | being in any data breaches (because they never use it).
               | 
               | Burn the witch.
        
             | soraminazuki wrote:
             | Oh no, not libarchive! GitHub search shows 6 pull requests
             | were merged back in 2021.
             | 
             | https://github.com/search?q=repo%3Alibarchive%2Flibarchive+
             | j...
             | 
             | It does look innocent enough though. Let's hope there's no
             | unicode trickery involved...
        
               | steelframe wrote:
               | Maybe not. They removed safe_fprintf() here and replaced
               | it with the (unsafe) fprintf().
               | 
               | https://github.com/libarchive/libarchive/commit/e37efc16c
               | 866...
        
               | buildbot wrote:
               | If libarchive is also backdoored, would that allow
               | specifically crafted http gzip encoded responses to do
               | bad things?
        
               | nicolas_17 wrote:
               | What software is using libarchive to decode HTTP
               | responses?
        
               | buildbot wrote:
               | I don't know, way outside my domain. Possibly none I
               | guess?
        
               | giantrobot wrote:
               | FreeBSD's archive tools are built on top of libarchive
               | IIRC. Not sure about the other BSDs.
        
               | mattbee wrote:
                | Well for one, the PowerShell script I just wrote to build
               | all the 3rd-party library dependencies for a video game.
               | 
               | tar.exe was added to Windows this January, sourced from
               | libarchive: https://learn.microsoft.com/en-
               | us/virtualization/community/t...
               | 
               | Unlike the GNU tar I'm used to, it's actually a "full
               | fat" command line archiving tool, compressing &
               | decompressing zip, xz, bz2 on the command-line - really
               | handy :-O
        
               | duskwuff wrote:
               | No. There's no good reason HTTP response decoding would
               | ever be implemented in terms of libarchive; using libz
               | directly is simpler and supports some use cases (like
               | streaming reads) which libarchive doesn't.
        
               | billyhoffman wrote:
               | EDIT: Ahh, I was wrong and missed the addition of
               | "strerror"
               | 
               | The PR is pretty devious.
               | 
                | JiaT75 claims it "Added the error text when printing out
               | warning and errors in bsdtar when untaring. Previously,
               | there were cryptic error messages" and cites this as
               | fixing a previous issue.
               | 
               | https://github.com/libarchive/libarchive/pull/1609
               | 
               | However it doesn't actually do that!
               | 
                | The PR literally removes a newline between 2 arguments
               | on the first `safe_fprintf()` call, and converts the
               | `safe_fprintf()` to unsafe direct calls to `fprintf()`.
               | In all cases, the arguments to these functions are
               | exactly the same! So it doesn't actually make the error
               | messages any different, it doesn't actually solve the
               | issue it references. And the maintainer accepted it with
               | no comments!
        
               | zb3 wrote:
               | But I see the "strerror" call is added
        
               | londons_explore wrote:
               | reread it...
               | 
               | It does remove the safe prefixes... But it also adds one
               | print statement to "strerror()", which could plausibly
               | give better explanations for the error code...
               | 
                | The only suspicious thing here is the lack of the safe_
               | prefix (and the potential for the strerror() function to
               | already be backdoored elsewhere in another commit)
        
               | dchest wrote:
               | That seems to be fine. safe_fprintf() takes care of non-
               | printable characters. It's used for
               | archive_entry_pathname, which can contain them, while
               | "unsafe" fprintf is used to print out
               | archive_error_string, which is a library-provided error
               | string, and strerror(errno) from libc.
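                | 
                | A minimal sketch of that distinction (my own
                | illustration, not libarchive's actual safe_fprintf):
                | escape non-printable bytes in untrusted strings so they
                | can't smuggle in terminal escape sequences, while trusted
                | library/libc error strings can go through plain fprintf:
                | 
                |     #include <ctype.h>
                |     #include <stdio.h>
                | 
                |     /* For untrusted text such as archive entry
                |        pathnames: print non-printable bytes as \xNN. */
                |     static void print_escaped(FILE *out, const char *s)
                |     {
                |         for (; *s; s++) {
                |             unsigned char c = (unsigned char)*s;
                |             if (isprint(c))
                |                 fputc(c, out);
                |             else
                |                 fprintf(out, "\\x%02x", c);
                |         }
                |     }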
        
               | mbauman wrote:
               | We know there's long-cons in action here, though. This PR
               | needn't be the exploit. It needn't be anywhere
               | _temporally_ close to the exploit. It could just be
               | laying groundwork for later pull requests by potentially
               | different accounts.
        
               | datenwolf wrote:
               | Exactly. If we assume the backdoor via liblzma as a
               | template, this could be a ploy to hook/detour both
               | fprintf and strerror in a similar way. Get it to diffuse
               | into systems that rely on libarchive in their package
               | managers.
               | 
                | When the trap is in place, deploy a crafted package file
                | that appears invalid at the surface level and triggers
                | the trap. At that moment, fetch the payload from the
                | (already opened) archive file descriptor and execute it,
                | but also patch the internal state of libarchive so that
                | it processes the rest of the archive file as if nothing
                | happened, with the desired outcome also appearing on the
                | system.
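                | 
                | For reference, a minimal sketch of the GNU ifunc
                | mechanism such a detour could piggyback on (illustrative
                | only, not the liblzma payload): the resolver runs during
                | dynamic linking, before relocations are locked down, and
                | decides what the symbol ends up pointing to.
                | 
                |     #include <stddef.h>
                | 
                |     /* Dummy implementations, for illustration only. */
                |     static int crc_generic(const void *p, size_t n)
                |     { (void)p; return (int)n; }
                |     static int crc_clmul(const void *p, size_t n)
                |     { (void)p; return (int)n; }
                | 
                |     /* A legitimate resolver picks a CPU-optimized
                |        routine; a malicious one can return anything, or
                |        tamper with other state while it runs. */
                |     static int (*resolve_crc(void))(const void *, size_t)
                |     {
                |         /* __builtin_cpu_supports: GCC builtin (x86) */
                |         return __builtin_cpu_supports("sse4.2")
                |                    ? crc_clmul : crc_generic;
                |     }
                | 
                |     int crc(const void *p, size_t n)
                |         __attribute__((ifunc("resolve_crc")));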
        
               | zrm wrote:
               | Assuming there isn't another commit somewhere modifying a
               | library-provided error string or anything returned by
               | libc. There is all kinds of mischief to be had there,
               | which may or may not have already happened, e.g. now you
               | do some i18n and introduce Unicode shenanigans.
        
             | Zetobal wrote:
             | That looks like a repo that would sound alarms if you look
             | at it from a security standpoint.
        
             | davexunit wrote:
             | JiaT75 also has commits in wasmtime according to
             | https://hachyderm.io/@joeyh/112180082372196735
        
               | pinko wrote:
               | per https://hachyderm.io/@bjorn3/112180226784517099, "The
               | only contribution by them to Wasmtime is a doc change. No
               | actual code or binary blobs have been changed by them."
        
               | mintplant wrote:
               | Just a documentation change, fortunately:
               | 
               | https://github.com/bytecodealliance/wasmtime/commits?auth
               | or=...
               | 
               | They've submitted little documentation tweaks to other
               | projects, too; for example:
               | 
               | https://learn.microsoft.com/en-us/cpp/overview/whats-new-
               | cpp...
               | 
               | I don't know whether this is a formerly-legitimate open
               | source contributor who went rogue, or a deep-cover
               | persona spreading innocuous-looking documentation changes
               | around to other projects as a smokescreen.
        
               | bombcar wrote:
                | Minor documentation change PRs are a well-known tactic
               | used to make your GitHub profile look better (especially
               | to potential employers).
               | 
               | He could be doing the same thing for other reasons;
                | nobody really digs into anything very deeply, so I could
                | see someone handing over co-maintenance of a project based
                | on a decent-looking GitHub graph and a generally reasonable
                | impression.
        
               | mysidia wrote:
                | Consider the possibility that those types of submissions
                | were part of the adversary's strategy to make their
                | account appear more legitimate, rather than appearing out
                | of nowhere wanting to become the maintainer of some
                | project.
        
             | ikmckenz wrote:
             | > The above author seems to be primary contact for xz?
             | 
             | They made themselves the primary contact for xz for Google
             | oss-fuzz about one year ago: https://github.com/google/oss-
             | fuzz/commit/6403e93344476972e9...
        
             | bed99 wrote:
              | A SourceGraph search like this one shows both identities:
              | 
              | https://sourcegraph.com/search?q=context:global+JiaT75&patte...
              | 
              | - Jia Tan <jiat75@gmail.com>
              | 
              | - jiat75 <jiat0218@gmail.com>
              | 
              |     amap = generate_author_map("xz")
              |     test_author = amap.get_author_by_name("Jia Cheong Tan")
              |     self.assertEqual(
              |         test_author.names,
              |         {"Jia Cheong Tan", "Jia Tan", "jiat75"},
              |     )
              |     self.assertEqual(
              |         test_author.mail_addresses,
              |         {"jiat0218@gmail.com", "jiat75@gmail.com"},
              |     )
        
               | pryce wrote:
                | Additionally, even though the commit messages they've
                | made are mostly plain, there may be features of their
                | commit messages that could provide leads, such as their
                | using what looks like a very obscure racist joke of
               | referring to a gitignore file as a 'gitnigore'. There's
               | barely a handful of people on the whole planet making
               | this 'joke'.
        
               | berdario wrote:
               | Can you point to where you saw that racist joke?
               | 
               | I don't see anything at https://sourcegraph.com/search?q=
               | context:global+author:jiat0...
        
               | pryce wrote:
               | first commit made in one of JiaT75's other repos
               | https://github.com/JiaT75/STest/commits/master/
        
               | berdario wrote:
                | Thank you. If you hadn't explained the background, I
                | totally would've thought that this was just an innocent
                | typo.
               | 
               | (I still think it's like... 60% a typo? don't know)
               | 
               | Anyhow, other people called the CCing of JiaT75 by Lasse
               | suspicious:
               | 
               | https://news.ycombinator.com/item?id=39867593
               | 
               | https://lore.kernel.org/lkml/20240320183846.19475-2-lasse
               | .co...
               | 
               | Someone pointed out the "mental health issues" and "some
               | other things"
               | 
               | https://news.ycombinator.com/item?id=39868881
               | 
               | https://www.mail-archive.com/xz-
               | devel@tukaani.org/msg00567.h...
               | 
               | Lasse is of course a Nordic name, and the whole project
                | has a Finnish name and hosting
               | 
               | https://news.ycombinator.com/item?id=39866902
               | 
               | If I wanted to go rogue and insert a backdoor in a
               | project of mine, I'd probably create a new sockpuppet
               | account and hand over management of the project to them.
                | The above is worryingly compatible with this hypothesis.
               | 
               | OTOH, JiaT75 did not reuse the existing hosting provider,
                | but rather switched the site to github.io and uploaded
                | the old tarballs there:
               | 
               | https://github.com/tukaani-project/tukaani-
               | project.github.io...
               | 
               | If JiaT75 is an old-timer in the project, wouldn't they
               | have kept using the same hosting infra?
               | 
               | There are also some other grim possibilities: someone
               | forced Lasse to hand over the project (violence or
               | blackmailing? as farfetched as that sounds)... or maybe
               | stole Lasse devices (and identity?) and now Lasse is
               | incapacitated?
               | 
                | Or maybe it's just some other fellow Scandinavian who
                | pretended to be Chinese and gained Lasse's trust. In which
               | case I wish Lasse all the best, and hope they'll be able
               | to clear their name.
               | 
               | Is the same person sockpuppeting Hans Jansen? It's
               | amusing (but unsurprising) that they are using both
                | German-sounding and Chinese-sounding identities.
               | 
               | That said, I don't think it's unreasonable to think that
               | Lasse genuinely trusted JiaT75, genuinely believed that
               | the ifunc stuff was reasonable (it probably isn't:
               | https://news.ycombinator.com/item?id=39869538 ) and
               | handed over the project to them.
               | 
               | And at the end of the day, the only thing linking JiaT75
                | to a Nordic identity is a Nordic racist joke which could
               | well be a typo. People already checked the timezone of
               | the commits, but I wonder if anyone has already checked
               | the time-of-day of those commits... does it actually
               | match the working hours that a person genuinely living
                | (and sleeping) in China would follow? (Of course, that's
                | also easy to manipulate, but maybe they could've slipped
                | up.)
               | 
               | Anyhow, I guess that security folks at Microsoft and
               | Google (because of JiaT75 email account) are probably
               | going to cooperate with authorities on trying to pin down
               | the identity of JiaT75 (which might not be very useful,
               | depending on where they live).
        
               | berdario wrote:
               | > does it actually match the working hours that a person
               | genuinely living (and sleeping) in China would follow?
               | 
               | No, it doesn't:
               | 
               | https://play.clickhouse.com/play?user=play#U0VMRUNUIHRvSG
               | 91c...
               | 
               | The vast majority of their Github interactions are
               | between 12.00 UTC and 18.00 UTC
        
               | junon wrote:
               | It's worth mentioning Lasse is still online in the Libera
               | chat room, idling. Nothing's been said.
        
               | Bulat_Ziganshin wrote:
                | I think it's American trauma. Outside of the Western
               | hemisphere, sexist and racist jokes are just jokes
        
               | berdario wrote:
               | I tried to understand the significance of this (parent
               | maybe implied that they reused a completely fictitious
               | identity generated by some test code), and I think this
               | is benign.
               | 
               | That project just includes some metadata about a bunch of
               | sample projects, and it links directly to a mirror of the
               | xz project itself:
               | 
               | https://github.com/se-sic/VaRA-Tool-
               | Suite/blob/982bf9b9cbf64...
               | 
               | I assume it downloads the project, examines the git
               | history, and the test then ensures that the correct
               | author name and email addresses are recognized.
               | 
               | (that said, I haven't checked the rest of the project, so
               | I don't know if the code from xz is then subsequently
                | built, and/or if this other project could use that in an
               | unsafe manner)
        
             | metzmanj wrote:
             | >Woha, is this legit or some sort of scam on Google in some
             | way?:
             | 
             | I work on OSS-Fuzz.
             | 
             | As far as I can tell, the author's PRs do not compromise
             | OSS-Fuzz in any way.
             | 
             | OSS-Fuzz doesn't trust user code for this very reason.
        
               | packetlost wrote:
               | It looks more like they disabled a feature of oss-fuzz
               | that would've caught the exploit, no?
        
               | metzmanj wrote:
               | That's what people are saying though I haven't had the
               | chance to look into this myself.
               | 
               | Fuzzing isn't really the best tool for catching bugs the
               | maintainer intentionally inserted though.
        
               | bombcar wrote:
               | It's more likely that fuzzing would blow up on new code
               | and they wanted an excuse to remove it.
               | 
               | After all, if it hadn't had a performance regression
               | (someone could submit a PR fixing whatever slowed it
               | down, heh) it still wouldn't be known.
        
             | jnxx wrote:
             | There is also a variety of new, parallelized
             | implementations of compression algorithms which would be
             | good to have a close look at. Bugs causing undefined
             | behaviour in parallel code are notoriously hard to see, and
             | the parallel versions (which are actually much faster)
              | could take the place of well-established programs which
             | have earned a lot of trust.
        
           | formerly_proven wrote:
           | Well that account also did most of the releases since 5.4.0.
        
             | alwaysbeconsing wrote:
              | +1. You can see from the project homepage
              | (http://web.archive.org/web/20240329165859/https://xz.tukaani...)
              | that they have had some release responsibility since 5.2.12.
             | 
              | > Versions 5.2.12, 5.4.3 and later have been signed with
              | Jia Tan's OpenPGP key. The older releases have been signed
              | with Lasse Collin's OpenPGP key.
             | 
              | It must be assumed that before acquiring that privilege,
              | they also contributed code to the project. Probably most of
              | it was to establish a respectable record. Still, there could
              | be malicious code going back some way.
        
               | 0x0 wrote:
               | Looks like the Jia Tan OpenPGP key was replaced a few
               | months ago as well: https://github.com/tukaani-
               | project/tukaani-project.github.io...
        
         | rwmj wrote:
         | I handed over all the emails I received to the security team,
         | who I guess will send them "higher". I'll let them analyse it.
        
         | 5kg wrote:
         | There is zero web presence for this person and associated email
         | address.
         | 
          | Looks more like a fake identity than a compromised account.
        
           | Retr0id wrote:
           | Did you check Chinese social media?
        
             | hw wrote:
             | Why would you think the person would have social media (or
             | would even be on Chinese social media specifically), given
             | the sophistication and planning?
        
               | Retr0id wrote:
               | I mention Chinese social media specifically because I
               | know it's not indexed so well by western search engines.
               | You can't conclude someone has no social footprint until
               | you've actually checked.
               | 
               | Regardless of how likely you think it is, finding a
               | social media footprint would be useful information. Seek
               | information first, reach conclusions second.
        
           | mrb wrote:
           | Actually the "jiat0218" user part in his email address
           | jiat0218@gmail.com has a bunch of matches on Taiwanese sites:
           | 
           | https://char.tw/blog/post/24397301
           | 
           | https://forum.babyhome.com.tw/topic/167439
           | 
           | https://bmwcct.com.tw/forums/thread1828.html
        
             | 5kg wrote:
             | I think it's just a coincidence.
             | 
              | - All the posts are from 2004/2006.
              | 
              | - "jiat" can be an abbreviation for many common Chinese
              | names.
        
               | mrb wrote:
               | I agree, probably a coincidence. Just wanted to point out
               | we can, actually, find the username online.
        
             | bruno-miguel wrote:
             | It might just be a coincidence, but the same username from
             | that gmail account also appears to have a Proton Mail
             | address
        
           | johnny22 wrote:
            | I've never had a web presence for my associated emails due
           | to wanting to avoid spammers. I don't have a false identity.
        
             | johnisgood wrote:
             | Keep in mind that having a "false identity" does not make
             | you a malicious actor. I have a serious project I work on
              | under another pseudonym, but it has more to do with the
             | fact that I do not want my real name to be associated with
             | that project AND having a serious case of impostor
             | syndrome. :/
             | 
             | That, and I used to contribute to various games (forks of
             | ioquake3) when I was a teen and I wanted to keep my real
             | name private.
        
               | occamsrazorwit wrote:
               | Someone named "John is good" claims they aren't a
               | malicious actor... You're trying real hard to convince
               | us, huh.
        
               | johnisgood wrote:
               | Oh yeah, I am using a pseudonym here as well, because I
               | have controversial views in some topics. :P
        
             | stephenr wrote:
             | > I don't have a false identity.
             | 
             | That's just what someone with a false identity would say..
             | get him boys!
             | 
             | The biggest /S
        
           | junon wrote:
            | This is all I can find on them.
            | 
            |     carrd.co   jiat0218@gmail.com  business  https://jiat0218@gmail.com.carrd.co
            |     eBay       JiaT75              shopping  https://www.ebay.com/usr/JiaT75
            |     giters     jiat0218            coding    https://giters.com/jiat0218
            |     giters     JiaT75              coding    https://giters.com/JiaT75
            |     GitHub     jiat0218            coding    https://github.com/jiat0218
            |     GitHub     JiaT75              coding    https://github.com/JiaT75
            |     Mastodon   jiat0218@gmail.com  social    https://meow.social/@jiat0218@gmail.com
           | 
           | Beyond that, nothing surefire. (This is all publicly
           | queryable information, if anyone is curious).
        
             | janc_ wrote:
             | JiaT75 also used "jiatan" on Libera.Chat using a Singapore
             | IP address (possibly a proxy/VPN).
        
             | Zenul_Abidin wrote:
             | Where did you gather this information from?
        
         | wpietri wrote:
         | I get why people are focusing on this bad actor. But the
         | question that interests me more: how many other apparent
          | individuals fit the profile that this person presented before
          | being caught?
        
           | Phenylacetyl wrote:
           | Apparently, many.
           | 
            | It looks like gettext may contain a part of their
           | attack infrastructure.
           | 
           | https://github.com/microsoft/vcpkg/pull/37199#pullrequestrev.
           | ..
           | 
           | https://github.com/microsoft/vcpkg/pull/37356/files#diff-e16.
           | ..
           | 
           | https://github.com/MonicaLiu0311
        
             | collinfunk wrote:
             | Are you referencing the '-unsafe' suffix in the second
             | link? That is not something to worry about.
             | 
             | This is from Gnulib, which is used by Gettext and other GNU
             | projects. Using 'setlocale (0, NULL)' is not thread-safe on
             | all platforms. Gnulib has modules to work around this, but
             | not all projects want the extra locking. Hence the name
             | '-unsafe'. :)
             | 
             | See: https://lists.gnu.org/archive/html/bug-
             | gnulib/2024-02/msg001...
        
               | everybackdoor wrote:
               | They may be right:
               | https://git.alpinelinux.org/aports/log/main/gettext
               | 
                | The timeline matches and there is a sudden switch of
                | maintainer. And they add a dependency on xz!
        
               | kaathewise wrote:
               | psykose was a prolific contributor to Alpine's aports,
               | with thousands of commits over the past few years[0]. So,
                | I doubt they're involved.
               | 
               | [0]:
               | https://git.alpinelinux.org/aports/stats/?period=y&ofs=10
        
               | everybackdoor wrote:
               | JiaT75 was also a prolific contributor to xz over the
               | past few years, so your assumptions are generally invalid
               | at this point.
        
         | mrb wrote:
         | I would presume it's a state actor. Generally in the blackhat
         | world, attackers have very precise targets. They want to attack
         | this company or this group of individuals. But someone who
         | backdoors such a core piece of open source infrastructure wants
         | to cast a wide net to attack as many as possible. So that fits
         | the profile of a government intelligence agency who is
         | interested in surveilling, well, everything.
         | 
         | Or it could in theory be malware authors (ransomware, etc).
          | However, these guys tend to aim at the low-hanging fruit. They
          | want to make a buck _quickly_. I don't think they have the
         | patience and persistence to infiltrate an open source project
         | for 2 long years to finally gain enough trust and access to
         | backdoor it. On the other hand, a state actor is in for the
         | long term, so they would spend that much time (and more) to
         | accomplish that.
         | 
         | So that's my guess: Jia Tan is an employee of some intelligence
          | agency. He chose to present an Asian persona, but that's not
         | necessarily who he truly represents. Could be anyone, really:
         | Russia, China, Israel, or even the US, etc.
         | 
         | Edit: given that Lasse Collin was the only maintainer of xz
         | utils in 2022 before Jia Tan, I wouldn't be surprised if the
         | state actor interfered with Lasse somehow. They could have done
         | anything to distract him from the project: introduce a mistress
         | in his life, give him a high-paying job, make his spouse sick
         | so he has to care for her, etc. With Lasse not having as many
         | hours to spend on the project, he would have been more likely
         | to give access to a developer who shows up around the same time
         | and who is highly motivated to contribute code. I would be
         | interested to talk to Lasse to understand his circumstances
         | around 2022.
        
           | janc_ wrote:
           | Or they have just one or a small number of targets, but don't
           | want the target(s) to know that they were the only target(s),
           | so they backdoor a large number of victims to "hide in the
           | crowd".
           | 
           | I agree that this is likely a state actor, or at least a very
           | large & wealthy private actor who can play the long game...
        
           | dist-epoch wrote:
            | According to the top comment, he committed multiple binary
            | files to xz over the last two years.
           | 
           | Most likely this is not the first backdoor, just the first
           | one to be discovered, so it wasn't two years of work until
           | there were results.
           | 
           | But I still agree that he's probably a state actor.
        
             | bombcar wrote:
             | Don't forget that you could have state actors who are
             | otherwise interested in open source code, and working to
             | actually improve it.
             | 
                | In fact, that'd be the best form of deep cover. It'll be
                | interesting to watch as people more knowledgeable than I pour
             | over every single commit and change.
        
               | hobobaggins wrote:
               | (not to be overly pedantic, but you probably meant pore,
               | not pour: https://www.merriam-webster.com/grammar/pore-
               | over-vs-pour-ov... )
        
             | apitman wrote:
             | If you have a backdoor in a specific piece of software
             | already, what is the purpose of trying to introduce another
             | backdoor (and risk it getting caught)?
        
               | dist-epoch wrote:
               | This backdoor targeted only sshd.
               | 
               | There could be other backdoors for other targets.
        
               | dgacmu wrote:
               | There are two general attack targets I'd use if I had
               | access to a library/binary like xz:
               | 
               | (1) A backdoor like this one, which isn't really about
               | its core functions, but about the fact that it's a
               | library linked into critical code, so that you can use it
               | to backdoor _other things_. Those are complex and tricky
               | because you have to manipulate the linking/GOT
               | specifically for a target.
               | 
               | (2) Insert an exploitable flaw such as a buffer overflow
               | so that you can craft malicious .xz files that result in
               | a target executing code if they process your file. This
               | is a slightly more generic attack vector but that
               | requires a click/download/action.
               | 
               | Not every machine or person you want to compromise has an
               | exposed service like ssh, and not every target will
               | download/decompress a file you send to them. These are
               | decently orthogonal attack vectors even though they both
               | involve a library.
               | 
               | (Note that there's as yet no evidence for #2 - I'm just
               | noting how I'd try to leverage this to maximum effect if
               | I wanted to.)
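                | 
                | Purely to illustrate the shape of flaw type (2) - a
                | hypothetical pattern, not anything found in xz - the
                | classic version is trusting a length field taken straight
                | from the compressed stream:
                | 
                |     #include <stdint.h>
                |     #include <string.h>
                | 
                |     /* Hypothetical decoder snippet: the missing
                |        "declared_len <= out_cap" check is the bug. */
                |     static void copy_block(uint8_t *out, size_t out_cap,
                |                            const uint8_t *in,
                |                            uint32_t declared_len)
                |     {
                |         (void)out_cap;        /* never validated */
                |         memcpy(out, in, declared_len); /* overflow */
                |     }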
        
             | Bulat_Ziganshin wrote:
             | xz is a data compression tool, so it's natural to have
             | compressed files for (de)compression tests.
             | 
              | These files are also useful to check that the library we
              | just built works correctly, but they aren't necessary for
              | installation.
              | 
              | We may want more sophisticated procedures that allow some
              | parts of the distribution to be used only for tests. This
              | may significantly reduce the attack surface - many projects
              | have huge, sophisticated testing infrastructure where you
              | could hide the entire Wikipedia.
        
           | fullstop wrote:
           | If anyone here happens to know Lasse, it might be good to
           | check up on him and see how he's doing.
        
           | Delk wrote:
           | > I wouldn't be surprised if the state actor interfered with
           | Lasse somehow
           | 
           | People could also just get tired after years of active
           | maintainership or become busier with life. Being the sole
           | maintainer of an active open source project on top of work
           | and perhaps family takes either a lot of enthusiasm or a lot
           | of commitment. It's not really a given that people want to
           | (or can) keep doing that forever at the same pace.
           | 
           | Someone then spots the opportunity.
           | 
           | I have no idea what the story is here but it might be
           | something rather mundane.
        
           | hk__2 wrote:
           | > I haven't lost interest but my ability to care has been
           | fairly limited mostly due to longterm mental health issues
           | but also due to some other things. Recently I've worked off-
           | list a bit with Jia Tan on XZ Utils and perhaps he will have
           | a bigger role in the future, we'll see.
           | 
           | https://www.mail-archive.com/xz-
           | devel@tukaani.org/msg00567.h...
        
             | mrb wrote:
             | Dated June 2022. Good find!
        
           | jnxx wrote:
           | > They want to attack this company or this group of
           | individuals. But someone who backdoors such a core piece of
           | open source infrastructure wants to cast a wide net to attack
           | as many as possible.
           | 
           | The stuxnet malware, which compromised Siemens industrial
           | controls to attack specific centrifuges in uranium enrichment
           | plants in Iran, is a counterexample to that.
        
             | mrb wrote:
             | Stuxnet wasn't similar to this xz backdoor. The Stuxnet
             | creators researched (or acquired) four Windows zero-days, a
                | relatively short-term endeavor, whereas the xz backdoor was
                | a long-term, 2.5-year operation to slowly gain trust from
             | Lasse Collin.
             | 
             | But, anyway, I'm sure we can find other counter-examples.
        
               | Repulsion9513 wrote:
                | If a government wants to cast a wide net and catch what
               | they can, they'll just throw a tap in some IXP.
               | 
               | If a government went to this much effort to plant this
               | vulnerability, they absolutely have targets in mind -
               | just like they did when they went to the effort of
               | researching (or acquiring) four separate Windows zero-
               | days, combining them, and delivering them...
        
           | 2OEH8eoCRo0 wrote:
            | It's ridiculous to think it's the US, as it would be an attack
            | on Red Hat, a US company, and an attack on Americans. It's a
           | good way to be dragged in front of Congress.
        
             | guinea-unicorn wrote:
             | The US has backdoored RSA's RNG and thus endangered the
             | security of American companies. It is naive to think that
             | US intelligence agencies will act in the best interest of
             | US citizens or companies.
        
               | 2OEH8eoCRo0 wrote:
               | That is speculation and has never been confirmed.
        
               | occamsrazorwit wrote:
               | What type of confirmation do you want? The documents
               | aren't going to be declassified in the next couple of
               | decades, if ever.
               | 
               | I've never heard anyone claim that Dual_EC_DRBG is most
                | likely _not_ intentionally backdoored, but there's
                | literally no way to confirm because of how it's written.
               | If we can't analyze intention from the code, we can look
               | at the broader context for clues. The NSA spent an
               | unusual amount of effort trying to push forward an
               | algorithm that kept getting shot down because it was
               | slower than similar algorithms with no additional
               | benefits (the $10 million deal specified it as a
               | requirement [1]). If you give the NSA the benefit of the
               | doubt, they spent a lot of time and money to...
               | intentionally slow down random number generation?!
               | 
               | As an American, I'd prefer a competent NSA than an
               | incompetent NSA that spends my tax dollars to make
               | technology worse for literally no benefit...
               | 
               | [1] https://www.reuters.com/article/us-usa-security-rsa-
               | idUSBRE9...
        
               | hex4def6 wrote:
               | You are understating the level of evidence that points to
               | the NSA being fully aware of what it was doing.
               | 
               | To be clear, the method of attack was something that had
               | been described in a paper years earlier, the NSA
               | literally had a program (BULLRUN) around compromising and
               | attacking encryption, and there were security researchers
               | at NIST and other places that raised concerns even before
               | it was implemented as a standard. Oh, and the NSA paid
               | the RSA $10 million to implement it.
               | 
               | Heck, even the chairman of the RSA implies they got used
               | by the NSA:
               | 
                | In an impassioned speech, Coviello said RSA, like many in
               | industry, has worked with the NSA on projects. But in the
               | case of the NSA-developed algorithm which he didn't
               | directly name, Coviello told conference attendees that
               | RSA feels NSA exploited its position of trust. In its
               | job, NSA plays two roles, he pointed out. In the
               | information assurance directorate (IAD) arm of NSA, it
               | decides on security technologies that might find use in
               | the government, especially the military. The other side
               | of the NSA is tasked with vacuuming up data for cyber-
               | espionage purposes and now is prepared to take an
               | offensive role in cyber-attacks and cyberwar.
               | 
               | "We can't be sure which part of the NSA we're working
               | with," said Coviello with a tone of anguish. He implied
               | that if the NSA induced RSA to include a secret backdoor
               | in any RSA product, it happened without RSA's consent or
               | awareness.
               | 
               | https://www.networkworld.com/article/687628/security-rsa-
               | chi...
        
               | fragmede wrote:
               | What about the time it was shown they did the reverse
               | (hardened security using math only they knew at the time)
                | for DSA?
        
               | Dylan16807 wrote:
               | What about it?
               | 
               | There's an implicit "always" in their second sentence, if
               | you're confused by the wording. They aren't positing the
               | equivalent of the guard that only lies.
        
               | fragmede wrote:
               | It's an interesting story for those who haven't heard
                | about that and think the NSA could only be up to evil. You
               | may not have read it as the guard only ever lies, but
               | that doesn't stop people from thinking that anyway.
        
               | Dylan16807 wrote:
               | It's an interesting story, but I still don't know what
               | you wanted as an answer to "What about".
        
               | 2OEH8eoCRo0 wrote:
               | They were responding to:
               | 
               | > It is naive to think that US intelligence agencies will
               | act in the best interest of US citizens or companies.
               | 
               | With an example of them doing exactly that.
        
               | Dylan16807 wrote:
               | This is addressed very directly by the second paragraph
               | of my first comment. Please adjust your response to take
               | that into account.
        
               | tveita wrote:
                | Notably that was a "no-one-but-us" backdoor that requires
                | a specific secret key to exploit. We'll see when someone
                | analyzes the payload further, but presumably this
                | backdoor also triggers only on a specific private key. If
                | not, there are ways to do it that would look far more
                | like an innocent mistake, like a logic bug or a failed
                | bounds check.
                | 
                | I can see some arguments that might persuade the NSA to
                | run an attack like this:
                | 
                | - gathers real-world data on detection of supply-chain
                | attacks
                | 
                | - serves as a wake-up call for a software community that
                | has grown complacent about the security impact of
                | dependencies
                | 
                | - in the worst case, if no one finds it then hey, free
                | backdoor
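                | 
                | A minimal sketch of that kind of key gate (my own
                | illustration using libsodium's detached-signature API,
                | not the analyzed payload): only input signed by the
                | attacker's private key does anything, everyone else sees
                | normal behaviour.
                | 
                |     #include <sodium.h>
                | 
                |     /* Hypothetical hard-coded attacker public key. */
                |     static const unsigned char
                |         attacker_pk[crypto_sign_PUBLICKEYBYTES] = {0};
                | 
                |     /* sig is crypto_sign_BYTES long. */
                |     static int maybe_activate(const unsigned char *cmd,
                |                               unsigned long long cmd_len,
                |                               const unsigned char *sig)
                |     {
                |         if (crypto_sign_verify_detached(sig, cmd, cmd_len,
                |                                         attacker_pk) != 0)
                |             return 0;  /* not the key holder: no-op */
                |         /* ... execute cmd ... */
                |         return 1;
                |     }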
        
             | AimHere wrote:
             | Hardly ridiculous.
             | 
             | You say that as if members of US government agencies didn't
              | plot terror attacks on Americans (Operation Northwoods),
             | steal the medical records of American whistleblowers
             | (Ellsberg), had to be prevented from assassinating American
             | journalists (Gordon Liddy, on Jack Anderson), collude to
             | assassinate American political activists (Fred Hampton),
             | spy on presidential candidates (Watergate), sell weapons to
             | countries who'd allegedly supported groups who'd launched
             | suicide bombing attacks on American soldiers (Iran-Contra),
             | allow drug smugglers to flood the USA with cocaine so that
             | they could supply illegal guns to terrorists abroad on
             | their return trip (Iran-Contra again) and get caught
             | conducting illegal mass-surveillance on American people as
             | a whole (Snowden). Among others.
             | 
             | It's super-naive to suggest that government agencies
             | wouldn't act against the interest of American citizens and
             | companies because there might be consequences if they were
             | caught. Most of the instances above actually were instances
             | where the perpetrators did get caught, which is why we know
             | about them.
        
               | 2OEH8eoCRo0 wrote:
               | I love being called naive.
        
               | QuantumG wrote:
               | Whisper it to me lover.
        
               | SadTrombone wrote:
               | Seems like an appropriately used descriptor here.
        
               | jonathankoren wrote:
               | You don't even have to be this conspiratorially minded to
               | believe the NSA is a legitimate suspect here. (For the
               | record, I think literally every intelligence agency on
               | Earth is plausible here.)
               | 
               | You kind of lost the thread when you say, "act against
               | the interests of American citizens and companies". Bro,
               | literally anyone could be using xz, and anyone could be
               | using Red Hat. You're only "acting against Americans" if
               | you _use it against Americans_. I don't know who was
               | behind this, but a perfectly plausible scenario would be
               | the NSA putting the backdoor in with an ostensibly
               | Chinese login and then activating on machines hosted and
               | controlled by people outside of the US.
               | 
               | Focusing on a specific distro is myopic. Red Hat is
               | _popular_.
        
               | cesarb wrote:
               | > but a perfectly plausible scenario would be the NSA
               | putting the backdoor in with an ostensibly Chinese login
               | and then activating on machines hosted and controlled by
               | people outside of the US.
               | 
               | There's a term for that: NOBUS
               | (https://en.wikipedia.org/wiki/NOBUS). It won't surprise
               | me at all if this backdoor can only be exploited if the
               | attacker has the private key corresponding to a public
               | key contained in the injected code. It also won't
               | surprise me if this private key ends up being stolen by
               | someone else, and used against its original owner.
        
               | jonathankoren wrote:
               | >It also won't surprise me if this private key ends up
               | being stolen by someone else, and used against its
               | original owner.
               | 
               | And that is exactly why backdoored encryption is bad.
        
               | eastern wrote:
               | 100%.
               | 
               | The HN crowd has come a long way from practically hero-
               | worshipping Snowden to automatically assuming that 'state
               | actor' must mean the countries marked evil by the US.
        
               | onthecanposting wrote:
               | Caught and, more importantly, nothing bad typically
               | happened to anyone involved. Also worth noting that there
               | is probably a survivorship bias in play.
        
             | mrb wrote:
             | Have you forgotten about the Snowden leaks exposing the
             | surveillance on Americans by the American govt?
        
               | threeseed wrote:
               | Every country spies on its own citizens.
               | 
                | America is actually quite timid compared to other
                | countries, e.g. the UK with its widespread CCTV
                | network.
        
               | bpye wrote:
                | I'd say that CCTV is quite different to wiretapping.
                | You (generally) wouldn't have an expectation of
                | privacy in a public place, whereas most people would
                | expect phone calls, messages, etc. to remain private.
                | 
                | Now, GCHQ is no better than the NSA there either, but
                | I don't think CCTV is a good comparison.
        
               | silpol wrote:
                | While his leaks exposed surveillance, he was a useful
                | idiot (https://en.wikipedia.org/wiki/Useful_idiot) in
                | the hands of the Assange club. And the event of his
                | rescue might even have been the trigger for Putin to
                | start the war. So no, I'd rather see the whole crew
                | before a court and sentenced, regardless of 'heroism'.
                | 
                | And yes, most modern supporters of Wikileaks / Assange
                | / Snowden / etc, chanting 'release Assange' and
                | 'pardon Snowden', are useful idiots in the hands of
                | tyrannies like the BRICS club.
        
             | asveikau wrote:
              | I'm not very inclined to think this is the US govt;
              | however, you should better acquaint yourself with the
              | morals of some members of Congress.
             | 
             | I think the best reason to doubt USG involvement is the
             | ease with which somebody discovered this issue, which is
             | only a month or two old. I feel like NSA etc. knows not to
             | get caught doing this so easily.
        
             | mardifoufs wrote:
             | Yeah as we know, intelligence agencies are very often held
             | accountable in the US. As witnessed by all the individuals
             | that got charged or punished for uh... nevermind.
        
           | occamsrazorwit wrote:
           | Given the details from another comment [1], it sounds like
           | both maintainers are suspicious. Lasse's behavior has changed
           | recently, and he's been pushing to get Jia Tan's changes into
           | the Linux kernel. It's possible both accounts aren't even run
           | by the original Lasse Collin and Jia Tan anymore.
           | 
           | Edit: Also, Github has suspended both accounts. Perhaps they
           | know something we don't.
           | 
           | [1] https://news.ycombinator.com/item?id=39865810#39866275
        
             | gamer191 wrote:
             | Where does that comment mention the other maintainer (Lasse
             | Collin)?
        
               | occamsrazorwit wrote:
               | Whoops, I linked the wrong comment. I meant to link this
               | one [1]. Anyway, seems like there's potentially a whole
               | trail of compromised and fake accounts [2]. Someone in a
               | government agency somewhere is pretty disappointed right
               | now.
               | 
               | [1] https://news.ycombinator.com/item?id=39867593
               | 
               | [2] https://news.ycombinator.com/item?id=39866936
        
             | salamandar wrote:
             | According to Webarchive, https://tukaani.org/contact.html
             | changed very recently (between 11/02/2024 and 29/02/2024)
             | to add Lasse Collin's PGP key fingerprint. That timing is
              | weird, considering his git activity at that time was
              | almost nonexistent. Although, I checked, this key existed
              | back in 2012.
        
           | Repulsion9513 wrote:
           | > Generally in the blackhat world, attackers have very
           | precise targets
           | 
           | Lol, what
           | 
           | > wants to cast a wide net to attack as many as possible. So
           | that fits the profile of a government intelligence agency
           | 
           | That's quite backwards. Governments are far more likely to
           | deploy a complex attack against a single target (see also:
           | Stuxnet); other attackers (motivated primarily by money) are
           | far more likely to cast a wide net.
        
             | gamer191 wrote:
             | > That's quite backwards. Governments are far more likely
             | to deploy a complex attack against a single target (see
             | also: Stuxnet); other attackers (motivated primarily by
             | money) are far more likely to cast a wide net.
             | 
             | Governments are well known to keep vulnerabilities hidden
             | (see EternalBlue). Intentionally introducing a
             | vulnerability doesn't seem that backwards tbh
        
               | Repulsion9513 wrote:
               | Oh for sure. I'm not suggesting that this wasn't a
               | government actor, although I'd only give you 50/50 odds
               | on it myself. It coulda just been someone with a bunch of
               | time, like phreakers of old.
        
           | pyrolistical wrote:
           | Literally this https://xkcd.com/2347/
        
           | Havoc wrote:
            | Bit much speculating about mistresses and poisoned spouses
            | without, well, anything to go on...
        
         | jpalomaki wrote:
         | Seems to be a perfect project to hijack. Not too much
         | happening, widely used, long history, and a single maintainer
         | who no longer has time to manage the project and wants to
         | hand it over.
        
         | dang wrote:
         | We detached this subthread from
         | https://news.ycombinator.com/item?id=39866275. (It's fine; I'm
         | just trying to prune the top-heavy subthread.)
        
         | AviationAtom wrote:
          | Not a developer, but the changelogs and commit history from
          | this person seem interesting, as they appear to show an
          | effort to consolidate control and push things in the
          | direction of supporting wider dissemination of their
          | backdoor code:
          | 
          | Discussing commits that the other author has since reverted,
          | an IFUNC change with Project Zero tests, a focus on
          | embedded, etc.:
         | 
         | https://www.mail-archive.com/xz-devel@tukaani.org/msg00642.h...
         | 
         | Trimming security reporting details:
         | 
         | https://git.tukaani.org/?p=xz.git;a=commitdiff;h=af071ef7702...
        
         | electronwill wrote:
         | "crazytan" is the LinkedIn profile of a security software
         | engineer named Jia Tan in Sunnyvale working at Snowflake, who
         | attended Shanghai Jiao Tong University from 2011 to 2015 and
         | Georgia Institute of Technology from 2015 to 2017. However,
         | this Jia Tan on LinkedIn might not be the same Jia Tan who
         | worked on XZ Utils. Also, the person who inserted the malicious
         | code might be someone else who hijacked the account of the Jia
         | Tan who worked on XZ Utils.
        
       | ParetoOptimal wrote:
       | If you have a recently updated NixOS unstable it has the
       | affected version:
       | 
       |     $ xz --version
       |     xz (XZ Utils) 5.6.1
       |     liblzma 5.6.1
       | 
       | EDIT: I've been informed on the NixOS matrix that they are 99%
       | sure NixOS isn't affected, based on conversations in
       | #security:nixos.org
        
       | 20after4 wrote:
       | > "Docs: Simplify SECURITY.md."
       | 
       | https://github.com/tukaani-project/xz/commit/af071ef7702debe...
       | 
       | Removes instructions about details relevant to security reports.
       | Heh, nice one.
        
       | PedroBatista wrote:
       | Given the recent (and not so recent) attacks/"bugs", I feel
       | there is a need not only to do the already hard work of
       | investigating and detecting attacks, but also to bring IRL
       | consequences to these people.
       | 
       | My understanding is that right now it's pretty much a
       | name-and-shame of people who most of the time aren't even real
       | "people" but hostile agents either working for governments or
       | criminal groups (or both).
       | 
       | Getting punched in the face is actually a necessary human
       | condition for a healthy civilization.
        
         | buildbot wrote:
         | In the article it says CISA was notified - that sounds like
         | it's going to be a federal investigation if nothing else. If
         | I were this person, I'd be getting out of the USA (or any
         | US-friendly nation) ASAP.
        
           | graemep wrote:
           | One of Jia Tan's recent contributions is "Speed up CRC32
           | calculation on LoongArch". I would guess the odds are that
           | this is not someone in the US.
        
             | buildbot wrote:
             | Yeah, I saw that - I wouldn't bet on them being in the US,
             | but who knows. Maybe they just really love CRC32 ;) And
             | introducing backdoors (if it was them and not an account
             | takeover).
        
               | kevin_b_er wrote:
               | Those tarballs are PGP signed, too..
        
             | rrix2 wrote:
             | That was a review of someone else's work?
             | https://github.com/tukaani-project/xz/pull/86
        
               | Fnoord wrote:
               | Since that repo is disabled: here is a mirror of the
               | discussion [1]
               | 
               | [1] https://archive.is/tksCR
        
             | hangonhn wrote:
             | It's also very possible that the account was compromised
             | and taken over. A two years long con with real useful work
             | is a lot of patience and effort vs. just stealing a weakly
             | protected account. I wonder if MFA shouldn't be a
             | requirement for accounts that contribute to important OSS
             | projects.
        
               | dist-epoch wrote:
               | This is most likely not his first backdoor, but the first
               | which was detected.
               | 
               | So most likely he didn't wait two years to benefit.
        
               | dralley wrote:
               | >A two years long con with real useful work is a lot of
               | patience and effort vs. just stealing a weakly protected
               | account.
               | 
               | The long-con theory seems a bit more plausible at the
               | moment
               | 
               | https://github.com/google/oss-fuzz/pull/10667
        
               | IncreasePosts wrote:
                | It might not even have been a long time. He might have
                | been approached to insert the backdoor precisely
                | because of his history, and either offered money,
                | blackmailed, or threatened.
        
               | hangonhn wrote:
                | Oh man. That was a scenario that didn't cross my mind.
                | I was too narrowly focused on the technical aspects
                | rather than the social aspects of security. Great
                | point.
        
               | Arrath wrote:
               | What if this contributor was a member of a state
               | actor/persistent threat group and, like some totally
               | legit software dev houses, they encourage their people to
               | contribute to OSS projects for the whole personal
               | pursuit/enjoyment/fulfillment angle?
               | 
               | With the added bonus that sometimes they get to pull off
               | a longcon like this.
        
               | wyldberry wrote:
                | If you really step back and think about it, this type
                | of behavior is perfectly aligned with any number of
                | well-resourced criminal groups and state actors. Two
                | years of contributing to less visible software with the
                | goal of gaining trust, and then slowly pushing your
                | broken fix in.
                | 
                | To me that's way more plausible than losing control of
                | your account and then having whoever compromised it
                | spend a long time inserting a backdoor that took a long
                | time to develop and obfuscate.
               | 
               | Likely someone at GH is talking to some government
               | agencies right now about the behavior of the private
               | repos of that user and their associated users.
        
               | sonicanatidae wrote:
                | This would be the smarter attack vector, but I've
                | noticed over time that these people are just assholes.
                | They aren't patient. They are in it for the smash and
                | grab.
                | 
                | I would not be surprised if there was a group using
                | this approach, but I doubt most of them are or would
                | be. If they were that dedicated, they'd just have a
                | fucking job, instead of being dicks on the internet
                | for a living.
        
               | cjbprime wrote:
               | I think you are confusing non-state e.g. ransomware
               | groups, which are usually not part of a government
               | (although some exceptions like North Korea likely exist)
               | with state-sponsored hackers who are often directly
               | working under military command. Soldiers are not "dicks
               | on the internet".
        
               | willdr wrote:
               | As someone who has been in a fair few discord chats with
               | soldiers, I'd beg to differ...
        
               | wyldberry wrote:
               | For some groups they certainly are.
               | 
                | However, at this point every developed nation has a
                | professional offensive security group with varying
                | degrees of potency. All are better resourced than
                | 99.9% of defending organizations, and they enjoy legal
                | autonomy in their own and allied countries for their
                | work.
                | 
                | If you're getting salaried comfortably, and you have
                | near-infinite resources, a two-year timeline is
                | trivial. As an American, I always like to point to
                | things we know our own services have done first[0].
                | 
                | Each actor group has its own motivations and
                | tactics[1]. As someone who spent a lot of time dealing
                | with a few state actors, you learn your adversaries'
                | tricks of the trade, and they are patient for the
                | long con because they can afford to be.
               | 
               | [0] -
               | https://www.npr.org/2020/03/05/812499752/uncovering-the-
               | cias... [1] - https://learn.microsoft.com/en-
               | us/microsoft-365/security/def...
        
               | stephc_int13 wrote:
                | This is not that costly. Growing bonsai trees also
                | takes a lot of patience, decades even, but you don't
                | have to grow only one at a time. The pros grow them in
                | large numbers, with minimal work on each individual
                | tree once in a while.
        
               | teddyh wrote:
               | There is a survivorship bias problem there; what if the
               | stupid criminals are the only ones which you _notice_?
        
               | jnxx wrote:
                | I am thinking more of so-called rubber-hose
                | cryptanalysis.
               | 
               | https://xkcd.com/538/
        
               | ranger_danger wrote:
               | > It's also very possible that the account was
               | compromised and taken over
               | 
               | Or they WERE legit and simply went rogue, perhaps due to
               | external factors.
        
               | fmajid wrote:
               | 2 years of one engineer's time is very cheap, compared to
               | e.g. the NSA's CryptoAG scam. I'd say most likely a
               | Chinese intelligence plant, _kindly_ offering to relieve
               | the burden of the original author of xz.
        
               | rdtsc wrote:
                | I got the same idea. On the XZ dev mailing list there
                | were a few discussions about "is there a maintainer?"
                | 2-3 years ago. It's not hard to find these types of
                | discussions and then dedicate a few years of effort to
                | start "helping out" and eventually be the one signing
                | releases for the project. That's peanuts for a state
                | actor.
        
               | throwaway384638 wrote:
                | This right here. This is exactly what I would be doing
                | - find small, broke maintainers and offer them a few
                | hundred grand, with a target in mind.
        
             | dreamingincode wrote:
             | The full name "Jia Cheong Tan" doesn't sound like Mainland
             | China. The name and actions could be intentionally
             | misleading though.
             | 
             | https://news.ycombinator.com/item?id=39867737
        
               | viraptor wrote:
               | We're way too global now for this to be more than a tiny
               | extra signal. People move around, families preserve
               | names.
               | 
               | Also nobody checked that person's id, so "Jia" is only
               | slightly more meaningful than "ghrssbitrvii".
        
               | graemep wrote:
               | Names can be faked, and even real names are not a great
               | indicator.
               | 
               | Unless you have some very specific cultural knowledge you
               | could not make even vaguely useful deductions about my
               | location, nationality, culture, ethnicity etc. from my
               | name. I get a lot of wrong guesses though!
        
           | sneak wrote:
           | What law do you think is being broken here?
        
             | AlexCoventry wrote:
             | Maybe https://www.law.cornell.edu/uscode/text/18/1030#a_5 ?
             | 
             | > knowingly causes the transmission of a program,
             | information, code, or command, and as a result of such
             | conduct, intentionally causes damage without authorization,
             | to a protected computer;
        
               | jethro_tell wrote:
               | How does posting an exploit POC differ here?
        
               | schlauerfox wrote:
               | Intent. It's a big part of law and prosecution.
        
               | cjbprime wrote:
               | No, freedom of speech (as far as I know) protects even
               | exploit code. The statutes being linked would cover
               | _using_ the backdoor to gain unauthorized entry to a
               | system. I think the question of whether anything illegal
               | has occurred from the public facts is unclear, at least
               | to me, and interesting.
        
               | jethro_tell wrote:
               | I see a dev on the project has just posted that it has
               | been seen in the wild, so I guess you'd have standing
               | there.
               | 
               | https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee
               | 78b...
        
               | wtallis wrote:
               | The first amendment might overrule the cited law if that
               | law didn't already include a requirement for intentional
               | harm. But since the law _does_ already have that
                | requirement, there's not really an opportunity for a
               | freedom of speech justification to be what protects a
               | non-malicious publication of a proof of concept. The law
               | isn't trying to infringe on freedom of speech.
        
           | computerfriend wrote:
           | From their Git commits, they're in China's time zone.
        
             | stephenr wrote:
              | I assume you mean UTC+8... that covers about 20% of the
              | earth's population; besides China, it includes parts of
              | Russia, a bunch of SEA, and Western Australia.
        
               | fmajid wrote:
                | China _is_ 20% of the world's population...
        
             | jcgrillo wrote:
             | My git commits are sometimes in UTC, depending on which
             | computer I make them from. Sometimes my laptop just
             | switches timezones depending on whether I'm using wifi or
             | LTE. I wouldn't put much weight on the timezone.
        
             | jnxx wrote:
              | The timestamp of a git commit depends on the system clock
              | of the computer the commit was made on. This cannot be
              | verified by GitHub & co (except that they could reject
              | commits which have timestamps in the future).
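              | 
              | For anyone who wants to eyeball what a repo actually
              | records, something like this (plain `git log` under the
              | hood; the commit count is arbitrary) prints the author
              | dates with their claimed offsets:
              | 
              |     // Print the author-date timezone offsets recorded
              |     // in the last few commits of the repo in the
              |     // current directory. The offset is whatever the
              |     // committer's machine claimed - nothing verifies it.
              |     use std::process::Command;
              |     
              |     fn main() -> std::io::Result<()> {
              |         // %aI = author date in strict ISO 8601,
              |         // e.g. 2024-03-29T23:12:05+08:00
              |         let out = Command::new("git")
              |             .args(["log", "-n", "5", "--format=%aI"])
              |             .output()?;
              |         let text = String::from_utf8_lossy(&out.stdout);
              |         for line in text.lines() {
              |             // the +08:00 / -05:00 tail is the claimed
              |             // local offset
              |             println!("{line}");
              |         }
              |         Ok(())
              |     }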
        
             | supposemaybe wrote:
              | Remember that agencies like the NSA, GCHQ, etc. will
              | always use false flags in their code, even when the code
              | doesn't carry as high a risk of exposure as a public
              | backdoor does.
              | 
              | Looking at the times of commits shouldn't be given much
              | value at all. A pretty pointless endeavour.
        
               | astrange wrote:
               | State actors are actually known for not doing that; after
               | all, there's no need to hide when what you're doing is
               | legal. They also tend to work 9-5 in their own timezones.
        
             | berdario wrote:
             | But the actual interactions with Github are done between
             | 12.00 UTC and 18.00 UTC
             | 
             | https://news.ycombinator.com/item?id=39870925
             | 
             | https://play.clickhouse.com/play?user=play#U0VMRUNUIHRvSG91
             | c...
        
               | zone411 wrote:
               | But note this: https://twitter.com/birchb0y/status/177387
               | 1381890924872/phot...
        
               | berdario wrote:
               | Interesting!
               | 
                | As some of the Tweet replies mentioned, they shipped
                | releases that contained the backdoor, and committed
                | other questionable changes, at the "usual" times. We're
                | almost certainly not dealing with a compromised
                | workstation, so I don't think that would explain the
                | different times for the worst offending changes.
               | 
               | Maybe he has some technical experts/handlers/managers
               | that had to oversee when they introduced the actual
               | malicious changes, and thus this reflects when he got the
               | go-ahead signal from these other people (and thus that
               | reflects their working hours?)
               | 
               | Or maybe they were just travelling at that time? (maybe
               | travelling to visit the aforementioned handlers? Or
               | travel to visit family... even criminals have a mom and
               | dad)
               | 
                | Also, keep in mind that my ClickHouse query includes
                | all of the GitHub interactions (for example, the
                | timestamps of issue comments). Unlike a Git commit
                | timestamp, those are hard to fake (you'd need to
                | schedule the posting of such comments, probably via
                | the API - not impossible, but it's easier to think
                | that JiaT75 just used the GitHub UI to write
                | comments), whereas the Tweet mentions just "commit
                | history".
               | 
               | Usually the simpler explanation has less chance of being
               | wrong... thinking of some possibilities:
               | 
               | - Chinese/Taiwanese state actor, who employs people 9-5
               | (but somehow, their guy worked 20.00 - 02.00 local time)
               | 
               | - Chinese/Taiwanese rogue group/lone wolf... moonlighting
               | on this exploit after their day job (given that to
               | interact with Lasse they'd be forced to work late, this
               | is not outside of the realm of possibilities)
               | 
               | - Non-Chinese state actor, employing someone 9-5
               | (consistent with most of the Github interactions),
               | wanting to pin responsibility on China/Taiwan (+0800
               | timezone for commits), which for some unexplained reason
               | pushed the worst offending changes at really weird times.
               | 
               | - Chinese/Taiwanese state actor, that wanted to pin the
               | blame on western state actors (by making all of the
               | changes at times compatible with someone working in
               | Europe), and somehow they slipped up when pushing the
               | worst offending changes.
               | 
               | - Chinese/Taiwanese state actor, employing someone in
               | Europe (if they need to get approval of changes/gain the
               | trust of the previous maintainer Lasse, it might make
               | sense to have better/more timezone overlap)... which for
               | some weird (yet "innocent") reason, kept the device that
               | they worked on, configured with a +0800 timezone
               | 
                | - Non-Chinese state actor, pretending to be a Chinese
                | entity that wanted to pin the blame on a western
                | entity, and slipping up by making the worst offending
                | changes at 3am (i.e. it was not a slip-up, but part of
                | the misdirection effort).
                | 
                | Some of these hypotheses are a bit far-fetched, but
                | reality is stranger than fiction.
        
           | rubymamis wrote:
           | We shouldn't rule out the probability that this account is
           | from a U.S. agency as well.
        
             | wannacboatmovie wrote:
             | Just so I understand, you're alleging that a U.S. agency
             | was, among other things, submitting patches for a mainland
             | Chinese home-grown CPU architecture (Loongson)?
        
               | yorwba wrote:
               | Aren't you confusing JiaT75 and xry111?
               | 
               | And if someone wanted to attack a target running on
               | Loongson, they would certainly have to make sure the code
               | can actually run there in the first place.
        
               | boutique wrote:
               | No, they're not. They are saying that due to the
               | extraordinary circumstances with this case US agencies
               | cannot be excluded from suspicion. At this time no actor
               | seems to be a more likely perpetrator than the next.
               | (Keep in mind that false-flag operations are a very
               | common occurrence in cyber warfare and this cannot be
               | ruled out yet.)
        
               | FLT8 wrote:
               | It doesn't seem out of the question that the U.S. or
               | allied nations might want to be involved in the
               | development effort around these CPUs. Even if initially
               | it's just to build some credibility for this account so
               | future adversarial patches are accepted with less
               | suspicion? If you think that's implausible, I'm
               | interested why?
        
             | EthanHeilman wrote:
             | We shouldn't rule it out, but it seems unlikely to me.
             | 
              | This is more reckless than any backdoor by a US agency
              | that I can think of. The NSA backdoored Dual EC DRBG,
              | which was extremely reckless, but this makes that look
              | careful, and that was the zenith of NSA recklessness.
              | The attackers here straight up just cowboy'd the joint.
              | I can't think of any instance in which US intelligence
              | used sock puppets on public forums and mailing lists to
              | encourage deployment of the backdoored software, and I
              | maintain a list of NSA backdoors:
              | https://www.ethanheilman.com/x/12/index.html
             | 
             | It just doesn't seem like their style.
        
               | oceanplexian wrote:
               | The CIA had plans to commit terrorist acts against
               | American civilians to start a war against Cuba in the
               | 60s. This is quite literally their style. For example,
               | perhaps they were planning to blame the hack of a power
               | plant or critical infrastructure on this exploit, then
               | use the "evidence" that was leaked to prove it was China,
               | and from there carry out an offensive operation against
               | Chinese infrastructure. There are lots of subversive
               | reasons they would want to do this.
        
               | astrange wrote:
               | The CIA in 2024 really doesn't have any continuity with
                | itself in 1960. Things like the Church Committee
                | changed how it was governed.
        
               | EthanHeilman wrote:
               | You are referring to Operation Northwoods [0], a set of
               | plans from the 1960s, all of which were rejected.
               | 
               | Operation Northwoods came about because Brig. Gen. Edward
                | Lansdale asked the CIA to come up with a list of
               | pretexts that might be used to justify an invasion of
               | Cuba. This request had a number of planners at the CIA
               | enumerate possible false flags that could be used as a
               | pretext. One of those plans was a terror attack against
               | US citizens. Operation Northwoods was rejected and never
               | implemented.
               | 
               | The US has plans for nearly everything, but there is a
               | massive difference between a plan that some CIA analyst
               | is pitching and something the US is likely or even able
               | to do. The US had all sorts of plans for how to handle a
               | pandemic, but then when one actually happened, the plans
               | couldn't be implemented because the US didn't actually
               | have the capabilities the plans called for.
               | 
               | > example, perhaps they were planning to blame the hack
               | of a power plant or critical infrastructure on this
               | exploit, then use the "evidence" that was leaked to prove
               | it was China, and from there carry out an offensive
               | operation against Chinese infrastructure.
               | 
               | Backdooring OpenSSH would in no way function as a pretext
               | for attacks on Chinese infrastructure. No one outside the
                | tech companies cares about this. The US also doesn't
                | need to invent hacking pretexts; you could just point
                | to one of many exposed Chinese hacking incidents.
               | 
               | [0] : https://en.wikipedia.org/wiki/Operation_Northwoods
        
           | offmycloud wrote:
           | CISA Advisory: https://www.cisa.gov/news-
           | events/alerts/2024/03/29/reported-...
           | 
            | Note that it says "Fedora 41" in the CISA page's link to
            | Red Hat, but Red Hat changed the blog title to "Fedora 40"
            | and left the HTML page title as "Fedora 41".
        
           | weinberg wrote:
            | And I bet if it ended up on a NATO system, things would
            | escalate quickly for the person / nation states being
            | scrutinized
            | (https://www.nato.int/cps/en/natohq/topics_78170.htm)
        
           | oceanplexian wrote:
            | A federal investigation into what - itself? The primary
            | actors doing this type of thing are the US Government.
        
         | progbits wrote:
         | > Getting punched in the face is actually a necessary human
         | condition for a healthy civilization.
         | 
         | Aside from signed commits, we need to bring back GPG key
         | parties and web of trust. When using a project you would know
         | how many punches away from the committers you are.
        
           | woodruffw wrote:
            | PGP is more famous for "web of trust" topologies than for
            | chains of trust.
           | 
           | For all of their nerd cred, key parties didn't accomplish
           | very much (as evidenced by the fact that nothing on the
           | Internet really broke when the WoT imploded a few years
           | ago[1]). The "real" solution here is mostly cultural:
           | treating third-party software like the risky thing it
           | actually is, rather than a free source of pre-screened labor.
           | 
           | [1]: https://inversegravity.net/2019/web-of-trust-dead/
        
             | progbits wrote:
             | Chain/web was typo, corrected, thanks.
             | 
             | I know of the key party issues. But there is some value to
             | knowing how far removed from me and people I trust the
             | project authors are.
        
               | woodruffw wrote:
               | > But there is some value to knowing how far removed from
               | me and people I trust the project authors are
               | 
               | That's true!
        
               | msm_ wrote:
                | Nowadays I achieve this with LinkedIn[1] connections.
                | Less nerd cred, but it achieves roughly the same
                | purpose (most of the people I care about in my niche
                | are at most a 3rd-degree connection - a friend of a
                | friend of a friend).
                | 
                | [1] formerly also Twitter, at least partially.
        
             | weinzierl wrote:
             | Yes, but there was also little pressure to really build the
             | WOT. People, like myself, did it because it was fun, but no
             | one really relied on it. This _could_ change, but it is
              | still far from certain if it'd work given enough pressure.
        
           | EthanHeilman wrote:
           | The web of punches?
        
       | bagels wrote:
       | Is this a crime? Has anyone been prosecuted for adding a backdoor
       | like this?
        
         | pvg wrote:
         | _Has anyone been prosecuted for adding a backdoor_
         | 
         | Google up Randal Schwartz. Caution: clickhole.
        
           | bagels wrote:
           | Seems a little different. Based on a quick read, he gained
           | unauthorized access to systems.
           | 
           | In this case, backdoor code was offered to and accepted by xz
           | maintainers.
        
             | pvg wrote:
             | _Seems a little different. Based on a quick read_
             | 
             | It is a little different but a thing that you might have
             | missed in the quick read is that one of the things he was
             | accused of was installing and using a backdoor.
        
               | bagels wrote:
               | One involves making unauthorized access, the other does
               | not.
        
             | ptx wrote:
              | Lots of things are crimes even though they just involve
              | offering something to a victim who willingly accepts it,
              | e.g. phishing attacks, fraudulent investment schemes,
              | contaminated food products.
        
               | bagels wrote:
               | Sure. I'm wondering if there is a specific law that was
               | broken here. It seems to me that it might be beneficial
               | if there were some legal protection against this sort of
               | act.
        
           | amiga386 wrote:
           | As far as I remember, he added no backdoors.
           | 
            | He was a consultant/sysadmin for Intel, and he did 3
            | things which he thought his employer would support. He was
            | astonished to find that not only did his employer not
            | support them, but it actively had him prosecuted for doing
            | them. Ouch.
           | 
           | 1. He ran a reverse-proxy on two machines so he could check
           | in on them from home.
           | 
           | 2. He used the crack program to find weak passwords.
           | 
           | 3. He found a weak password, and used it to log into a
           | system, which he copied the /etc/shadow file from to look for
           | additional weak passwords.
           | 
           | https://www.giac.org/paper/gsec/4039/intel-v-randal-l-
           | schwar...
           | 
           | https://web.archive.org/web/20160216204357/http://www.lightl.
           | ..
           | 
            | He didn't try to hide his activities, and didn't do
            | anything else untoward; it was literally just these
            | things, which most people wouldn't bat an eyelid at. These
            | days, it is completely normal for a company to provide
            | VPNs for their employees, and completely normal to
            | continually scan for unexpected user accounts or weak
            | passwords. But... because he didn't explain this to
            | higher-ups and get their buy-in, they prosecuted him
            | instead of thanking him.
        
             | uniformlyrandom wrote:
             | To be fair, it is perfectly normal for a surgeon to cut
             | people with a sharp knife with their permission while in
             | the hospital.
             | 
             | It is kinda sus when they do it at home without consent.
        
               | amiga386 wrote:
               | I find it useful to compare the reactions of O'Reilly and
               | Intel. Schwartz worked for both (he wrote _Learning Perl_
                | and co-authored _Programming Perl_ for O'Reilly and made
               | them plenty of money). He cracked the passwords of both
               | companies without first getting permission.
               | 
               | O'Reilly's sysadmin told him off for not getting
               | permission, and told him not to do it again, but used his
               | results to let people with weak passwords know to change
               | them.
               | 
               | Intel's sysadmin started collecting a dossier on Schwartz
               | and ultimately Intel pushed for state criminal charges
               | against him.
               | 
               | O'Reilly's sysadmin testified in Schwartz's defense that
               | he was an overly eager guy with no nefarious intent. So -
               | kinda-sus or not - Intel could have resolved this with a
               | dressing down, or even termination if they were really
               | unhappy. Intel _chose_ to go nuclear, and invoke the
               | Oregon computer crime laws, and demand the state
               | prosecute him.
        
       | move-on-by wrote:
       | Fascinating. Just yesterday the author added a `SECURITY.md` file
       | to the `xz-java` project.
       | 
       | > If you discover a security vulnerability in this project please
       | report it privately. *Do not disclose it as a public issue.* This
       | gives us time to work with you to fix the issue before public
       | exposure, reducing the chance that the exploit will be used
       | before a patch is released.
       | 
       | Reading that in a different light, it says "give me time to
       | adjust my exploits and capitalize on any targets". Makes me
       | wonder what other vulns might exist in the author's other
       | projects.
        
         | xyst wrote:
         | A 90-day dark window for maintainers is SOP though. Then
         | after 90 days, it's fair game for public disclosure.
        
         | szundi wrote:
         | How many people like this one exist?
        
           | ldayley wrote:
           | If this question had a reliable (and public) answer then the
           | world would be a very different place!
           | 
            | That said, this is an important question. We, particularly
            | those of us who work on critical infrastructure or
            | software, should be asking ourselves this regularly to
            | help prevent this type of thing.
           | 
           | Note that it's also easy (and similarly catastrophic) to
           | swing too far the other way and approach all unknowns with
           | automatic paranoia. We live in a world where we have to trust
           | strangers every day, and if we lose that option completely
           | then our civilization grinds to a halt.
           | 
           | But-- vigilance is warranted. I applaud these engineers who
           | followed their instincts and dug into this. They all did us a
           | huge service!
           | 
           | EDIT: wording, spelling
        
             | josephg wrote:
             | Yeah thanks for saying this; I agree. And as cliche as it
             | is to look for a technical solution to a social problem, I
             | also think better tools could help a lot here.
             | 
             | The current situation is ridiculous - if I pull in a
             | compression library from npm, cargo or Python, why can that
             | package interact with my network, make syscalls (as me) and
             | read and write files on my computer? Leftpad shouldn't be
             | able to install crypto ransomware on my computer.
             | 
             | To solve that, package managers should include capability
             | based security. I want to say "use this package from cargo,
             | but refuse to compile or link into my binary any function
             | which makes any syscall except for _read_ and _write_. No
             | _open_ - if I want to compress or decompress a file, I'll
             | open the file myself and pass it in." No messing with my
             | filesystem. No network access. No raw asm, no trusted build
             | scripts and no exec. What I allow is all you get.
             | 
             | The capability should be transitive. All dependencies of
             | the package should be brought in under the same
             | restriction.
             | 
             | In dynamic languages like (server side) JavaScript, I think
             | this would have to be handled at runtime. We could add a
             | capability parameter to all functions which issue syscalls
             | (or do anything else that's security sensitive). When the
             | program starts, it gets an "everything" capability. That
             | capability can be cloned and reduced to just the
              | capabilities needed. (Think _pledge_.) If I want to talk
             | to redis using a 3rd party library, I pass the redis
             | package a capability which only allows it to open network
             | connections. And only to this specific host on this
             | specific port.
             | 
             | It wouldn't stop all security problems. It might not even
             | stop this one. But it would dramatically reduce the attack
             | surface of badly behaving libraries.
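              | 
              | As a rough sketch of the "open the file myself and pass
              | it in" idea - hypothetical Rust types, not any real
              | crate's API - it could look something like this:
              | 
              |     // Hypothetical capability struct: the library only
              |     // gets the streams the caller opened.
              |     use std::fs::File;
              |     use std::io::{self, Read, Write};
              |     
              |     struct CompressCap<'a> {
              |         input: &'a mut dyn Read,
              |         output: &'a mut dyn Write,
              |     }
              |     
              |     // The library has no way to name a path, open a
              |     // socket or exec anything through this; it reads
              |     // one stream and writes the other.
              |     fn compress(cap: CompressCap<'_>) -> io::Result<u64> {
              |         // real compression elided; copy stands in
              |         io::copy(cap.input, cap.output)
              |     }
              |     
              |     fn main() -> io::Result<()> {
              |         // the caller opens the files, not the library
              |         let mut src = File::open("in.bin")?;
              |         let mut dst = File::create("out.xz")?;
              |         compress(CompressCap {
              |             input: &mut src,
              |             output: &mut dst,
              |         })?;
              |         Ok(())
              |     }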
        
               | Guvante wrote:
                | Isn't this exact exploit one that your capability
                | theory wouldn't fix?
                | 
                | It is hijacking a process that has network access at
                | runtime, not at build time.
                | 
                | The build hack grabs files from the repo and inspects
                | build parameters (in a benign way - everyone checks
                | whether you are running on X platform, etc.)
        
               | josephg wrote:
               | The problem we have right now is that any linked code can
               | do anything, both at build time and at runtime. A good
               | capability system should be able to stop xz from issuing
               | network requests even if other parts of the process do
               | interact with the network. It certainly shouldn't have
               | permission to replace crc32_resolve() and crc64_resolve()
               | via ifunc.
               | 
               | Another way of thinking about the problem is that right
               | now every line of code within a process runs with the
               | same permissions. If we could restrict what 3rd party
               | libraries can do - via checks either at build time or
               | runtime - then supply chain attacks like this would be
               | much harder to pull off.
        
               | im3w1l wrote:
                | I'm not convinced this is such a cure-all, as any
                | library must necessarily have the ability to "taint"
                | its output. Like, consider this library. It's a
                | compression library. You would presumably trust it to
                | decompress things, right? Like programs? And then you
                | run those programs with full permissions? Oops...
        
               | josephg wrote:
               | It's not a cure-all. I mean, we're talking about infosec
               | - so nothing is. But that said, barely any programs need
               | the ability to execute arbitrary binaries. I can't
               | remember the last time I used eval() in JavaScript.
               | 
               | I agree that it wouldn't stop this library from injecting
               | backdoors into decompressed executables. But I still
               | think it would be a big help anyway. It would stop this
               | attack from working.
               | 
                | At the big-picture level, we need to acknowledge that
                | we can't implicitly trust open-source libraries on the
                | internet.
               | They are written by strangers, and if you wouldn't invite
               | them into your home you shouldn't give them permission to
               | execute arbitrary code with user level permissions on
               | your computer.
               | 
               | I don't think there are any one size fits all answers
               | here. And I can't see a way to make your "tainted output"
               | idea work. But even so, cutting down the trusted surface
               | area from "leftpad can cryptolocker your computer" to
               | "Leftpad could return bad output" sounds like it would
               | move us in the right direction.
        
               | fauigerzigerk wrote:
               | This approach could work for dynamic libraries, but a lot
               | of modern ecosystems (Go, Rust, Swift) prefer to
               | distribute packages as source code that gets compiled
               | with the including executable or library.
        
               | josephg wrote:
               | Yes, and?
               | 
               | The goal is to restrict what included libraries can do.
                | As you say, in languages like Rust, Go or Swift, the
                | mechanism to do this would also need to work with
                | statically linked code. And that's quite tricky,
                | because there are no isolation boundaries between
                | functions in executables.
               | 
               | It should still be possible to build something like this.
               | It would just be inconvenient. In rust, swift and go
               | you'd probably want to implement something like this at
               | compile time.
               | 
               | In rust, I'd start by banning unsafe in dependencies. (Or
               | whitelisting which projects are allowed to use unsafe
               | code.) Then add special annotations on all the methods in
               | the standard library which need special permissions to
               | run. For example, File::open, fork, exec, networking, and
               | so on. In cargo.toml, add a way to specify which
               | permissions your child libraries get. "Import serde, but
               | give it no OS permissions". When you compile your
               | program, the compiler can look at the call tree of each
               | function to see what actually gets called, and make sure
               | the permissions match up. If you call a function in serde
               | which in turn calls File::open (directly or indirectly),
               | and you didn't explicitly allow that, the program should
               | fail to compile.
               | 
               | It should be fine for serde to contain some utility
               | function that calls the banned File::open, so long as the
               | utility function isn't called.
               | 
               | Permissions should be in a tree. As you get further out
               | in the dependency tree, libraries get fewer permissions.
               | If I pass permissions {X,Y} to serde, serde can pass
               | permission {X} to one of its dependencies in turn. But
               | serde can't pass permission {Q} to its dependency - since
               | it doesn't have that capability itself.
               | 
               | Any libraries which use unsafe are sort of trusted to do
               | everything. You might need to insist that any package
               | which calls unsafe code is actively whitelisted by the
               | cargo.toml file in the project root.
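                | 
                | To make the subset rule concrete, here is a toy model
                | (invented names, not a real cargo feature) of
                | permissions only ever narrowing as they flow down the
                | dependency tree:
                | 
                |     // Toy permission bit set; a dependency can
                |     // only be handed a subset of its parent's set.
                |     #[derive(Clone, Copy, Debug, PartialEq)]
                |     struct Perms(u8);
                |     
                |     impl Perms {
                |         const NONE: Perms = Perms(0);
                |         const FS_READ: Perms = Perms(1 << 0);
                |         const NET: Perms = Perms(1 << 1);
                |     
                |         // intersect: a child never gains anything
                |         fn restrict(self, child: Perms) -> Perms {
                |             Perms(self.0 & child.0)
                |         }
                |     
                |         fn allows(self, needed: Perms) -> bool {
                |             self.0 & needed.0 == needed.0
                |         }
                |     }
                |     
                |     fn main() {
                |         let app = Perms(0b11); // fs read + network
                |         // hand a serde-like dep read-only access
                |         let dep = app.restrict(Perms::FS_READ);
                |         assert!(dep.allows(Perms::FS_READ));
                |         assert!(!dep.allows(Perms::NET));
                |         // it can only narrow further downstream
                |         let child = dep.restrict(Perms::NET);
                |         assert_eq!(child, Perms::NONE);
                |     }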
        
               | saagarjha wrote:
                | Do you understand how ifuncs work? They are in the
                | address space of the application that they run in.
                | liblzma is resolving _its own_ pointers!
        
               | Bulat_Ziganshin wrote:
                | If I got it right, the attack uses the glibc IFUNC
                | mechanism to patch sshd (and only sshd) to directly
                | run some code in liblzma when sshd verifies logins.
                | 
                | So the problem is the IFUNC mechanism, which has its
                | valid uses but can be EASILY misused for all sorts of
                | attacks.
        
               | josephg wrote:
               | Honestly, I don't have a lot of hope that we can fix this
               | problem for C on linux. There's just _so much_ historical
                | cruft present, spread between autotools, configure,
               | make, glibc, gcc and C itself that would need to be
               | modified to support capabilities.
               | 
               | The rule we need is "If I pull in library X with some
               | capability set, then X can't do anything not explicitly
               | allowed by the passed set of capabilities". The problem
               | in C is that there is currently no straightforward way to
               | firewall off different parts of a linux process from each
               | other. And dynamic linking on linux is done by gluing
               | together compiled artifacts - with no way to check or
               | understand what assembly instructions any of those parts
               | contain.
               | 
               | I see two ways to solve this generally:
               | 
               | - Statically - ie at compile time, the compiler annotates
               | every method with a set of permissions it (recursively)
               | requires. The program fails to compile if a method is
               | called which requires permissions that the caller does
               | not pass it. In rust for example, I could imagine cargo
               | enforcing this for rust programs. But I think it would
               | require some changes to the C language itself if we want
               | to add capabilities there. Maybe some compiler extensions
               | would be enough - but probably not given a C program
               | could obfuscate which functions call which other
               | functions.
               | 
               | - Dynamically. In this case, every linux system call is
               | replaced with a new version which takes a capability
               | object as a parameter. When the program starts, it is
               | given a capability by the OS and it can then use that to
               | make child capabilities passed to different libraries. I
               | could imagine this working in python or javascript. But
               | for this to work in C, we need to stop libraries from
               | just scanning the process's memory and stealing
               | capabilities from elsewhere in the program.
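                | 
                | For the dynamic flavour, here is a rough sketch
                | (invented interfaces; nothing here is a real OS or
                | libc API) of a root capability being narrowed to a
                | single-host network capability before it reaches a
                | dependency:
                | 
                |     use std::io::{Error, ErrorKind, Result};
                |     use std::net::TcpStream;
                |     
                |     // Handed to main() by the runtime in this
                |     // hypothetical design.
                |     struct RootCap;
                |     
                |     // Derived capability: one address, nothing else.
                |     struct NetCap {
                |         allowed: String,
                |     }
                |     
                |     impl RootCap {
                |         fn net_only(&self, addr: &str) -> NetCap {
                |             NetCap { allowed: addr.to_string() }
                |         }
                |     }
                |     
                |     impl NetCap {
                |         // Sockets only exist behind the capability,
                |         // and only for the address it was made for.
                |         fn connect(&self, addr: &str)
                |             -> Result<TcpStream>
                |         {
                |             if addr != self.allowed {
                |                 return Err(Error::new(
                |                     ErrorKind::PermissionDenied,
                |                     "address not allowed",
                |                 ));
                |             }
                |             TcpStream::connect(addr)
                |         }
                |     }
                |     
                |     fn main() {
                |         let root = RootCap; // from the runtime
                |         // a redis client would only ever see this:
                |         let redis = root.net_only("127.0.0.1:6379");
                |         // anything else is refused before a socket
                |         // is even opened
                |         assert!(redis.connect("1.2.3.4:443").is_err());
                |     }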
        
               | estebarb wrote:
               | Or take the Chrome / original Go approach: load that code
               | in a different process, use some kind of RPC. With all
               | the context switch penalty... sigh, I think it is the
               | only way, as the MMU permissions work at a page level.
        
               | josephg wrote:
               | Firefox also has its solution of compiling dependencies
               | to wasm, then compiling the wasm back into C code and
               | linking that. It's super weird, but the effect is that
               | each dependency ends up isolated in bounds checked
               | memory. No context switch penalty, but instead the code
               | runs significantly slower.
        
               | AgentME wrote:
                | A process can do little to defend itself from a library
                | it's using that has full access to the same memory.
                | There is no security boundary there. This kind of
                | backdoor doesn't hinge on IFUNC's existence.
        
               | saagarjha wrote:
               | The problem is that the attacker has code execution in
               | sshd, not ifuncs
        
             | saalweachter wrote:
             | Assume 3% of the population is malicious.
             | 
             | Enough to be cautious, enough to think about how to catch
             | bad actors, not so much as to close yourself off and become
             | a paranoid hermit.
        
               | pizzafeelsright wrote:
               | Huh. I never really thought of it as a percentage.
               | 
               | I've been evil, been wonderful, and indifferent at
               | different stages in life.
               | 
               | I have known those who have done similar for money, fame,
               | and boredom.
               | 
               | I think, given a backstory, incentive, opportunity, and
               | resources, it would be possible for most people to flip
               | from wouldn't to enlisted.
               | 
               | Leverage has proven to be the biggest lever when it comes
               | to compliance.
        
               | XorNot wrote:
               | It's doubtful you've been evil, or at least, you are
               | really lacking in imagination of the true scope of what
               | that word implies.
        
               | mlsu wrote:
               | _The line between good and evil cuts through the heart of
               | every person_
        
               | saulpw wrote:
               | "Assume that 3% of the people you encounter will act
               | maliciously."
        
               | heresie-dabord wrote:
               | We live in a time of populous, _wealthy_ dictatorships
               | that have computer-science expertise and are openly
               | hostile to the US and Canada.
               | 
               | North America is only about 5% of the world's population.
               | [1] (We can assume that malicious actors are in North
               | America, too, but this helps to adjust our perspective.)
               | 
               | The percentage of maliciousness _on the Internet_ is much
               | higher.
               | 
               | [1] _ See continental subregions. https://en.wikipedia.or
               | g/wiki/List_of_continents_and_contine...
        
               | richrichie wrote:
               | Huh? The empirical evidence we have - thanks to Snowden
               | leaks - paints a different picture. NSA is the biggest
               | malicious actor with nearly unlimited resources at hand.
               | They even insert hardware backdoors and intercept
               | shipments to do that.
        
               | heresie-dabord wrote:
               | > NSA is the biggest malicious actor
               | 
               | I'm curious, how do you rank CN, RU, and IR?
        
             | heresie-dabord wrote:
             | Threat actors create personas. We will need strong social
             | trust to protect our important projects and dependencies.
        
           | hulitu wrote:
           | > How many of people like this one exist?
           | 
           | I guess every 3 letter agency has at least one. You can do
           | the math. They haven't learned anything after SolarWinds.
        
         | ncr100 wrote:
          | _Security Researchers_: Is this request-for-private-disclosure
         | + "90-days before public" reasonable?
         | 
         | It's a SEVERE issue, to my mind, and 90 days seems too long to
         | me.
        
           | bawolff wrote:
           | Whether it's reasonable is debatable, but that type of time
           | frame is pretty normal for things that aren't being actively
           | exploited.
           | 
           | This situation is perhaps a little different, as it's not an
           | accidental bug waiting to be discovered but an intentionally
           | placed exploit. We know that a malicious person already knows
           | about it.
        
             | londons_explore wrote:
             | If you were following Google Project Zero's policy (which
             | many researchers do), any in-the-wild exploits would
             | trigger an immediate reveal.
        
             | larschdk wrote:
             | Detecting a security issue is one thing. Detecting a
             | malicious payload is something completely different. The
             | latter has intent to exploit and must be addressed
             | immediately. The former has at least some chance of no one
             | knowing about it.
        
           | cjbprime wrote:
           | In this particular case, there is a strong reason to expect
           | exploitation in the wild to already be occurring (because
           | it's an intentional backdoor) and this would change the risk
           | calculus around disclosure timelines.
           | 
           | But in the general case, it's normal for 90 days to be given
           | for the coordinated patching of even very severe
           | vulnerabilities -- you are giving time not just to the
           | project maintainers, but to the users of the software to
           | finish updating their systems to a new fixed release, before
           | enough detail to easily weaponize the vulnerability is
           | shared. Google Project Zero is an example of a team with many
           | critical impact findings using a 90-day timeline.
        
             | ang_cire wrote:
             | As someone in security who doesn't work at a major place
             | that gets invited to the nice advance notifications, I hate
             | this practice.
             | 
             | My customers and business are not any less important or
             | valuable than anyone else's, and I should not be left being
             | potentially exploited, and my customers harmed, for 90 more
             | days while the big guys get to patch their systems
             | (thinking of e.g. Log4J, where Amazon, Meta, Google, and
             | others were told privately how to fix their systems, before
             | others were, even though the fix was simple).
             | 
             | Likewise, as a customer I should get to know as soon as
             | someone's software is found vulnerable, so I can then make
             | the choice whether to continue to subject myself to the
             | risk of continuing to use it until it gets patched.
        
               | cjbprime wrote:
               | OpenSSL's "notification of an upcoming critical release"
               | is public, not private.
               | 
               | You do get to know that the vulnerability exists quickly,
               | and you could choose to stop using OpenSSL altogether
               | (among other mitigations) once that email goes out.
        
               | sidewndr46 wrote:
               | if your system has already been compromised at the root
               | level, it does not matter in the least bit
        
               | Thorrez wrote:
               | Well if you assume everyone has already been exploited,
               | disclosing quickly vs slowly won't prevent that.
               | 
               | Also, if something is being actively exploited, usually
               | there's no or very little embargo.
        
               | wyldberry wrote:
               | I empathize with this as I've been in the same boat, but
               | all entities are not equal when performing triage.
        
               | freedomben wrote:
               | Being in a similar boat, I heartily agree.
               | 
               | But I don't want anyone else to get notified immediately
               | because the odds that somebody will start exploiting
               | people before a patch is available are pretty high. Since
               | I can't have both, I will choose the 90 days for the
               | project to get patches done and all the packagers to
               | include them and make them available, so that by the time
               | it's public knowledge I'm already patched.
               | 
               | I think this is a Tragedy of the Commons type of problem.
               | 
               | Caveat: This assumes the vuln is found by a white hat. If
               | it's being exploited already or is known to others, then
               | I fully agree the disclosure time should be eliminated
               | and it's BS for the big companies to get more time than
               | us.
        
               | oceanplexian wrote:
               | Yeah I worked in FAANG when we got the advance notice of
               | a number of CVEs. Personally I think it's shady, I don't
               | care how big Amazon or Google is, they shouldn't get
               | special privileges because they are a large corporation.
        
               | kelnos wrote:
               | I don't think the rationale is that they are a large
               | corporation or have lots of money. It's that they have
               | many, many, many more users that would be affected than
               | most companies have.
        
               | sdenton4 wrote:
               | I imagine they also have significant resources to
               | contribute to dealing with breaches - eg, analysing past
               | commits by the bad actor, designing mitigations, etc.
        
               | hatter wrote:
               | > My ... business are not any less ... valuable than
               | anyone else's,
               | 
               | Plainly untrue. The reason they keep distribution minimal
               | is to maximise the chance of keeping the vuln secret.
               | Your business is plainly less valuable than google, than
               | walmart, than godaddy, than BoA. Maybe you're some big
               | cheese with a big reputation to keep, but seeing as
               | you're feeling excluded, I guess these orgs have no more
               | reason to trust you than they have to trust me, or
               | hundreds of thousands of others who want to know. If they
               | let you in, they'd let all the others in, and odds are
               | greatly increased that now your customers are at risk
               | from something one of these others has worked out, and
               | either blabbed about or has themselves a reason to
               | exploit it.
               | 
               | Similarly plainly, by disclosing to 100 major companies,
               | they protect a vast breadth of consumers/customer-
               | businesses of these major companies at a risk of
               | 10,000,000/100 (or even less, given they may have more
               | valuable reputation to keep). Changing that risk to
               | 12,000,000/10,000 is, well, a risk they don't feel is
               | worth taking.
        
               | squeaky-clean wrote:
               | > Your business is plainly less valuable than google,
               | than walmart, than godaddy, than BoA.
               | 
               | The company I work for has a market cap roughly 5x that
               | of GoDaddy and we're responsible for network connected
               | security systems that potentially control whether a
               | person can physically access your home, school, or
               | business. We were never notified of this until this HN
               | thread.
               | 
               | If your BofA account gets hacked you lose money. If your
               | GoDaddy account gets hacked you lose your domain. If
               | Walmart gets hacked they lose... what, some money, and
               | have logistics issues for a while?
               | 
               | Thankfully my company's products have additional
               | safeguards and this isn't a breach for us. But what if it
               | was? Our customers can literally lose their lives if
               | someone cracks the security and finds a way to remotely
               | open all the locks in their home or business.
               | 
               | Don't tell me that some search engine profits or
               | someone's email history is "more valuable" than 2000
               | schoolchildren's lives.
               | 
               | How about you give copies of the keys to your apartment
               | and a card containing your address to 50 random people on
               | the streets and see if you still feel that having your
               | Gmail account hacked is more valuable.
        
               | SonOfLilit wrote:
               | Sorry, but that's not a serious risk analysis. The
               | average person would be hurt a _lot_ more by a godaddy
               | breach by a state actor than by a breach of your service
               | by a state actor.
        
               | jpc0 wrote:
               | I think from an exposure point of view, I'm less likely
               | to worry about the software side of my physical security
               | being exploited than the actual hardware side.
               | 
               | None of the points you make are relevant, since I have
               | yet to see any software-based entry product whose
               | software security can be considered more than lackluster
               | at best. Maybe your company is better; since you didn't
               | mention a name I can't say otherwise.
               | 
               | What I'm saying is your customers are more likely to have
               | their doors physically broken than remotely opened by
               | software, and yet you're going on about life and death
               | because of a vuln in xz?
               | 
               | If your company's market cap is as high as you say and
               | they are as security-aware as you say, why aren't they
               | employing security researchers and actively working at
               | the forefront of finding vulns and reporting them? That
               | would get them an invite to the party.
        
               | maerF0x0 wrote:
               | > Your business is plainly less valuable than google,
               | than walmart, than godaddy, than BoA.
               | 
               | Keep in mind it's the EROI not market cap.
               | 
               | A company is worth attacking if their reward:effort ratio
               | is right. Smaller companies have a much lower effort
               | required.
        
               | umanwizard wrote:
               | > My customers and business are not any less important or
               | valuable than anyone else's
               | 
               | Of course they are. If Red Hat has a million times more
               | customers than you do then they are collectively more
               | valuable almost by definition.
        
               | InvertedRhodium wrote:
               | If OP is managing something that is critical to life
               | (think fire suppression controllers, or computers that
               | are connected to medical equipment), I think it becomes
               | very difficult to compare that against financial assets.
        
               | jen20 wrote:
               | Such systems should be airgapped...
        
               | solarengineer wrote:
               | I can think of two approaches for such companies:
               | 
               | a. Use commercial OS vendors who will push out fixes.
               | 
               | b. Set up a Continuous Integration process where
               | everything is open source and is built from the ground
               | up, with some reliance on open source platforms such as
               | distros.
               | 
               | One needs different types of competence and IT
               | Operational readiness in each approach.
        
               | squeaky-clean wrote:
               | > b. Set up a Continuous Integration process where
               | everything is open source and is built from the ground
               | up, with some reliance on open source platforms such as
               | distros.
               | 
               | How would that have prevented this backdoor?
        
               | ajdlinux wrote:
               | At a certain scale, "economic" systems become critical to
               | life. Someone who has sufficiently compromised a
               | systemically-important bank can do things that would
               | result in riots breaking out on the street all over a
               | country.
        
               | codedokode wrote:
               | Something that is critical to life should not be
               | connected to the Internet.
        
               | AlexandrB wrote:
               | And yet it seems like every new car is.
        
               | richrichie wrote:
               | Sshhh now you are starting to talk like a rightwinger.
               | Alex Jones has been saying this for a long time ;)
        
               | voidfunc wrote:
               | > My customers and business are not any less important or
               | valuable than anyone else's
               | 
               | Hate to break it to you but yes they are.
        
             | hulitu wrote:
             | > but to the users of the software to finish updating their
             | systems to a new fixed release,
             | 
             | Is there "a new fixed release" ?
        
           | sterlind wrote:
           | I think you have to take the credibility of the maintainer
           | into account.
           | 
           | If it's a large company, made of people with names and faces,
           | with a lot to lose by hacking its users, they're unlikely to
           | abuse private disclosure. If it's some tiny library, the
           | maintainers might be in on it.
           | 
           | Also, if there's evidence of exploitation in the wild, the
           | embargo is a gift to the attacker. The existence of a
           | vulnerability in that case should be announced, even if the
           | specifics have to be kept under embargo.
        
             | fmajid wrote:
             | In this case the maintainer is the one who deliberately
             | introduced the backdoor. As Andres Freund puts it deadpan,
             | "Given the apparent upstream involvement I have not
             | reported an upstream bug."
        
           | decoy78 wrote:
           | imho it depends on the vuln. I've given a vendor over a year,
           | because it was a very low risk vuln. This isn't a vuln though
           | - this is an attack.
        
           | sidewndr46 wrote:
           | I've always laughed my ass off at the idea of a disclosure
           | window. It takes less than a day to find RCE that grants root
           | privileges on devices that I've bothered to look at. Why on
           | earth would I bother spending months of my time trying to
           | convince someone to fix something?
        
           | BartjeD wrote:
           | The fraudulent author must have enjoyed the 'in joke' -- he's
           | the one creating vulnerabilities...
        
         | tw04 wrote:
         | Honestly it seems like a state-based actor hoping to get
         | whatever high value target compromised before it's made public.
         | Reporting privately buys them more time, and allows them to let
         | handlers know when the jig is up.
        
       | returningfory2 wrote:
       | A couple of years ago I wrote a Go library that wraps the xz C
       | code and allows you to do xz compression in Go:
       | https://github.com/jamespfennell/xz
       | 
       | About a week ago I received the first PR on that repo, to upgrade
       | to 5.6.1. I thought it was odd to get such a random PR...it's not
       | the same GitHub account as upstream though.
        
         | pinko wrote:
         | > it's not the same GitHub account as upstream
         | 
         | This is valuable information, and a sign that this may be the
         | tip of an iceberg.
        
         | Bromeo wrote:
         | I don't want to read too much into it, but the person
         | (supposedly) submitting the PR seems to work at 1Password since
         | December last year, as per his LinkedIn. (And his LinkedIn page
         | has a link to the GitHub profile that made the PR.)
        
           | returningfory2 wrote:
           | Yeah the GitHub account looks really really legitimate. Maybe
           | it was compromised though?
        
             | jethro_tell wrote:
             | What looks legit about a gmail address and some stock art
             | for a profile?
        
               | gpm wrote:
               | [Deleted per below]
        
               | TeMPOraL wrote:
               | Can you stay in that org after leaving Google?
        
               | bananapub wrote:
               | whoever is in charge of removing people from the Google
               | github org has the itchiest trigger finger in the whole
               | exiting-the-company process tree.
        
               | fooker wrote:
               | No
        
               | Jyaif wrote:
               | You are not looking at the right profile. This is the
               | profile that people are talking about:
               | https://github.com/jaredallard
        
               | gpm wrote:
               | Oops, you're absolutely correct. Deleted (via edit) my
               | comment above. Thanks.
        
               | ncr100 wrote:
               | He was just (50 minutes ago) removed from the oss-fuzz
               | repo.
               | 
               | I hope this also (at least temporarily until verification
               | of 'bad/good') removes him from the org?
        
               | buildbot wrote:
               | Plus the README.md that is just a rickroll
        
             | ncr100 wrote:
              | The two Gmail accounts are mainly (~85%) associated with
              | XZ work since 2021, per searching for them explicitly via
              | Google.
        
             | computerfriend wrote:
             | The PR's two commits are signed by a key that was also used
             | to sign previous commits belonging to that author.
        
               | dralley wrote:
               | Hold up, are you saying that
               | https://github.com/jaredallard and the accounts
               | affiliated with this XZ backdoor share a PGP key? Or
               | something else?
        
               | computerfriend wrote:
               | No, this account made a PR and their commits were signed
               | [1]. Take a look at their other repositories, e.g. they
               | did AoC 2023 in Rust and published it, the commits in
               | that repository are signed by the same key. So this is
               | not (just) a GitHub account compromise.
               | 
               | I find this aspect to be an outlier, the other attacker
               | accounts were cutouts. So this doesn't quite make sense
               | to me.
               | 
               | [1] https://github.com/jamespfennell/xz/pull/2/commits
        
           | bombcar wrote:
           | If I were trying to compromise supply chains, getting into
           | someplace like 1Password would be high up on the list.
           | 
           | Poor guy, he's probably going to get the third degree now.
        
           | switch007 wrote:
           | As a 1Password user, I just got rather nervous.
        
             | bombcar wrote:
             | Yubikeys starting to look kinda yummy.
        
               | wiml wrote:
               | Hardware gets backdoored too, remember Crypto AG?
        
           | lelandbatey wrote:
           | They're definitely a real person. I know because that
           | "1Password employee since December" is a person I know IRL
           | and worked with for years at their prior employer. They're
           | not a no-name person or a fake identity just FYI. Please
           | don't be witch hunting; this genuinely looks like an
           | unfortunate case where Jared was merely proactively doing
           | their job by trying to get an externally maintained golang
           | bindings of XZ to the latest version of XZ. Jared's pretty
           | fantastic to work with and is definitely the type of person
           | to be filing PRs on external tools to get them to update
           | dependencies. I think the timing is comically bad, but I can
           | vouch for Jared.
           | 
           | https://github.com/jamespfennell/xz/pull/2
        
         | arp242 wrote:
         | As a bit of an aside, I would never accept a PR like this, and
         | would always update $large_vendored_dependency myself. This is
         | unreviewable, and it's trivial to insert a backdoor (unless you go
         | through the motions of updating it yourself and diffing, at
         | which point the PR becomes superfluous). I'd be wary even from
         | a well-known author unless I knew them personally on some level
         | (real-life or via internet). Not that I wouldn't trust them,
         | but people's machines or accounts can get compromised, people
         | can have psychotic episodes, things like that. At the very
         | least I'd like to have some out-of-band "is this really you?"
         | signal.
         | 
         | This is how I once inserted a joke in one of our (private)
         | repos that would randomly send cryptic messages to our chat
         | channel. This was pretty harmless and just a joke (there's some
         | context that made it funny), but it took them years to find it
         | - and that was only because I told them after I quit.
         | 
         | That said, looking at the GitHub account I'd be surprised if
         | there's anything nefarious going on here. Probably just someone
         | using your repo, seeing it's outdated, and updating it.
        
           | LVB wrote:
           | The (most?) popular SQLite driver for Go often gets PRs to
           | update the SQLite C amalgamation, which the owner politely
           | declines (and I appreciate him for that stance, and for
           | taking on the maintenance burden it brings).
           | 
           | e.g., https://github.com/mattn/go-
           | sqlite3/pull/1042#issuecomment-1...
        
             | astrange wrote:
             | Meanwhile SQLite itself doesn't accept any patches for
             | anything; if you show the author one he will at best
             | rewrite it.
        
           | creatonez wrote:
           | In this case, the project is using Git submodules for its
           | vendored dependencies, so you can trivially cryptographically
           | verify that they have vendored the correct dependency just by
           | checking the commit hash. It looks really crazy on Github but
           | in most git clients it will just display the commit hash
           | change.
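           | 
           | If you'd rather check that mechanically than eyeball the
           | GitHub diff, something like this rough sketch (using the git2
           | crate, run from a checkout of the wrapper repo) prints the
           | commit each submodule is pinned to, which you can then
           | compare against the upstream tag you expect:
           | 
           |   use git2::Repository;
           | 
           |   fn main() -> Result<(), git2::Error> {
           |       let repo = Repository::open(".")?;
           |       for sm in repo.submodules()? {
           |           let name = sm.name().unwrap_or("<non-utf8>");
           |           // Commit recorded in the superproject's index:
           |           // this is exactly what gets vendored.
           |           match sm.index_id() {
           |               Some(id) => println!("{name} -> {id}"),
           |               None => println!("{name} -> (not in index)"),
           |           }
           |       }
           |       Ok(())
           |   }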
        
         | 5kg wrote:
         | The backdoor (test binary blob and autoconf) is not part of the
         | pull request.
        
         | cbmuser wrote:
         | There was also a bug report in Debian which requested updating
         | xz-utils to 5.6.1: https://bugs.debian.org/cgi-
         | bin/bugreport.cgi?bug=1067708
        
           | roflmaostc wrote:
           | That's the same Hans Jansen mentioned here:
           | https://boehs.org/node/everything-i-know-about-the-xz-
           | backdo...
        
         | icambron wrote:
         | IMO your prior on this should be that it's most likely just
         | someone innocently updating a dependency.
        
         | jaredallard2 wrote:
         | Hey all, I'm the author of that PR. Just posted to Github with
         | additional context:
         | https://github.com/jamespfennell/xz/pull/2#issuecomment-2027...
        
           | blueflow wrote:
           | That sucks to have people write mails to your employer...
        
             | jaredallard2 wrote:
             | To be honest, I probably wouldn't have noticed the comments
             | on the PR if it wasn't for that since my Github
             | notifications are an absolute mess. Thankfully, my employer
             | has been super supportive throughout this :D
        
           | SheinhardtWigCo wrote:
           | I appreciated your detailed update!
        
           | ikekkdcjkfke wrote:
           | The dopamine hits from updating stuff should come to an end;
           | updating should be thought of as potentially adding new bugs
           | or exploits, unless the update fixes a CVE. Also, GitHub
           | needs to remove the green colors and checkmarks in PRs to
           | prevent these dopamine traps from overriding any critical
           | thinking.
        
             | btown wrote:
             | Counterpoint: if you wait to keep things up to date until
             | there's a CVE, there's a higher likelihood that things will
             | break doing such a massive upgrade, and this may slow down
             | a very time-sensitive CVE response. Allowing people to feel
             | rewarded for keeping things up to date is not inherently a
             | bad thing. As with all things, the balance point will vary
             | from project to project!
        
               | theptip wrote:
               | Exactly. You don't want to be bleeding edge (churn, bugs)
               | but in general you usually don't want to be on the oldest
               | supported version either (let alone unsupported).
               | 
               | Risk/reward depends on the usecase of course. For a
               | startup I'd be on the .1 version of the newest major
               | version (never .0) if there are new features I want. For
               | enterprise, probably the oldest LTS I can get away with.
        
             | aardvark179 wrote:
             | I strongly disagree. If you don't update your dependencies
             | then it's easy to lose the institutional knowledge of how
             | to update them, and who actually owns that obscure area of
             | your code base that depends on them. Then you get a real
             | CVE and have to work out everything in a hurry.
             | 
             | If you have a large code base and organisation then keep
             | doing those upgrades so it won't be a problem when it
             | really matters. If it's painful, or touches too many areas
             | of the code you'll be forced to refactor things so that
             | ceases to be a problem, and you might even manage to
             | contain things so well that you can swap implementations
             | relatively easily when needed.
        
         | squigz wrote:
         | Internet detectives at work in this thread!
        
         | baxtr wrote:
         | Suddenly anything like that becomes super suspicious.
         | 
         | I wonder how this will affect the OSS community in general.
        
       | notyoutube wrote:
       | Is the solution against such attacks in the future only to
       | scrutinize more, or are there other reasonable options in terms
       | of hardening?
        
         | JanisErdmanis wrote:
         | The lesson here seems to be not to depend on tools written in
         | languages that have complex, obscure build systems that no one
         | is either able or interested in reading. Using tools rewritten
         | in Rust, Go or any other language which resolves dependencies
         | within the project seems like the only way to do hardening
         | here.
        
           | Lichtso wrote:
           | Once somebody actually does this people are gonna complain
           | the same as always: "The sole purpose of your project is to
           | rewrite perfectly fine stuff in Rust for the sake of it" or
           | something along these lines.
        
           | blcknight wrote:
           | I agree there's safer languages than C, but nobody reads the
           | 50,000 lines changed when you update the vendoring in a
           | random golang project. It would be easy to introduce
           | something there that nobody notices too.
        
             | JanisErdmanis wrote:
              | It is generally harder to introduce vulnerabilities in a
              | readable language, even more so when it is memory safe.
              | Sure, life is not perfect, and bad actors would have found
              | ways to inject vulnerabilities in a Rust or Go codebase
              | too. The benefit of modern languages is that there is one
              | way to build things and the source code is the only thing
              | that needs to be audited.
        
               | mstef wrote:
               | this backdoor had nothing at all to do with memory
               | safety.
        
           | ok123456 wrote:
           | Wouldn't a supply chain attack like this be much worse with
           | Rust and Cargo because of the fact it's not just a single
           | dynamic library that needs to be reinstalled system-wide,
           | but, instead, every binary would require a new release?
        
             | gpm wrote:
             | It would mean rebuilding more packages. I don't think
             | that's meaningfully "much worse", package managers are
             | perfectly capable of rebuilding the world and the end-user
             | fix is the same "pacman -Syu"/"apt-get update && apt-get
             | upgrade"/...
             | 
             | On the flip side the elegant/readable build system means
             | that the place this exploit was hidden wouldn't exist.
             | Though I wouldn't confidently say that 'no hiding places
             | exist' (especially with the parts of the ecosystem that
             | wrap dependencies in other languages).
        
               | ok123456 wrote:
               | It's much worse because it requires repackaging every
               | affected system package instead of a single library.
               | Knowing which packages are affected is difficult because
               | that information isn't exposed to the larger system
               | package manager. After all, it's all managed by the build
               | system.
        
               | packetlost wrote:
               | In the era of modern CI and build infrastructure, I don't
               | really think that's materially an issue.
        
               | ok123456 wrote:
               | Those CI and build infrastructures rely on Debian and
               | Red Hat being able to build system packages.
               | 
               | How would an automated CI or build infrastructure stop
               | this attack? It was stopped because the competent package
               | maintainer noticed a performance regression.
               | 
               | In this case, this imagined build system would have to
               | track every rust library used in every package to know
               | which packages to perform an emergency release for.
        
               | packetlost wrote:
               | I... don't see your point. Tracking the dependencies a
               | static binary is built with is already a feature for
               | build systems, just maybe not the ones Debian and RH are
               | using now, but I imagine they would if they were shipping
               | static binaries.
               | 
               | Rust isn't really the point here, it's the age old static
               | vs dynamic linking argument. Rust (or rather, Cargo)
               | already tracks which version of a dependency a library
               | depends on (or a pattern to resolve one), but it's
               | beside the point.
        
               | ok123456 wrote:
               | Rust is the issue here because it doesn't give you much
               | of an option. And that option is the wrong one if you
               | need to do an emergency upgrade of a particular library
               | system-wide.
        
               | packetlost wrote:
               | It's really not, it's not hard to do a reverse search of
               | [broken lib] <= depends on <= [rust application] and then
               | rebuild everything that matches. You might have to
               | rebuild more, but that's not really _hard_ with modern
               | build infrastructure.
               | 
               | Not to mention if you have a Rust application that
               | depends on C libraries, it already dynamically links on
               | most platforms. You only need to rebuild if a Rust crate
               | needs to be updated.
        
               | steveklabnik wrote:
               | > imagined
               | 
               | Cargo already has this information for every project it
               | builds. That other systems do not is their issue, but
               | it's not a theoretical design.
        
               | ok123456 wrote:
               | So, I know that librustxz has been compromised. I'm
               | Debian. I must dive into each rust binary I distribute as
               | part of my system and inspect their Cargo.toml files.
               | Then what? Do I fork each one, bump the version, hope it
               | doesn't break everything, and then push an emergency
               | release!??!
        
               | steveklabnik wrote:
               | > I must dive into each rust binary I distribute as part
               | of my system and inspect their Cargo.toml
               | 
               | A few things:
               | 
               | 1. It'd be Cargo.lock
               | 
               | 2. Debian, in particular, processes Cargo's output here
               | and makes individual debs. So they've taken advantage of
               | this to already know via their regular package manager
               | tooling.
               | 
               | 3. You wouldn't dive into and look through these by hand,
               | you'd have it as a first-class concept. "Which packages
               | use this package" _should_ be table stakes for a package
               | manager.
               | 
               | > Then what? Do I fork each one, bump the version, hope
               | it doesn't break everything, and then push an emergency
               | release!??!
               | 
               | The exact same thing you do in this current situation? It
               | depends on what the issue is. Cargo isn't magic.
               | 
               | The point is just that "which libraries does the binary
               | depend on" isn't a problem with actual tooling.
               | 
               | People already run tools like cargo-vet in CI to catch
               | versions of packages that may have issues they care
               | about.
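               | 
               | To make that concrete: Cargo.lock is just TOML, so even
               | without cargo itself (cargo tree -i <crate> answers this
               | directly) the reverse lookup is a few lines. A rough
               | sketch using the toml crate, with "xz2" (the Rust
               | bindings to liblzma) as the example:
               | 
               |   use std::fs;
               | 
               |   fn main() {
               |       let bad = "xz2"; // crate we're worried about
               |       let lock = fs::read_to_string("Cargo.lock")
               |           .expect("run next to a Cargo.lock");
               |       let doc: toml::Value =
               |           toml::from_str(&lock).expect("invalid TOML");
               | 
               |       let empty = Vec::new();
               |       let packages = doc
               |           .get("package")
               |           .and_then(|p| p.as_array())
               |           .unwrap_or(&empty);
               | 
               |       for pkg in packages {
               |           let name = pkg
               |               .get("name")
               |               .and_then(|n| n.as_str())
               |               .unwrap_or("?");
               |           // Lock entries look like "xz2" or "xz2 0.1.7".
               |           let depends = pkg
               |               .get("dependencies")
               |               .and_then(|d| d.as_array())
               |               .map_or(false, |deps| deps.iter().any(|d| {
               |                   d.as_str().map_or(false, |s| {
               |                       s.split(' ').next() == Some(bad)
               |                   })
               |               }));
               |           if depends {
               |               println!("{name} depends on {bad}");
               |           }
               |       }
               |   }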
        
               | ok123456 wrote:
               | > The exact same thing you do in this current situation?
               | It depends on what the issue is. Cargo isn't magic.
               | 
               | False. In the current situation, you just release a new
               | shared library that is used system-wide.
        
               | steveklabnik wrote:
               | Okay, so the analogous situation here is that you release
               | a new version of the library, and rebuild. Done.
        
               | ok123456 wrote:
               | Except that's not the case at all with Rust.
        
               | packetlost wrote:
               | Except it _is_. The system package maintainers release a
               | new build of the package in question and then you install
               | it. There's not really anything else to do here. There's
               | nothing special about Rust in this context, it would be
               | exactly the same scenario on, for example, Musl libc
               | based distros with any C application.
        
               | ok123456 wrote:
               | And Alpine Linux is largely a mistake.
        
               | packetlost wrote:
               | That's not an argument, nor is it productive. Nobody even
               | mentioned Alpine. Go away.
        
               | uecker wrote:
               | Fundamentally there is no difference. In practice Rust
               | makes things a lot worse. It encourages the use of
               | dependencies from random (i.e. published with cargo)
               | sources without much quality control. It is really a
               | supply chain disaster to happen. A problem like this
               | would propagate much faster. Here the threat actor had to
               | work hard to get his library updated in distributions and
               | at each step there was a chance that this is detected.
               | Now think about a Rust package automatically pulling in
               | transitively 100s of crates. Sure, a distribution can
               | later figure out what was affected and push upgrades to
               | all the packages. But fundamentally, we should minimize
               | dependencies and we should have quality control at each
               | level (and ideally we should not run code at build time).
               | Cargo goes into the full opposite direction. Rust got
               | this wrong.
        
               | pcwalton wrote:
               | Whether a hypothetical alternate world in which Rust
               | didn't have a package manager or didn't make sharing code
               | easy would be better or worse than the world we live in
               | isn't an interesting question, because in that world
               | nobody would use Rust to begin with. Developers have
               | expected to be able to share code with package managers
               | ever since Perl 5 and CPAN took off. Like it or not,
               | supply chain attacks are things we have to confront and
               | take steps to solve. Telling developers to avoid
               | dependencies just isn't realistic.
        
               | packetlost wrote:
               | > It encourages the use of dependencies from random (i.e.
               | published with cargo) sources without much quality
               | control. It is really a supply chain disaster waiting to
               | happen.
               | 
               | Oh I 100% agree with this, but that's not what was being
               | talked about. That being said, I don't think the
               | distribution model is perfect either: it just has a
               | different set of tradeoffs. Not all software has the same
               | risk profile, not all software is a security boundary
               | between a system and the internet. I 100% agree that the
               | sheer number of crates that the average Rust program
               | pulls in is... not good, but it's also _not_ the only
               | language/platform that does this (npm, pypi, pick-your-
               | favorite-text-editor, etc.), so singling out Rust in that
               | context doesn't make sense either, it only makes sense
               | when comparing it to the C/C++ "ecosystem".
               | 
               | I'm also somewhat surprised that the conclusion people
               | come to here is that dynamic linking is a solution to the
               | problem at hand or even a strong source of mitigation:
               | it's really, really not. The ability to, at almost any
               | time, swap out what version of a dependency something is
               | running is what allowed this exploit to happen in the
               | first place. The fact that there was dynamic linking _at
               | all_ dramatically increased the blast radius of what was
               | affected by this, not decreased it. It only provides a
               | benefit once discovered, and that benefit is mostly in
               | terms of fewer packages needing to be rebuilt and updated
               | by distro maintainers and users. Ultimately, supply-chain
               | security is an incredibly tough problem that is far more
               | nuanced than valueless "dynamic linking is better than
               | static linking" statements can even come close to
               | communicating.
               | 
               | > A problem like this would propagate much faster. Here
               | the threat actor had to work hard to get his library
               | updated in distributions and at each step there was a
               | chance that this is detected.
               | 
               | It wouldn't though, because programs would have had to
               | have been rebuilt with the backdoored versions. The
               | bookkeeping would be harder, but the blast radius would
               | have probably been smaller with static linking _except_
               | in the case where the package is meticulously maintained
               | by someone who bumps their dependencies constantly or if
               | the exploit goes unnoticed for a long period of time. That's
               | trouble no matter what.
               | 
               | > Now think about a Rust package automatically pulling in
               | transitively 100s of crates.
               | 
               | Yup, but it only happens _at build time_. The blast
               | radius has different time-domain properties than with
               | shared libraries. See above. 100s of crates is
               | ridiculous, and IMO the community could (and should) do a
               | lot more to establish which crates are maintained
               | appropriately and are actually being monitored.
               | 
               | > Sure, a distribution can later figure out what was
               | affected and push upgrades to all the packages.
               | 
               | This is trivial to do with build system automation and a
               | small modicum of effort. It's also what already happens,
               | no?
               | 
               | > But fundamentally, we should minimize dependencies and
               | we should have quality control at each level
               | 
               | Agreed, the Rust ecosystem has its own tooling for
               | quality control. Just because it's not maintained by the
               | distro maintainers doesn't mean it's not there. There is
               | a lot of room for improvement though.
               | 
               | > (and ideally we should not run code at build time).
               | Cargo goes into the full opposite direction. Rust got
               | this wrong.
               | 
               | Hard, hard, hard disagree. Nearly every language requires
               | executing arbitrary code at compile time, yes, even a
               | good chunk of C/C++. A strong and consistent build system
               | is a positive in this regard: it would be much harder to
               | obfuscate an attack like this in a Rust build.rs because
               | there's not multiple stages of abstraction with an
               | arbitrary number of ways to do it. As it stands, part of
               | the reason the xz exploit was even _possible_ was because
               | of the disaster that is autotools. I would argue the Rust
               | build story is significantly better than the average
               | C/C++ build story. Look at all the comments here
               | describing the "autotools gunk" that is used to obfuscate
               | what is actually going on. Sure, you could do something
               | similar for Rust, but it would look _weird_, not "huh, I
               | don't understand this, but that's autotools for ya, eh?"
               | 
               | To be clear, I agree with you that the state of Rust and
               | its packaging is not ideal, but I don't think it
               | necessarily made _wrong_ decisions, it's just immature
               | as a platform, which is something that can and will be
               | addressed.
        
               | steveklabnik wrote:
               | Ok well have a nice day I guess.
        
             | JanisErdmanis wrote:
             | This seems to be an orthogonal issue. Rust could build the
             | same dynamic library with cargo which could then be
              | distributed. The difference is that there would be a single
             | way to build things.
        
               | ok123456 wrote:
               | Most Rust libraries are not dynamically linked; instead,
               | versions are pinned and included statically during the
               | build process. This is touted as a feature.
               | 
               | Only a few projects are built as system-wide libraries
               | that expose a C-compatible abi interface; rsvg comes to
               | mind.
        
               | timschmidt wrote:
               | It's not touted as a feature by any Rust developers I
               | know of. The Rust ABI is merely still stabilizing. See:
               | https://github.com/rust-lang/rust/pull/105586
        
             | YetAnotherNick wrote:
              | I am not completely sure about this exploit, but it seems
              | like a binary needed to be modified for the exploit to
              | work[1], which was later picked up by the build system.
             | 
             | https://github.com/tukaani-
             | project/xz/commit/6e636819e8f0703...
        
               | ok123456 wrote:
               | The binary was an xz test file that contained a script
               | that patched the C code.
        
           | klysm wrote:
           | People are going to be upset with this perspective but I
           | completely agree. The whole autoconf set of tools is a
           | complete disaster.
        
           | arp242 wrote:
           | You don't need a complex obscure build system for most C
           | code. There's a lot of historical baggage here, but many
           | projects (including xz, I suspect) can get away with a fairly
           | straight-forward Makefile. Doubly so when using some GNU make
           | extensions.
        
             | buserror wrote:
              | Thanks for that post. I wish people stopped pushing ever
              | more complicated build systems, opaque and non-backward
              | compatible between their own versions, when a 2-page
              | Makefile would work just fine, and still work in 20 years'
              | time.
        
           | bonzini wrote:
           | Rust is the worst in terms of build system transparency. Ever
           | heard of build.rs? You can hide backdoors in any crate, or in
           | any crate's build.rs, or the same recursively.
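           | 
           | To illustrate: this is a perfectly ordinary-looking build.rs,
           | and it runs with the full privileges of whoever typed "cargo
           | build", for every crate in the dependency tree that ships
           | one. Harmless here, but nothing constrains what it does:
           | 
           |   // build.rs -- executed at build time, before any of
           |   // "your" code.
           |   use std::env;
           |   use std::fs;
           |   use std::path::PathBuf;
           | 
           |   fn main() {
           |       // Looks like routine code generation...
           |       let out = PathBuf::from(env::var("OUT_DIR").unwrap());
           |       fs::write(
           |           out.join("generated.rs"),
           |           "pub const BUILT_BY: &str = \"build.rs\";\n",
           |       )
           |       .unwrap();
           | 
           |       // ...but it could just as easily read dotfiles, patch
           |       // a vendored .c file sitting next to it, or spawn any
           |       // process it likes.
           |       println!("cargo:rerun-if-changed=build.rs");
           |   }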
        
             | timschmidt wrote:
             | Most build systems are turing-complete. Rust, at least,
             | drastically reduces the need for custom build scripts (most
             | of my projects have empty build.rs files or lack one
             | entirely), and build.rs being in the same language as the
             | rest of the codebase aids transparency immensely.
        
               | bonzini wrote:
               | That doesn't make build.rs any less of a juicy target for
               | a supply chain attack.
               | 
               | Arbitrary code downloaded from the internet and run at
               | build time? That's a nightmare scenario for auditing,
               | much worse than anything Autotools or CMake can offer.
        
               | timschmidt wrote:
               | You're not wrong about arbitrary code execution. It's
               | just that your statement applies to most of the packages
               | on any linux distribution, Autotools and CMake included,
               | regardless of language. Many more so than Rust, due to the
               | aforementioned features of Cargo and build.rs not
               | requiring me to be an expert in a second language just to
               | audit it.
        
               | bonzini wrote:
               | Packages in a Linux distro are not built on my machine,
               | they are built by the distro in a sandbox. Every time I
               | type "cargo build" I am potentially running arbitrary
               | code downloaded from the internet. Every time I type
               | "make" in an Autotools program only my code runs.
               | 
               | > not requiring me to be an expert in another language
               | just to audit it.
               | 
               | Do you do that every time your Cargo.lock changes?
        
               | timschmidt wrote:
               | > Every time I type "make" in an Autotools program only
               | my code runs.
               | 
               | Says who? Make is just as good at calling arbitrary code
               | as Cargo. Including code that reaches out over the
               | network. Have you audited every single makefile to ensure
               | that isn't the case?
        
               | bonzini wrote:
               | I am talking about _my_ makefiles. They don't
               | automatically build dependencies that I have no control
               | over.
               | 
               | Whereas building _my_ crate can run code locally that no
               | one has ever audited.
        
               | timschmidt wrote:
               | So... you're complaining about what could happen in a
               | Rust build if you include a library without examining
               | that library first? How do you think that is different
               | from doing the same in any other language?
        
               | bonzini wrote:
               | The difference is that in another language the build step
               | is delegated to someone else who has packaged the code,
               | and every version has presumably gone through some kind
               | of audit. With Rust I have no idea what new transitive
               | dependencies could be included any time I update one of
               | my dependencies, and what code could be triggered _just
               | by building my program_ without even running it.
               | 
               | Again, we're not talking about the dependencies that I
               | choose, but the whole transitive closure of dependencies,
               | including the most low-level. Did you examine serde the
               | first time _you used a dependency that used it_? serde
               | did have in the past a slightly sketchy case of using a
               | pre-built binary. Or the whole dependency tree of Bevy?
               | 
               | I mean, Rust has many advantages but the cargo supply
               | chain story is an absolute disaster---not that it's
               | alone, pypi or nodejs or Ruby gems are the same.
        
               | timschmidt wrote:
               | > The difference is that in another language the build
               | step is delegated to someone else who has packaged the
               | code
               | 
               | Fedora packages a large number of Rust libraries, just as
               | you describe. Nothing prevents you from using the
               | packaged libraries if you prefer them.
               | 
               | You may find helpful information here:
               | https://docs.fedoraproject.org/en-US/packaging-
               | guidelines/Ru...
        
               | fragmede wrote:
               | seems trivial for a configure script to call curl/wget
               | somewhere in the depths of it, no?
        
               | timschmidt wrote:
               | Exactly. And at least Cargo will refuse to download a
               | crate which has been yanked. So any crate which has been
               | discovered to be compromised can be yanked, preventing
               | further damage even when someone has already downloaded
               | something which depends on it.
               | 
               | Building packages with up-to-date dependencies is also
               | vastly preferable to building against ancient copies of
               | libraries vendored into a codebase at some point in the
               | past, a situation I see far too often in C/C++ codebases.
        
               | Hackbraten wrote:
               | Debian's rules files often deliberately sinkhole the
               | entire network during the build. It's not the worst idea.
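                | 
                | You can approximate that locally too; a rough sketch
                | using util-linux (Linux only, and it assumes unprivileged
                | user namespaces are enabled):
                | 
                |     # run the whole build with no network access at all
                |     unshare --net --map-root-user sh -c './configure && make'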
        
               | fragmede wrote:
               | I wonder if you could do it inside the config script
               | without the network.
        
           | msm_ wrote:
           | Is this _really_ the lesson here? We are talking about a
           | maintainer here, who had access to signing keys and a full
           | access to the repository. Deb packages which were distributed
           | are also different than the source code. Do you honestly
           | believe that the (arguably awful) autotools syntax is the
           | single root cause of this mess, Rust will save us from
           | everything, and this is what we should take away from this
           | situation?
        
           | delfinom wrote:
           | I call bullshit.
           | 
           | The fundamental problem here was a violation of chain of
           | trust. Open source is only about the source being open. But
           | if users are just downloading blobs with prebuilt binaries or
           | even _pre-generated scripts_ that aren't in the original
           | source, there is nothing a less-obscure build system will
           | save you from as you are putting your entire security on the
           | chain of trust being maintained.
        
       | perihelions wrote:
       | Imagine a more competent backdoor attempt on xz(1)--one that
       | wouldn't have been noticed this quickly. xz is everywhere. They
       | could pull off a "reflections on trusting trust": an xz which
       | selectively modifies a tiny subset of the files it sees, like
       | .tar.xz software tarballs underlying certain build processes. Not
       | source code tarballs (someone might notice)--tarballs
       | distributing pre-compiled binaries.
       | 
       | edit to add: Arch Linux' entire package system used to run on
       | .tar.xz binaries (they switched to Zstd a few years ago [0]).
       | 
       | [0] https://news.ycombinator.com/item?id=19478171 ( _" Arch Linux
       | propose changing compression method from xz to zstd
       | (archlinux.org)"_)
        
         | nolist_policy wrote:
         | deb packages are xz compressed...
        
           | 1oooqooq wrote:
           | my freaking kernels/initrd are xz or zstd compressed!
        
           | nolist_policy wrote:
           | ... and Debian is very serious about it:
           | https://fulda.social/@Ganneff/112184975950858403
        
         | joeyh wrote:
         | A backdoored xz could also run payloads hidden inside other xz
         | files, allowing targeted attacks.
        
         | Phenylacetyl wrote:
         | The same authors have also contributed to Zstd
        
           | joeyh wrote:
           | details please? I do not see any such contributions to
           | https://github.com/facebook/zstd
        
             | delfinom wrote:
             | They are probably getting confused.
             | 
             | Jia had a zstd fork on github, but when things kicked off,
             | it appears they may have sanitized the fork.
        
       | alright2565 wrote:
       | https://github.com/tukaani-project/tukaani-project.github.io...
       | 
       | > Note: GitHub automatically includes two archives Source code
       | (zip) and Source code (tar.gz) in the releases. These archives
       | cannot be disabled and should be ignored.
       | 
       | The author was thinking ahead! Latest commit hash for this repo:
       | 8a3b5f28d00ebc2c1619c87a8c8975718f12e271
        
         | o11c wrote:
         | For a long time, there was one legitimately annoying
         | disadvantage to the git-generated tarballs though - they lost
         | tagging information. However, since git 2.32 (released June
         | 2021; presumably available on GitHub by August 2021 when they
         | blogged about it) you can use `$Format:%(describe)$` ...
         | limited to once per repository for performance reasons.
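          | 
          | For anyone curious, the mechanism is the export-subst
          | attribute; roughly like this (the file name is made up):
          | 
          |     echo 'version.txt export-subst' >> .gitattributes
          |     echo '$Format:%(describe)$' > version.txt
          |     # commit both; `git archive <tag>` then expands the
          |     # placeholder to the `git describe` output (git >= 2.32)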
        
           | rany_ wrote:
            | Except this change was made in 2023; it is just scary how
            | good this threat actor was.
        
           | legobmw99 wrote:
            | I believe they also do not include submodules, which is a
            | big disadvantage for some projects.
        
             | xvilka wrote:
             | Yes. Also, GitHub recently made some upgrades that forced
             | checksum changes on the autogenerated archives:
             | https://github.blog/changelog/2023-01-30-git-archive-
             | checksu...
        
         | rom1v wrote:
         | Btw, this is not the only project providing a source tarball
         | different from the git repo, for example libusb also does this
         | (and probably others):
         | 
         | -
         | https://github.com/libusb/libusb/issues/1468#issuecomment-19...
         | 
         | - https://github.com/orgs/community/discussions/6003
        
           | cryptonector wrote:
           | It's very common in autoconf codebases because the idea is
           | that you untar and then run `./configure ...` rather than
           | `autoreconf -fi && ./configure ...`. But to do that either
           | you have to commit `./configure` or you have to make a
           | separate tarball (typically with `make dist`). I know because
           | two projects I co-maintain do this.
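            | 
            | Concretely, the two workflows look roughly like this:
            | 
            |     # from a `make dist` style release tarball:
            |     ./configure && make && make install
            |     # from a raw git checkout (needs autoconf/automake installed):
            |     autoreconf -fi && ./configure && make && make install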
        
             | Too wrote:
              | What's the problem with running "autoreconf -fi" though?
             | 
              | Very strange argument. It's like saying our source release
              | only contains a prebuilt binary because otherwise the user
              | would have to run "make".
             | 
             | If that's such a big hassle for your downstream consumers,
             | maybe one should use something better than autoconf in the
             | first place.
        
               | bhaak wrote:
               | For running autoreconf you need to have autotools
               | installed and even then it can fail.
               | 
               | I have autotools installed and despite that autoreconf
               | fails for me on the xz git repository.
               | 
                | The idea of having configure as a convoluted shell script
                | is that it runs everywhere without any additional
                | dependencies. If it isn't committed to the repository,
                | you're burdening your consumers with having compilation
                | dependencies installed that are not needed for running
                | your software.
        
               | Too wrote:
               | Yes...For running gcc you need to have gcc installed.
               | 
               | You don't need gcc to run the software. It's not
               | burdening anyone that gcc was needed to build the
               | software.
               | 
               | It's very standard practice to have development
               | dependencies. Why should autoconf be treated
               | exceptionally?
               | 
               | If they fail despite being available it's either a sign
               | of using a fragile tool or a badly maintained project.
               | Both can be fixed without shipping a half-pre-compiled-
               | half-source repo.
        
               | bhaak wrote:
               | The configure script is not a compilation artifact.
               | 
                | The more steps you add to get to the final product, the
                | more errors are possible. It's much easier for you as the
                | project developer to generate the script, so you should
                | do it.
               | 
               | If it's easier for you to generate the binary, you should
               | do it as well (reproducible binaries of course). That's
               | why Windows binaries are often shipped. With Linux
               | binaries this is much harder (even though there are
                | solutions now). With OSX it depends on whether you have
                | the newest CPU architecture or not.
        
               | cryptonector wrote:
               | > If it's easier for you to generate the binary, you
               | should do it as well (reproducible binaries of course).
               | 
               | I think that's the crux of what you're saying. But
               | consider that if Fedora, Debian, etc. accepted released,
               | built artifacts from upstreams then it would be even
               | easier to introduce backdoors!
               | 
               | Fedora, Debian, Nix -all the distros- need to build _from
               | sources_ , preferably from sources taken from upstreams'
               | version control repositories. Not that that would prevent
               | backdoors -it wouldn't!- but that it would at least make
               | it easier to investigate later as the sources would all
               | be visible to the distros (assuming non-backdoored build
               | tools).
        
               | meinersbur wrote:
               | Autotools are not backwards-compatible. Often only a
               | specific version of autotools works. Only the generated
               | configure is supposed to be portable.
               | 
                | Running autoreconf is also not the distribution model for
                | an Autotools project. Historically, project distributions
                | would include a handwritten configure file that users
                | would run: the usual `./configure && make && make
                | install`. Since those configure scripts became more and
                | more complex to support diverse combinations of compiler
                | and OS, the idea of Autotools was for maintainers to
                | generate the script. Autotools itself was not meant to be
                | executed by the user:
                | https://en.wikipedia.org/wiki/GNU_Autotools#Usage
        
             | bhaak wrote:
              | It's common but it's plain wrong. A "release" should allow
              | building the project without installing dependencies that
              | are only there for compilation.
             | 
             | Autotools are not guaranteed to be installed on any system.
              | For example, they aren't on the OSX runners of GitHub
              | Actions.
             | 
              | It's also a UX issue. autoreconf failures are pretty
              | common. If you don't make it easy for your users to
             | actually use your project, you lose out on some.
        
               | GrayShade wrote:
               | > A "release" should allow to build the project without
               | installing dependencies that are only there for
               | compilation.
               | 
               | Like a compiler or some -devel packages?
        
               | bhaak wrote:
               | If the compiler is some customized or hard to build
               | version then yes, they should be included.
               | 
               | The more steps you add to get to the final product the
               | more likely it is to run into problems.
        
               | cryptonector wrote:
               | > [...] A "release" should allow to build the project
               | without installing dependencies that are only there for
               | compilation.
               | 
               | Built artifacts shouldn't require build-time dependencies
               | to be installed, yes, but we're talking about source
               | distributions. Including `./configure` is just a way of
               | reducing the configuration-/build-time dependencies for
               | the user.
               | 
               | > Autotools are not guaranteed to be installed on any
               | system. [...]
               | 
               | Which is why this is common practice.
               | 
               | > It's common but it's plain wrong.
               | 
               | Strong word. I'm not sure it's "plain wrong". We could
               | just require that users have autoconf installed in order
               | to build from sources, or we could commit `./configure`
               | whenever we make a release, or we could continue this
               | approach. (For some royal we.)
               | 
               | But stopping this practice won't prevent backdoors. I
               | think a lot of people in this thread are focusing on this
               | as if it was the source of all evils, but it's really
               | not.
        
       | londons_explore wrote:
       | I think the lesson here for packagers is that binary testdata
       | should not be present while doing the build.
       | 
       | It is too easy to hide things in testdata.
        
         | yencabulator wrote:
         | Nice idea, but then you just hide the attack in logo.png that
         | gets embedded in the binary. Less useful for libraries, works
         | plenty good for web/desktop/mobile.
        
           | consumer451 wrote:
           | This entire thread is above my pay grade, but isn't
           | minimizing the attack surface always a good thing?
        
             | ReflectedImage wrote:
             | It's all irrelevant. The attacker social engineered their
             | way to being the lead maintainer for the project.
        
             | StressedDev wrote:
             | The problem with the parent's suggestion is you end up
             | banning lots of useful techniques while not actually
             | stopping hackers from installing back doors or adding
             | security exploits. The basic problem is once an attacker
             | can submit changes to a project, the attacker can do a lot
             | of damage. The only real solution is to do very careful
             | code reviews. Basically, having a malicious person get code
             | into a project is always going to be a disaster. If they
             | can get control of a project, it is going to be even worse.
        
               | consumer451 wrote:
               | > The only real solution is to do very careful code
               | reviews.
               | 
               | Are there any projects that are well resourced enough to
               | do this consistently, including all dependencies?
        
       | wood_spirit wrote:
       | A lot of eyes will be dissecting this specific exploit, and
       | investigating this specific account, but how can we find the same
       | kind of attack in a general way if it's being used in other
       | projects and using other contributor names?
        
         | londons_explore wrote:
         | Note that the malicious binary is fairly long and complex.
         | 
         | This attack can be stopped by disallowing any binary testdata
         | or other non-source code to be on the build machines during a
         | build.
         | 
         | You could imagine a simple process which checks out the code,
         | then runs some kind of entropy checker over the code to check
         | it is all unminified and uncompressed source code, before
         | finally kicking off the build process.
         | 
          | Autogenerated files would also not be allowed in the source
          | repo - they're too long and could easily hide bad stuff.
          | Instead, the build process should generate them during the
          | build.
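          | 
          | Something as crude as a compressibility check would already
          | flag most binary blobs; a sketch, using compressed size as a
          | rough entropy proxy (thresholds made up):
          | 
          |     for f in $(git ls-files); do
          |       [ -f "$f" ] || continue
          |       orig=$(wc -c < "$f" | tr -d ' ')
          |       [ "$orig" -gt 4096 ] || continue
          |       comp=$(gzip -c "$f" | wc -c | tr -d ' ')
          |       # real source compresses well; incompressible files are suspect
          |       if [ $((comp * 100 / orig)) -gt 95 ]; then
          |         echo "suspiciously incompressible: $f"
          |       fi
          |     done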
        
           | trulyrandom wrote:
           | This requires a more comprehensive redesign of the build
           | process. Most Linux distributions also run the tests of the
           | project they're building as part of the build process.
        
             | uecker wrote:
             | The code that runs during testing should not be allowed to
             | affect the package though. If this is possible, this is
             | misdesigned.
        
               | saulrh wrote:
                | Profile-guided optimization is, unfortunately, wildly
                | powerful. And it has a hard requirement that a causal
                | link exists from test data (or production data!) to the
                | build process.
        
         | treffer wrote:
          | 1. Everything must be visible. A diff between the release
          | tarball and the tag should be unacceptable. This one was
          | hidden from view to begin with.
         | 
         | 2. Build systems should be simple and obvious. Potentially not
         | even code. The inclusion was well hidden.
         | 
         | 3. This was caught through runtime inspection. It should be
         | possible to halt any Linux system at runtime, load debug
         | symbols and map _everything_ back to the source code. If
         | something can't map back then regard it as a potentially
         | malicious blackbox.
         | 
          | There has been a strong focus and joint effort to make
          | distributions reproducible. What we haven't managed, though,
          | is to prove that the project comprises only freshly compiled
          | content. Sorta like a build-time / runtime "libre" proof.
         | 
         | This should exist for good debugging anyway.
         | 
         | It wouldn't hinder source code based backdoors or malicious
         | vulnerable code. But it would detect a backdoor like this one.
         | 
         | Just an initial thought though, and probably hard to do, but
         | not impossibly hard, especially for a default server
         | environment.
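          | 
          | As a tiny first step in that direction, you can already ask
          | what a running sshd has mapped and whether the on-disk
          | library matches the package database (package names differ
          | per distro; sketch only):
          | 
          |     grep lzma "/proc/$(pidof -s sshd)/maps"
          |     dpkg -V liblzma5     # Debian-ish systems
          |     rpm -V xz-libs       # RPM-ish systems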
        
         | Avamander wrote:
          | More reproducible builds, maybe even across distributions?
          | Builds based on specific commits (no tarballs like in this
          | case), possibly signed (just for attribution, not for security
          | per se)? Allow fewer unsafe/runtime modifications. The way
          | oss-fuzz ASAN was disabled should've been a warning on its
          | own, if these issues weren't so common.
         | 
         | I'm not aware of any efforts towards it, but libraries should
         | also probably be more confined to only provide intended
         | functionality without being able to hook elsewhere?
        
         | JonChesterfield wrote:
         | The Guix full source bootstrap is looking less paranoid as time
         | goes on
        
         | mac-chaffee wrote:
         | Build-related fixes are only treating the symptoms, not the
         | disease. The real fix would be better sandboxing and
         | capability-based security[1] built into major OSes which make
         | backdoors a lot less useful. Why does a compression library
         | have the ability to "install an audit hook into the dynamic
         | linker" or anything else that isn't compressing data? No amount
         | of SBOMs, reproducible builds, code signing, or banning
         | binaries will change the fact that one mistake anywhere in the
         | stack has a huge blast radius.
         | 
         | [1]: https://en.wikipedia.org/wiki/Capability-based_security
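          | 
          | Not the capability model meant above, but it is already
          | sobering to see how little confinement a typical service gets
          | today; e.g. on a systemd box (unit name varies by distro):
          | 
          |     systemd-analyze security sshd.service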
        
           | mkleczek wrote:
           | That's why I always raise concerns about JEP 411 - removal of
           | SecurityManager from Java without any replacement.
        
           | someguydave wrote:
           | Just ban autotools
        
         | afiodorov wrote:
          | We should be able to produce a tar and a proof that the tar
          | was produced from specific source code.
          | 
          | Quote from the article:
          | 
          |     That line is not in the upstream source of build-to-host,
          |     nor is build-to-host used by xz in git.
         | 
          | Zero-knowledge virtual machines, like cartesi.io, might help
          | with this. The idea is to take the source, run a bunch of
          | computational steps (compilation & archiving) and at the same
          | time produce some kind of signature that certain steps were
          | executed.
          | 
          | The verifiers can then easily check the signature and be
          | convinced that the code was executed as claimed and that the
          | source code wasn't tampered with.
          | 
          | The advantage of zero-knowledge technology in this case is
          | that one doesn't need to repeat the computational steps
          | themselves nor rely on a trusted party to do it for them
          | (like an automated build - that can also be compromised by
          | state actors). Just having the proof solves this trust
          | problem mathematically: if you have the proof & the tar, you
          | can quickly check that the source code that produced the tar
          | wasn't modified.
        
           | renonce wrote:
            | I don't think zero-knowledge systems are practical at the
            | moment. It would take around 8 orders of magnitude more
            | compute and memory to produce a ZK proof of a generic
            | computation like compilation. Even 2 orders of magnitude is
            | barely acceptable.
        
             | afiodorov wrote:
             | I've been told verifiable builds are possible already, I
             | don't know how practical though:
             | 
             | twitter.com/stskeeps/status/1774019709739872599
        
       | Tenobrus wrote:
       | It looks like the person who added the backdoor is in fact the
       | current co-maintainer of the project (and the more active of the
       | two): https://tukaani.org/about.html
        
         | kzrdude wrote:
         | Makes me wonder if he's an owner of the github organization,
         | and what happens with it now?
        
         | kzrdude wrote:
         | In various places they say Lasse Collin is not online right
         | now, but he did make commits a week ago
         | https://git.tukaani.org/?p=xz.git;a=summary
        
       | Scaevolus wrote:
        | It's wild that this could have lain dormant for far longer if
        | the exploit was better written -- if it didn't slow down logins
        | or disturb valgrind.
        
       | jiripospisil wrote:
       | Now consider that your average Linux distribution pulls in tens
       | of thousands of packages, each of which can be similarly
       | compromised. Pretty scary to think about.
        
         | RGamma wrote:
         | The terrible desktop software security model of
         | weak/essentially non-existent security boundaries at run and
         | compile time makes this all the more spicy.
         | 
         | Computer security for billions runs on the simultaneous
         | goodwill of many thousand contributors. Optimistically said
         | it's actually a giant compliment to the programming community.
         | 
         | And this is not even talking about hardware backdoors that are
         | a million times worse and basically undetectable when done
         | well. The myriad ways to betray user trust at any level of
         | computation make me dizzy...
        
         | afh1 wrote:
         | I have exactly 719 packages on my Gentoo box, just rebuilt
         | everything as part of the profile 23 upgrade.
        
       | Luker88 wrote:
       | @people who write github scanners for updates and security issues
       | (dependabot and the like)
       | 
       | Can we start including a blacklist of emails and names of
       | contributors (with reasons/links to discussions)?
       | 
       | I can't track them and I don't want them in my projects.
       | 
       | Might not be very helpful as it is easy to create new identities,
       | but I see no reason to make it easier for them. Also, I might
       | approach differently someone with lots of contributions to known
       | projects than a new account, so it still helps.
        
         | arp242 wrote:
          | It takes a minute to create a new email address. And you can
          | change or fake an email address on a git commit trivially. You,
          | too, can commit code as anyone you want by just doing "git
          | commit --author='Joe Biden <icecream@whitehouse.gov>'". On the
          | internet nobody knows you're Joe Biden.
        
         | nine_k wrote:
         | You can write a rather simple GitHub action that would do that:
         | look at a PR and reject / close it if you don't like it for
         | some reason. AFAIK open-source projects have a free quota of
         | actions.
         | 
          | OTOH, sticking to the same email for more than one exploit
          | might not be as wise for a malicious agent.
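          | 
          | A rough sketch of such a check (the denylist path, its one-
          | address-per-line format, and the branch name are all
          | assumptions):
          | 
          |     DENYLIST=.github/contributor-denylist.txt
          |     # every author/committer address introduced by the PR
          |     if git log --format='%ae%n%ce' origin/main..HEAD | sort -u \
          |          | grep -ixFf "$DENYLIST"; then
          |       echo 'PR contains commits from denylisted addresses' >&2
          |       exit 1
          |     fi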
        
         | the8472 wrote:
         | github already suspended the account
        
       | n3uman wrote:
       | https://github.com/tukaani-project/tukaani-project.github.io...
        | Does it mean anything that this was changed to a parameter?
        
         | danielhlockard wrote:
         | no. unlikely.
        
       | Aissen wrote:
       | Looks like one of the backdoor authors even went and disabled the
       | feature the exploit relied on directly on oss-fuzz to prevent
       | accidental discovery:
       | https://social.treehouse.systems/@Aissen/112180302735030319
       | https://github.com/google/oss-fuzz/pull/10667
       | 
       | But luckily there was some serendipity: "I accidentally found a
       | security issue while benchmarking postgres changes."
       | https://mastodon.social/@AndresFreundTec/112180083704606941
        
         | miduil wrote:
         | This is getting addressed here: https://github.com/google/oss-
         | fuzz/issues/11760
        
         | nialv7 wrote:
          | This in and of itself can be legitimate. ifunc has real uses
          | and it indeed does not work when the sanitizer is enabled. A
          | similar change
         | in llvm: https://github.com/llvm/llvm-
         | project/commit/1ef3de6b09f6b21a...
        
           | kzrdude wrote:
            | Because of the exploit, though: why should we run
            | configurations in production that were not covered by these
            | tests?
        
         | throwaway290 wrote:
         | and that was in mid 2023. Very funny that Wikipedia on this
         | issue says
         | 
         | > It is unknown whether this backdoor was intentionally placed
         | by a maintainer or whether a maintainer was compromised
         | 
         | Yeah, if you've been compromised for a year your attacker is
         | now your identity. Can't just wave hands, practice infosec
         | hygiene
        
       | mdip wrote:
        | Anyone keeping current with OpenSUSE Tumbleweed got an
        | update... downgrade. Prior to `zypper dup --no-allow-vendor-
        | change` I had 5.6.0; now I'm at 5.4.6.
        
         | intel_brain wrote:
         | I see `5.6.1.revertto5.4-3.2`
        
       | weinzierl wrote:
       | The backdoor is not in the C source directly, but a build script
       | uses data from files in the test dir to only create the backdoor
       | in the release tars. Did I summarize that correctly?
        
         | soneil wrote:
          | That's how I understand it. A build script that's in the
          | release tarballs but not the git repo checks to see if it's
          | being run as part of a Debian or RPM build process, and then
          | injects content from one of the "test" files.
        
           | bombcar wrote:
           | I could imagine another similar attack done against an image
           | processing library, include some "test data" of corrupted
           | images that should "clean up" (and have it actually work!)
           | but the corruption data itself is code to be run elsewhere.
        
       | sylware wrote:
        | This is why less is better... even if it means less comfort...
        | up to a certain point, obviously. And that includes SDKs...
        
         | hgs3 wrote:
         | I don't understand why you were downvoted. Having fewer moving
         | parts does make it easier to catch issues.
        
           | sylware wrote:
            | Everything which is not engaging in licking Big Tech balls
            | (open source or not) on HN is served with severe downvoting,
            | most of the time by what are probably real trash human
            | beings or AI trolls with headless blink/gecko|webkit.
        
       | crispyambulance wrote:
       | I am not embarrassed to say... is there anything in there that
       | someone who runs a server with ssh needs to know?
       | 
       | I literally can't make heads or tails of the risk here. All I see
       | is the very alarming and scary words "backdoor" and "ssh server"
       | in the same sentence.
       | 
       | If I am keeping stuff up to date, is there anything at all to
       | worry about?
        
         | pxx wrote:
         | You should probably not be running your own publicly-accessible
         | ssh servers if this email is not sufficient to at least start
         | figuring out what your next actions are.
         | 
         | The email itself comes with an evaluation script to figure out
         | if anything is currently vulnerable to specifically this
         | discovery. For affected distributions, openssh servers may have
         | been backdoored for at least the past month.
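          | 
          | A quick first check, for what it's worth (this is not the
          | script from the email; 5.6.0 and 5.6.1 are the affected
          | upstream releases, and package names vary by distro):
          | 
          |     xz --version
          |     dpkg-query -W liblzma5 2>/dev/null || rpm -q xz-libs 2>/dev/null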
        
           | crispyambulance wrote:
           | Yet here I am, getting up every morning and getting dressed
           | and tying my shoes all by myself, and then maintaining a
           | small number of servers that have openssh on them!
           | 
           | Thanks, though, for pointing out the little script at the
           | very end of that technical gauntlet of an email intended for
           | specialists. I had gotten through the first 3 or 4 paragraphs
           | and had given up.
           | 
           | What I should have done is just googled CVE-2024-3094,
           | whatever, still glad I asked.
        
           | ShamelessC wrote:
           | > You should probably not be running your own publicly-
           | accessible ssh servers if this email is not sufficient to at
           | least start figuring out what your next actions are.
           | 
           | That seems like a fairly unreasonable stance.
        
             | frenchman99 wrote:
             | Not at all. For instance, I don't know what the next steps
             | are, but I run SSH servers behind Wireguard, exactly to
             | prevent them being accessible in the case of such events.
              | Wireguard is simple to set up, even if I lack the
              | expertise to understand exactly how to go forward.
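              | 
              | (For reference, the key-generation step is a one-liner
              | per peer; the file names are arbitrary:)
              | 
              |     umask 077
              |     wg genkey | tee server.key | wg pubkey > server.pub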
        
         | dualbus wrote:
         | > I literally can't make heads or tails of the risk here. All I
         | see is the very alarming and scary words "backdoor" and "ssh
         | server" in the same sentence.
         | 
          | From what I've read, there are still lots of unknowns about
          | the scope of the problem. What has been uncovered so far
          | indicates it involves bypassing authentication in SSH.
         | 
         | In https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78b
         | ..., Sam James points out
         | 
         | > If this payload is loaded in openssh sshd, the
         | RSA_public_decrypt function will be redirected into a malicious
         | implementation. We have observed that this malicious
         | implementation can be used to bypass authentication. Further
         | research is being done to explain why.
         | 
          | Thus, an attacker could perhaps use this to connect to
          | vulnerable servers without needing to authenticate at all.
        
           | crispyambulance wrote:
           | Thanks, that gist is a really lucid explanation for normal
           | folks.
        
       | sschueller wrote:
       | So much for a quiet Easter holiday. Fuck
        
       | multimoon wrote:
        | It seems, based on the (very well written) analysis, that this
        | is a way to bypass ssh auth, not something that phones out,
        | which would've been even scarier.
        | 
        | My server runs Arch w/ an LTS kernel (which sounds dumb on the
        | surface, but was by far the easiest way to do ZFS on Linux that
        | wasn't Ubuntu). Since I don't have SSH exposed to the outside
        | internet, for good reason, and since my understanding is that
        | Arch never patched sshd to begin with, I and most people in
        | similar situations are unaffected.
       | 
       | Still insane that this happened to begin with, and I feel bad for
       | the Archlinux maintainers who are now going to feel more pressure
       | to try to catch things like this.
        
         | NekkoDroid wrote:
          | Being included via libsystemd isn't the only way sshd can
          | load liblzma; it can come in as an indirect dependency of
          | SELinux (and its PAM stack), IIUC. Which makes it even a bit
          | more funny (?) since Arch also doesn't officially support any
          | SELinux stuff.
         | 
         | There might be other ways sshd might pull in lzma, but those
         | are the 2 ways I saw commonly mentioned.
         | 
         | On a different note, pacman/makepkg got the ability to checksum
         | source repository checkouts in 6.1.
        
       | LeoPanthera wrote:
       | xz is just a horribly designed format, and always has been. If
       | you use it, please switch to Lzip. Same compression level, but
       | designed by someone competent.
       | 
       | https://www.nongnu.org/lzip/
       | 
       | https://www.nongnu.org/lzip/xz_inadequate.html
        
         | someguydave wrote:
         | Thanks for that link, lzip sounds useful
        
       | gmnon wrote:
        | Funny how Lasse Collin started cc'ing himself and Jia Tan from
        | 2024-03-20 (that was a day with tons of xz kernel patches); he
        | never did that before. :)
       | 
       | https://lore.kernel.org/lkml/20240320183846.19475-2-lasse.co...
        
         | ncr100 wrote:
         | Also interesting, to me, how the GMail account for the backdoor
         | contributor ONLY appears in the context of "XZ" discussions.
         | Google their email address. Suggests a kind of focus, to me,
         | and a lack of reality / genuineness.
        
           | fullstop wrote:
           | This also means that Google might know who they are, unless
           | they were careful to hide behind VPN or other such means.
        
         | bombcar wrote:
         | This is extremely suspicious.
         | 
          | It looks like someone may have noticed an unmaintained or
          | lightly maintained project related to various things, and moved
          | to take control of it.
          | 
          | Elsewhere in the discussion here someone mentions the domain
          | details changed; if you have control of the domain, you have
          | control of all emails associated with it.
        
         | ui2RjUen875bfFA wrote:
         | those pipe usages are quite suspicious
         | 
         | https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-n...
         | 
         | https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-n...
         | 
          | piping into this shell script which now uses "eval"
         | 
         | https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-n...
         | 
          | I guess this will be revisited and removed soon.
        
           | Hackbraten wrote:
            | > piping into this shell script which now uses "eval"
           | 
           | I don't actually see an issue with that `eval`. Why would one
           | consider running `xz` followed by `eval`-ing its output more
           | insecure than just running `xz`? If `xz` wants to do
           | shenanigans with the privileges it already has, then it
           | wouldn't need `eval`'s help for that.
        
             | ui2RjUen875bfFA wrote:
             | just take a closer look at the analysis
             | https://www.openwall.com/lists/oss-security/2024/03/29/4
             | 
              | then try to understand the pattern: they planted the
              | backdoor by modifying the build process of packages. Now
              | consider that $XZ may itself come from a backdoored
              | build, and that the call gives it plenty to recognize:
              | the --robot --version parameters, the shell environment,
              | and the hint "xz_wrap.sh" from the piped process. That's
              | a lot of signal for the $XZ process to recognize that it
              | is running as part of a kernel build.
              | 
              | Maybe they put advanced stuff in a backdoored $XZ binary
              | to modify the kernel in a similar way to how they
              | modified lzma-based packages in the build process.
        
             | Bulat_Ziganshin wrote:
              | Because in order to put a backdoor into the xz
              | executable, you need to infect its sources. And in order
              | to infect the sources, you need to use a similar
              | technique to hide the modification.
        
          | bonzini wrote:
          | "Started to cc himself" seems to be simply "contributing to a
          | new project and not having git-send-email fully set up". By
          | default git-send-email Ccs the sender, though in practice it's
          | one of the first options one changes.
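          | 
          | (One way to turn that off, for reference:)
          | 
          |     git config --global sendemail.suppresscc self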
        
       | xyst wrote:
       | Time for another OS wipe. Glad I keep bleeding edge versions VMd
        
       | pushedx wrote:
       | Mirror of the report, since the Openwall servers appear to be
       | down.
       | 
       | https://web.archive.org/web/20240329182300/https://www.openw...
        
       | markus_zhang wrote:
        | Makes one wonder how many similar backdoors are out there in
        | the wild. What is the best way to execute such a move? This one
        | is sophisticated enough, but not good enough to stay unnoticed
        | for a long while. If I were a state actor I'd plan for at least
        | 6-12 months.
        
       | wannacboatmovie wrote:
        | Really disappointed in the number of posters here who are
        | playing this down, rushing to judgement and suggesting perhaps
        | a legitimate developer was compromised, when it's very clear
        | this is sophisticated and not the work of a single person.
       | 
       | I'm recalling bad memories of the Juniper backdoor years ago.
       | 
       | Whoever did this, was playing the long game. As the top post
       | pointed out, there was an effort to get this into Fedora....
       | which eventually makes its way into RHEL (read: high value
       | targets). This was not for short term payoffs by some rogue
       | developer trying to mine crypto or other such nonsense. What you
       | are seeing here is the planting of seeds for something months or
       | a year down the road.
        
       | jcalvinowens wrote:
        | Oof, this is on my Sid laptop:
        | 
        |     {0}[calvinow@mozart ~] dpkg-query -W liblzma5
        |     liblzma5:amd64  5.6.0-0.2
        |     {0}[calvinow@mozart ~] hexdump -ve '1/1 "%.2x"' \
        |       /lib/x86_64-linux-gnu/liblzma.so.5 | grep -c \
        |       f30f1efa554889f54c89ce5389fb81e7000000804883ec28488954241848894c2410
        |     1
       | 
       | Glad I stopped running sshd on my laptop a long time ago... still
       | probably going to reinstall :/
        
         | msm_ wrote:
         | No obvious need to reinstall if you didn't use ssh _and_ expose
         | it publicly _and_ are not a politically important person. All
         | signs suggest that it was a nation state attack, and you are
         | likely not a target.
        
           | jcalvinowens wrote:
           | We'll see... given that sshd is just one of many possible
            | argv[0] it may have chosen to act on, I'm going to be a
           | little paranoid until it's been fully analyzed. It just takes
           | half an hour to reinstall, I have some shows to catch up on
           | anyway :)
        
             | frenchman99 wrote:
             | I was thinking about reinstalling, because I'm on Manjaro
             | Linux, which has the version in question.
             | 
             | But it's unclear if earlier versions are also vulnerable.
             | 
             | And if it did nasty things to your machine, how do you make
             | sure that the backups you have do not include ways for the
             | backdoor to reinstate itself?
        
               | jcalvinowens wrote:
               | Sure, the backdoor could have e.g. injected an libav
               | exploit into a video file to re-backdoor my system when I
               | watch it... that's too paranoid for me.
               | 
               | I don't backup the whole system, just a specific list of
               | things in /home.
        
       | liveoneggs wrote:
       | The best part is everyone disabling security tests that started
       | failing
        
       | c_rrodriguez wrote:
        | Everybody here is jumping on the pure-malice bandwagon; I have
        | a better hypothesis.
        | 
        | Abandonment and inaction: the actual developers of these tools
        | are elsewhere, oblivious to this drama, trying to make a
        | living, because most of the time you are not compensated and no
        | corporation cares about making things sustainable at all. This
        | is the default status of everything your fancy cloud depends on
        | underneath.
        | 
        | An attacker slowly took over the project and stayed dormant
        | until recently.
        
         | johnklos wrote:
         | Except that doesn't match reality.
         | 
         | Someone has worked on xz for several years. Are you saying that
         | this somewhat active contributor was likely actively
         | contributing, then all of a sudden stopped, also stopped paying
         | attention, and also allowed their account to be compromised or
         | otherwise handed it over to a nefarious party?
         | 
         | That fails the sniff test.
        
           | c_rrodriguez wrote:
            | See, people drop out of OSS projects pretty frequently,
            | usually because they take on other life responsibilities
            | and there is no cushion or guard against the bus factor.
            | Then it is very easy for credentials to get compromised or
            | for your project to be taken over by someone else.
        
         | dkarras wrote:
         | funding model of OSS work is obviously a problem, but these
         | problems are deeper than that. even a very well compensated OSS
         | developer can get a knock on the door from a government agency
         | (or anyone with a "$5 wrench")[1] and they might feel
         | "compelled" to give up their maintainer creds.
         | 
         | [1]: https://xkcd.com/538/
        
         | ColonelPhantom wrote:
          | Well, yeah. The attacker, operating largely under the name Jia
          | Tan, successfully manipulated the original author (Lasse
          | Collin) into making them a maintainer.
          | 
          | The attacker indeed lay dormant for two years, pretending to
          | just be maintaining xz.
         | 
         | I really don't see any way how this wasn't malice on Jia's
         | part. But I do think your hypothesis applies to Lasse, who was
         | just happy someone could help him maintain xz.
        
       | elchief wrote:
       | "Amazon Linux customers are not affected by this issue, and no
       | action is required. AWS infrastructure and services do not
       | utilize the affected software and are not impacted. Users of
       | Bottlerocket are not affected."
       | 
       | https://aws.amazon.com/security/security-bulletins/AWS-2024-...
        
       | ikekkdcjkfke wrote:
        | GitHub should probably remove the dopamine hits of green
        | checkmarks etc., like serious stock-broker apps do.
        
         | Nathanba wrote:
          | They should also remove the emojis; there is no need to make
          | people feel good about upvotes. I've long felt uncomfortable
          | with emojis on Slack as well. Responding to a coding or
          | infrastructure issue should not be a social activity: I
          | respond because it's my job and the issue is worth it, not
          | because a human being (them or me) should feel appreciated.
        
           | Jonnax wrote:
           | Many people write code for fun and slack is a social
           | communications platform.
           | 
           | If you can't imagine people using these tools for other
           | reasons than pure unemotional business value then you don't
           | understand their market.
           | 
           | Your suggestions would lose those platforms users and
           | revenue.
        
           | dpkirchner wrote:
            | The emojis reduce (but don't eliminate) the number of "me
            | too!"s PRs will get, which IMO is a good thing.
        
       | elintknower wrote:
        | Candidly, how would someone protect against a vulnerability
        | like this?
        
         | anononaut wrote:
          | Compiling all your packages from source would be a start.
        
           | Hackbraten wrote:
           | You're not wrong. However, building from source wouldn't have
           | protected you against this specific backdoor. The upstream
           | source tarball itself was compromised in a cleverly sneaky
           | way.
        
             | ui2RjUen875bfFA wrote:
             | You might read https://www.openwall.com/lists/oss-
             | security/2024/03/29/4
             | 
             | "However, building from source wouldn't have protected you
             | against this specific backdoor." Depends on how exactly you
             | build from source. A generic build was not the target.
             | Andres Freund showed that the attack was targeted against a
             | specific type of build system.
        
             | zamalek wrote:
              | Building from git, or from the GitHub auto-generated
              | tarball, would have. The larger issue here is
              | authenticating tarballs against the source.
        
         | devttyeu wrote:
          | Build from source AND run an AI agent that reviews every
          | single line of code you compile (while hoping that any
          | potential exploit doesn't also fool / exploit your AI agent).
        
       | kosolam wrote:
       | Jesus! Does anyone know if Debian stable is affected?
        
         | ValdikSS wrote:
         | It's not. Neither Ubuntu.
        
           | anononaut wrote:
           | Do you have a source my friend? I thought Ubuntu was built
           | off of Debian testing or unstable
        
             | ValdikSS wrote:
             | The latest version in 23.10 is 5.4.1-0.2
             | 
             | https://packages.ubuntu.com/mantic/liblzma5
             | 
             | And in unreleased 24.04 is 5.4.5-0.3
             | 
             | https://packages.ubuntu.com/noble/liblzma5
             | 
             | There are no changelog entries indicating that the package
             | was reverted.
        
         | djao wrote:
         | The stable releases don't have this particular backdoor, but
         | they're still using older versions of the library that were
         | released by the same bad actor.
        
       | bhaak wrote:
        | I looked at the differences between the GitHub repository and
        | the released packages. About 60 files are in a release package
        | that are not in the repo (most are generated files for
        | building), and some of the .po files have changes as well.
        | 
        | That's devastating.
        | 
        | If you don't build your release packages by feeding "git ls-
        | files" into tar, you are doing it wrong.
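        | 
        | A minimal sketch of what that looks like (GNU tar; the output
        | name is arbitrary):
        | 
        |     git ls-files -z | tar --null --files-from=- -czf release.tar.gz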
        
         | icommentedtoday wrote:
         | Why not `git archive`?
        
           | bhaak wrote:
           | Because I didn't know about it.
           | 
            | Although if I look at its documentation, it's already a
            | somewhat complicated invocation with unclear effects (lots
            | of command-line options). Git seems to not be able to do
            | KISS.
            | 
            | git ls-files and tar is a simple thing everybody understands
            | and can use without much issue.
        
         | cryptonector wrote:
         | https://news.ycombinator.com/item?id=39872062
        
         | sesuximo wrote:
         | I think this is unfortunately very common practice
        
       | oxymoron290 wrote:
        | Jia Tan's commit history on his GitHub profile suggests he took
        | off for Christmas, New Year's, and spring break. I smell an
        | American.
        
         | bloak wrote:
         | Interesting. Is there also a pattern in the times of day? (I
         | don't so much mean the times in commits done by the developer
         | because they can be fake. I'd be more interested in authentic
         | times recorded by GitHub, if any such times are publicly
         | accessible.)
         | 
         | Another thing would be to examine everything ever written by
         | the user for linguistic clues. This might point towards
         | particular native languages or a particular variant of English
         | or towards there being several different authors.
        
           | bombcar wrote:
            | Someone said commits lined up with Beijing time, but I've
            | not verified that.
            | 
            | But that wouldn't count for much; someone employed by
            | anyone could work any hours.
        
             | rany_ wrote:
              | Also, git actually stores the timezone information; you
              | can see it is consistently China time (GMT+8). A quick
              | way to check this is sketched at the end of this comment.
              | 
              | P.S. could be Taiwanese, as China and Taiwan share the
              | same timezone.
             | 
             | Below are links to the git mailbox files where you could
             | see the timezone.
             | 
             | From 2022:
             | 
             | - https://github.com/tukaani-
             | project/xz/commit/c6977e740008817...
             | 
             | - https://github.com/tukaani-
             | project/xz/commit/7c16e312cb2f40b...
             | 
             | From 2024:
             | 
             | - https://github.com/tukaani-
             | project/xz/commit/af071ef7702debe...
             | 
             | - https://github.com/tukaani-
             | project/xz/commit/a4f2e20d8466369...
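              | 
              | To check this yourself from a clone of the repo,
              | something like the following shows the recorded offsets
              | (+0800 and so on):
              | 
              |     git log --format='%h  %ai  %ci  %an' --author='Jia Tan'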
        
               | Perenti wrote:
               | Maybe it's a Western Australian, or Indonesian, or
               | Thai...
               | 
               | GMT+8 covers a lot of places
        
         | rdtsc wrote:
         | Sometimes you smell an American because someone wanted you to
         | smell an American.
         | 
         | Operating on a target region schedule doesn't seem particularly
         | sophisticated, at least compared to the all the efforts put
         | into this exploit.
        
       | returningfory2 wrote:
       | Another interesting data point: about 2 years ago there was a
       | clear pressure campaign to name a new maintainer:
       | https://www.mail-archive.com/xz-devel@tukaani.org/msg00566.h...
       | 
       | At the time I thought it was just rude, but maybe this is when it
       | all started.
        
         | leosanchez wrote:
         | How many people are involved in this ?
        
           | masklinn wrote:
           | Could be just a single person with a bunch of identities.
        
             | IshKebab wrote:
             | I would put money on government hackers. They're the sort
             | of people that have the time to pull something like this
             | off. Frankly I'm really surprised it isn't more common,
             | though maybe it is and these guys were just super blatant.
             | I would have expected more plausible deniability.
        
           | JaDogg wrote:
           | Good cop bad cop play maybe.
        
         | jamespo wrote:
         | "Jigar Kumar" seems to have disappeared
        
           | Nathanba wrote:
           | true, that is suspicious as well. A person that hasn't even
           | created any bugs or issues suddenly has a big problem with
           | the speed of development? Especially the way this was
           | phrased: "You ignore the many patches bit rotting away on
           | this mailing list. Right now you choke your repo. Why wait
           | until 5.4.0 to change maintainer? Why delay what your repo
           | needs?"
           | 
           | "Why delay what your repo needs?" This sounds like scammer
           | lingo
        
         | matsemann wrote:
         | Wow, people suck. I almost hope it's fake profiles urging the
         | maintainer to take on a new member as a long con. Because I
         | sincerely hope Jigar Kumar is not a real person behaving like
         | that towards volunteers working for free.
        
       | MaximilianEmel wrote:
       | Has this affected OpenBSD at all?
        
         | ikmckenz wrote:
         | Seems the backdoor relied on Debian and others patching their
         | copies of openssh to support systemd notifications, and this
         | would obviously not be the case on OpenBSD.
         | 
         | To be sure the current ports version of xz is 5.4.5:
         | https://cvsweb.openbsd.org/cgi-bin/cvsweb/~checkout~/ports/a...
         | 
         | Although the maintainer was working on updating to 5.6.1, but
         | this news broke before the diff was landed:
         | https://marc.info/?l=openbsd-ports&m=171174441521894&w=2
        
       | port443 wrote:
        | I think it's much more likely this was not a bad actor, given
        | their long history of commits.
       | 
       | It's a known fact that China will "recruit" people to operate
       | them. A quote:
       | 
       | > They talk to them, say my friend, I see you like our special
       | menu. Are you from China? Are you here on a VISA? Do you have
       | family back there? Would you like your family to stay alive? Is
       | your loyalty to this temporary employer or is your loyalty to
       | your motherland? You know, a whole bunch of stuff like that.
       | That's how Chinese intelligence operations acts...
       | 
       | This just gives feelings of less "compromised account" and more
       | "Your account is now our account"
        
         | Johnny555 wrote:
         | Isn't that still a "bad actor" even if they are coerced into
         | it?
        
           | foobiekr wrote:
           | Yes.
        
           | Terr_ wrote:
           | For the purposes of security discussions, I would say yes.
           | You often don't know their real identity let alone their
           | motivations and tribulations.
           | 
           | However if we were critiquing characters in a book--
           | especially ones where narrative voice tells us exactly their
           | true motivations--then maybe not, and they get framed as a
           | "dupe" or "manipulated" etc.
        
           | Almondsetat wrote:
           | "bad actor" doesn't mean "bad faith", it's not a value
           | judgement
        
           | ip26 wrote:
           | I believe your parent is trying to make a distinction that
           | the handle's history may not be suspect, only recent
           | activity, positing a rubber-hose type compromise.
        
         | zeroCalories wrote:
          | I think we should seriously consider something like a TS
          | clearance as mandatory for work on core technologies. Many
          | other projects, both open and closed, are probably compromised
          | by foreign agents.
        
           | Meetvelde wrote:
           | That's hard to do when the development of these libraries is
           | so international. Not to mention that it's already so hard to
           | find maintainers for some of these projects. Given that
           | getting a TS clearance is such a long and difficult process,
           | it would almost guarantee more difficulty in finding people
           | to do this thankless job.
        
             | zeroCalories wrote:
             | It doesn't need to be TS for open source (but for
             | closed, I'm leaning yes). But all code for these core
             | technologies needs to be tied to a real person who can
             | be charged in Western nations. Yes, it will make it
             | harder to get people, but with how important these
             | technologies are, we really should not be using some
             | random guy's code in the kernel.
        
               | guinea-unicorn wrote:
               | Don't forget that the NSA bribed RSA (the company) to
               | insert a backdoor into their RNG. Being in western
               | jurisdiction doesn't mean you won't insert backdoors into
               | code. It just changes whom you will target with these
               | backdoors. But they all equally make our technology less
               | trustworthy so they are all equally despicable.
        
               | zeroCalories wrote:
               | It will significantly cut down on Russian and Chinese
               | back doors, which is still an improvement, Mr. Just Made
               | an Account.
        
           | rwmj wrote:
           | That just means the bad actors will all have clearance while
           | putting in a bunch of hurdles for amateur contributors. The
           | only answer is the hard one, constant improvement in methods
           | to detect and mitigate bugs.
        
             | zeroCalories wrote:
             | "Constant improvement" sounds like "constantly playing
             | catch-up". Besides that, someone with TS can be arrested
             | and charged, and I don't want amateur contributors.
        
               | msm_ wrote:
               | >and I don't want amateur contributors.
               | 
               | And you're free to not accept amateur contributions to
               | the OS projects you maintain. Hell, you can require
               | security clearance for your contributors right now, if
               | you want.
        
               | zeroCalories wrote:
               | Software like that already exists. I'm saying open source
               | should do better.
        
           | cesarb wrote:
           | > I think we should seriously consider something like a ts
           | clearance as mandatory for work on core technologies.
           | 
           | Was xz/lzma a core technology when it was created? Is my tiny
           | "constant time equality" Rust crate a core technology? Even
           | though it's used by the BLAKE3 crate? By the way, is the
           | BLAKE3 crate a core technology? Will it ever become a core
           | technology?
           | 
           | With free software in general, things do not start as a
           | "core technology"; they become a "core technology" over
           | time due to usage. At which point would a maintainer have
           | to get a TS clearance? Would the equivalent of a TS
           | clearance from my Latin American country be acceptable?
           | And how would I obtain
           | it? Is it even available to people outside the military and
           | government (legit question, I never looked)?
        
             | zeroCalories wrote:
             | We probably shouldn't use your code at all, is the real
             | answer. You can get TS, it just costs a lot of money.
        
               | stackskipton wrote:
               | In the United States, you cannot apply for a clearance.
               | You must get a job that requires a clearance, then
               | start the application process and wait.
        
               | msm_ wrote:
               | Who is "we"? Are you from the US by any chance? Do you
               | mean that the US government should rewrite every piece of
               | core architecture (including Linux, SSH, Nginx...) from
               | scratch? Because they are all "contaminated" and were
               | actually created by non-Americans.
               | 
               | If that's the case, you do you. Do you also think that
               | all other countries should do the same, and rewrite
               | everything from scratch for their government use (without
               | foreign, for example American, influence)? And what about
               | companies? Should they be forced to switch to their
               | government's "safe" software, or can they keep using
               | Linux and ssh? What about multi-national companies? And
               | what even counts as a "core" software?
               | 
               | So yeah, I don't think it's a good idea.
        
               | zeroCalories wrote:
               | We can keep it between NATO plus friends.
        
               | AnonymousPlanet wrote:
               | Wow, I can't decide which is the bigger act of sabotage
               | to open source, your ideas or the actual backdoor.
        
           | csdreamer7 wrote:
           | The Linux kernel is complaining about a lack of funding
           | for CI, and it's one of the highest-visibility projects
           | out there. Where will the money come from for this?
           | 
           | Corps? Aside from Intel, most of them barely pay to
           | upstream their drivers.
           | 
           | The govt? The US federal government has cut so much of its
           | support since the 70s and 80s.
        
             | zeroCalories wrote:
             | You're right, but accepting code from random Gmail accounts
             | can't be the solution. Honestly the Linux kernel is a
             | bloated mess, and will probably never be secured.
        
               | imiric wrote:
               | Accepting code from any source without properly reviewing
               | it is surely the actual problem, no? This person only
               | infiltrated this project because there was no proper
               | oversight.
               | 
               | Maintainers need to be more stringent and vigilant of the
               | code they ship, and core projects that many other
               | projects depend upon should receive better support,
               | financial and otherwise, from users, open source funds
               | and companies alike. This is a fragile ecosystem that
               | this person managed to exploit, and they likely weren't
               | the only one.
        
               | zeroCalories wrote:
               | Maintainers can't fully review all code that comes in.
               | They don't have the resources. Even if they could give it
               | a good review, a good programmer could probably still
               | sneak stuff in. That's assuming a maintainer wasn't
               | compromised, like in this case. We need a certain level
               | of trust that the contributors are not malicious.
        
               | Hackbraten wrote:
               | Definitely this.
               | 
               | I've been a package maintainer for a decade. I make it a
               | habit to spot check the source code of every update of
               | every upstream package, hoping that if many others do the
               | same, it might make a difference.
               | 
               | But this backdoor? I wouldn't have been able to spot it
               | to save my life.
        
               | imiric wrote:
               | This wasn't caused by not reviewing the code of a
               | dependency. This was a core maintainer of xz, who
               | gradually gained trust and control of the project, and
               | was then able to merge changes with little oversight. The
               | failure was in the maintenance of xz, which would of
               | course be much more difficult to catch in dependent
               | projects. Which is why it's so impressive that it was
               | spotted by an OpenSSH user. Not even OpenSSH maintainers
               | noticed this, which points to a failure in their
               | processes as well, to a lesser degree.
               | 
               | I do agree that it's unreasonable to review the code of
               | the entire dependency tree, but reviewing own code
               | thoroughly and direct dependencies casually should be the
               | bare minimum we should expect maintainers to do.
        
               | Hackbraten wrote:
               | > Not even OpenSSH maintainers noticed this, which points
               | to a failure in their processes as well, to a lesser
               | degree.
               | 
               | The OpenSSH project has nothing to do with xz. The
               | transitive dependency on liblzma was introduced by a
               | patch written by a third party. [1] You can't hold
               | OpenSSH project members accountable for something like
               | this.
               | 
               | [1]: https://bugs.debian.org/778913
        
               | imiric wrote:
               | Alright, that's fair. But I mentioned them as an example.
               | Surely liblzma is a dependency in many projects, and
               | _none_ of them noticed anything strange, until an end
               | user did?
               | 
               | This is a tragedy of the commons, and we can't place
               | blame on a single project besides xz itself, yet we can
               | all share part of the blame to collectively do better in
               | the future.
        
               | imiric wrote:
               | One of the primary responsibilities of a maintainer is to
               | ensure the security of the software. If they can't keep
               | up with the pace of development in order to ensure this
               | for their users, then this should be made clear to the
               | community, and a decision should be made about how to
               | proceed. Open source maintenance is an often stressful
               | and thankless role, but this is part of the problem that
               | allowed this to happen. Sure, a sophisticated attacker
               | would be able to fool the eyes of a single tired
               | maintainer, but the chances of that happening are much
               | smaller if there's a stringent high bar of minimum
               | quality, and at least one maintainer understands the code
               | that is being merged in. Change proposals should never be
               | blindly approved, regardless of who they come from.
               | 
               | At the end of the day we have to be able to answer why
               | this happened, and how we can prevent it from happening
               | again. It's not about pointing fingers, but about
               | improving the process.
               | 
               | BTW, there have been several attempts at introducing
               | backdoors in the Linux kernel. Some managed to go through,
               | and perhaps we don't know about others, but many were
               | thwarted due to the extreme vigilance of maintainers.
               | Thankfully so, as everyone is well aware of how critical
               | the project is. I'm not saying that all projects have the
               | resources and visibility of Linux, but clearly vigilance
               | is a requirement for lowering the chances of this
               | happening.
               | 
               | > That's assuming a maintainer wasn't compromised, like
               | in this case.
               | 
               | What makes you say that? Everything I've read about this
               | (e.g. [1]) suggests that this was done by someone who
               | also made valid contributions and gained gradual control
               | of the project, where they were allowed to bypass any
               | checks, if they existed at all. The misplaced trust in
               | external contributions, and the lack of a proper peer
               | review process are precisely what allowed this to happen.
               | 
               | [1]: https://boehs.org/node/everything-i-know-about-the-xz-backdo...
        
               | zeroCalories wrote:
               | My understanding is that the attacker was effectively
               | the sole maintainer of xz and was trusted by the
               | maintainers downstream. They couldn't realistically
               | check his work.
               | The defence against this can't be "do better, volunteer
               | maintainers". Maybe we could have better automated
               | testing and analysis, but OSS is allergic to those.
        
               | imiric wrote:
               | Sure, I'm not saying this is the only solution, or that
               | it's foolproof. But this should be a wake up call for
               | everyone in the OSS community to do better.
               | 
               | Projects that end up with a single maintainer should
               | raise some flags, and depending on their importance, help
               | and resources should be made available. We've all seen
               | that xkcd, and found it more amusing than scary.
               | 
               | One idea to raise awareness: a service that scans
               | projects on GitHub and elsewhere, and assigns maintenance
               | scores, depending on various factors. The bus factor
               | should be a primary one. Make a scoreboard, badges,
               | integrate it into package managers and IDEs, etc. GitHub
               | itself would be the ideal company to implement this, if
               | they cared about OSS as much as they claim to do.
        
               | zeroCalories wrote:
               | Okay, so instead of one random Gmail account taking over
               | a critical project, we need two or three?
        
           | isbvhodnvemrwvn wrote:
           | This only ensures the backdoors are coming from governments
           | that issued the clearances, nothing more. I prefer more
           | competition, at least there is incentive to detect those
           | issues.
        
             | zeroCalories wrote:
             | It will ensure that my OS doesn't have code from random
             | Gmail accounts. If someone with U.S. clearance submits a
             | backdoor, they should either be charged in the U.S., or
             | extradited to somewhere that will charge them. We have no
             | idea who this person is, and even if we did we probably
             | could not hold them accountable.
        
           | vmladenov wrote:
           | This seems infeasible for projects like LLVM that depend on
           | international collaboration.
        
           | AnonymousPlanet wrote:
           | That's a very US-centric view and would practically split
           | the open source community along the Atlantic at best and
           | fracture it globally at worst. Be careful what you wish
           | for.
        
             | zeroCalories wrote:
             | I trust NATO members.
        
               | AnonymousPlanet wrote:
               | Oh, how generous.
        
           | colinsane wrote:
           | how many people in PRISM had such clearance? and how many of
           | them would i trust? precisely zero.
        
           | failbuffer wrote:
           | Killing your pipeline for innovation and talent development
           | doesn't make you secure, it makes you fall behind. The Soviet
           | Union found this out the hard way when they made a policy
           | decision to steal chip technology instead of investing in
           | their own people. They were outpaced and the world came to
           | use chips, networks, and software designed by Americans.
        
             | zeroCalories wrote:
             | That's the exact opposite of what I'm saying we do. We need
             | to invest in engineers we can trust, and cut off those we
             | can't.
        
               | mardifoufs wrote:
               | Who's "we"? Americans? Sure, that's fine for you, but
               | Americans aren't exactly trustworthy outside of the US
               | either, and I say that as someone who's usually pro-US.
               | This sort of mentality just shows a lack of
               | understanding of how most of the world sees the US.
               | Even in places like, say, France, the US is seen as an
               | ally but a very untrustworthy one. Especially since,
               | out of all the confirmed backdoors up until now, most
               | of them were actually US-made.
               | 
               | If this backdoor turns out to be linked to the US, what
               | would your proposal even solve?
        
               | zeroCalories wrote:
               | "We" doesn't have to be the U.S. This is a false
               | dichotomy that I see people in this thread keep pushing.
               | I suspect in bad faith, by the people that want to insert
               | backdoors. As a baseline, we could limit contributors
               | to NATO members and friends. If a programmer is caught
               | backdooring, they can be charged and extradited to and
               | from whatever country.
        
               | arter4 wrote:
               | If it's just an extradition issue, the US has extradition
               | treaties with 116 countries. You'd still have to 1)
               | ensure that user is who they say they are (an ID?) and 2)
               | they are reliable and 3) no one has compromised their
               | accounts.
               | 
               | 1) and 3) (and, to an extent, 2)) are routinely done,
               | to some degree, by your average security-conscious
               | employer. Your employer knows who you are and probably
               | put some thought into how to keep your accounts from
               | getting hacked.
               | 
               | But what is reliability? Could be anything from "this
               | dude has no outstanding warrants" to "this dude has been
               | extensively investigated by a law enforcement agency with
               | enough resources to dig into their life, finances,
               | friends and family, habits, and so on".
               | 
               | I might be willing to go through these hoops for an
               | actual, "real world" job, but submitting myself to months
               | of investigation just to be able to commit into a Github
               | repository seems excessive.
               | 
               | Also, people change, so you'd need to be able to keep
               | track of everyone all the time, in case someone gets
               | blackmailed or otherwise persuaded to do bad things. And
               | what happens if you find out someone is a double agent?
               | Rolling back years of commits can be incredibly hard.
        
               | zeroCalories wrote:
               | Getting a TS equivalent is exactly what helps minimize
               | the chances that someone is compromised. Ideally, such
               | an investigation would be transferable between
               | jobs/projects, like a normal TS clearance is. If
               | someone is caught, rolling back years of work isn't
               | practical, but we probably ought to look very closely
               | at what they've done, as is probably being done with
               | xz now.
        
               | arter4 wrote:
               | I guess it depends on the ultimate goal.
               | 
               | If the ultimate goal is to avoid backdoors _in critical
               | infrastructures_ (think government systems, financial
               | sector, transportation,...) you could force those
               | organizations to use forks managed by an entity like
               | CISA, NIST or whatever.
               | 
               | If the ultimate goal is to avoid backdoors _in random
               | systems_ (i.e. for  "opportunistic attacks"), you have to
               | keep in mind random people and non-critical companies can
               | and will install unknown OSS projects as well as unknown
               | proprietary stuff, known but unmaintained proprietary
               | stuff (think Windows XP), self-maintained code, and so
               | on. Enforcing TS clearances on OSS projects would not
               | significantly mitigate that risk, IMHO.
               | 
               | Not to mention that, as we now know, allies spy and
               | backdoor allies (or at least they try)... so an
               | international alliance doesn't mean intelligence agencies
               | won't try to backdoor systems owned by other countries,
               | even if they are "allies".
        
               | zeroCalories wrote:
               | The core systems of Linux should be secured, regardless
               | of who is using it. We don't need every single open
               | source project to be secured. It's not okay to me that
               | SSH is potentially vulnerable, just because it's my
               | personal machine. As for allies spying on each other,
               | that certainly happens, but is a lot harder to do without
               | significant consequences. It will be even harder if we
               | make sure that every commit is tied to a real person that
               | can face real consequences.
        
               | arter4 wrote:
               | The "core systems of Linux" include the Linux kernel,
               | openssh, xz and similar libraries, coreutils, openssl,
               | systemd, dns and ntp clients, possibly curl and wget
               | (what if a GET on a remote system leaks data?),... which
               | are usually separate projects.
               | 
               | The most practical way to establish some uniform
               | governance over how people use those tools would involve
               | a new OS distribution, kinda like Debian, Fedora,
               | Slackware,... but managed by NIST or equivalent, which
               | takes whatever it wants from upstream and enriches it
               | with other features.
               | 
               | But it doesn't stop here. What about browsers (think
               | about how browsers protect us from XSS)? What about
               | glibc, major interpreters and compilers? How do you deal
               | with random Chrome or VS Code extensions? Not to mention
               | "smart devices"...
               | 
               | Cybersecurity is not just about backdoors, it is also
               | about patching software, avoiding data leaks or
               | misconfigurations, proper password management, network
               | security and much more.
               | 
               | Relying on trusted, TS cleared personnel for OS
               | development doesn't prevent companies from using 5-years
               | old distros or choosing predictable passwords or exposing
               | critical servers to the Internet.
               | 
               | As the saying goes, security is not a product, it's a
               | mindset.
        
               | zeroCalories wrote:
               | We wouldn't have to change the structure of the project
               | to ensure that everyone is trustworthy.
               | 
               | As for applications beyond the core system, that would
               | fall on the individual organizations to weigh the risks.
               | Most places already have a fairly limited stack and do
               | not let you install whatever you want. But given that the
               | core system isn't optional in most cases, it needs extra
               | care. That's putting aside the fact that most projects
               | are worked on by big corps that do go after rogue
               | employees. Still, I would prefer if some of the bigger
               | projects were more secure as well.
               | 
               | Your "mindset" is basically allowing bad code into the
               | kernel and hoping that it gets caught.
        
         | okasaki wrote:
         | A quote from... your arse?
        
           | joveian wrote:
           | That is what I thought too but it wasn't hard to find:
           | 
           | https://darknetdiaries.com/transcript/21/
        
         | threeseed wrote:
         | It's also a known fact that China will coerce people by
         | threatening family and friends.
         | 
         | Seen this happen to friends here in Australia who were
         | attending pro-Taiwan protests.
        
         | dang wrote:
         | We detached this subthread from
         | https://news.ycombinator.com/item?id=39867106.
        
       | k8svet wrote:
       | Wait, I'm on mobile. Did this partially slip by because of the
       | ABSURD PRACTICE of publishing release tarballs that do not
       | correspond 1:1 with the source?
       | 
       | Let me guess, autotools? I want to rage shit post but I guess
       | I'll wait for confirmation first.
       | 
       | EDIT: YUP, AT LEAST PARTIALLY. Fucking god damn autotools.
        
         | hypnagogic wrote:
         | I've been saying this all day: GitHub really needs an
         | automated diff / A-B check of release tarballs against the
         | actual repo, flagging everything with at least a warning
         | (plus additional scrutiny steps) when the tarball doesn't
         | match the repo.
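         | 
         | A minimal sketch of such a check in Python (assuming a local
         | clone of the repo; the tarball path and tag name below are
         | placeholders, and legitimate release tarballs do contain
         | generated files like `configure`, so a mismatch is a flag
         | for review, not proof of tampering):
         | 
         |     import hashlib, io, subprocess, tarfile
         | 
         |     def digests(tf, strip=1):
         |         # Map path -> sha256 of regular files, dropping the
         |         # top-level directory from each member name.
         |         out = {}
         |         for m in tf.getmembers():
         |             if m.isfile():
         |                 path = "/".join(m.name.split("/")[strip:])
         |                 data = tf.extractfile(m).read()
         |                 out[path] = hashlib.sha256(data).hexdigest()
         |         return out
         | 
         |     # Uploaded release tarball (placeholder path).
         |     with tarfile.open("xz-5.6.1.tar.gz") as tf:
         |         uploaded = digests(tf)
         | 
         |     # What the git tag actually contains (placeholder tag).
         |     tar = subprocess.run(
         |         ["git", "archive", "--format=tar",
         |          "--prefix=x/", "v5.6.1"],
         |         capture_output=True, check=True).stdout
         |     with tarfile.open(fileobj=io.BytesIO(tar)) as tf:
         |         tagged = digests(tf)
         | 
         |     for path in sorted(set(uploaded) | set(tagged)):
         |         if uploaded.get(path) != tagged.get(path):
         |             print("MISMATCH:", path)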
        
       | 5p4n911 wrote:
       | The author (Jia Tan) also changed the xz.tukaani.org release
       | description (the site is actually hosted on github.io, where
       | the main contributor is, surprise, also them) to state that
       | all new releases are signed by their OpenPGP key. I'd guess
       | that was one of the first steps toward a complete project
       | takeover.
       | 
       | I hope Lasse Collin still has control of his accounts, though the
       | CC on the kernel mailing list looks kind of suspicious to me.
        
       | 0xthr0w4 wrote:
       | Out of curiosity I looked at the list of followers of the account
       | who committed the backdoor.
       | 
       | Randomly picked https://github.com/Neustradamus and looked at all
       | their contributions.
       | 
       | Interestingly enough, they got Microsoft to upgrade ([0],[1])
       | `vcpkg` to liblzma 5.6.0 three weeks ago.
       | 
       | [0] https://github.com/microsoft/vcpkg/issues/37197
       | 
       | [1] https://github.com/microsoft/vcpkg/pull/37199
        
         | sroussey wrote:
         | OMG: look at the other contributions. He is trying to take over
         | projects and pushing some change to sha256 in a hundred
         | projects.
         | 
         | Example: https://github.com/duesee/imap-flow/issues/96
        
           | masklinn wrote:
           | This guy's interactions seem weird, but it might just be
           | because of the non-native English or a strange attitude,
           | or he's very good at covering his tracks. E.g. I found a
           | cpython issue where he got reprimanded for serially
           | opening issues:
           | https://github.com/python/cpython/issues/115195#issuecomment...
           | 
           | But clicking around, he seems to mostly be interacting
           | with interest around these bits, e.g.
           | https://github.com/python/cpython/issues/95341#issuecomment-...
           | or pinging the entire Python team to link to the PR... of
           | a core Python developer:
           | https://github.com/python/cpython/issues/95341#issuecomment-...
           | 
           | If I saw that on a $dayjob project I'd peg him as an
           | innocuous pain in the ass (overly excited, noisy,
           | dickriding).
           | 
           | Here's a PR from 2020 where he recommends / requests the
           | addition of SCRAM to an SMTP client:
           | https://github.com/marlam/msmtp/issues/36 which is basically
           | the same thing as the PR you found. The linked documents seem
           | genuine, and SCRAM is an actual challenge/response
           | authentication method for a variety of protocols (in this
           | case mostly SMTP, IMAP, and XMPP):
           | https://en.wikipedia.org/wiki/Salted_Challenge_Response_Auth...
           | 
           | Although, and that's a bit creepy, he shows up in the edit
           | history for the SCRAM page. The edits mostly seem innocent,
           | though he does plug his "state of play" GitHub repository.
        
             | robocat wrote:
             | > dickriding
             | 
             | https://www.urbandictionary.com/define.php?term=Dickriding
             | 
             | I guess I'm not in the right demographic to know the term.
        
             | sroussey wrote:
             | True, it does seem innocent enough upon more reflection.
        
           | gowthamgts12 wrote:
           | reported the account to github, just in case.
        
           | arp242 wrote:
           | What? They're just asking for some features there?
           | 
           | Y'all need to calm down; this is getting silly. Half the
           | GitHub accounts look "suspicious" if you start scrutinizing
           | everything down to the microscopic detail.
        
           | gaucheries wrote:
           | I appreciate the way that duesee handled that whole issue.
        
         | asmor wrote:
         | Hey, I remember this guy! He's a buddy of someone who tried
         | to get a bunch of low-quality stuff into ifupdown-ng,
         | including copying code with an incompatible license and
         | removing the notice. He's in every PR, complaining that the
         | "project is dead". He even pushes for that account to be
         | made a "team member".
         | 
         | https://github.com/ifupdown-ng/ifupdown-ng/pulls/easynetdev
         | 
         | He follows 54k accounts though, so it may indeed just be a
         | coincidence.
        
         | neustradamus wrote:
         | Dear @0xthr0w4, are you attacking me because I requested the
         | XZ update?
         | 
         | Do not mix things up; I am not linked to the XZ project.
        
       | gouggoug wrote:
       | A list of pull requests requesting an update to liblzma 5.6.0 [0]:
       | 
       | I wonder what amount of scrutiny all the accounts that proposed
       | the upgrade should be put under.
       | 
       | [0] https://github.com/search?q=liblzma+5.6.0&type=pullrequests
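       | 
       | For the curious, a rough sketch of pulling roughly the same
       | list via the GitHub search API (unauthenticated requests are
       | heavily rate-limited, and the query simply mirrors the web
       | search above):
       | 
       |     import json, urllib.parse, urllib.request
       | 
       |     query = urllib.parse.urlencode(
       |         {"q": "liblzma 5.6.0 type:pr", "per_page": 50})
       |     req = urllib.request.Request(
       |         "https://api.github.com/search/issues?" + query,
       |         headers={"Accept": "application/vnd.github+json"})
       |     with urllib.request.urlopen(req) as resp:
       |         results = json.load(resp)
       | 
       |     for item in results["items"]:
       |         print(item["created_at"], item["html_url"])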
        
       | mikolajw wrote:
       | The Tukaani website lists "jiatan" as the nickname of the
       | malicious code committer on Libera Chat.
       | 
       | WHOWAS jiatan provided me the following information:
       | 
       |   jiatan ~jiatan 185.128.24.163 * :Jia Tan
       |   jiatan 185.128.24.163 :actually using host
       |   jiatan jiatan :was logged in as
       |   jiatan tungsten.libera.chat :Fri Mar 14:47:40 2024
       | 
       | WHOIS yields nothing, the user is not present on the network at
       | the moment.
       | 
       | Given that 185.128.24.163 is covered with a range-block on the
       | English Wikipedia, it appears this is a proxy.
        
         | chrononaut wrote:
         | > it appears this is a proxy.
         | 
         | Yes, that IP address appears associated with witopia[.]net,
         | specifically vpn.singapore.witopia[.]net points to that IP
         | address.
        
       | circusfly wrote:
       | Waiting for the new YouTube videos on this: "Whoa! Linux has a
       | backdoor, dudes!" My distribution, Ubuntu (now Kubuntu) 2022,
       | isn't affected.
        
         | fullstackchris wrote:
         | not sure why you're being downvoted. this is exactly what is
         | going to happen.
        
         | Lockal wrote:
         | Still better than TwoMinuteToiletPapers and other AI-bamboozled
         | channels hyping over proprietary OpenAI crap
         | (text/photo/video), what a time to be alive!
        
       | autoexecbat wrote:
       | I'm really curious whether the act of injecting a backdoor
       | into OSS software is actually illegal.
       | 
       | Are they somehow in the clear unless we can show they actively
       | exploited it?
        
         | mnau wrote:
         | Probably depends on the criminal code of the country. Mine
         | covers it (EU country):
         | 
         | > Section 231 Obtaining and Possession of Access Device and
         | Computer System Passwords and other such Data
         | 
         | > (1) Whoever with the intent to commit a criminal offence of
         | Breach of secrecy of correspondence [...] or a criminal offence
         | of Unauthorised access to computer systems and information
         | media [...] _produces_ , _puts into circulation_ , imports,
         | exports, transits, offers, provides, sells, or otherwise makes
         | available, obtains for him/herself or for another, or handles
         | 
         | > a) a device or _its component_ , process, instrument or any
         | other means, including _a computer programme designed or
         | adapted for unauthorised access to electronic communications
         | networks, computer system_ or a part thereof, or
         | 
         | > b) a computer password, access code, data, process or any
         | other similar means by which it is possible to gain access to a
         | computer system or a part thereof,
         | 
         | shall be sentenced ... (1 year as an individual, 3 years as
         | a member of an organized group)
        
         | Culonavirus wrote:
         | The way I see it: People are being charged for their speech all
         | the time. Especially outside the US, but even in the US. And
         | code is speech.
         | 
         | And that is even before all the hacking/cracking/espionage laws
         | get involved.
         | 
         | There's a reason all the (sane) people doing grey/black hat
         | work take their security and anonymity extremely seriously.
        
       | vhiremath4 wrote:
       | My favorite part was the analysis of "I'm not really a security
       | researcher or reverse engineer but here's a complete breakdown of
       | exactly how the behavior changes."
       | 
       | You only get this kind of humility when you're working with
       | absolute wizards on a consistent basis.
        
       | rpigab wrote:
       | I'd love to be at Microsoft right now and have the power to
       | review this user's connection history to GitHub. Even though
       | VPNs exist, many things can be learned from connection habits:
       | links to ISPs, maybe even a guess at whether VPNs were used;
       | round-trip times on connections can give hints.
       | 
       | I really don't think some random guy wants to weaken SSH just
       | to extract some petty ransomware cash from a couple of targets.
        
         | qecez wrote:
         | > I really don't think some random guy wants to weaken ssh just
         | to extract some petty ransomware cash from a couple targets.
         | 
         | Which is why there's probably nothing remotely interesting in
         | them logs.
        
           | mhh__ wrote:
           | Intelligence agencies get caught red handed all the time so I
           | wouldn't be too sure.
           | 
           | If it was an organised group I'm sure they were careful, of
           | course, but it only takes one fuckup.
        
         | alpb wrote:
         | That'd be illegal for an employee to do.
        
         | optimalsolver wrote:
         | I'm guessing Microsoft just got a call from the Government
         | telling them not to look too deeply into it.
        
         | RockRobotRock wrote:
         | Nah. I'm sure Microsoft got a call from the alphabet boys
         | and nobody, not even internal employees, is allowed to look
         | at the logs right now.
        
         | megous wrote:
         | Oh my, another reason not to use GitHub. :D So many reasons
         | popping up just in this comment section alone.
        
       | bananapub wrote:
       | people are mis-reading the Debian bug report:
       | https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1067708
       | 
       | it wasn't the apparently newly-created identity "Hans Jansen"
       | just _asking_ for a new version to be uploaded, it was  "Hans
       | Jansen" _providing_ a new version to be uploaded as a non-
       | maintainer-upload - Debian-speak for  "the maintainer is AWOL,
       | someone else is uploading their package". if "Hans Jansen" is
       | another attacker then they did this cleverly, providing the new -
       | compromised - upstream tarballs in an innocent-looking way and
       | avoiding anyone examining the upstream diff.
        
       | vasili111 wrote:
       | Could anyone please tell me if current stable version of Debian
       | has that backdoor or not?
        
         | yabones wrote:
         | Debian stable has 5.4.1, the backdoored versions are
         | 5.6.0-5.6.1
         | 
         | https://packages.debian.org/bookworm/xz-utils
         | 
         | https://packages.debian.org/bookworm/liblzma5
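         | 
         | If you want to double-check a Debian-based box, a quick
         | sketch (assumes dpkg-query is available; 5.6.0 and 5.6.1 are
         | the backdoored upstream versions):
         | 
         |     import subprocess
         | 
         |     BACKDOORED = {"5.6.0", "5.6.1"}
         | 
         |     ver = subprocess.run(
         |         ["dpkg-query", "-W", "-f", "${Version}", "liblzma5"],
         |         capture_output=True, text=True, check=True).stdout
         |     # Debian versions look like "5.4.1-0.2"; compare the
         |     # upstream part only.
         |     upstream = ver.split("-")[0]
         |     print("liblzma5", ver,
         |           "AFFECTED" if upstream in BACKDOORED else "ok")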
        
         | anononaut wrote:
         | Debian Stable seems to be in the clear.
         | 
         | https://lists.debian.org/debian-security-announce/2024/msg00...
        
         | teddyh wrote:
         | It does _not_ contain the backdoor:
         | <https://security-tracker.debian.org/tracker/CVE-2024-3094>
        
       | kazinator wrote:
       | Doesn't this call for criminal charges?
        
         | mnau wrote:
         | Good luck finding him/her.
         | 
         | GitHub probably already gave feds all logs and IPs, but I would
         | bet 100:1 that it's all going to be a VPN or something like
         | that.
        
       | Unfrozen0688 wrote:
       | Time for the west to take Linux back.
        
       | ozgune wrote:
       | I read through the entire report and it gradually got more
       | interesting. Then, I got to the very end, saw Andres Freund's
       | name, and it put a smile on my face. :)
       | 
       | Who else would have run a PostgreSQL performance benchmark
       | and discovered a major security issue in the process?
        
       | mik1998 wrote:
       | Personally, I've used lzip ever since I read
       | https://www.nongnu.org/lzip/xz_inadequate.html Seems like the
       | complexity of XZ has backfired severely, as expected.
        
         | bananapub wrote:
         | > Seems like the complexity of XZ has backfired severely, as
         | expected.
         | 
         | this is a very bad reading of the current situation.
        
           | ta8645 wrote:
           | This kind of shallow dismissal is really unhelpful to those
           | of us trying to follow the argument. You take the tone of
           | an authoritative expert, without giving any illuminating
           | information to help outsiders judge the merit of your
           | assertion. Why is it a very bad reading of the current
           | situation? What is a better reading?
        
             | davispw wrote:
             | To summarize the article, the back door is introduced
             | through build scripts and binaries distributed as "test"
             | data. Very little to do with the complexity or simplicity
             | of xz; more that it was a dependency of critical system
             | binaries (systemd) and ripe for hostile takeover of the
             | maintainer role.
        
             | supriyo-biswas wrote:
             | Introducing a back door is not the same thing as a badly
             | designed file format.
        
             | bananapub wrote:
             | I am not sure I agree that every low quality post needs a
             | detailed rebuttal? HN couldn't function under such rules.
             | 
             | as to the specific comment:
             | 
             | > Seems like the complexity of XZ has backfired severely,
             | as expected.
             | 
             | to summarise: someone found a project with a vulnerable
             | maintenance situation, spent _years_ getting involved in
             | the project, then got commit rights, then committed a
             | backdoor in some binaries and the build system, and then
             | got sock puppets to agitate for OSes to adopt the
             | backdoored code.
             | 
             | the comment I replied to made a "shallow" claim of
             | complexity without any details, so let's look at some
             | possible interpretations:
             | 
             | - code complexity - doesn't seem super relevant - the
             | attacker hid a highly obfuscated backdoor in a binary test
             | file and committed it - approximately no one is ever going
             | to catch such things without a _process_ step of requiring
             | binaries be generatable in a reasonable-looking and
             | hopefully-hard-to-backdoor kind of way. cryptographers are
             | good at this: https://en.wikipedia.org/wiki/Nothing-up-my-
             | sleeve_number
             | 
             | - build complexity - sure, but it's auto*, that's very
             | common.
             | 
             | - organisational complexity - the opposite is the case. it
             | had one guy maintaining it, who asked for help.
             | 
             | - data/file format complexity - doesn't seem relevant
             | unless it turns out the obfuscation method used was
             | particularly easy for this format, but even in that case,
             | you'd think others would be vulnerable to something
             | equivalent
             | 
             | perhaps OP had some other thing in mind, but then they
             | could have said that, instead of making a crappy comment.
        
       | lacoolj wrote:
       | What a disappointment.
       | 
       | It's something always in the back of our minds as developers
       | using public libraries, but when something like this happens,
       | non-developers that hear about it start to associate it with the
       | rest of the open-source community.
       | 
       | It's essentially a terrorist attack on developer experience.
       | Thankfully, management doesn't follow the same approach as the
       | TSA.
        
       | Zigurd wrote:
       | "Lasse Collin," as other posters here have found, does not seem
       | to exist as an experienced coder. Oddly, there is a Swedish jazz
       | musician named Lasse Collin, which would otherwise be one of
       | those names, especially the last name, that would stick out.
       | Instead it is buried under a lot of mentions of a musician.
        
         | rany_ wrote:
         | Searching for my real name on Google doesn't return anything
         | either, I don't think this means anything.
        
           | Zigurd wrote:
           | Lasse Collin the contributor is findable, especially if you
           | add "tukaani" to the search. But not in any other context,
           | unless that's what old jazz musicians do in their retirement.
        
             | rany_ wrote:
             | I don't think that's what they meant. The idea is to find
             | information about their personal life, not OSS
             | contributions. Something that proves they're a real person.
        
         | akyuu wrote:
         | Lasse Collin has been working on xz for decades:
         | https://sourceforge.net/p/sevenzip/discussion/45797/thread/0...
         | 
         | Now, whether his GitHub account is currently being controlled
         | by him is another question.
         | 
         | Also, for some more context: In 2022, Lasse said he was
         | struggling to work on xz and was looking for maintainers, and
         | mentioned Jia Tan: https://www.mail-archive.com/xz-
         | devel@tukaani.org/msg00567.h...
        
       | nateskulic wrote:
       | Fairly deep bugs for a Bazaar.
        
       | alathers wrote:
       | Thank the gods I didn't plan on having a life this weekend
        
       | userbinator wrote:
       | Looking at how many requests to update to the backdoored
       | version were made, I wonder whether the fact that many people
       | (including developers) have been conditioned to treat updates
       | as "always good" is a huge contributing factor in how easily
       | something like this spreads.
       | 
       | The known unknowns can be better than the unknown unknowns.
        
         | frenchman99 wrote:
         | Totally agree. With things like Dependabot encouraged by
         | GitHub, people now get automated pull requests for dependency
         | updates, increasing the speed of propagation of such
         | vulnerabilities.
        
       | kapouer wrote:
       | Both https://github.com/tukaani-project members' accounts have
       | been suspended. (To see that, you can list the followers of
       | each account.)
        
       | dmarto wrote:
       | Kinda relevant, as I saw a few comments about how safer
       | languages are the solution.
       | 
       | Here[0] is a very simple example that shows how easy such
       | supply chain attacks are in Rust; and let's not forget that
       | there was a very large Python attack just a few days ago[1].
       | 
       | [0] - https://github.com/c-skills/rust1
       | 
       | [1] - https://checkmarx.com/blog/over-170k-users-affected-by-
       | attac...
        
         | mrcus wrote:
         | I am very concerned about Rust.
         | 
         | Rust's "decision" to have a very slim standard library has
         | advantages, but it severely amplifies some other issues. In Go,
         | I have to pull in zero dependencies to make an HTTP request. In
         | Rust, pulling reqwest pulls in at least 30 distinct packages
         | (https://lib.rs/crates/reqwest). Date/time, "basic" base64,
         | common hashing or checksums, etc, they all become supply chain
         | vectors.
         | 
         | The Rust ecosystem's collective refusal to land stable major
         | versions is one of the amplifying issues. "Upgrade fatigue"
         | hits me, at least. "Sure, upgrade ring to 0.17" (which is
         | effectively the 16th major version). And because v0.X
         | versions are usually treated as incompatible with each
         | other, opting out of upgrades isn't really possible: it only
         | takes a short while before some other transitive dependency
         | breaks because you are lagging behind. I recently spent a
         | while writing my code to support
         | running multiple versions of the `http` library, for example
         | (which, to be fair, did just land version 1.0). My NATS library
         | (https://lib.rs/crates/async-nats) is at version 34. My
         | transitive base64 dependency is at version 22
         | (https://lib.rs/crates/base64).
         | 
         | This makes it nearly impossible for me to review these
         | libraries and pin them, because if I pin foo@0.41.7 and bar
         | needs foo@0.42.1, I just get both. bar can't declare
         | >=0.41, because the point of the 0.X series is that it is
         | not backwards compatible. It makes this process so
         | time-consuming that I expect people will either just stop
         | (as if they ever did) reviewing their dependencies, or
         | accept that they might have to reinvent everything from URL
         | parsing to constructing HTTP headers or doing CRC checks.
         | 
         | Combine this with a build- and compile-time system that
         | allows completely arbitrary code execution, and which is
         | routinely just a wrapper around native build steps like the
         | ones abused in the xz attack (look at a lot of the low-level
         | libs you inevitably pull in). Sure, the build scripts and
         | the macro system enable stuff like the amazing sqlx library,
         | but said build and macro code is already so hard to read
         | that it really takes proper wizardry to understand.
        
           | dmarto wrote:
           | You have perfectly put all my thoughts into words.
           | 
           | I have been thinking about ways to secure myself, as it is
           | exhausting to think about it every time there is an update or
           | some new dependency.
           | 
           | After this attack, I think the only sure way is to unplug the
           | computer and go buy goats.
           | 
           | The next best thing? Probably ephemeral VMs or some
           | Codespaces/"Cloud Dev Env thingy". (except neither would save
           | me in the xz case)
        
           | Brian_K_White wrote:
           | Or you vendor everything.
           | 
           | You don't automatically download anything at build or install
           | time, you just update your local source copies when you want
           | to. Which to be clear I know means rarely.
           | 
           | It's 1970 all over again!
        
             | mrcus wrote:
             | Yes, but this doesn't prevent issues like the xz issue,
             | where the code looks fine, but the build scripts alter it.
        
             | dmarto wrote:
             | Vendoring is nice, and I usually prefer it, but you don't
             | always have the time or people for it.
             | 
             | Vendoring + custom build system (Bazel?) for everything is
             | basically Google's approach, if what I have read is correct.
             | Definitely better than everything we have, but the
             | resources for it are not something most can afford.
             | 
             | P.S. Also, as mrcus said: if we trust the upstream build
             | process, we may as well trust their binaries.
        
       | mrcoffee4u wrote:
       | can someone ELI5 ?
        
       | jchoksi wrote:
       | The two active maintainers seem to be: Lasse Collin
       | <lasse.collin@tukaani.org> and Jia Tan <jiat0218@gmail.com>
       | 
       | Searching DDG for "jiat0218" I came across a blog post which I
       | found weird. Seems to be dated: 2006-05-03
       | 
       | Blog post: "KusoPai Mai .You Ling Qi De Kuai Zi  - Que Xiao Hao "
       | <https://char.tw/blog/post/24397301>
       | 
       | Internet Archive link:
       | <https://web.archive.org/web/20240329182713/https://char.tw/b...>
       | 
       | The contents of the page, when translated, seem to be about
       | jiat0218 auctioning a pair of spiritual chopsticks as a prank.
       | 
       | The blog entry is basically a QA between jiat0218 and various
       | other people about these chopsticks.
       | 
       | If Jia Tan does turn out to be a compromised maintainer working
       | for a state actor then some of the content on the blog page can
       | be viewed in a more sinister way (i.e. spycraft / hacks for sale
       | etc.).
       | 
       | Example question 38:
       | 
       |     Question 38
       |     accounta066 (3): Are these chopsticks really that good? I
       |     kind of want to buy them! But I recently sent money for
       |     online shopping but didn't receive anything. It's very
       |     risky; currently jiat0218 you don't have any reviews, you
       |     can interview me. Do you want to hand it over?! ... A
       |     sincere buyer will keep it.
       | 
       |     Reply to jiat0218 (4): First of all, I would like to
       |     express my condolences to you for your unfortunate
       |     experience! What can I say about this kind of thing... My
       |     little sister has always been trustworthy. What's more,
       |     this is a pair of spiritual chopsticks, so I hope to have
       |     a good one. It's the beginning! As you can see, my little
       |     sister is very careful and takes her time when answering
       |     your questions. Except for the two messages that were
       |     accidentally deleted by her, she always answers your
       |     questions. If this still doesn't reassure you, then I can
       |     only say that I still have room to work hard. You are
       |     still welcome to bid... ^_^
       | 
       | Note however, it could all just be what it purports to be which
       | is a prank auction of spiritual chopsticks.
        
         | fragmede wrote:
         | Chopsticks could also be a codeword for something. Maybe
         | some sort of backdoor into a system somewhere.
        
         | alwayslikethis wrote:
         | This is likely just a coincidence. 0218 looks like a birthday
         | and jiat is probably the name + initial. 18 years is also too
         | long of a time horizon for this.
        
         | dimgl wrote:
         | Crazy to think that the time horizon for these kinds of attacks
         | span decades. This absolutely does not read like a coincidence.
         | Chopsticks, little sister, "room to work hard", all sound like
         | codewords.
        
           | astrange wrote:
           | Do you say that about every word commonly used in Asia?
        
           | slowmotiony wrote:
           | Sounds to me like google translate gibberish
        
       | dboreham wrote:
       | Something about this I found surprising is that Linux distros are
       | pulling and packaging pre-built binaries from upstream projects.
       | I'd have expected them to build from source.
        
         | richardwhiuk wrote:
         | They were pulling a tarball from upstream and building it - the
         | tarball was compromised.
        
           | Lockal wrote:
           | The answer is not complete. There were 2 ways to pull
           | sources:
           | 
           | bad - https://github.com/tukaani-
           | project/xz/releases/download/...
           | 
           | or:
           | 
           | good - https://github.com/tukaani-
           | project/xz/archive/refs/tags/...
           | 
           | Specifically in Gentoo, there is a note in
           | https://github.com/gentoo/gentoo/blob/master/app-arch/xz-uti...
           | 
           |     # Remember: we cannot leverage autotools in this ebuild in order
           |     #           to avoid circular deps with autotools
           | 
           | Namely, to unpack autoconf-2.72e.tar.xz from gnu.org you
           | need the xz tools. And this is just the shortest cycle. It
           | is not very common, but xz-utils was one of the few rare
           | cases where regenerating the autohell files was considered
           | an unnecessary complication (it backfired).
        
             | dpkirchner wrote:
             | Unfortunately, those GitHub links are no longer valid, so
             | we randos can't use them to learn what went wrong here.
             | Hopefully GH will reverse this decision once the dust
             | settles.
        
               | NekkoDroid wrote:
               | The gist of it is: the "good" one is the auto-generated
               | "Source code" archive made by GitHub from the tag. The
               | "bad" one is a manually generated and uploaded source
               | code release, which can contain whatever you want.
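               | 
               | You can see the difference in the release API: the
               | auto-generated archives are the tarball_url and
               | zipball_url fields, while anything a maintainer
               | uploaded shows up under assets. A small sketch
               | (owner/repo/tag are placeholders):
               | 
               |     import json, urllib.request
               | 
               |     url = ("https://api.github.com/repos/"
               |            "OWNER/REPO/releases/tags/TAG")
               |     with urllib.request.urlopen(url) as r:
               |         rel = json.load(r)
               | 
               |     # Generated by GitHub from the git tag:
               |     print("from tag:", rel["tarball_url"])
               |     # Uploaded by a maintainer; can be anything:
               |     for a in rel["assets"]:
               |         print("uploaded:", a["name"],
               |               a["browser_download_url"])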
        
               | Lockal wrote:
               | GitHub should not just reverse this and make the repo
               | public and archived "as is", because there are many
               | rolling distributions (from Gentoo to LFS), submodule
               | pullers, CI systems, and unaware users which may pull
               | and install the latest backdoored commit of the
               | archived project.
               | 
               | However if you want to access exact copies of backdoored
               | tarballs, they are still available on every mirror, e. g.
               | in http://gentoo.mirror.root.lu/distfiles/9f/ . For
               | project of this level artifacts are checksummed and
               | mirrored across the world by many people, and nothing
               | wrong with that.
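               | 
               | If you do pull one of those mirrored distfiles, a quick
               | sanity check (a sketch; Gentoo Manifests record BLAKE2B
               | and SHA512 hashes for distfiles) is to hash the file
               | yourself and compare against the Manifest in the ebuild
               | repository:
               | 
               |     # hash the mirrored tarball locally
               |     b2sum xz-5.6.1.tar.gz
               |     sha512sum xz-5.6.1.tar.gz
               |     # then compare against the DIST line in
               |     # app-arch/xz-utils/Manifest in the Gentoo tree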
        
         | jiripospisil wrote:
         | Not in this case as the other commenter pointed out but for
         | example Vivaldi on Arch Linux is just a repackaged upstream
         | build.
         | 
         | https://gitlab.archlinux.org/archlinux/packaging/packages/vi...
        
       | haolez wrote:
       | I'm not trying to troll, but I'm wondering if a distro like
       | Gentoo is less susceptible to such attacks, since the source code
       | feels more transparent with their approach. But then again, it
       | seems that upstream was infected in this case, so I'm not sure if
       | a culture of compiling from source locally would help.
        
         | StressedDev wrote:
         | It is not going to make a difference. If you run malicious
         | code, you will get hacked. Compiling the code yourself does not
         | prevent the code from being malicious.
         | 
         | The one way it might help is that it could make it easier to
         | find the backdoor once you know there is one.
        
       | stephc_int13 wrote:
       | I guess that rewriting liblzma in Rust would _not_ have prevented
       | this backdoor. But would have likely increased the confidence in
       | its safety.
       | 
       | Using the build system (and potentially the compiler) to insert
       | malicious backdoors is far from a new idea, and I don't see why
       | this example would be the only case.
        
         | anonymous-panda wrote:
         | Don't know all the details and rust isn't immune to a build
         | attack, but stuff like that tends to stand out a lot more I
         | think in a build.rs than it would in some m4 automake soup.
        
           | minetest2048 wrote:
           | There was a drama back then where serde tried to ship its
           | derive macro as a precompiled binary:
           | https://news.ycombinator.com/item?id=37189462
        
         | yencabulator wrote:
         | The backdoor hinged on hiding things in large shell scripts,
         | obscure C "optimizations", and sanitizer disabling. I'd expect
         | all of those would be a much bigger red flag in the Rust world.
        
         | nullifidian wrote:
         | It would have made it worse, because there would be 300 crates
         | with 250 different maintainers, all pulled in by several
         | trivial/baseline dependencies. More dependencies = a higher
         | probability that a malicious maintainer has gotten maintainer
         | rights for one of them, especially because many original
         | authors/maintainers of Rust-style microdependency crates move
         | on with their lives and eventually seek to exit their
         | maintainer role. At least for classic C/C++ software, by
         | virtue of it being very inconvenient to casually pull 300
         | dependencies for something trivial, there are fewer
         | dependencies, i.e. separate projects/repos, and these tend to
         | be more self-contained. There are also "unserious"
         | distributions like Fedora and something like
         | stable/testing/unstable pipeline in Debian, which help with
         | catching the most egregious attempts. Crates.io and npm are
         | unserious by their very design, which is focused on maximizing
         | growth by eliminating as many "hindrances" as possible.
        
           | yogorenapan wrote:
           | Why is rust beginning to sound like JavaScript?
        
             | intelVISA wrote:
             | Modern coders have been conditioned to import random libs
             | to save 30mins work.
        
         | im3w1l wrote:
         | This hack exploited a fairly unique quirk in the linux C
         | ecosystem / culture. That packages are built from "tarballs"
         | that are not exact copies of the git HEAD as they also contain
         | generated scripts with arbitrary code.
         | 
         | It would not have happened in any modern language. It probably
           | wouldn't have even happened in a Visual Studio C project for
           | Windows either.
        
           | everybackdoor wrote:
           | Funny you should say that, given they definitely have exploit
           | code in `vcpkg`
        
           | Denvercoder9 wrote:
           | > It would not have happened in any modern language.
           | 
           | It would. pip for example installs from tarballs uploaded to
           | PyPi, not from a git repository.
        
             | im3w1l wrote:
             | Pip and similar are their own can of worms, yeah. They trade
             | convenience for an almost complete lack of oversight.
             | 
             | But in this case we are talking about people (distro
             | packagers) manually downloading the source and building it
             | which is not quite the same thing.
        
               | Denvercoder9 wrote:
               | `pip install` does do exactly the same thing: it
               | downloads and executes code from a tarball uploaded to
               | PyPi by its maintainer. There's no verification process
               | that ensures that tarball matches what's in the git
               | repository.
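               | 
               | A rough sketch of how one could spot-check that by hand
               | (package name, version and repo URL are placeholders):
               | 
               |     # grab the sdist exactly as pip would
               |     pip download --no-binary :all: --no-deps example-pkg==1.2.3 -d /tmp/sdist
               |     tar -xzf /tmp/sdist/example-pkg-1.2.3.tar.gz -C /tmp/sdist
               |     # grab the matching tag from the repository
               |     git clone --depth 1 --branch v1.2.3 https://github.com/example/example-pkg /tmp/gitsrc
               |     # benign differences (PKG-INFO, egg-info) are expected;
               |     # look for anything else that only exists in the sdist
               |     diff -r /tmp/sdist/example-pkg-1.2.3 /tmp/gitsrc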
        
       | dang wrote:
       | Related ongoing threads:
       | 
       |  _Xz: Disable ifunc to fix Issue 60259_ -
       | https://news.ycombinator.com/item?id=39869718
       | 
       |  _FAQ on the xz-utils backdoor_ -
       | https://news.ycombinator.com/item?id=39869068
       | 
       |  _Everything I Know About the XZ Backdoor_ -
       | https://news.ycombinator.com/item?id=39868673
        
       | notmysql_ wrote:
       | Interestingly, one of the accounts followed by the GitHub account
       | that introduced the backdoor was suspended very recently [1]; that
       | account is also part of the org that runs XZ.
       | 
       | [1] https://github.com/JiaT75?tab=following
        
         | rany_ wrote:
         | That JiaT75 account is also suspended, if you check
         | https://github.com/Larhzu?tab=following you'll see that they're
         | suspended as well. It's pretty weird that it's that hard to
         | find out whether a user is suspended.
        
       | fullstackchris wrote:
       | pRoBaBlY a StaTe AcToR
       | 
       | zero definition of what that means...
       | 
       | egos of people who just like to say cool words they don't
       | understand
       | 
       | lol
       | 
       | this comment will probably get deleted, but let the action of
       | this comment being deleted stand that in 2024 we're all allowed
       | to use big words with no definition of what they mean -> bad
       | 
       | state actor? who? what motive? what country? all comments
       | involving "state actor" are very broad and strange... i would
       | like people to stop using words that have no meaning, as it
       | really takes away from the overall conversation of what is going
       | on.
       | 
       | i mean you're seriously going to say "state actor playing the
       | long game" to what end? the issue was resolved in 2 hours... this
       | is stupid
        
         | Hackbraten wrote:
         | For starters, the backdoor was technically really
         | sophisticated.
         | 
         | For example, the malicious code circumvents a hardening
         | technique (RELRO) in a clever way, which would otherwise have
         | blocked it from manipulating the sshd code in the same process
         | space at runtime. This is not something that script kiddies
         | usually cook up in an afternoon to make a quick buck. You need
         | experts and a lot of time to pull off feats like that.
         | 
         | This points to an organization with excellent funding. I'm not
         | surprised at all that people are attributing this to some
         | unknown nation-level group.
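         | 
         | (For the curious: whether a binary was built with full RELRO
         | can be checked with readelf; the sshd path below assumes a
         | typical Linux layout.)
         | 
         |     # GNU_RELRO segment present = at least partial RELRO
         |     readelf -l /usr/sbin/sshd | grep GNU_RELRO
         |     # full RELRO additionally sets BIND_NOW
         |     readelf -d /usr/sbin/sshd | grep -E 'BIND_NOW|FLAGS'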
        
       | dlenski wrote:
       | A lot of software (including
       | https://gitlab.com/openconnect/openconnect of which I'm a
       | maintainer) uses libxml2, which in turn transitively links to
       | liblzma, using it to load and store _compressed_ XML.
       | 
       | I'm not *too* worried about OpenConnect given that we use
       | `libxml2` only to read and parse _uncompressed_ XML...
       | 
       | But I am wondering if there has been any statement from libxml2
       | devs (they're under the GNOME umbrella) about potential risks to
       | libxml2 and its users.
        
         | bananapub wrote:
         | > only to read and parse uncompressed XML...
         | 
         | how does libxml2 know to decompress something?
         | 
         | does it require you, as the caller, to explicitly tell it to?
         | 
         | or does it look at the magic bytes or filename or mimetype or
         | something?
        
         | enedil wrote:
         | This doesn't matter, if libxml2 loads .so and the library is
         | malicious, you are already potentially compromised, as it is
         | possible to run code on library load.
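         | 
         | You can see that a shared object carries load-time hooks
         | without executing it; the path below is an example (Debian-
         | style multiarch layout), and the mechanism here is ELF
         | .init_array constructors plus, in this case, ifunc resolvers:
         | 
         |     readelf -S /usr/lib/x86_64-linux-gnu/liblzma.so.5 | grep -E '\.init_array|\.plt'
         |     readelf -d /usr/lib/x86_64-linux-gnu/liblzma.so.5 | grep -E 'INIT'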
        
       | 0x0 wrote:
       | Interesting commit in January where the actual OpenPGP key was
       | changed: https://github.com/tukaani-project/tukaani-
       | project.github.io...
        
         | illusive4080 wrote:
         | GitHub suspended this project
        
         | gertvdijk wrote:
         | They just signed each other's keys around that time, and one
         | needs to redistribute the public keys for that; nothing
         | suspicious about it I think. The key fingerprint
         | 22D465F2B4C173803B20C6DE59FCF207FEA7F445 remained the same.
         | 
         | before:
         | 
         |     pub   rsa4096/0x59FCF207FEA7F445 2022-12-28 [SC] [expires: 2027-12-27]
         |           22D465F2B4C173803B20C6DE59FCF207FEA7F445
         |     uid   Jia Tan <jiat0218@gmail.com>
         |     sig        0x59FCF207FEA7F445 2022-12-28   [selfsig]
         |     sub   rsa4096/0x63CCE556C94DDA4F 2022-12-28 [E] [expires: 2027-12-27]
         |     sig        0x59FCF207FEA7F445 2022-12-28   [keybind]
         | 
         | after:
         | 
         |     pub   rsa4096/0x59FCF207FEA7F445 2022-12-28 [SC] [expires: 2027-12-27]
         |           22D465F2B4C173803B20C6DE59FCF207FEA7F445
         |     uid   Jia Tan <jiat0218@gmail.com>
         |     sig        0x59FCF207FEA7F445 2022-12-28   [selfsig]
         |     sig        0x38EE757D69184620 2024-01-12   Lasse Collin <lasse.collin@tukaani.org>
         |     sub   rsa4096/0x63CCE556C94DDA4F 2022-12-28 [E] [expires: 2027-12-27]
         |     sig        0x59FCF207FEA7F445 2022-12-28   [keybind]
         | 
         | Lasse's key for reference:
         | 
         |     pub   rsa4096/0x38EE757D69184620 2010-10-24 [SC] [expires: 2025-02-07]
         |           3690C240CE51B4670D30AD1C38EE757D69184620
         |     uid   Lasse Collin <lasse.collin@tukaani.org>
         |     sig        0x38EE757D69184620 2024-01-08   [selfsig]
         |     sig        0x59FCF207FEA7F445 2024-01-12   Jia Tan <jiat0218@gmail.com>
         |     sub   rsa4096/0x5923A9D358ADF744 2010-10-24 [E] [expires: 2025-02-07]
         |     sig        0x38EE757D69184620 2024-01-08   [keybind]
        
       | MaximilianEmel wrote:
       | We need to get these complex & bloated build-systems under
       | control.
        
         | 77pt77 wrote:
         | What we need is to move away from 1970s build tools.
        
       | 0x0 wrote:
       | All these older (4.x, 5.0.x etc) releases that were suddenly
       | uploaded a few months ago should probably also be considered
       | suspect: https://github.com/tukaani-project/tukaani-
       | project.github.io...
        
       | shortsunblack wrote:
       | Pretty much proof that OSS != automatically more secure. And
       | proof that OSS projects can get backdoored. See this for more
       | ideas on this issue: https://seirdy.one/posts/2022/02/02/floss-
       | security/
        
         | derkades wrote:
         | The malware was hidden inside an opaque binary. If anything,
         | this shows that we need more open source and more
         | reproducibility.
        
       | llmblockchain wrote:
       | Was Debian 12/stable unaffected? Only sid?
        
       | xvilka wrote:
       | Maybe it's finally time to start sunsetting LZMA and xz
       | altogether in favor of newer algorithms like Zstandard, which
       | also offers better performance with compression ratios on par
       | with LZMA.
        
         | illusive4080 wrote:
         | Yes but don't start thinking they're immune to compromise
        
           | xvilka wrote:
           | Nobody is. But it's a great opportunity window.
        
       | mrbluecoat wrote:
       | > _I am *not* a security researcher, nor a reverse engineer._
       | 
       | Could have fooled me - impressive write-up!
        
       | neoneye2 wrote:
       | Damn. I'm on macOS and use homebrew. To my surprise I had "xz"
       | version 5.6.1 installed on my computer!
       | 
       | I ran "brew upgrade" and that downgraded to version 5.4.6.
        
       | afh1 wrote:
       | Potentially malicious commit by same author on libarchive:
       | https://github.com/libarchive/libarchive/pull/1609
        
       | andix wrote:
       | Is there already a list of distributions that included the
         | affected versions in non-prerelease channels?
        
         | illusive4080 wrote:
         | None that I could find have included it. Not even NixOS 23.11.
        
       | devttyeu wrote:
       | Wouldn't be surprised if the ssh auth being made slower was
       | deliberate - that makes it fairly easy to index all open ssh
       | servers on the internet, then see which ones get slower to fail
       | preauth as they install the backdoor.
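       | 
       | A crude sketch of that timing probe (target host is a
       | placeholder); the signal would be how long a connection takes to
       | fail pre-auth, measured before and after the upgrade window:
       | 
       |     time ssh -o PreferredAuthentications=none \
       |              -o ConnectTimeout=5 nobody@203.0.113.10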
        
       | A1kmm wrote:
       | Looks like GitHub has suspended access to the repository, which,
       | while it protects against people accidentally compiling and using
       | the code, certainly complicates forensic analysis for anyone who
       | doesn't have a clone or access to history (which is what I think
       | a lot of people will be doing now to understand their exposure).
        
         | gpm wrote:
         | Well that's inconvenient, I was (probably, time permitting)
         | going to propose to some of my friends that we attempt to
         | reverse this for fun tomorrow.
         | 
         | Anyone have a link to the git history? I guess we can use the
         | ubuntu tarball for the evil version.
        
         | A1kmm wrote:
         | It looks like git clone https://git.tukaani.org/xz.git still
         | works for now (note: you will obviously be cloning malware if
         | you do this) - that is, however, trusting the project
         | infrastructure that compromised maintainers could have had
         | access to, so I'm not sure if it is unmodified.
         | 
         | HEAD (git rev-parse HEAD) on my result of doing that is
         | currently 0b99783d63f27606936bb79a16c52d0d70c0b56f, and it does
         | have commits people have referenced as being part of the
         | backdoor in it.
        
           | gowthamgts12 wrote:
           | > https://git.tukaani.org/xz.git
           | 
           | it's throwing 403 now.
        
             | amszmidt wrote:
             | Works cloning though.
        
           | gpm wrote:
           | Apparently there's a wayback machine for git repos and it
           | "just coincidentally" archived this repo the day before the
           | news broke:
           | 
           | https://archive.softwareheritage.org/browse/origin/visits/?o.
           | ..
        
       | sn wrote:
       | For bad-3-corrupt_lzma2.xz, the claim was that "the original
       | files were generated with random local to my machine. To better
       | reproduce these files in the future, a constant seed was used to
       | recreate these files." with no indication of what the seed was.
       | 
       | I got curious and decided to run 'ent'
       | https://www.fourmilab.ch/random/ to see how likely the data in
       | the bad stream was to be random. I used some python to split the
       | data into 3 streams, since it's supposed to be the middle one
       | that's "bad":
       | 
       | I used this regex to split in python, and wrote to "tmp":
       | 
       |     re.split(b'\xfd7zXZ', x)
       | 
       | I manually used dd and truncate to strip out the remaining header
       | and footer according to the specification, which left 48 bytes:
       | 
       |     $ ent tmp2 # bad file payload
       |     Entropy = 4.157806 bits per byte.
       |     Optimum compression would reduce the size of this 48 byte
       |     file by 48 percent.
       |     Chi square distribution for 48 samples is 1114.67, and
       |     randomly would exceed this value less than 0.01 percent of
       |     the times.
       |     Arithmetic mean value of data bytes is 51.4167 (127.5 = random).
       |     Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
       |     Serial correlation coefficient is 0.258711 (totally
       |     uncorrelated = 0.0).
       | 
       |     $ ent tmp3 # urandom
       |     Entropy = 5.376629 bits per byte.
       |     Optimum compression would reduce the size of this 48 byte
       |     file by 32 percent.
       |     Chi square distribution for 48 samples is 261.33, and
       |     randomly would exceed this value 37.92 percent of the times.
       |     Arithmetic mean value of data bytes is 127.8125 (127.5 = random).
       |     Monte Carlo value for Pi is 3.500000000 (error 11.41 percent).
       |     Serial correlation coefficient is -0.067038 (totally
       |     uncorrelated = 0.0).
       | 
       | The data does not look random. From
       | https://www.fourmilab.ch/random/ for the Chi-square Test, "We
       | interpret the percentage as the degree to which the sequence
       | tested is suspected of being non-random. If the percentage is
       | greater than 99% or less than 1%, the sequence is almost
       | certainly not random. If the percentage is between 99% and 95% or
       | between 1% and 5%, the sequence is suspect. Percentages between
       | 90% and 95% and 5% and 10% indicate the sequence is "almost
       | suspect"."
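       | 
       | A rough shell-only equivalent of the carving step, for anyone who
       | wants to reproduce this (the skip offset below is a placeholder
       | to fill in from the grep output, not the real one):
       | 
       |     # byte offsets of each \xfd7zXZ stream header
       |     grep -aboP '\xfd7zXZ' bad-3-corrupt_lzma2.xz
       |     # carve 48 bytes of the middle stream and compare against urandom
       |     offset=512   # placeholder: use the offset of the middle stream
       |     dd if=bad-3-corrupt_lzma2.xz of=middle.bin bs=1 skip="$offset" count=48
       |     head -c 48 /dev/urandom > rand.bin
       |     ent middle.bin
       |     ent rand.bin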
        
         | supriyo-biswas wrote:
         | Now to be fair, such an archive could have been created with a
         | "store" level of compression that doesn't actually perform any
         | compression.
        
           | sn wrote:
           | My reading of the commit message is they're claiming the
           | "data" should look random.
        
       | CGamesPlay wrote:
       | Why has Github disabled the (apparently official) xz repository,
       | but left the implicated account open to the world? It makes
       | getting caught up on the issue pretty difficult, when GitHub has
       | revoked everyone's access to see the affected source code.
       | 
       | https://github.com/tukaani-project/xz vs
       | https://github.com/JiaT75
        
         | dzaima wrote:
         | The account has been suspended for a while, but for whatever
         | reason that's not displayed on the profile itself (can be seen
         | at https://github.com/Larhzu?tab=following). Repo being
         | disabled is newer, and, while annoying and realistically likely
         | pointless, it's not particularly unreasonable to take down a
         | repository including a real backdoor.
        
       | betaby wrote:
       | How is that backdoor triggered, and what exactly does it do?
        
       | west0n wrote:
       | It seems that to counter this type of supply chain attack, the
       | best practices for managing software dependencies are to pin the
       | version numbers of dependencies instead of using `latest`, and to
       | use static linking instead of dynamic linking.
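       | 
       | A sketch of the pinning part of that advice on a Debian/Ubuntu-
       | style host (package names assume Debian's naming): hold the
       | packages so an automatic upgrade can't silently pull in a newer
       | build.
       | 
       |     apt-mark hold xz-utils liblzma5
       |     apt-mark showhold   # list everything currently held back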
        
       | xyzzy_plugh wrote:
       | This gist summarizes the current situation very well:
       | https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78b...
       | 
       | It definitely looks like this was most likely some sort of state
       | actor. This is very well done and all in plain sight. It's
       | reassuring that it was discovered, but given that a simple audit
       | of the release build artifacts would have raised alarms, how
       | prevalent is this behavior in other projects? Terrifying stuff.
        
       | Brian_K_White wrote:
       | It doesn't really relate to this issue other than that both
       | issues share a common source, but I wish we'd never fallen for
       | xz.
       | 
       | I agree with the lzip guy
       | 
       | https://www.nongnu.org/lzip/xz_inadequate.html
        
       | bitwize wrote:
       | Looks like Jonathan Blow was right about open source.
        
       | fwungy wrote:
       | Brain fart: would it be possible to attach passwords to a crypto
       | based micro transaction such that every time you attempted a
       | password entry your crypto account was charged a small fee for
       | the login attempt?
       | 
       | This would thwart brute force attacks, but not be a significant
       | cost for users. If you could attach your login to the crypto
       | account it would mean the account would have to be funded to
       | allow the attempt. The token wouldn't store passwords it would
       | just be a gatekeeper to the login attempt.
       | 
       | The fees would be paid to the service providers as mining fees.
       | 
       | E.g. foo@bar.com needs a password and a token provided from a
       | designated crypto address to gain access to the service.
        
       | byearthithatius wrote:
       | I hope mainstream news covers this so the general population can
       | understand the issue with our software ecosystem's reliance on
       | unpaid open-source maintainers.
        
       | korginator wrote:
       | xz is so pervasive, I just discovered on my Mac that the
       | (affected?) version 5.6.1 made it into homebrew. The post in the
       | linked article says that only Linux x86-64 systems are affected,
       | but now I'm left scratching my head wondering whether my Mac is
       | also in trouble and we just don't know it yet.
        
       | BobbyTables2 wrote:
       | Why doesn't GitHub force "releases" to be a simple repo tarball
       | for sources, with binaries built by GitHub Actions or such...
       | 
       | I find it incredibly ironic that a "version control" site gives
       | no assurance of reproducible builds (nor reproducible source!!)
       | 
       | The real villain is not the perpetrator, it is Microsoft, and it
       | is all of us.
        
         | cryptonector wrote:
         | Because then for autoconf codebases you have to commit
         | `./configure` or you have to require that users have autoconf
         | installed and run `autoreconf -fi` first.
         | 
         | Maybe autoconf-using projects should really just require that
         | users have autoconf installed.
         | 
         | Not that that would prevent backdoors, mind you.
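         | 
         | i.e. the from-git workflow would look something like this
         | (repo URL is a placeholder), with ./configure regenerated
         | locally instead of taken from a tarball:
         | 
         |     git clone https://example.org/project.git
         |     cd project
         |     autoreconf -fi    # regenerate configure, Makefile.in, aclocal.m4 locally
         |     ./configure && make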
        
           | dpkirchner wrote:
           | If committing configure is objectionable, perhaps there could
           | be "service" repositories that are not directly writable and
           | are guaranteed to be nothing more than the base repo +
           | autoconf cruft used to generate the releases.
        
             | cryptonector wrote:
             | Well, for example in jq we do commit bison/flex outputs
             | because for users ensuring that they have the right version
             | of those can be tricky. We could do the same w.r.t.
             | autoconf and its outputs, though again, that won't preclude
             | backdoors.
        
               | dpkirchner wrote:
               | Yeah, it's less about detecting backdoors specifically
               | and more about having a way to compare releases to build
               | jobs.
        
               | cryptonector wrote:
               | Committing built artifacts presents similar problems: how
               | do you know that the committed artifacts are in fact
               | derived from their sources? Or from non-backdoored
               | versions of build tools for that matter? Hello Ken
               | Thompson attacks.
               | 
               | I don't believe there's a nice easy answer to these
               | questions.
               | 
               | What we do in jq is rely on GitHub Actions to run the
               | build and `make dist`. In fact, we could now stop
               | committing the bison/flex outputs, too, since we can make
               | sure that the tarball includes them.
               | 
               | We do also publish the git repo snapshots that GitHub
               | auto-generates for releases, though we do that because
               | GitHub doesn't give one a choice.
        
               | dpkirchner wrote:
               | Thinking about this more: maybe there would be some
               | benefit to GitHub taking control of "release"
               | repositories that may only be written to by GitHub
               | Actions. They'd write everything -- maybe as a docker
               | image -- so anyone could pull down the image and compare
               | shas, or whatever.
               | And maybe this could also be done by their competitors.
               | The ultimate goal would be to have multiple trusted
               | parties performing the build on the same code producing
               | the same output, and allowing any randos to do the same.
               | 
               | If the source is included in those images, we could
               | conceivably prove that the target was based on the
               | source.
               | 
               | It's not nice and easy, true.
        
         | Brian_K_White wrote:
         | Too inflexibly ideological. There are infinite things that most
         | properly belong in a release file and not in the source, that
         | can't be generated from that source by GitHub Actions, and
         | separately no one should be compelled to use GitHub Actions.
        
       | 65a wrote:
       | Is there a proper reverse engineering of the payload yet?
        
       | 17e55aab wrote:
       | a user offered 5.6.0 and 5.4.5 in an issue to microsoft/vcpkg
       | 
       | 5.4.5 can be compromised
       | 
       | https://github.com/microsoft/vcpkg/issues/37197
        
       | shp0ngle wrote:
       | we should take this diagram and change "random person in
       | nebraska" to "possibly a state-level attacker"
       | 
       | https://xkcd.com/2347/
       | 
       | nice
        
       | secondary_op wrote:
       | GitHub making the suspect repository private and hiding recent
       | account activity is the wrong move and interferes with citizen
       | investigation efforts.
        
         | frenchman99 wrote:
         | Going forward, this will require more than a citizens'
         | investigation. Law enforcement will surely be granted access.
         | Also, tarballs are still available in package managers if you
         | really want to dig into the code.
        
         | zamalek wrote:
         | It's a crime scene. It effectively has the "police" yellow tape
         | around it.
        
       | BarbaryCoast wrote:
       | There's a bug in the detection script. The line:
       | 
       | if [ "$path" == "" ]
       | 
       | should be
       | 
       | if [ "$path" = "" ]
        
         | dualbus wrote:
         | Bash accepts both variants of the equality operator. So it is
         | not a bug.
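         | 
         | Quick illustration: `=` is the POSIX comparison, `==` is a
         | bash/ksh extension that stricter /bin/sh implementations may
         | reject, but as noted above the script is run with bash, where
         | both spellings behave the same.
         | 
         |     path=""
         |     [ "$path" = "" ]  && echo "empty (POSIX test)"
         |     [ "$path" == "" ] && echo "empty (bash/ksh extension)"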
        
       | kn100 wrote:
       | Here's a handy bash script I threw together to audit any docker
       | containers you might be running on your machine. It's hacky, but
       | will quickly let you know what version, if any, of xz, is running
       | in your docker containers.
       | 
       | ```
       | #!/bin/bash
       | 
       | # Get list of all running Docker containers
       | containers=$(docker ps --format "{{.Names}}")
       | 
       | # Loop through each container
       | for container in $containers; do
       |     # Get container image
       |     image=$(docker inspect --format='{{.Config.Image}}' "$container")
       | 
       |     # Execute xz --version inside the container
       |     version=$(docker exec "$container" xz --version)
       | 
       |     # Write container name, image, and command output to a text file
       |     echo "Container: $container" >> docker_container_versions.txt
       |     echo "Image: $image" >> docker_container_versions.txt
       |     echo "xz Version:" >> docker_container_versions.txt
       |     echo "$version" >> docker_container_versions.txt
       |     echo "" >> docker_container_versions.txt
       | done
       | 
       | echo "Output written to docker_container_versions.txt"
       | ```
        
       | Rhea_Karty wrote:
       | TLDR: Some people have been throwing around "China," but it seems
       | also quite possible that Jia is from somewhere in Eastern Europe
       | pretending to be from China. In addition, Lasse Collin and Hans
       | Jansen are from the same EET time zone.
       | 
       | These are my notes on time stamps/zones. There are a few
       | interesting bits that I haven't fully fleshed out.
       | 
       | The following analysis was conducted on JiaT75's
       | (https://github.com/JiaT75?tab=overview&from=2021-12-01&to=20...)
       | commits to the XZ repository, and their time stamps.
       | 
       | Observation 1: Time zone basic analysis
       | 
       | Here is the data on Jia's time zone and the number of times he
       | was recorded in that time zone:
       | 
       | 3: + 0200 (in winter: February and November)
       | 
       | 6: +0300 (in summer: in Jun, Jul, early October)
       | 
       | 440: +0800
       | 
       | 1. The +800 is likely CST: China (or Indonesia or the
       | Philippines), given that Australia does daylight savings time
       | and almost no one lives in Siberia or the Gobi Desert.
       | 
       | 2. The +0200/+0300, if we are assuming that this is one location,
       | is likely EET (Finland, Estonia, Latvia, Lithuania, Ukraine,
       | Moldova, Romania, Bulgaria, Greece, Turkey). This is because we
       | see a switch to +0200 in the winter (past the last Sunday of
       | October) and +0300 in the summer (past the last Sunday in March).
       | 
       | Incidentally, this seems to be the same time zone as Lasse Collin
       | and Hans Jansen...
       | 
       | Observation 2: Time zone inconsistencies
       | 
       | Let's analyze the few times where Jia was recorded in a non +800
       | time zone. Here, we notice that there are some situations where
       | Jia switches between +800 and +300/+200 in a seemingly
       | implausible time. Indicating that perhaps he is not actually in
       | +800 CST time, as his profile would like us to believe.
       | 
       | Jia Tan Tue, 27 Jun 2023 23:38:32 +0800 --> 23:38 + 8 = 7:30 (+
       | 1) Jia Tan Tue, 27 Jun 2023 17:27:09 +0300 --> 17:27 + 3 = 20:30
       | --> about a 9 hour difference, but a flight from China to
       | anywhere in Eastern Europe is at a minimum 10 hours
       | 
       | Jia Tan Thu, 5 May 2022 20:53:42 +0800
       | 
       | Jia Tan Sat, 19 Nov 2022 23:18:04 +0800
       | 
       | Jia Tan Mon, 7 Nov 2022 16:24:14 +0200
       | 
       | Jia Tan Sun, 23 Oct 2022 21:01:08 +0800
       | 
       | Jia Tan Thu, 6 Oct 2022 21:53:09 +0300 --> 21:53 + 3 = 1:00 (+1)
       | 
       | Jia Tan Thu, 6 Oct 2022 17:00:38 +0800 --> 17:00 + 8 = 1:00 (+1)
       | 
       | Jia Tan Wed, 5 Oct 2022 23:54:12 +0800
       | 
       | Jia Tan Wed, 5 Oct 2022 20:57:16 +0800
       | 
       | --> again, given the flight time, this is even more impossible
       | 
       | Jia Tan Fri, 2 Sep 2022 20:18:55 +0800
       | 
       | Jia Tan Thu, 8 Sep 2022 15:07:00 +0300
       | 
       | Jia Tan Mon, 25 Jul 2022 18:30:05 +0300
       | 
       | Jia Tan Mon, 25 Jul 2022 18:20:01 +0300
       | 
       | Jia Tan Fri, 1 Jul 2022 21:19:26 +0800
       | 
       | Jia Tan Thu, 16 Jun 2022 17:32:19 +0300
       | 
       | Jia Tan Mon, 13 Jun 2022 20:27:03 +0800
       | 
       | --> the ordering of these time stamps, and the switching back and
       | forth looks strange.
       | 
       | Jia Tan Thu, 15 Feb 2024 22:26:43 +0800
       | 
       | Jia Tan Thu, 15 Feb 2024 01:53:40 +0800
       | 
       | Jia Tan Mon, 12 Feb 2024 17:09:10 +0200
       | 
       | Jia Tan Mon, 12 Feb 2024 17:09:10 +0200
       | 
       | Jia Tan Tue, 13 Feb 2024 22:38:58 +0800
       | 
       | --> this travel time is possible, but the duration of stay is
       | unlikely
       | 
       | Observation 3: Strange record of time stamps
       | 
       | It seems that in the commits the time stamps are often out of
       | order. I am not sure what would cause this other than some
       | tampering.
       | 
       | Observation 4: Bank holiday inconsistencies
       | 
       | We notice that Jia's work schedule and holidays seem to align
       | much better with an Eastern European than a Chinese person.
       | 
       | Disclaimer: I am not an expert in Chinese holidays, so this very
       | well could be inaccurate. I am referencing this list of bank
       | holidays:(https://www.bankofchina.co.id/en-
       | id/service/information/late...)
       | 
       | Chinese bank holidays (just looking at 2023):
       | 
       | - Working on 2023, 29 September: Mid Autumn Festival
       | 
       | - Working on 2023, 05 April: Tomb Sweeping Day
       | 
       | - Working on 2023, 26, 22, 23, 24, 26, 27 Jan: Lunar New Year
       | 
       | Eastern European holidays:
       | 
       | - Never working on Dec 25: Christmas (for many EET countries)
       | 
       | - Never working Dec 31 or Jan 1: New Years
       | 
       | Observation 5: No weekend work --> salary job?
       | 
       | The most common working days for Jia were Tue (86), Wed (85), Thu
       | (89), and Fri (79). If we adjust his time zone to EET, then
       | that means he is usually working 9 am to 6 pm. This makes much
       | more sense than someone working at midnight and 1 am on a Tuesday
       | night.
       | 
       | These times also line up well with Hans Jansen and Lasse Collin.
       | 
       | I think it is more likely that Jia does this as part of his
       | work... somewhere in Eastern Europe. Likely working with, or in
       | fact being one and the same as, Hans Jansen and Lasse Collin.
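       | 
       | (For anyone who wants to redo this from a clone of the
       | repository, the raw per-commit UTC offsets can be pulled with
       | git log; the --author match is an assumption about how the
       | commits are attributed:)
       | 
       |     # count commits per recorded author-date UTC offset
       |     git log --author='Jia Tan' --date=raw --pretty=format:'%ad' \
       |         | awk '{print $2}' | sort | uniq -c
       |     # full author timestamps (use %cd instead of %ad for committer dates)
       |     git log --author='Jia Tan' --pretty=format:'%ad  %s'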
        
         | mborch wrote:
         | This 2011 addition to the XZ Utils Wikipedia page is
         | interesting because a) why is this relevant, b) who is Mike
         | Kezner since he's not mentioned on the Tukaani project page
         | (https://tukaani.org/about.html) under "Historical
         | acknowledgments".
         | 
         | https://en.wikipedia.org/w/index.php?title=XZ_Utils&diff=pre...
         | 
         | Arch Linux played an important role in making this compression
         | software trusted and depended upon. Perhaps not a coincidence,
         | but at the very least, such a big project should more carefully
         | consider the software they distribute and rely on, whether it's
         | worth the risk.
        
           | ui2RjUen875bfFA wrote:
           | > Arch Linux played an important role in making this
           | compression software trusted and depended upon.
           | 
            | because of the way Arch distributes packages? Then what do
            | you think about FreeBSD?
        
         | frenchman99 wrote:
         | You say yourself that the time data could have been tampered
         | with. It's trivial to change commit dates in git. So this
         | analysis means nothing by itself, unfortunately.
        
           | mimop wrote:
           | I wouldn't say that. This guy seems to have tried hard to
           | appear Chinese (and possibly tampered with the time stamps
           | this way) - but based on that analysis, it seems plausible
           | they did a bad job and were actually based out of Eastern
           | Europe.
        
         | Rhea_Karty wrote:
         | Made a more detailed write-up on this:
         | https://rheaeve.substack.com/p/xz-backdoor-times-damned-time...
        
       | CanaryLayout wrote:
       | Well isn't this an interesting commit. He finished his inject
       | macro to compose the payload at build, so now he can start
       | clearing up the repo so none of that shit gets seen when cruising
       | through it.
       | 
       | https://git.tukaani.org/?p=xz.git;a=commitdiff;h=4323bc3e0c1...
        
         | astrange wrote:
         | That's not what gitignore does. I can't think of a way it would
         | let you hide this exploit.
        
           | zamalek wrote:
           | Accidentally committing it.
        
       | zeehio wrote:
       | On Ubuntu there is a bug report asking to sync the 5.6 version
       | from Debian experimental
       | https://bugs.launchpad.net/ubuntu/+source/xz-utils/+bug/2055...
        
       | Retr0id wrote:
       | The `pack`[0] compression utility that reached the HN front page
       | the other day[1] is setting off my alarm bells right now. (It was
       | at the time too, but now doubly so)
       | 
       | It's written in Pascal, and the only (semi-)documented way to
       | build it yourself is to use a graphical IDE, and pull in pre-
       | compiled library binaries (stored in the git repo of a dependency
       | which afaict Pack is the only dependent of - appears to be
       | maintained by the same pseudonymous author but from a different
       | account).
       | 
       | I've opened an issue[2] outlining my concerns. I'm certainly not
       | accusing them of having backdoored binaries, but _if_ I was
       | setting up a project to be deliberately backdoorable, it'd look
       | a lot like this.
       | 
       | [0] https://pack.ac/
       | 
       | [1] https://news.ycombinator.com/item?id=39793805
       | 
       | [2] https://github.com/PackOrganization/Pack/issues/10
        
       | jum4 wrote:
       | Maybe @JiaT75 was forced to do it. Maybe someone has more
       | personal contact with him and can check how he is doing.
        
       | rossant wrote:
       | Incredible. It's like discovering that your colleague of 2 years
       | at the secret nuclear weapons facility is a spy for another
       | country, covering his tracks until the very last minute. Feels
       | like a Hollywood movie is coming up.
       | 
       | Should we start doing background checks on all committers to such
       | critical IT infrastructure?
        
         | throwaway290 wrote:
         | Not even background check but a foreground check would already
         | help. Like literally, who dis? any identity at all?
         | 
         | Too often maintainers who have no time just blanket approve PRs
         | and see if stuff breaks.
        
         | arter4 wrote:
         | But how? Let's say you're one of 10 maintainers of an open
         | source project. A new user wants to contribute. What do you do?
         | Do you ask them to send you some form of ID? Assuming this is
         | legal and assuming you could ensure the new user is the actual
         | owner of an actual, non counterfeit ID, what do you do? Do you
         | vet people based on their nationality? If so, what nationality
         | should be blackballed? Maybe 3 maintainers are American, 5 are
         | European and 2 are Chinese. Who gets to decide? Or do you
         | decide based on the company they work for?
         | 
         | Open source is, by definition, open. The PR/merge request
         | process is generally meant to accept or refuse commits based on
         | the content (which is why you have a diff), not on the owner.
         | 
         | Building consensus on which commits are actually valid, even in
         | the face of malicious actors, is a notoriously difficult
         | problem. Byzantine fault tolerance can be achieved with a 2/3 +
         | 1 majority, but if anyone can create new identities and have
         | them join the system (Sybil attack) you're going to have to do
         | things differently.
        
       | pdimitar wrote:
       | This was only a matter of time. Open source projects are under-
       | staffed, maintainers are overworked and burned out, and everyone
       | relies on the goodwill of all actors.
       | 
       | Obviously a bad actor will make use of these conditions and the
       | assumption of good will.
       | 
       | We need automated tooling to vet for stuff like this. And maybe
       | migrate away from C/C++ while we are at it because they don't
       | make such scanning easy at all.
        
       | Epa095 wrote:
       | I hope Lasse Collin is doing OK! Here is an older message from
       | him [1]:
       | 
       | "I haven't lost interest but my ability to care has been fairly
       | limited mostly due to longterm mental health issues but also due
       | to some other things. Recently I've worked off-list a bit with
       | Jia Tan on XZ Utils and perhaps he will have a bigger role in the
       | future, we'll see.
       | 
       | It's also good to keep in mind that this is an unpaid hobby
       | project. "
       | 
       | GitHub (Microsoft) is in a unique position to figure out whether
       | his account was hacked or not, and to find a way to reach him. I
       | hope they reach out and offer him some proper support! Economic
       | support (if that's needed), or just help clearing his name.
       | 
       | This is another tale of how we are building multi-trillion-dollar
       | industries on the back of unpaid volunteers. It's not GitHub's
       | 'job', and many other organisations have benefited even more from
       | Lasse's work, but they are in a unique position, and it would be
       | literally pocket change for them.
       | 
       | 1:https://www.mail-archive.com/xz-devel@tukaani.org/msg00567.h...
        
         | syslog wrote:
         | Relevant xkcd:
         | 
         | https://xkcd.com/2347/
        
         | cbolton wrote:
         | In a movie his mental health issues would likely have been
         | caused intentionally by the attacker, setting the stage for the
         | mole to offer to step in just at the right time. Seems a bit
         | far fetched in this case though for what looks like a
         | tangential attack.
        
           | maerF0x0 wrote:
            | or
            | 
            | > Recently I've worked off-list a bit with Jia Tan on XZ
            | > Utils and perhaps he will have a bigger role in the
            | > future, we'll see.
            | 
            | is actually Jia Tan having him tied up in a basement and
            | posing as him. State actors can do that kind of thing.
        
           | deanresin wrote:
           | In a movie, he was killed by foreign state actors, and his
           | identity assumed by the foreign state hacker. Actually,
           | someone should check on him.
        
         | delfinom wrote:
         | He came on IRC, he seemed ok. He did some cleanup of access and
         | signed off for easter.
        
           | farmdve wrote:
           | I mean, he was right at least. Jia Tan did have a bigger
           | role.
        
           | 400thecat wrote:
           | which IRC channel ?
        
             | rkta wrote:
             | The official channel for the project.
        
         | slavik81 wrote:
         | Lasse appears to be active and working on undoing the sabotage.
         | https://git.tukaani.org/?p=xz.git;a=blobdiff;f=CMakeLists.tx...
        
       | zh3 wrote:
       | Comment from Andres Freund on how and why he found it [0] and
       | more information on the LWN story about the backdoor. Recommend
       | people read this to see how close we came (and think about what
       | this is going to mean for the future).
       | 
       | [0] https://lwn.net/Articles/967194/
        
         | eBombzor wrote:
         | That man deserves a Nobel Prize
        
       | itsTyrion wrote:
       | that's... creative. and patient. 11/10 concerning - now I'm
       | wondering how many other projects could have shit like this in
       | them or added right as I'm writing this _shudder_
        
       | kzrdude wrote:
       | Jia Tan "cleaned up" in all their ZSTD branches some hours ago,
       | probably hiding something
       | https://github.com/JiaT75/zstd/branches/all
        
         | zamalek wrote:
         | Bad move. Destroying evidence is a felony.
        
           | cypress66 wrote:
           | If you are this deep into it, it doesn't matter.
        
           | delfinom wrote:
           | If only you could prosecute people in adversarial countries
           | for a felony, lol.
        
             | maerF0x0 wrote:
             | You can if you can get them extradited (from any country,
             | not just their home country).
        
           | JackSlateur wrote:
            | Not everywhere, and only if _you_ can prove that there was
            | evidence :)
        
       | dfgdfg34545456 wrote:
       | chmod u+x and running the detect_sh script just runs with no
       | output on my Arch Linux box?
       | 
       | https://www.openwall.com/lists/oss-security/2024/03/29/4
        
         | Hackbraten wrote:
         | Yes, Arch Linux's OpenSSH binary doesn't even link to liblzma,
         | which means your installation is not affected by this
         | particular backdoor.
         | 
         | The authors of the `detect_sh` script didn't have that scenario
         | in mind, so the `ldd` invocation never finds a link and the
         | script bails early without a message.
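         | 
         | The same check can be done by hand (path assumes the usual
         | sshd location):
         | 
         |     ldd /usr/sbin/sshd | grep -i liblzma || echo "sshd not linked against liblzma"
         |     xz --version   # 5.6.0 / 5.6.1 are the backdoored releases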
        
           | dfgdfg34545456 wrote:
           | Thanks!
        
         | 77pt77 wrote:
         | remove the -e option on the script and run it.
         | 
         | Anyway, Arch is not affected because they don't modify openssh
         | to link against any of this nonsense.
        
       | imanhodjaev wrote:
       | I wonder which browsers link liblzma and can this lead to https
       | eavesdropping?
        
       | hcks wrote:
       | It was caught by luck, due to performance degradation. So nobody
       | reads the code - not even once - prior to merging into the
       | upstream supply chain?
        
         | hcks wrote:
         | https://x.com/bl4sty/status/1773780531143925959?s=20
         | 
         | So nobody reads release notes either.
         | 
         | But I'm sure this was a one-off and we're safe now.
        
       | dhx wrote:
       | A mirror of the offending repository created by someone else is
       | available at [1]. GitHub should be keeping the evidence in the
       | open (even if just renamed or archived in a safer format) instead
       | of deleting it/hiding it away.
       | 
       | The offending tarball for v5.6.1 is easier to find, an example
       | being [2].
       | 
       | m4/.gitignore was updated 2 weeks ago to hide build-to-host.m4
       | that is only present in the release tarball and is used to inject
       | the backdoor at build time.[3]
       | 
       | [1] https://git.phial.org/d6/xz-analysis-mirror
       | 
       | [2] https://mirrors.xtom.ee/gentoo/distfiles/9f/xz-5.6.1.tar.gz
       | 
       | [3] https://git.phial.org/d6/xz-analysis-
       | mirror/commit/4323bc3e0...
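       | 
       | That difference is easy to surface mechanically: list what is in
       | the release tarball but absent from the corresponding git tag (a
       | sketch using the mirror and analysis-mirror links above):
       | 
       |     tar -tzf xz-5.6.1.tar.gz | sed 's|^xz-5.6.1/||' | sort > tarball.files
       |     git -C xz-analysis-mirror ls-tree -r --name-only v5.6.1 | sort > git.files
       |     # files only in the tarball: configure, many *.m4, including build-to-host.m4
       |     comm -23 tarball.files git.files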
        
       | 8organicbits wrote:
       | There's good discussion of the timeline here:
       | https://boehs.org/node/everything-i-know-about-the-xz-backdo...
        
       | user20180120 wrote:
       | Why is the Long Range Zip lrzip compression format not used? It
       | gives better compression than xz when using the correct switches.
        
       | squarefoot wrote:
       | State actor or not, let's not ignore that the backdoor has been
       | discovered thanks to the open nature of the projects involved
       | that allowed digging into the code. Just another example like the
       | infamous Borland InterBase backdoor in the early 2000s that
       | remained dormant for years and was discovered months after the
       | source code had been released. If the xz malware authors worked
       | for any corp
       | that produced closed source drivers or blobs that can't be
       | properly audited, we would be fucked; I just hope this is not
       | already happening, because the attack surface in all those
       | devices and appliances out there running closed code is huge.
        
       | Roark66 wrote:
       | Sadly this is exactly one of the cases where open source is much
       | more vulnerable to a state actor sponsored attack than
       | proprietary software. (it is also easier to find such backdoors
       | in OS software but that's BTW)
       | 
       | Why? Well, consider this: to "contribute" to a proprietary
       | project you need to get hired by a company and go through their
       | hiring process. They also have to be hiring in the right team,
       | etc. Your operative has to be in a different country, needs a CV
       | that checks out, passports/IDs are checked, etc.
       | 
       | But to contribute to an OS project? You just need an email
       | address. Your operative sends good contributions until they build
       | trust, then they start introducing backdoors in the part of the
       | code "no one, but them understands".
       | 
       | The cost of such an attack is a lot lower for a state actor, so
       | we have to assume every single OS project with the potential to
       | get backdoored has seen many attempts. (Proprietary software
       | too, but as mentioned, this is much more expensive.)
       | 
       | So what is the solution? IDK, but enforcing certain
       | "understandability" requirements can be a part of it.
        
         | alufers wrote:
         | Is that true? Large companies producing software usually have
         | bespoke infra, which barely anyone monitors. See: the
         | SolarWinds hack. Similarly to the xz compromise, they added a
         | Trojan to the binary artifacts by hijacking the build
         | infrastructure. According to Wikipedia, "around 18,000
         | government and private users downloaded compromised versions",
         | it took almost a year for somebody to detect the trojan.
         | 
         | Thanks to the tiered updates of Linux distros, the backdoor was
         | caught in testing releases, and not in stable versions. So only
         | a very low percentage of people were impacted. Also the whole
         | situation happened because distros used the tarball with a
         | "closed source" generated script, instead of generating it
         | themselves from the git repo. Again proving that it's easier to
         | hide stuff in closed source software that nobody inspects.
         | 
         | Same with getting hired. Don't companies hire cheap
         | contractors from Asia? There it would be easy to sneak in a
         | crooked or even fake person to do some dirty work.
         | Personally, I was even emailed by a guy from China who asked
         | if I was willing to "lend" him my identity so he could work
         | at western companies, and he would share the money with me.
         | Of course I didn't agree, but I'm not sure everybody whose
         | email he found on GitHub said no.
         | 
         | https://en.wikipedia.org/wiki/2020_United_States_federal_gov...
        
         | throwaway7356 wrote:
         | > Well, consider this: to "contribute" to a proprietary
         | project you need to get hired by a company and go through
         | their HR.
         | 
         | Or work for a third-party company that gets access to
         | critical systems without any checks. See for example the
         | incident from 2022 here:
         | https://en.wikipedia.org/wiki/Okta,_Inc.
         | 
         | Or work for a third party that rents critical infrastructure
         | to the company (cloud, SaaS solutions).
        
       | sirsinsalot wrote:
       | I think we have to assume that all community software is a
       | target. The payoff for bad actors is too great.
       | 
       | For every one of these we spot, assume there are two we have not.
        
       | Dribble4633 wrote:
       | Hello,
       | 
       | GitHub just disabled the repo:
       | https://github.com/tukaani-project/xz
       | 
       | Does anyone have an up-to-date fork where we can see the
       | project history?
        
       | _zephyrus_ wrote:
       | Is there any news concerning the payload analysis? Just curious
       | to see if it can be correlated with something I have in my sshd
       | logs (e.g. login attempts with specific RSA keys).
        
       | Randalthorro wrote:
       | Since GitHub disabled the repos, I uploaded all GitHub events
       | from the two suspected users and from their shared project repo
       | as easy-to-consume CSV files:
       | 
       | https://github.com/emirkmo/xz-backdoor-github
       | 
       | For those who want to see the GitHub events (commits, comments,
       | pull_requests, diffs, etc.).
        
       | inevitable112 wrote:
       | Surely the real target of this was Tor (which links liblzma),
       | not random SSH servers.
        
       | KOLANICH wrote:
       | Please note: the changes were made after GitHub enforced 2FA
       | (certainly not for "better security", but to promote FIDO2 and
       | the Windows Hello biometric implementation of FIDO2; see
       | https://codeberg.org/KOLANICH/Fuck-GuanTEEnomo for more info).
       | Until recently (access via the git protocol is currently
       | blocked for my account, I guess for lack of a 2FA setup) it was
       | even possible to push into every repo one has access to using
       | just a single-factor SSH key, without enabling 2FA on the
       | account. As I have warned, nothing will protect you when a
       | backdoor is introduced by a malicious maintainer, or a "smart
       | entrepreneur" who sold his project to an ad company, or a loyal
       | "patriot" living and earning money within reach of some state,
       | or just a powerless man who got an offer he can't refuse. In
       | general, supply chain attacks by "legitimate" maintainers
       | cannot be prevented. "Jia Tan" is just a sockpuppet meant to
       | shield the maintainers from consequences and make it look like
       | they are not involved. They surely are. At least according to
       | the current info, it was they who gave the malicious account
       | permission to publish releases on behalf of the project and
       | access to the repo.
       | 
       | IMHO all maintainers of the backdoored projects who were in any
       | way involved in accepting the malicious changes should be
       | considered accomplices and boycotted. We don't need evidence of
       | their liability; it is they who need to maintain their
       | reputation, and we are free to make our decisions based on it.
       | Even if they were hacked themselves, it is not our problem, it
       | is their problem. Our problem is to keep ourselves safe. It may
       | feel "unjust" to ruin a person's reputation because he may have
       | been cheated or hacked... but if a person can be cheated or
       | hacked, why should he/she have as good a reputation as everyone
       | else?! So it makes a lot of sense to simply exclude and replace
       | everyone for whom there is evidence of compromise, whether
       | through negligence or malice. But FOSS is a do-ocracy serving
       | products at dumping prices ($0, free of charge), and for the
       | majority backdoored software is completely acceptable, given
       | that they get it free of charge. And powerful actors who can
       | afford to pay for software will just hire devs to develop their
       | private versions, while letting the public pay $0 for the free
       | versions and exploiting the backdoors placed into them. In
       | other words, a complete market failure.
       | 
       | I think that:
       | 
       | 1. The xz project must be shut down completely: projects should
       | stop using it as a dependency, distros should exclude it, and
       | everyone should boycott it. The LZMA algorithm was developed by
       | Igor Pavlov in the 7z project, but somehow liblzma ended up
       | being developed and maintained by unrelated folks. liblzma
       | should be developed as part of the 7z project, taking no code
       | from xz other than a trivial API-compatibility adapter.
       | 
       | 2. Projects created by the compromised authors should be
       | boycotted.
       | 
       | 3. Other projects touched by the compromised devs/maintainers
       | should be audited.
       | 
       | 4. All projects using autotools should be audited and must
       | replace autotools with cmake/meson. Autotools is a piece of
       | shit, completely incomprehensible. No surprise it was used to
       | hide a backdoor - in my experience in FOSS, nobody likes to
       | touch its scripts.
       | 
       | 5. No project should be built from release tarballs; projects
       | should be built from git directly. Full SHA-256 support in git
       | and in the git forges (GitHub, GitLab, Codeberg, sr.ht) should
       | be accelerated to mitigate attacks that use hash collisions to
       | replace approved commits (I guess the needed randomness can be
       | concealed from a reviewer's eye in binary resource files, like
       | pictures).
        
       | throwaway67743 wrote:
       | It's always Debian, like last time, when they removed entropy
       | from the OpenSSL RNG (breaking SSH keys) because of a Valgrind
       | warning.
        
       | zingelshuher wrote:
       | Why isn't he identified personally? Very likely he is
       | 'contributing' to other projects under different accounts.
        
       | Decabytes wrote:
       | So when are we going to stop pretending that OSS
       | maintainers/projects are reaping what they sow when they "work
       | for free" and give their source code away under OSS licenses
       | while large companies profit off of them? If they were paid
       | more (or in some cases actually paid at all), they could afford
       | to quit their day jobs, reducing burnout; they could hire a
       | team of trusted, vetted devs instead of relying on the goodwill
       | of strangers who step up "just to help out"; and they could pay
       | security researchers to vet their code.
       | 
       | Turns out burned-out maintainers are a great attack vector, and
       | if you are willing to play the long game you can ingratiate
       | yourself with the community through seemingly innocuous
       | contributions.
        
         | kortilla wrote:
         | Paid people get burnt out as well and they are just as likely
         | to accept free help as an unpaid person.
        
           | Decabytes wrote:
           | That's true, but many of these maintainers work a day job on
           | top of doing the open source work precisely because the open
            | source work doesn't pay the bills. If they could get those
            | 40 hours of their time back, I think many would appreciate
            | it.
        
         | qwery wrote:
         | > So when are we going to stop pretending ...
         | 
         | I'm not sure that we are. Doesn't everybody know that
         | developing/maintaining free software is largely thankless work,
         | with little to no direct recompense?
         | 
         | I don't think moving towards unfree software is a good way to
         | make free software more secure. It shouldn't be a surprise that
         | proprietary software is less likely to be exploited _in this
         | way_ simply because they don't accept any patches from outside
         | of the team. What you want is more people that understand and
         | care about free software and _low_ barriers to getting
         | involved.
        
           | Decabytes wrote:
           | > Doesn't everybody know that developing/maintaining free
           | software is largely thankless work, with little to no direct
           | recompense?
           | 
           | No I don't think that is a universally acknowledged feeling.
            | Numerous maintainers have described receiving entitled
            | demands from users, as if they were paying customers of the
            | open source projects. Georges Stavracas' interview on the
           | Tech over Tea podcast^1 describes many such experiences.
           | Similarly, when Aseprite transitioned its license^2 to secure
           | financial stability, it faced backlash from users accusing
           | the developer of betraying and oppressing the community.
           | 
           | On the flipside, if everyone truly does know this is the
           | case, then it's a shame that so many people know, and yet are
           | unwilling to financially support developers to change that.
           | See all of the developers for large open source projects who
           | have day jobs, or take huge pay cuts to work on open source
           | projects. I get that not everyone can support a project
           | financially, but I've personally tried to break that habit of
           | expecting everything I use to be free, and go out of my way
           | to look for donation buttons for project maintainers, and
           | raise awareness during fundraisers. Now if only I could
           | donate directly to Emacs development... I'd encourage other
           | people to do the same.
           | 
           | > What you want is more people that understand and care about
           | free software and low barriers to getting involved.
           | 
            | This is tough. For example, initiatives like DigitalOcean's
            | Hacktoberfest are designed to do just this. It is a good
            | idea in theory (submit four pull requests and win a
            | t-shirt), but not in practice. The event has been
            | criticized for inadvertently encouraging superficial
            | contributions, such as minor text edits or trivial commits,
            | which burden maintainers^3 and have caused many to simply
            | archive their repos for the month of October.
           | 
           | So, while there's a recognition of the need for more people
           | who understand and value free software, along with lower
           | barriers to entry, the current state of affairs often falls
           | short. The path forward should involve not just increasing
           | awareness and participation but also providing meaningful
           | support and compensation to maintainers. By doing so, we can
           | foster a more sustainable, secure, and vibrant open source
           | community. Or at least that is how I feel...
           | 
            | 1. https://www.youtube.com/watch?v=kO0V7BE1bEo
            | 2. https://github.com/aseprite/aseprite/issues/1242
            | 3. https://twitter.com/shitoberfest?lang=en
        
       | pinley wrote:
       | https://imgur.com/WGaK3Tn
        
       | 7ero wrote:
       | is this sev0?
        
       | jaromilrojo wrote:
       | This is more proof that systemd is an anti-pattern for
       | security: with its sprawling and ever-growing web of
       | dependencies it extends the attack surface by orders of
       | magnitude, and once it is embraced not even large distro
       | communities can defend you from that.
       | 
       | A malicious code injection in upstream xz-utils became a vector
       | for remote exploitation of the ssh daemon, due to a dependency
       | on systemd for notifications and due to systemd's dlopen() of
       | the liblzma library (CVE-2024-3094). The resulting build
       | interferes with authentication in sshd via systemd.
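       | 
       | For reference, the systemd feature that the distro sshd patches
       | pull in here is readiness notification. That protocol is just a
       | datagram sent to the socket named in NOTIFY_SOCKET (see
       | sd_notify(3)), so it could be spoken without linking
       | libsystemd, and hence without liblzma, at all. A rough Python
       | sketch (the function name is mine, not systemd's):
       | 
       |   import os, socket
       | 
       |   def notify_ready():
       |       # Minimal sd_notify()-style readiness ping over the
       |       # documented NOTIFY_SOCKET protocol; no libsystemd.
       |       addr = os.environ.get("NOTIFY_SOCKET")
       |       if not addr:
       |           return False  # not run by a service manager
       |       if addr.startswith("@"):
       |           # '@' means an abstract-namespace socket (NUL prefix)
       |           addr = "\0" + addr[1:]
       |       s = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
       |       try:
       |           s.sendto(b"READY=1", addr)
       |       finally:
       |           s.close()
       |       return True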
        
         | saagarjha wrote:
         | This isn't Twitter you don't have to use hashtags
        
           | heptazoid wrote:
           | This isn't Xitter, you don't have to tell people how to
           | write.
        
         | acdha wrote:
         | Please take the systemd trolling to Reddit. They likely
         | targeted xz specifically because it's so widely used, but
         | there are dozens of other libraries which are potential
         | candidates for an attack on sshd, to say nothing of
         | everything else that depends on liblzma directly, with no
         | relation to systemd (e.g. dpkg).
         | 
         | Rather than distracting, think about how the open source
         | projects you use would handle an attack like this, where
         | someone volunteers to help a beleaguered maintainer and
         | spends time helpfully taking on more responsibilities before
         | trying to weaken something.
        
           | jaromilrojo wrote:
           | You are distracting from facts with speculations and trolling
           | FUD. I refer to what is known and has happened, you are
           | speculating on what is not known.
        
             | acdha wrote:
             | Your claim is an appeal to emotion trying to build support
             | for a position the Linux community has largely rejected.
             | Starting with the goal rather than looking unemotionally at
             | the facts means that you're confusing your goal with the
             | attackers' - they don't care about a quixotic attempt to
             | remove systemd, they care about compromising systems.
             | 
             | Given control of a package which is on most Linux systems
             | and a direct dependency of many things which are not
             | systemd - run apt-cache rdepends liblzma5! - they can
             | choose whatever they want to accomplish that goal. That
             | could be things like a malformed archive which many things
             | directly open or using something similar to this same
             | hooking strategy to compromise a different system
             | component. For example, that includes things like kmod and
             | dpkg so they could target sshd through either of those or,
             | if their attack vector wasn't critically dependent on SSH,
             | any other process running on the target. Attacking systemd
             | for this is like saying Toyotas get stolen a lot without
             | recognizing that you're just describing a popularity
             | contest.
        
         | throwaway7356 wrote:
         | > systemd's call to dlopen() liblzma library (CVE-2024-3094)
         | 
         | That's technically wrong, but no surprise. Anti-systemd trolls
         | usually don't understand technical details after all.
        
           | jaromilrojo wrote:
           | I have been experiencing such ad-hominem attacks for more
           | than 10 years.
           | 
           | You are so quick to label an identifiable professional a
           | troll, while hiding behind a throwaway identity, that I am
           | confident readers will be able to judge for themselves.
           | 
           | Meanwhile, let us be precise and add more facts:
           | https://github.com/systemd/systemd/pull/31550
           | 
           | Our community is swamped by people like you, so I will
           | refrain from answering further provocations, believing I
           | have provided enough detail to back my assertion.
        
             | throwaway7356 wrote:
             | That MR is not part of any released version of systemd.
             | That is simple to verify: there has been no new systemd
             | release since.
             | 
             | So much for the "facts".
             | 
             | As for trolling: just look at the usual contributions from
             | your community like
             | https://twitter.com/DevuanOrg/status/1619013961629995008
             | Excellent work with the ad-hominem attacks there.
        
               | lamp987 wrote:
               | It's already accepted and merged into master so it will
               | be released in a future systemd release. What's your
               | point?
               | 
               | LP in da house?
        
               | throwaway7356 wrote:
               | The MR linked to would actually prevent the backdoor from
               | working, but that doesn't stop some people from claiming
               | it enables the backdoor.
               | 
               | But as already said: no surprise given where the comment
               | comes from.
        
         | geggo98 wrote:
         | Actually you have a point. A collection of shell scripts
         | (like the classical init systems) obviously has a smaller
         | attack surface. In this case the attacker used some systemd
         | integration code to attack the ssh daemon, so sshd without
         | systemd integration is safe against this specific attack.
         | 
         | In general, I'm not convinced that systemd makes things less
         | secure. I suspect the attacker would just have used a
         | different vector if there had been no systemd integration.
         | After all, it looks like the attacker was also trying to
         | introduce exploits into other libraries, like zstd.
         | 
         | Still, I would appreciate it if the systemd developers found
         | better protection against supply chain attacks.
        
           | jaromilrojo wrote:
           | I really appreciate your tone and dialectical reasoning,
           | thanks for your reply. And yes, as simple as it sounds, I
           | believe shell scripts help a lot in maintaining
           | mission-critical tools. One hands-on example is
           | https://dyne.org/software/tomb, where I took this approach
           | as a replacement for whole-disk encryption, which nowadays
           | also depends on systemd-cryptsetup.
        
       | the_errorist wrote:
       | Looks like Lasse Collin has commented on LKML:
       | https://lkml.org/lkml/2024/3/30/188
       | 
       | Also, some info here: https://tukaani.org/xz-backdoor/
        
         | jwilk wrote:
         | Or if you can't stand the lkml.org UI:
         | 
         | https://lore.kernel.org/lkml/20240330144848.102a1e8c@kaneli/
        
       | hypnagogic wrote:
       | In the future: an automated `diff` or other A/B check to see
       | whether the tarball matches the source repo (and if not,
       | auto-flag it with a mismatch warning). Is that feasible to
       | implement?
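       | 
       | It seems feasible. A rough sketch (paths, names, and the tag
       | argument are made up for illustration) that reports files which
       | exist only in the tarball or which differ from the tagged git
       | tree; the tarball-only generated build scripts where the xz
       | payload's trigger hid would land in the first bucket:
       | 
       |   import hashlib, io, pathlib, subprocess
       |   import sys, tarfile, tempfile
       | 
       |   def tree_hashes(root):
       |       # Map relative path -> sha256 of every regular file.
       |       out = {}
       |       for p in pathlib.Path(root).rglob("*"):
       |           if p.is_file():
       |               h = hashlib.sha256(p.read_bytes()).hexdigest()
       |               out[str(p.relative_to(root))] = h
       |       return out
       | 
       |   def compare(tarball, repo, tag):
       |       # Render the tagged tree as a tar stream via git archive.
       |       cmd = ["git", "-C", repo, "archive", "--format=tar", tag]
       |       tagged = subprocess.run(cmd, check=True,
       |                               capture_output=True).stdout
       |       with tempfile.TemporaryDirectory() as a, \
       |            tempfile.TemporaryDirectory() as b:
       |           with tarfile.open(tarball) as tf:
       |               tf.extractall(a)  # use filter="data" on 3.12+
       |           with tarfile.open(fileobj=io.BytesIO(tagged)) as tf:
       |               tf.extractall(b)
       |           # release tarballs unpack into one top-level dir
       |           src = next(pathlib.Path(a).iterdir())
       |           t, g = tree_hashes(src), tree_hashes(b)
       |       for path in sorted(t.keys() - g.keys()):
       |           print("only in tarball:", path)
       |       for path in sorted(t.keys() & g.keys()):
       |           if t[path] != g[path]:
       |               print("differs:", path)
       | 
       |   if __name__ == "__main__":
       |       # e.g.: compare("xz-5.6.1.tar.gz", "xz-clone", "v5.6.1")
       |       compare(*sys.argv[1:4])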
        
       | qxfys wrote:
       | So, it's been almost 24 hours since I read this yesterday. Is
       | it confirmed that Jia Tan is the perpetrator? Do we know who
       | he/she really is? Or are we going to live the rest of our lives
       | knowing only the pseudonym? Just like Satoshi Nakamoto did to
       | us. ;)
        
       | mise_en_place wrote:
       | This is why we never upgrade software versions. I've been asked
       | by our customers why we use such an old AMI version. This is why.
        
         | gkoberger wrote:
         | This feels like the exact opposite of the takeaway you should
         | have. Old software isn't inherently more secure; you're missing
         | thousands of security and bug fixes. Yes, this was bad, but
         | look how quickly the community came together to catch it and
         | fix it.
         | 
         | It only took 6 days for it to be found and fixed.
        
       | hypnagogic wrote:
       | - * _ring ring_ * - "Hello?" - "It's Lasse Collin." - "Why are
       | you collin me? Why not just use the backdoor?"
        
       | nolist_policy wrote:
       | Debian is considering the possibility that their infrastructure
       | may be compromised [1].
       | 
       | [1] https://fulda.social/@Ganneff/112184975950858403
        
       | evilmonkey19 wrote:
       | Which OSes are affected by this compromise? Is Ubuntu affected?
        
       | costco wrote:
       | Anyone have any idea what the code in the malicious liblzma_la-
       | crc64-fast.o is actually doing? It's difficult to follow
       | statically.
        
       | snickerer wrote:
       | When I search for "digital masquerade" on Google, the first
       | result is a book with this title by an author named Jia Tan. I
       | assume that is where the attackers got their fake name, or they
       | thought using this author's name was a joke.
        
       | wowserszzzzz wrote:
       | Wow
       | 
       | https://ubuntu.com/security/notices/USN-5378-2
        
       | wowserszzzzzz wrote:
       | Wow
       | 
       | https://ubuntu.com/security/notices/USN-5378-2
        
       | ptx wrote:
       | Python for Windows bundles liblzma from this project, but it
       | appears to be version 5.2.5 [0] vendored into the Python
       | project's repo on 2022-04-18 [1], so that should be fine, right?
       | 
       | [0]
       | https://github.com/python/cpython/blob/main/PCbuild/get_exte...
       | 
       | [1] https://github.com/python/cpython-source-deps/tree/xz
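       | 
       | On Linux builds the _lzma module typically links the system
       | liblzma rather than a vendored copy, so if you want to check
       | what your interpreter actually loads, a rough ctypes probe of
       | liblzma's public lzma_version_string() API looks like this (the
       | helper name is mine; on Windows the vendored liblzma is, as far
       | as I know, built into _lzma.pyd, so there may be no separate
       | DLL to query):
       | 
       |   import ctypes, ctypes.util
       | 
       |   def liblzma_version():
       |       # Ask the loaded liblzma for its version via the public
       |       # lzma_version_string() API (returns e.g. "5.2.5").
       |       name = ctypes.util.find_library("lzma") or "liblzma.so.5"
       |       lib = ctypes.CDLL(name)
       |       lib.lzma_version_string.restype = ctypes.c_char_p
       |       return lib.lzma_version_string().decode()
       | 
       |   if __name__ == "__main__":
       |       v = liblzma_version()
       |       # 5.6.0 and 5.6.1 are the releases that shipped the
       |       # backdoor; the version alone doesn't prove the payload
       |       # is active, though.
       |       bad = v in ("5.6.0", "5.6.1")
       |       msg = "- check further!" if bad else "- ok"
       |       print("liblzma", v, msg)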
        
       ___________________________________________________________________
       (page generated 2024-03-30 23:02 UTC)