[HN Gopher] XZ backdoor: "It's RCE, not auth bypass, and gated/u...
___________________________________________________________________
XZ backdoor: "It's RCE, not auth bypass, and gated/unreplayable."
Author : junon
Score : 346 points
Date : 2024-03-30 18:19 UTC (4 hours ago)
(HTM) web link (bsky.app)
(TXT) w3m dump (bsky.app)
| junon wrote:
| EDIT: Here's some more RE work on the matter. Has some symbol
| remapping information that was extracted from the prefix trie the
| backdoor used to hide strings. Looks like it tried to hide itself
| even from RE/analysis, too.
|
| https://gist.github.com/smx-smx/a6112d54777845d389bd7126d6e9...
|
| Full list of decoded strings here:
|
| https://gist.github.com/q3k/af3d93b6a1f399de28fe194add452d01
|
| --
|
| For someone unfamiliar with openssl's internals (like me): The N
| value, I presume, is pulled from the `n` field of `rsa_st`:
|
| https://github.com/openssl/openssl/blob/56e63f570bd5a479439b...
|
| Which is a `BIGNUM`:
|
| https://github.com/openssl/openssl/blob/56e63f570bd5a479439b...
|
| Which appears to be a variable-length type.
|
| The backdoor pulls this from the certificate received from a
| remote attacker, attempts to decrypt it with ChaCha20, and if it
| decrypts successfully, passes the result to `system()`, which is
| essentially a simple wrapper that executes a line of shell script
| as whichever user the process is currently running as.
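|
| To make the flow concrete, here is a rough C sketch of what the
| hooked code path seems to boil down to. This is my own
| illustration, not the backdoor's actual code;
| verify_and_decrypt() is a hypothetical stand-in for the
| obfuscated signature check plus ChaCha20 step:
|
|     #include <openssl/bn.h>
|     #include <openssl/rsa.h>
|     #include <stdlib.h>
|
|     /* stub standing in for the obfuscated Ed448 signature check
|        + ChaCha20 decrypt; the real thing succeeds only for data
|        prepared with the attacker's private key */
|     static int verify_and_decrypt(const unsigned char *in, size_t inlen,
|                                   unsigned char *out, size_t *outlen) {
|         (void)in; (void)inlen; (void)out; (void)outlen;
|         return -1;
|     }
|
|     static void on_rsa_public_decrypt(const RSA *rsa) {
|         const BIGNUM *n;
|         unsigned char buf[1024], cmd[1024];
|         size_t cmdlen = sizeof cmd - 1;
|
|         RSA_get0_key(rsa, &n, NULL, NULL); /* n is variable length */
|         if ((size_t)BN_num_bytes(n) > sizeof buf)
|             return;
|         size_t len = BN_bn2bin(n, buf);    /* attacker data smuggled
|                                               in the modulus */
|         if (verify_and_decrypt(buf, len, cmd, &cmdlen) == 0) {
|             cmd[cmdlen] = '\0';
|             system((char *)cmd);           /* runs as sshd's user */
|         }
|         /* otherwise fall through to normal RSA processing */
|     }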
|
| If I'm understanding things correctly, this is worse than a
| public key bypass (which I, and I think a number of others,
| presumed it might be) - a public key bypass would, in theory,
| only allow you access as the user you're logging in with, and
| presumably hardened SSH configurations would disallow root
| access.
|
| However, since this is an RCE in the context of e.g. the sshd
| process itself, sshd running as root would allow the payload
| itself to run as root.
|
| Wild. This is about as bad as a widespread RCE can realistically
| get.
| jeroenhd wrote:
| > However, since this is an RCE in the context of e.g. the sshd
| process itself, sshd running as root would allow the payload
| itself to run as root.
|
| With the right sandboxing techniques, SELinux and mitigations
| could prevent the attacker from doing anything with root
| permissions. However, applying a sandbox to an SSH daemon
| effectively is very difficult.
| junon wrote:
| Right, though if I'm understanding correctly, this is
| targeting openssl, not just sshd. So there's a larger set of
| circumstances where this could have been exploited. I'm not
| sure if it's yet been confirmed that this is confined only to
| sshd.
| jeroenhd wrote:
| The exploit, as currently found, seems to target OpenSSH
| specifically. It's possible that everything involving xz
| has been compromised, but I haven't read any reports that
| there is a path to malware execution outside of OpenSSH.
|
| A quote from the first analysis that I know of
| (https://www.openwall.com/lists/oss-security/2024/03/29/4):
|
| > Initially starting sshd outside of systemd did not show
| the slowdown, despite the backdoor briefly getting invoked.
| This appears to be part of some countermeasures to make
| analysis harder.
|
| > a) TERM environment variable is not set
|
| > b) argv[0] needs to be /usr/sbin/sshd
|
| > c) LD_DEBUG, LD_PROFILE are not set
|
| > d) LANG needs to be set
|
| > e) Some debugging environments, like rr, appear to be
| detected. Plain gdb appears to be detected in some
| situations, but not others
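|
| In plain C, gates (a) through (d) amount to something like
| this (my own sketch of the reported behaviour; the real checks
| are heavily obfuscated, and the debugger detection in (e) is
| omitted):
|
|     #include <stdlib.h>
|     #include <string.h>
|
|     static int should_activate(const char *argv0) {
|         if (getenv("TERM"))                              /* (a) */
|             return 0;
|         if (strcmp(argv0, "/usr/sbin/sshd") != 0)        /* (b) */
|             return 0;
|         if (getenv("LD_DEBUG") || getenv("LD_PROFILE"))  /* (c) */
|             return 0;
|         if (!getenv("LANG"))                             /* (d) */
|             return 0;
|         return 1; /* looks like sshd started by systemd */
|     }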
| onedognight wrote:
| > With the right sandboxing techniques, SELinux and
| mitigations could prevent the attacker from doing anything
| with root permissions.
|
| Please review this commit[0] where the sandbox detection was
| "improved".
|
| [0] https://git.tukaani.org/?p=xz.git;a=commitdiff;h=328c52da
| 8a2...
| ronsor wrote:
| Well, the definition of "improve" depends on one's goals.
| Denvercoder9 wrote:
| That one's a separate attack vector, which is seemingly
| unused in the sshd attack. It only disables sandboxing of
| the xzdec(1) utility, which is not used in the sshd attack.
| formerly_proven wrote:
| Which strongly suggests that they planned and/or executed
| more backdoors via Jia Tan's access.
| pja wrote:
| I guess xzdec was supposed to sandbox itself where
| possible, so they disabled the sandbox feature check in
| the build system so that future payloads passed to xzdec
| wouldn't have to escape the sandbox in order to do
| anything useful?
|
| Sneaky.
| glandium wrote:
| Oh, that one is interesting, because it only breaks it in
| cmake.
| sn wrote:
| I wonder if there is anything else cmake related that
| should be looked at.
|
| Wasn't cmake support originally added to xz to use with
| Windows and MSVC?
| glandium wrote:
| But that's a check for a Linux feature. So the more
| interesting question would be, what in the Linux world
| might be building xz-utils with cmake, I guess using
| ExternalProject_Add or something similar.
| dolmen wrote:
| I can't blame anyone who missed that dot hidden at the
| beginning of the line.
|
| https://git.tukaani.org/?p=xz.git;a=commitdiff;h=f9cf4c05ed
| d...
| Muromec wrote:
| I specifically opened this diff to search for the sneaky
| dot, knowing it's there, and wasn't able to find it until
| I checked the revert patch.
| gouggoug wrote:
| For people like me whose C knowledge is poor, can you
| explain why this dot is significant? What does it do in
| actuality?
| ezekg wrote:
| As far as I can tell, the check is to see if a certain
| program compiles, and if so, disable something. The dot
| makes it so that it always fails to compile and thus
| always disables that something.
| Denvercoder9 wrote:
| It's part of a test program used for feature detection
| (of a sandboxing functionality), and causes a syntax
| error. That in turn causes the test program to fail to
| compile, which makes the configure script assume that the
| sandboxing function is unavailable, and disables support
| for it.
| loumf wrote:
| You are looking at a CMake file, not C. The C code is in a
| string that is being passed to a function called
| `check_c_source_compiles()`, and this dot makes that code
| fail to compile when it should have compiled -- which sets
| a boolean incorrectly, which presumably makes the build do
| something it should not do.
| hellcow wrote:
| Another reason to adopt OpenBSD style pledge/unveil in Linux.
| somat wrote:
| Would that help? sshd, by design, opens shells. The
| backdoor payload basically opens a shell - that is, the
| very thing that sshd has to do.
|
| The pledge/unveil system is pretty great, but my
| understanding is that it does not do anything that the
| equivalent Linux interfaces (seccomp, I think) cannot do.
| It is just a simplified/saner interface to the same problem
| of "how can a program notify the kernel what its scope is?"
| The main advantage pledge/unveil brings to the table is
| that it is easy to use and cannot be turned off; optional
| security isn't.
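|
| For reference, this is the entire ceremony pledge asks of a
| program (OpenBSD-specific; a sketch, with the promise
| strings chosen purely for illustration):
|
|     #include <unistd.h>
|
|     int main(void) {
|         /* after setup, promise to only do stdio and read
|            files; any other syscall kills the process */
|         if (pledge("stdio rpath", NULL) == -1)
|             return 1;
|         /* ... the rest runs with reduced scope ... */
|         return 0;
|     }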
| semiquaver wrote:
| Could you explain how SELinux could _ever_ sandbox against
| RCE in sshd? Its purpose is to grant login shells to
| arbitrary users, after all.
| ajross wrote:
| It's possible to spawn an sshd as an unprivileged or
| partially-capabilitized process. Such a sandbox isn't the
| default deployment, but it's done often enough and would
| work as designed to prevent privilege elevation above the
| sshd process.
| admax88qqq wrote:
| How can sshd spawn interactive sessions for other users
| if it's sandboxed?
| ajross wrote:
| Plausibly by having the set-user-ID capability but not the
| others an attacker might need.
|
| But in the more common case it just doesn't: you have an
| sshd running on a dedicated port for the sole purpose of
| running some service or another under a specific
| sandboxed UID. That's basically the github business
| model, for example.
| kbolino wrote:
| SELinux does not rely on the usual UID/GID to determine
| what a process can do. System services, even when running
| as "root", are running as confined users in SELinux.
| Confined root cannot do anything which SELinux policy
| does not allow it to do. This means you can allow sshd to
| create new sessions for non-root users but not to do the
| other things which unconfined root would be able to do.
| This is still a lot of power but it's not the godlike
| access which a person logged in as root has.
| kbolino wrote:
| Even though sshd must run as root (in the usual case), it
| doesn't need unfettered access to kernel memory, most of
| the filesystem, most other processes, etc. However, you
| could only really sandbox sshd-as-root. In order for sshd
| to do its job, it does need to be able to masquerade as
| arbitrary non-root users. That's still pretty bad but
| generally not "undetectably alter the operating system or
| firmware" bad.
| saltcured wrote:
| You could refactor sshd so most network payload processing
| is delegated to sandboxed sub-processes. Then an RCE there
| has fewer capabilities to exploit directly. But, I think you
| would have to assume an RCE can cause the sub-process to
| produce wrong answers. So if the answers are authorization
| decisions, you can transitively turn those wrong answers
| into RCE in the normal login or remote command execution
| context.
|
| But, the normal login or remote command execution is at
| least audited. And it might have other enforcement of which
| accounts or programs are permitted. A configuration
| disallowing root could not be bypassed by the sub-process.
|
| You could also decide to run all user logins/commands under
| some more confined SE-Linux process context. Then, the
| actual user sessions would be sandboxed compared to the
| real local root user. Of course, going too far with this
| may interfere with the desired use cases for SSH.
| asveikau wrote:
| I thought that OpenSSH's sshd already separates itself
| into a privileged process and a low-privilege process. I
| don't know any details about that. Here's what Google
| showed me for that: https://github.com/openssh/openssh-
| portable/blob/master/READ...
| treasy wrote:
| You can definitely prevent a lot of file/executable
| accesses via SELinux by running sshd in the default sshd_t
| or even customizing your own sshd domain and preventing
| sshd from being able to run binaries in its own domain
| without a transition. What you cannot prevent though is
| certain things that sshd _requires_ to function like
| certain capabilities and networking access.
|
| By default sshd has access to all files in
| /home/$user/.ssh/, but that could be prevented by giving
| private keys a new unique file context, etc.
|
| SELinux would not prevent all attacks, but it can mitigate
| quite a few as part of a larger security posture.
| Animats wrote:
| Is there a git diff which shows this going in?
| tempodox wrote:
| Apparently it's not in the original repo, but in a build
| script shipped only in the distribution tarballs.
| joshcryer wrote:
| They also used social engineering to disable fuzzing which
| would have caught the discrepancy:
| https://github.com/google/oss-fuzz/pull/10667
| codetrotter wrote:
| It's pretty funny how a bunch of people came piling
| reaction emojis onto the comments in the PR, _after_ it had
| all become publicly known.
|
| I'm like.. bro, adding reaction emojis after the fact as if
| that makes any sort of difference to anything.
| happosai wrote:
| Honestly, it's harassment at this point.
| qudat wrote:
| Feels almost like tampering with evidence at a crime
| scene
| jijijijij wrote:
| That thread has become an online event and obviously lost
| its original constructive purpose the moment the
| malicious intent became public. The commenters are not
| trying to alter history, they're leaving their mark on a
| historic moment. I mean, the "lgtm" aged like milk and the
| emoji reactions are pretty funny commentary.
| moomoo11 wrote:
| Is the person Jia who did this PR a malicious actor?
| CSMastermind wrote:
| The person who submitted the PR, JiaT75, is.
|
| The person who approved and merged it is not.
| moomoo11 wrote:
| Yeah that's what I am asking. Thanks
| filleokus wrote:
| Would it really have caught it?
| formerly_proven wrote:
| No
| ptx wrote:
| I think they added it in parts over the course of a year or
| two, with each part being plausibly innocent-looking: First
| some testing infrastructure, some test cases with binary test
| data to test compression, updates to the build scripts - and
| then some updates to those existing binary files to put the
| obfuscated payload in place, modifications to the build scripts
| to activate it, etc.
| jtchang wrote:
| So this basically means that to scan for this exploit remotely
| we'd need the attacker's private key, which we don't have. The
| only other option is to run detection scripts locally. Yikes.
| tialaramex wrote:
| One completely awful thing some scanners might choose to do is
| flag any server offering RSA auth (which most SSH servers do,
| and indeed the SecSH RFC says it's Mandatory To Implement) as
| "potentially vulnerable", which would encourage people to do
| password auth instead.
|
| Unless we find that this problem has somehow infested a _lot_
| of real world systems, that seems to me even worse than the
| time similar "experts" decided it was best to demand people
| rotate their passwords every year or so, thereby ensuring the
| real security is reduced while on paper you claim you improved
| it.
| CodesInChaos wrote:
| It might be possible to use timing information to detect this,
| since the signature verification code appears to only run if
| the client public key matches a specific fingerprint.
|
| The backdoor's signature verification should cost around 100us,
| so keys matching the fingerprint should take that much longer
| to process than keys that do not match it. Detecting this
| timing difference should at least be realistic over LAN,
| perhaps even over the internet, especially if the scanner runs
| from a location close to the target. Systems that ban the
| client's IP after repeated authentication failures will
| probably be harder to scan.
|
| (https://bench.cr.yp.to/results-sign.html lists Ed448
| verification at around 400k cycles, which at 4GHz amounts to
| 100us)
| pstrateman wrote:
| However, only probabilistic detection is possible that way,
| and really a 100us difference over the internet would require
| many, many detection attempts to discern.
| Thorrez wrote:
| According to[1], the backdoor introduces a much larger
| slowdown, without backdoor: 0m0.299s, with backdoor:
| 0m0.807s. I'm not sure exactly why the slowdown is so large.
|
| [1] https://www.openwall.com/lists/oss-security/2024/03/29/4
| CodesInChaos wrote:
| The effect of the slowdown on the total handshake time
| wouldn't work well for detection, since without a baseline
| you can't tell if it's slow due to the backdoor, or due to
| high network latency or a slow/busy CPU. The relative
| timing of different steps in the TCP and SSH handshakes on
| the other hand should work, since the backdoor should only
| affect one/some steps (RSA verification), while others
| remain unaffected (e.g. the TCP handshake).
| Thorrez wrote:
| The tweet says "unreplayable". Can someone explain how it's not
| replayable? Does the backdoored sshd issue some challenge that
| the attacker is required to sign?
| candiodari wrote:
| What it does is this: RSA_public_decrypt verifies a signature
| on the client's (I think) host key against a fixed Ed448 key,
| and then, if it verifies, passes the payload to system().
|
| If you send a request to SSH to associate (agree on a key for
| private communications), signed by a specific private key, it
| will send the rest of the request to the "system" call in
| libc, which will execute it via /bin/sh.
|
| So this is quite literally a "shellcode". Except, you know,
| it's on your system.
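|
| The verification gate itself is mechanically simple. In
| OpenSSL's EVP terms it would look roughly like this (a sketch
| of the concept only; the backdoor ships its own embedded
| implementation rather than calling OpenSSL like this):
|
|     #include <openssl/evp.h>
|
|     /* Ed448: 57-byte public key, 114-byte signature. Returns 1
|        only if sig over msg verifies against the baked-in public
|        key, so without the attacker's private key you can
|        neither forge a knock nor replay one over different
|        payload bytes. */
|     int knock_ok(const unsigned char pub[57],
|                  const unsigned char *msg, size_t msglen,
|                  const unsigned char sig[114]) {
|         int ok = 0;
|         EVP_PKEY *k = EVP_PKEY_new_raw_public_key(EVP_PKEY_ED448,
|                                                   NULL, pub, 57);
|         EVP_MD_CTX *c = EVP_MD_CTX_new();
|         if (k && c &&
|             EVP_DigestVerifyInit(c, NULL, NULL, NULL, k) == 1)
|             ok = EVP_DigestVerify(c, sig, 114, msg, msglen) == 1;
|         EVP_MD_CTX_free(c);
|         EVP_PKEY_free(k);
|         return ok;
|     }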
| gavinhoward wrote:
| If that's true, then I am 100% certain that this backdoor is from
| a nation state-level actor.
| junon wrote:
| That's what this feels like. That, or someone who wanted to
| sell this to one. Can't imagine the ABCs are sleeping on this
| one at this point.
| jeroenhd wrote:
| I already felt like this was way too sophisticated for a random
| cybercriminal. It's not like making up fake internet identities
| is very difficult, but someone has pretended to be a good-faith
| contributor for ages, in a surprisingly long-term operation.
| You need some funding and a good reason to pull off something
| like that.
|
| This could also be a ransomware group hoping to break into huge
| numbers of servers, though. Ransomware groups have been getting
| more sophisticated and they will already infiltrate their
| targets for months at a time (to make sure all old backups are
| useless when they strike), so I wouldn't put it past them to
| infiltrate the server authentication mechanism directly.
| TheBlight wrote:
| Whoever this was is going after a government or a crypto
| exchange. Don't think anything else merits this effort.
| delfinom wrote:
| Otoh, maybe they just wanted to create a cryptomining farm.
| Lol.
|
| Don't underestimate the drive some people have to make a
| buck.
| dralley wrote:
| I don't know that they had a singular target necessarily.
| Much like Solarwinds, they could take their pick of
| thousands of targets if this had gone undetected.
| TheBlight wrote:
| I think we can all agree this attacker was sophisticated.
| But why would a government want to own tons of random
| Linux machines that have open sshd mappings? You have to
| expose sshd explicitly in most cloud environments (or on
| interesting networks worthy of attack.) Besides, the
| attacker must've known that if this is all over the
| internet eventually someone is going to notice.
|
| I think the attacker had a target in mind. They were
| clearly focused on specific Linux distros. I'd imagine
| they were after a specific set of sshd bastion
| machine(s). Maybe they have the ability to get on the VPN
| that has access to the bastion(s) but the subset of users
| with actual bastion access is perhaps much smaller and
| more alert/less vulnerable to phishing.
|
| So what's going to be the most valuable thing to hack
| that uses Linux sshd bastions? Something so valuable it's
| worth dedicating ~3 years of your life to it? My best
| guess is a crypto exchange.
| baq wrote:
| > But why would a government want to own tons of random
| Linux machines that have open sshd mappings?
|
| They don't want tons. They want the few important ones.
|
| Turns out it was easiest to get to the important ones by
| pwning tons of random ones.
| TheBlight wrote:
| That still implies there was a target in mind. But also
| they would've had to assume the access would be
| relatively short-lived. This means to me they had
| something specific they wanted to get access to, didn't
| plan to be there long, and weren't terribly concerned
| about leaving a trail of their methods.
| pessimizer wrote:
| Why couldn't they have had 50 or 100 targets in mind, and
| hoped that the exploit would last for at least the month
| (or whatever) they needed to accomplish their multiple,
| unrelated goals?
|
| I think your imagination is telling you a story that is
| prematurely limiting the range of real possibilities.
| icegreentea2 wrote:
| > Something so valuable it's worth dedicating ~3 years of
| your life to it?
|
| This isn't the right mindset if you want to consider a
| state actor, particularly for something like contributing
| to an open source project. It's not like you had to
| physically live your cover life while trying to infiltrate
| a company or something.
|
| Yes, this is a lot of resources to spend, but at the same
| time, even dedicating one whole FTE for 3 years isn't that
| much in resources. It's just salary at that point.
| Ekaros wrote:
| Governments have lots of money and time to spend. So having
| one more tool in the box for that single time you need to
| access a target makes this work an entirely reasonable
| investment. If it hadn't been used, would this have been
| noticed, possibly for years? That gives quite a lot of room
| to find a target for the times when it is needed.
|
| And you could have multiple projects doing this type of
| work in parallel.
| neffy wrote:
| Those guys are estimated to have made $1 billion last year -
| have to think that buys some developer talent.
| BoardsOfCanada wrote:
| I would expect a nation state to have better sock puppet
| accounts though.
| nemothekid wrote:
| > _better sock puppet accounts though._
|
| Seems to me they had perfectly good enough sock puppet
| accounts. It wasn't at all obvious they were sock puppets
| until someone detected the exploit.
| fuzunoglu wrote:
| What is the possibility of identity theft committed at the
| state level? There are reports that the times the backdoor
| commits were pushed do not match the author's usual commit
| timing.
|
| It also seems like convenient ground for a false flag
| operation: hijacking an account that belongs to a trustworthy
| developer from another country.
| Aloisius wrote:
| And risk discovery by the trustworthy developer? Unlikely.
| ignoramous wrote:
| I imagine such actors are embedded within major consumer tech
| teams, too: Twitter, TikTok, Chrome, Snap, WhatsApp,
| Instagram... covers ~70% of all humanity.
| emeraldd wrote:
| If I'm reading this right, would there be any persistent evidence
| of the executed payload? I can't think of a reason anything would
| have to go to disk in this situation, so a compromise could
| easily look like an auth failure in the logs... maybe a
| difference in timings... but that's about it.
| junon wrote:
| Unless the payload did something that produced evidence, or
| the affected program using openssl was, for some reason,
| having all of its actions logged, then no, probably not.
| noncoml wrote:
| Unpopular opinion, but I cannot but admire the whole operation.
| Condemn it of course, but still admire it. It was a piece of art!
| From conception to execution, masterful! We got extremely lucky
| that it was caught so early.
| dc-programmer wrote:
| I agree, but the social engineering parts do feel particularly
| cruel
| nocoiner wrote:
| I felt really bad for the original maintainer getting dog-
| piled by people who berated him for not doing his (unpaid)
| job and basically bringing shame and discredit to himself
| and the community. Definitely cruel.
|
| Though... do we know that the maintainer at that point was
| the same individual as the one who started the project? Goes
| deep, man.
| StefanBatory wrote:
| Even if it's not his fault, the maintainer at this point
| won't be trusted at all. I feel for him; I think even
| finding a job would be impossible for him at this moment.
| Why would you hire someone who could be suspected of
| that?
| polio wrote:
| This could've happened to anybody, frankly. The attacker
| was advanced and persistent. I cannot help but feel
| sympathetic for the original maintainer here.
| consumer451 wrote:
| From TFA's profile:
|
| https://bsky.app/profile/filippo.abyssdomain.expert/post/
| 3ko...
|
| This is a profound realization, isn't it? How much more
| paranoid should/will maintainers be going forward?
| plg94 wrote:
| No. From what I've read on the openwall and lkml mailing
| lists (so generally people who know a lot more about
| these things than I do), nobody accused Lasse Collin,
| the original maintainer, of being involved in this, at
| all, and there wasn't any notion of him becoming
| untrustworthy.
| thinkingemote wrote:
| It's possible the adversary was behind, or at least
| encouraged, the dog-pilers who berated him. Probably a
| normal basic tactic from a funded evil team's playbook.
|
| Might be worth reviewing those who berated him to see if
| they resolve to real people, to see how deep this operation
| goes.
| formerly_proven wrote:
| One of them who left only one comment does, the rest are
| sock puppets.
| returningfory2 wrote:
| This has been investigated and the conclusion is IMO
| clear: the dogpiling accounts were part of the
| operation. See the parts about Jigar Kumar in this link:
| https://boehs.org/node/everything-i-know-about-the-xz-
| backdo...
| Tuna-Fish wrote:
| If the payload didn't have a random 0.5 second hang during SSH
| login, it would probably not have been found for a long time.
|
| The next time, the attackers will probably manage to build a
| payload that doesn't cause weird latency spikes on operations
| that people wait on.
|
| (For some reason this brings to mind how Kim Dotcom figured out
| he was the target of an illegal wiretap... because he suddenly
| had a much higher ping in MW3. When he troubleshot it, he found
| that all his packets were specifically routed a very long
| physical distance through a GCSB office. GCSB has no mandate to
| wiretap permanent NZ residents. He ended up getting a personal
| apology from the NZ Prime Minister.)
| jijijijij wrote:
| > If the payload didn't have a random 0.5 second hang during
| SSH login, it would probably not have been found for a long
| time.
|
| Ironic, how an evil actor failed for a lack of premature
| optimization :D
| jijijijij wrote:
| > _" After all, He-Who-Must-Not-Be-Merged did great things -
| Terrible, yes, but great."_
|
| I think the most ingenious part was picking the right project
| to infiltrate. Reading "Hans'" IFUNC pull request discussion is
| heart-wrenching in hindsight, but it really shows why this
| project was chosen.
|
| I would love to know how many people were behind "Jia" and
| "Hans", analyzing and strategizing communication and code
| contributions. Some aspects, like those third-tier personas
| faking pressure on mailing lists, seem a bit carelessly
| crafted, so I think it's still possible this was done by a
| sophisticated small team or even a single individual. I
| presume a state actor would have people pumping out and
| maintaining fake personas all day for this kind of operation.
| I mean, it would have kinda sucked if someone had thought:
| "Hm. It's a bit odd how rudely these three users are pushing.
| Who are they anyway? Oh look, they were all created at the
| same time. Suspicious. Why would anyone fake accounts to push
| so hard for this specifically? I need to investigate."
| Compared to the overall effort invested, that's careless,
| badly planned or underfunded.
| empath-nirvana wrote:
| I don't think "admire" is the right word, but it's a pretty
| impressive operation.
| wiktor-k wrote:
| > OpenSSH certs are weird in that they include the signer's
| public key.
|
| OpenSSH signatures in general contain the signer's public key,
| which I personally think is not weird but rather cool, since it
| allows verifying the signature without out-of-band key delivery
| (like in OpenPGP). The authentication of the public key is a
| separate subject, but at least some basic checks can be done
| with an OpenSSH signature alone.
| 1oooqooq wrote:
| > cool since it allows verifying the signature without out of
| the band key delivery
|
| Hope you do key selection sanitization instead of the default
| (nobody does). Otherwise you're accepting random keys you have
| lying around (like github) when logging in to
| secret.example.com.
| slooonz wrote:
| What do you mean ?
| asveikau wrote:
| I have found it irritating how, in recent years, it's become
| popular in the community to say that something is seriously
| wrong if a project doesn't have recent commits or releases.
| This is a toxic attitude. There was nothing wrong with
| "unmaintained" lzma two years ago. The math of the lzma
| algorithm doesn't change. The library was "done" and that's ok.
| The whiny mailing list post from the sock puppet complaining
| about the lack of speedy releases, which was little more than
| ad hominem attacks on the part-time maintainer, is all too
| typical, and we shouldn't assume those people are "right" or
| that their opinion has any validity.
| snnn wrote:
| I mostly agree with you, but I think your argument is wrong.
| Last month I found a tiny bug in Unix's fgrep program (the bug
| poses no risk). The program implements the Aho-Corasick
| algorithm, which hasn't changed much over decades. However, at
| least as of when the code was released in 4.4BSD, the bug still
| existed. It is not much of a concern, as nowadays most fgrep
| programs are just an alias of grep; they do not use the old
| Unix code anymore. The old Unix code, and much of FreeBSD,
| really couldn't meet today's security standards. For example,
| many text processing programs are vulnerable to DoS attacks
| when processing well-crafted input strings. I agree with you
| that in many cases we really don't need to touch the old code.
| However, it is not just because the algorithm didn't change.
| cesarb wrote:
| > The math of the lzma algorithm doesn't change. The library
| was "done" and that's ok.
|
| Playing devil's advocate: the math doesn't change, but the
| environment around it does. Just off the top of my head, we
| have: the 32-bit to 64-bit transition, the removal of pre-C89
| support
| (https://fedoraproject.org/wiki/Changes/PortingToModernC) which
| requires an autotools update, the periodic tightening of
| undefined behaviors, new architectures like RISC-V, the
| increasing amount of cores and a slowdown in the increase of
| per-core speed, the periodic release of new and exciting vector
| instructions, and exotic security features like CHERI which
| require more care with things like pointer provenance.
| asveikau wrote:
| > the 32-bit to 64-bit transition
|
| Lzma is from 2010. Amd64 became mainstream in the mid 2000s.
|
| > removal of pre-C89 support
|
| Ibid. Also, at the library API level, c89 compatible code is
| still pretty familiar to c99 and later.
|
| > new architectures like RISC-V
|
| Shouldn't matter for portable C code?
|
| > the increasing amount of cores and a slowdown in the
| increase of per-core speed,
|
| Iirc parallelism was already a focus of this library in the
| 2010s, I don't think it really needs a lot of work in that
| area.
| snnn wrote:
| Actually, the new architectures are a big source of
| concern. As a maintainer of a large open source project, I
| often receive pull requests for CPU architectures that I
| never had a chance to touch. Therefore I cannot build the
| code, cannot run the tests, and do not understand most of
| the code. C/C++ themselves are portable, but libs like xz
| need to beat the other competitors on performance, which
| means you may need to use model-specific SIMD instructions,
| query CPU cache size and topology, and work at a very low
| level. That code is not portable. When people add such
| code, they often need to add some tests, disable some
| existing tests conditionally, or tweak the build scripts.
| So these are all risks.
|
| No matter how smart you are, you cannot forecast the
| future. Many CPUs now have a heterogeneous configuration,
| meaning they have big cores and little cores. But do all
| the cores have the same capabilities? Is it possible that
| a CPU instruction is only available on some of the cores?
| What does that mean for a multithreaded application? Could
| 64-bit CPUs drop support for 32-bit at the hardware level?
| Ten years ago you couldn't predict what's happening today.
|
| Windows has a large compatibility layer, which allows
| running old code on the latest hardware and latest Windows.
| It takes quite a lot of effort. Many applications would
| crash without the compatibility patches.
| asveikau wrote:
| I am a former MS employee, I used to read the
| compatibility patches when I was bored at the office.
|
| Anyway, liblzma does not "need" to outperform any
| "competition". If someone wants to work on some
| performance optimization, it's completely fair to fork.
| Look at how many performance oriented forks there are of
| libjpeg. The vanilla libjpeg still works.
| Hakkin wrote:
| And then that fork becomes more performant or feature-rich
| or more secure or (etc.), and it becomes preferred over
| the original code base, and all distributions switch to
| it, and we're back at square one.
| bulatb wrote:
| A software project has the features it implements, the
| capabilities it offers users, and the boundary between itself
| and the environment in which those features create value for
| the user by becoming capabilities.
|
| The "accounting" features in the source code may be finished
| and bug-free, but if the outside world has changed and now the
| user can't install the software, or it won't run on their
| system, or it's not compatible with other current software,
| then the software system doesn't grant the capability
| "accounting," even though the features are "finished."
|
| Nothing with a boundary is ever finished. Boundaries just keep
| the outside world from coming in too fast to handle. If you
| don't maintain them then eventually the system will be
| overwhelmed and fail, a little at a time, or all at once.
| asveikau wrote:
| I feel like this narrative is especially untrue for things
| like lzma where the only dependencies are memory and CPU, and
| written in a stable language like C. I've had similar
| experiences porting code for things like image formats, audio
| codecs, etc. where the interface is basically "decode this
| buffer into another buffer using math". In most cases you can
| plop that kind of library right in without any maintenance at
| all, it might be decades old, and it works. The type of
| maintenance I would expect for that would be around security
| holes. Once I patched an old library like that to handle the
| fact that the register keyword was deprecated.
| rdtsc wrote:
| Excellent point. I believe that's coming from the corporate
| supply chain attack "response": their insistence on making
| hard rules about "currency" and "activity" and "is
| maintained" pushes this kind of crap.
|
| Attackers know this as well. It doesn't take much to hang
| around various mailing lists and look for stuff like this:
| https://www.mail-archive.com/xz-devel@tukaani.org/msg00567.h...
|
| > (Random user or sock puppet) Is XZ for Java still maintained?
|
| > (Lasse) I haven't lost interest but my ability to care has
| been fairly limited mostly due to ...
|
| > (Lasse) Recently I've worked off-list a bit with Jia Tan on
| XZ Utils and perhaps he will have a bigger role in the future,
| we'll see. It's also good to keep in mind that this is an
| unpaid hobby project
|
| With a few years' worth of work by a team of 2-3 people - one
| writes and understands the code, one communicates, a few others
| pretend to be random users submitting ifunc patches, etc. - you
| can end up controlling the project and signing releases.
| empath-nirvana wrote:
| Two popular and well-tested Rust YAML libraries have recently
| been marked as unmaintained, and people are rushing to move
| away from them to brand-new projects because warnings went out
| about it.
| bawolff wrote:
| The headline seems like a distinction without a difference.
| Bypassing ssh auth means getting a root shell. There is no
| significant difference between that and running system(). At most
| maybe system() has less logging.
| armitron wrote:
| > Bypassing ssh auth means getting a root shell
|
| Only if you're allowed to login as root, which is definitely
| not the case everywhere.
| colinsane wrote:
| not only that, but logins show up in logs.
| juliusdavies wrote:
| My sense was this backdoor gets to execute whatever it wants
| as whatever "user" sshd is running as. So even if root
| logins are disabled, this backdoor doesn't care.
| jmward01 wrote:
| Where is the law enforcement angle on this? This
| individual/organization needs to be on the top of every country's
| most wanted lists.
| ganeshkrishnan wrote:
| The individual has worked at top FAANG companies and is now
| working for a unicorn startup
| stephc_int13 wrote:
| How do you know that?
| delfinom wrote:
| Absolute fucking morons are using the name "Jia Tan" to
| find a guy on LinkedIn to basically bully and harass. The
| description ganeshkrishnan gave is exactly of that guy.
|
| You know, because names are totally fucking unique. /s
|
| ganeshkrishnan should be ashamed for being such a fucking
| piece of shit.
| consumer451 wrote:
| Total dumb dumb here, but in the og thread here on HN, I
| saw it noted that Jia was supposedly part of the
| opensource.google group on GitHub . I believe that I read
| that only an active googler could be part of that group.
|
| I am just asking for my own sanity. Was that not the
| case?
| greyface- wrote:
| I understand the impulse to seek justice, but what crime have
| they committed? It's illegal to gain unauthorized access, but
| not to write vulnerable code. Is there evidence that this is
| being exploited in the wild?
| jmward01 wrote:
| I am definitely not a lawyer so I have no claim to knowing
| what is or is not a crime. However, if backdooring SSH on a
| potentially wide scale doesn't trip afoul of laws then we
| need to seriously have a discussion about the modern world.
| I'd argue that investigating this as a crime is likely in the
| best interest of public safety and even (I hesitate to say
| this) national security considering the potential scale of
| this. Finally, I would say there is a distinction between
| writing vulnerable code and creating a backdoor with
| malicious intent. It appears (from the articles I have been
| reading so far) that this was malicious, not an accident or
| lack of skill. We will see over the next few days though as
| more experts get eyes on this.
| greyface- wrote:
| Agreed on a moral level, and it's true that describing this
| as simply "vulnerable code" doesn't capture the clear
| malicious intent. I'm just struggling to find a specific
| crime. CFAA requires unauthorized access to occur, but the
| attacker was authorized to publish changes to xz. Code is
| speech. It was distributed with a "no warranty" clause in
| the license.
| fragmede wrote:
| The CFAA covers distribution of malicious software without
| the owner's consent, the Wire Fraud Act covers malware
| distribution schemes intended to defraud for property, and
| the Computer Misuse Act in the UK is broad and far-reaching
| like the CFAA, so this likely falls afoul of that. The GDPR
| protects personal data, so there's possibly a case to be
| made that this violates that as well, though that might be
| a bit of a reach.
| javajosh wrote:
| In which case the defense will claim, correctly, that
| this malware was never distributed. It was caught.
| "Attempted malware distribution" may not actually be a
| crime (but IANAL so I don't know).
| stephc_int13 wrote:
| It is like opening the door of a safe and letting someone
| else rob the money inside.
|
| This is way beyond "moral level".
| ndriscoll wrote:
| If more than one person was involved, it'd presumably
| fall under criminal conspiracy. Clearly this was an overt
| act in furtherance of a crime (unauthorized access under
| CFAA, at the least).
| sneak wrote:
| The criminal conspiracy laws don't apply to the
| organizations that write this kind of code, just like
| murder laws don't.
| ceejayoz wrote:
| Sure they do. Getting the perpetrator into your
| jurisdiction is the tough part.
|
| Putin is, for example, unlikely to go anywhere willing to
| execute an ICC arrest warrant.
| sneak wrote:
| Nah, the CIA assassinates people in MLAT zones all the
| time. The laws that apply to you and I don't apply to the
| privileged operators of the state's prerogatives.
|
| We don't even know that this specific backdoor wasn't the
| NSA or CIA. Assuming it was a foreign intelligence
| service because the fake name was Asian-sounding is a bit
| silly. The people who wrote this code might be sitting in
| Virginia or Maryland already.
| ceejayoz wrote:
| > The people who wrote this code might be sitting in
| Virginia or Maryland already.
|
| Sure, that's possible. They will as a result probably
| avoid traveling to unfriendly jurisdictions without a
| diplomatic passport.
| rl3 wrote:
| > _They will as a result probably avoid traveling to
| unfriendly jurisdictions without a diplomatic passport._
|
| First of all, it's not like their individual identities
| would ever be known.
|
| Second, they would already know that traveling to a
| hostile country is a great way to catch bullshit
| espionage charges, maybe end up tortured, and certainly
| used as a political pawn.
|
| Third, this is too sloppy to have originated from there
| anyways--however clever it was.
| tomoyoirl wrote:
| > Virginia or Maryland
|
| Eastern Europe, suggest the timestamp / holiday analysts.
| https://rheaeve.substack.com/p/xz-backdoor-times-damned-
| time...
| semiquaver wrote:
| CFAA covers this. It's a crime to
|
|     knowingly [cause] the transmission of a program,
|     information, code, or command, and as a result of such
|     conduct, intentionally causes damage without
|     authorization, to a protected computer;
|
| where one of the definitions of "protected computer" is
| one that is used in interstate commerce, which covers
| effectively all of them.
| pbhjpbhj wrote:
| It seems like the backdoor creates the potential to
| "cause damage" but doesn't [provably?] cause damage _per
| se_?
|
| The author of the backdoor doesn't themselves "[cause]
| the transmission of a program ...". Others do the
| transmission.
|
| Seems weak, unless you know of some precedent case(s)?
| ronsor wrote:
| It's not simply vulnerable code: it's an actual backdoor.
| That is malware distribution (without permission) and is
| therefore illegal.
| greyface- wrote:
| Is it illegal to distribute malware? I see security
| researchers doing it all the time for analysis purposes.
| ronsor wrote:
| No, it is not illegal to distribute malware by itself,
| but it is illegal to trick people into installing
| malware. The latter was the goal of the XZ contributor.
| pbhjpbhj wrote:
| I assume you're talking from a USC perspective? Can you
| say which specific law, chapter, and clause applies?
| fragmede wrote:
| Specifically, the CFAA covers distribution of malicious
| software without the owner's consent. Security researchers
| downloading malware implicitly give consent to download
| malware marked as such.
| M2Ys4U wrote:
| In the UK, at least, unauthorised access to computer material
| under section 1 of the Computer Misuse Act 1990 - and I would
| also assume that it would also fall foul of sections 2
| ("Unauthorised access with intent to commit or facilitate
| commission of further offences") and 3A ("Making, supplying
| or obtaining articles for use in offence under section 1, 3
| or 3ZA") as well.
|
| Though proving jurisdiction would be tricky...
| stephc_int13 wrote:
| Calling this backdoor "vulnerable code" is a gross
| mischaracterization.
|
| This is closer to a large-scale trojan horse, one that does
| not have to be randomly discovered by a hacker to be
| exploited, but is readily available for privileged remote
| code execution by whoever holds the private key to this
| backdoor.
| Seattle3503 wrote:
| I'd be surprised if the attacker didn't meet the criteria for
| mens rea.
| hnarn wrote:
| What is constantly overlooked here on HN is that in legal
| terms, one of the most important things is _intent_.
| Commenters on HN always approach legal issues from a
| technical perspective but that is simply not how the judicial
| system works. Whether something is "technically X" or not is
| irrelevant, laws are usually written with the purpose of
| catching people based on their intent (malicious hacking),
| not merely on the technicalities (pentesters distributing
| examples).
| drowsspa wrote:
| Yeah, it bothers me so much. They really seem to think that
| "law is code".
| 1oooqooq wrote:
| Easy there. Calling the cops on the NSA might be treason or
| something.
| bawolff wrote:
| It's been all of 24 hours; these things take time. Presumably
| someone doing an attack this audacious took steps to cover
| their tracks and is using a fake name.
| tootie wrote:
| CISA had a report on this pretty quickly. I think they refer
| cases to Secret Service for enforcement. But really, we
| seemingly have no idea who or where the perpetrator is located.
| This could easily be a state actor. It could be a lone wolf.
| And the effects of the attack would be global too, so
| jurisdiction is tricky. We really have no idea at this point.
| The personas used to push the commits and push for inclusion
| were almost certainly fronts. I'm sure github is sifting
| through a slew of subpoenas right now.
| pcthrowaway wrote:
| Based on the level of sophistication being alluded to, I'm
| personally inclined to assume this is a state actor, possibly
| even some arm of the U.S. govt.
| tejohnso wrote:
| > possible even some arm of the U.S. govt.
|
| Possible. But why mention U.S. specifically? Is it more
| likely than Russia, Iran, China, France ... ?
| mardifoufs wrote:
| The US is behind more documented backdoors than those other
| countries.
| pcthrowaway wrote:
| > This individual/organization needs to be on the top of
| every country's most wanted lists
|
| Because if the "organization" is a U.S. agency, not much is
| going to happen here. Russia or China or North Korea might
| make some strongly worded statements, but nothing is going
| to happen.
|
| It's also very possible that security researchers won't be
| able to find out, and government agencies will finger-point
| as a means of misdirection.
|
| For example, a statement comes out in a month that this was
| North Korea. Was it really? Or are they just a convenient
| scapegoat so the NSA doesn't have to play defense on its
| lack of accountability again?
| paolomainardi wrote:
| I am wondering if reinstalling the entire Arch Linux system
| would be a wise choice.
| sebiw wrote:
| Arch Linux uses a native/unpatched version of OpenSSH, without
| a dependency on libsystemd and thus without a dependency on
| xz-utils, resulting in no exploitable code path. This means
| that at least the currently discussed vulnerability/exploit
| via SSH presumably did not work on Arch. Disclaimer: this is
| my understanding of the currently circulating facts.
| Additional fallout might be possible, as the reverse
| engineering of the backdoor is ongoing.
| Phelinofist wrote:
| Just to extend the sibling comment with an excerpt of the Arch
| announce mail regarding the backdoor:
|
|     From the upstream report [1]:
|
|     > openssh does not directly use liblzma. However debian
|     > and several other distributions patch openssh to support
|     > systemd notification, and libsystemd does depend on lzma.
|
|     Arch does not directly link openssh to liblzma, and thus
|     this attack vector is not possible. You can confirm this
|     by issuing the following command:
|
|         ldd "$(command -v sshd)"
|
|     However, out of an abundance of caution, we advise users
|     to remove the malicious code from their system by
|     upgrading either way. This is because other yet-to-be
|     discovered methods to exploit the backdoor could exist.
| bagels wrote:
| Why does xz need new features at this point?
| sneak wrote:
| The questions this backdoor raises:
|
| - what other ones exist by this same team or similar teams?
|
| - how many such teams are operating?
|
| - how many such dependencies are vulnerable to such infiltration
| attacks? what is our industry's attack surface for such covert
| operations?
|
| I think making a graph of all major network services (apache
| httpd, postgres, mysql, nginx, openssh, dropbear ssh, haproxy,
| varnish, caddy, squid, postfix, etc) and all of their
| dependencies and all of the committers to all of those
| dependencies might be the first step in seeing which parts are
| the most high value and have attracted the least scrutiny.
|
| This can't be the first time someone attempted this - this is
| just the first unsuccessful time. (Yes, I know about the
| attempted/discovered backdoor in the linux kernel - this is
| remote and is a horse of a different color).
| almostnormal wrote:
| Why did they decide to create a backdoor, instead of using a
| zeroday like everyone else?
|
| Why did they implement a fully-featured backdoor and attempt
| to hide the way it is deployed, instead of deploying something
| innocent-looking that might as well be a bug if detected?
|
| These must have been conscious decisions. The reasons might
| provide a hint what the goals might have been.
| formerly_proven wrote:
| Ed448 is orders of magnitude better NOBUS than hoping that
| nobody else stumbles over the zero-day you found.
| Ekaros wrote:
| If they seemingly almost succeeded, how many others have
| already planted similar backdoors? Or was this actually just
| poking at things to see if it was possible to inject this
| sort of behaviour?
| sega_sai wrote:
| One question I have on this is: if the backdoor had not been
| discovered due to the performance issue (which, as I
| understood it, was purely an oversight/fixable deficiency in
| the code), what are the chances of discovering this backdoor
| later? Are there tools that would have picked it up? Those
| questions are IMO relevant to understanding whether this kind
| of backdoor is the first of its kind, or just the first one
| that was uncovered.
| wepple wrote:
| I expect a lot of people will be doing a whole lot of thinking
| along these lines over the next months.
|
| Code review? Some kind of behavioral analysis?
|
| IMO the call to system() was kind of sloppy, and a binary
| capabilities scanner could have potentially identified a path
| to that.
| tux3 wrote:
| I think behavioral analysis could be promising. There's a lot
| of weird stuff this code does on startup that any reasonable
| Debian package on the average install should not be doing in
| a million years.
|
| Games and proprietary software will sometimes ship with DRM
| protection layers that do insane things in the name of
| obfuscation, making it hard to distinguish from malware.
|
| But (with only a couple exceptions) there's no reason for a
| binary or library in a Debian package to ever try to write
| the PLT outside of the normal mechanism, to try to overwrite
| symbols in other modules, to add LD audit hooks on startup,
| to try to resolve things manually by walking ELF structures,
| to do anti-debug tricks, or just to have any kind of
| obfuscation or packing that free software packaged for a
| distro is not supposed to have.
|
| Some of these may be (much) more difficult to detect than
| others, some might not be realistic. But there are several
| plausible different ways a scanner could have detected
| something weird going on in memory during ssh startup.
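|
| As one toy example of the kind of check I mean: ask the
| dynamic linker, from inside the process, where a
| security-critical symbol actually resolved to (my sketch,
| and illustrative only; the real backdoor went to some
| lengths to defeat exactly this sort of inspection):
|
|     /* build: cc check.c -lcrypto -ldl */
|     #define _GNU_SOURCE
|     #include <dlfcn.h>
|     #include <stdio.h>
|     #include <string.h>
|     #include <openssl/rsa.h>
|
|     int main(void) {
|         Dl_info info;
|         /* note: on a non-PIE build this may report the
|            executable's own PLT stub instead */
|         if (dladdr((void *)RSA_public_decrypt, &info) &&
|             info.dli_fname) {
|             printf("RSA_public_decrypt resolved into %s\n",
|                    info.dli_fname);
|             if (!strstr(info.dli_fname, "libcrypto"))
|                 puts("suspicious: not in libcrypto!");
|         }
|         return 0;
|     }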
|
| No one wants a Linux antivirus. But I think everyone would
| benefit from throwing all the behavioral analysis we can come
| up with at new Debian package uploads. We're very lucky
| someone noticed this one, we may not have the same luck next
| time.
| raggi wrote:
| Except had we been doing that they would have put guards in
| place to detect it - as they already had guards to avoid
| the code path when a debugger is attached, to avoid
| building the payload in when it's not one of the target
| systems, and so on. Their evasion was fairly extensive, so
| we'd need many novel dynamic systems to stand a chance, and
| we'd have to guard those systems extremely tightly - the
| author got patches into oss-fuzz as well to "squash false
| positives". All in all, adding more arms to the arms race
| does raise the bar, but the bar they surpassed already
| demonstrated tenacity, long term thinking, and significant
| defense and detection evasion efforts.
| tux3 wrote:
| I broadly agree, but I think we can draw a parallel with
| the arms race of new exploit techniques versus exploit
| protection.
|
| People still manage to write exploits today, but no one
| regrets the arms race. Now you must find an ASLR leak,
| you must chain enough primitives to work around multiple
| layers of protection, it's generally a huge pain to write
| exploits compared to the 90s.
|
| Today the dynamic detection that we have for Linux
| packages seems thin to non-existent, like the arms race
| has not even started yet. I think there is a bit of low-
| hanging fruit to make attacker lives harder (and some
| much higher-hanging fruit that would be a real headache).
|
| Luckily there is an asymmetry in favor of the defenders
| (for once). If we create a scanner, we do not _have_ to
| publish every type of scan it knows how to do. Much like
| companies fighting spammers and fraud don't detail
| exactly how they catch bad actors. (Or, for another
| example, I know the Tor project has a similar asymmetry
| to detect bad relays. They collaborate on their relay
| scanner internally, but no one externally knows all the
| details.)
| raggi wrote:
| Yeah, perhaps something akin to an OSS variant of
| virustotal's multi-vendor analysis. I'm still not sure it
| would catch this, but as you say, raising the bar isn't
| something we tend to regret.
| snnn wrote:
| > to try to overwrite symbols in other modules, to add LD
| audit hooks on startup, to try to resolve things manually
| by walking ELF structures
|
| I want to name one thing: when Windows fails to load a DLL
| because a dependency is missing, it doesn't tell you what
| was missing. To get that information, you have to interact
| with the DLL loader via low-level Windows APIs. In some
| circumstances Linux apps may also have this need, like
| printing a user-friendly error message or recovering from a
| non-fatal error. See, for example, the patchelf tool that
| is used for building portable Python packages.
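|
| (On Linux the loader is at least a bit more talkative: for
| the simple missing-dependency case, dlerror() will name the
| missing object. A small sketch:
|
|     /* build: cc try.c -ldl */
|     #include <dlfcn.h>
|     #include <stdio.h>
|
|     int main(void) {
|         void *h = dlopen("libdoesnotexist.so.1", RTLD_NOW);
|         if (!h) /* prints e.g. "libdoesnotexist.so.1: cannot
|                    open shared object file: ..." */
|             fprintf(stderr, "%s\n", dlerror());
|         return h ? 0 : 1;
|     }
|
| Anything fancier, like patchelf-style rewriting, still means
| walking ELF structures by hand.)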
|
| > No one wants a Linux antivirus
|
| That is not true. Such software is actually very popular in
| enterprise settings.
| ffsm8 wrote:
| > _No one wants a Linux antivirus_
|
| ClamAV has been around for a very long time at this point.
|
| It's just not installed on servers, usually
| x0x0 wrote:
| It feels like systemd is irresponsible. They changed the
| supply chain surface area from the limited set of libs
| openssh uses to any lib that systemd itself uses.
|
| I'm sure there were reasons for this, but for the process
| most people expose to the naked internet, that seems like a
| poor idea.
| raggi wrote:
| Think whatever you shall about systemd of course, but
| please stop with the blind-belief mud slinging:
|
| - systemd didn't create the patch to include libsystemd,
|   distros did
|
| - current systemd versions already remove liblzma from
|   their dependencies; the affected distros are behind on
|   systemd updates though
|
| - you can implement notify in standalone code with about
|   the same effort as it takes to use the dependency; there
|   wasn't really a good reason for distros to be adding this
|   dependency to such a critical binary. systemd documents
|   the protocol independently to make this easy.
|
| distros having sketchy patches to sshd has a long history -
| remember the debian weak key fiasco?
| Denvercoder9 wrote:
| > - current systemd versions already remove liblzma from
| their dependencies, the affected distros are behind on
| systemd updates though
|
| The affected distros aren't behind on systemd updates,
| the change to systemd you describe has been merged but
| not yet released.
| raggi wrote:
| Ah, thank you for the correction!
| jnwatson wrote:
| The real problem was doing expensive math for every
| connection. If it had relied on a cookie or some simpler-to-
| compute pre-filter, no one would have been the wiser.
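|
| Something as dumb as this would have done it (hypothetical
| magic bytes, obviously):
|
|     #include <string.h>
|
|     /* cheap pre-filter: a single memcmp before any expensive
|        crypto, far below any measurable timing signal */
|     static int worth_a_look(const unsigned char *blob, size_t len) {
|         static const unsigned char magic[8] =
|             { 0xde, 0xad, 0xbe, 0xef, 0x13, 0x37, 0x42, 0x99 };
|         return len >= 16 && memcmp(blob + 8, magic, 8) == 0;
|     }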
| raggi wrote:
| The call to system() is obfuscated; static analysis wouldn't
| see it.
| joeyh wrote:
| Since a liblzma backdoor could be used to modify compiler
| packages that are installed on some distributions, it gets
| right back to a trusting-trust attack.
|
| Although initial detection via e.g. strace would be possible,
| if the backdoor were later removed or went quiescent it would
| be full trusting-trust territory.
| ghostpepper wrote:
| How would this be possible? This backdoor works because lzma
| is loaded into sshd (by a roundabout method involving
| systemd). I don't think gcc or clang links lzma.
| rdtsc wrote:
| At least for some comic relief I'd like to imagine Jia's boss
| slapping him and saying something like "you idiot, we worked on
| this for so many years and you couldn't have checked for any
| perf issues?"
|
| But seriously, we could have found ourselves with this in all
| stable repos: RHEL, Debian, Ubuntu, IoT devices 5 years from
| now and it would have been a much larger shit show.
| chris_wot wrote:
| Surely this is something the FBI should be involved with? Or
| some authority?
| colinsane wrote:
| sure. what makes you think they aren't?
| djao wrote:
| Maybe they didn't have time to test? They could have been
| scrambling to make it into timed releases such as Ubuntu
| 24.04 or Fedora 40.
| raggi wrote:
| There is one possible time pressure involved, which is that
| libsystemd dropped the liblzma dependency
| quatrefoil wrote:
| If the exploit wasn't being used, the odds of discovery
| would be pretty low. They picked the right place to bury it
| (i.e., effectively _outside_ the codebase, where no auditor
| ever looks).
|
| That said, if you're not using it, it defeats the purpose.
| And the more you use it, the higher the likelihood you will
| be detected down the line. Compare to SolarWinds.
| formerly_proven wrote:
| I think this would've been difficult to catch, because the
| patching of sshd happens during (dynamic) linking, when that
| kind of modification is permissible, and if this analysis is
| correct then it's not a master-key backdoor, so there is no
| regular login audit trail. And sshd would of course be
| allowed to start other processes. A very tight SELinux
| policy could catch sshd executing something that ain't a
| shell, but hardening to that degree would be extremely rare,
| I assume.
|
| As for being discovered outside the target, well we tried that
| exercise already, didn't we? A bunch of people stared at the
| payload with valgrind et al and didn't see it. It's also fairly
| well protected from being discovered in debugging environments,
| because the overt infrastructure underlying the payload is
| incompatible with ASan and friends. And even if it is linked
| in, the code runs long before main(), so even if you were
| prodding around near or in liblzma with a debugger you wouldn't
| normally observe it execute.
|
| e: sibling suggests strace. Yes, you can see all syscalls
| after the process is spawned, and you can watch the linker
| work. But from what I've gathered, the payload isn't making
| any syscalls at that stage to determine whether to activate;
| it's just looking at argv and environ etc.
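|
| Roughly this kind of check, going by the oss-security
| analysis: all pure memory reads, nothing strace would flag
| (the names here are made up):
|
|     #include <stdlib.h>
|     #include <string.h>
|
|     static int should_activate(int argc, char **argv) {
|         if (argc < 1 ||
|             strcmp(argv[0], "/usr/sbin/sshd") != 0)
|             return 0;          /* wrong host process */
|         if (getenv("TERM"))    /* looks interactive */
|             return 0;
|         if (getenv("LD_DEBUG") || getenv("LD_PROFILE"))
|             return 0;          /* linker debugging on */
|         if (!getenv("LANG"))
|             return 0;
|         return 1;
|     }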
| tux3 wrote:
| One idea may be to create a patched version of ld-linux
| itself with added sanity checks while the process loads.
|
| For something much more heavy-handed, force the pages in
| sensitive sections to fault, either in the kernel or in a
| hypervisor. Then look at where the access is coming from in
| the page fault handler.
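|
| A toy user-space version of the same idea (the serious
| variants belong in the kernel or a hypervisor):
|
|     #include <signal.h>
|     #include <sys/mman.h>
|     #include <unistd.h>
|
|     static void on_fault(int s, siginfo_t *si, void *ctx) {
|         /* a real monitor would log si_addr and the faulting
|            program counter pulled from ctx */
|         write(STDERR_FILENO, "guarded page touched\n", 21);
|         _exit(1);
|     }
|
|     int main(void) {
|         long pg = sysconf(_SC_PAGESIZE);
|         /* stand-in for a sensitive section, e.g. a GOT page */
|         unsigned char *p = mmap(0, pg, PROT_READ | PROT_WRITE,
|             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
|         struct sigaction sa = {0};
|         sa.sa_sigaction = on_fault;
|         sa.sa_flags = SA_SIGINFO;
|         sigemptyset(&sa.sa_mask);
|         sigaction(SIGSEGV, &sa, 0);
|         mprotect(p, pg, PROT_READ); /* seal after relocation */
|         p[0] = 0x90;                /* a late patch now faults */
|         return 0;
|     }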
|
| I don't think you can reliably differentiate between a
| backdoor executing a command and a legitimate user logged in
| over ssh running a command, once the backdoor is already
| installed. But the way backdoors install themselves is where
| they really break the rules.
| chatmasta wrote:
| Can someone explain succinctly what the backdoor _does_? Do we
| even know yet? The backdoor itself is not a payload, right? Does
| it need a malicious archive to exploit it? Or does it hook into
| the sshd process to listen for malicious packets from a remote
| attacker?
|
| The OP makes it sound like an attacker can send a malicious
| payload in the pre-auth phase of an SSH session - but why does he
| say that an exploit might never be available? Surely if we can
| reverse the code we can write a PoC?
|
| Basically, how does an attacker control a machine with this
| backdoor on it?
| swid wrote:
| You can imagine a door that opens if you knock on it just
| right. For anyone without the secret knock, it appears and
| functions as a wall. Without the secret knock, there might not
| even be a way to prove it opens at all.
|
| This is sort of the situation here. xz tries to verify (and
| decrypt) some data before it does anything shady; since the
| scheme is asymmetric, verification needs only the public key
| baked into the backdoor, while producing a valid payload
| requires the attacker's private key.
|
| The exploit code may never be available, because recovering
| the secret key is not practical, and the backdoor does
| nothing observably different when a payload fails to verify.
| The only way to produce working exploit code would be if the
| secret key were found somehow, and the only realistic way
| for that to happen would be for the people who developed the
| backdoor to leak it.
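|
| In spirit it is something like this sketch (libsodium
| Ed25519 as a stand-in; the real thing reportedly uses Ed448
| plus ChaCha20, and the key here is a placeholder):
|
|     #include <sodium.h>
|     #include <stdlib.h>
|     #include <string.h>
|
|     /* baked-in attacker public key (placeholder bytes) */
|     static const unsigned char
|         PK[crypto_sign_PUBLICKEYBYTES] = { 0 };
|
|     static void maybe_run(const unsigned char *cmd, size_t n,
|                           const unsigned char *sig) {
|         if (crypto_sign_verify_detached(sig, cmd, n, PK))
|             return;     /* wrong knock: behave like a wall */
|         char *line = strndup((const char *)cmd, n);
|         if (line) { system(line); free(line); }
|     }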
| snnn wrote:
| That's the most interesting part. No, we don't know yet. The
| backdoor is sophisticated enough that no one has fully
| reverse-engineered it. It is not a "usual" security bug.
| heresWaldo wrote:
| Yeah, these types of security issues will be used by
| politicians to force hardware makers to lock down hardware
| and embed software in chips.
|
| The go-fast startup habit of "import the world to make my
| company's products" is a huge security issue that IT workers
| ignore.
|
| The only solution politics and big tech will chase is to
| obsolete said job market by pulling more of the stack into
| locked-down hardware, with updates only allowed to come from
| the gadget vendor.
| georgyo wrote:
| I'm not saying political forces won't try legislating the
| problem away, but that won't even help here.
|
| A supply chain attack can happen in hardware or software.
| Hardware has firmware, which is software.
|
| What makes this XZ attack so scary is that it was directly
| from a "trusted" source. A similar attack could come from
| any trusted source.
|
| At least with software it is much easier to patch.
| berkes wrote:
| Why would "embed software in chips" be a solution?
|
| If anything, I'd expect it to be an even bigger risk,
| because when (not if) a security issue is found in the
| hardware, you now have no way to fix it, other than
| throwing out this server/fridge/toothbrush or whatever is
| running it.
| WesolyKubeczek wrote:
| Which will make updates either expensive or impossible. You
| will be able to write books about exploitable bugs in the
| hardware, and those books will easily survive several
| editions.
| avidiax wrote:
| The NSA demands that Intel and AMD provide backdoor ways to
| turn off the IME/PSP, which are basically small operating
| systems running on a small processor inside your processor.
| So the precedent is that the government wants less embedded
| software in its hardware, at least for itself.
|
| If we relied on gadget vendors to maintain such software, I
| think we can just look at any IoT or router manufacturer to
| get an idea of just how often and for how long they will
| update the software. So that idea will probably backfire
| spectacularly if implemented.
| skywhopper wrote:
| From what I've read I _think_ the attack vector is:
|
| 1. sshd starts and loads the libsystemd library which loads the
| XZ library which contains the hack
|
| 2. The XZ library injects its own versions of the openssl
| functions that verify RSA signatures (a rough sketch of this
| kind of interposition is at the end of this comment)
|
| 3. When someone logs into SSH and presents a signed SSH
| certificate as authentication, those hacked functions are
| called
|
| 4. The certificate, in turn, can contain arbitrary data; in
| a normal login process that would include assertions about
| username or role, used to determine whether the certificate
| is valid for logging in as the particular user. But if the
| hacked functions detect that the certificate was signed by a
| _specific_ attacker key, they take some subfield of the
| certificate and execute it as a command on the system in the
| sshd context (i.e., as the root user).
|
| Unfortunately, we don't know the attacker's signing key, just
| the public key the hacked code uses to validate it. But
| basically this would give the attacker a way to run any command
| as root on any compromised system without leaving much of a
| trace, beyond the (presumably failed) login attempt, which any
| system on the internet will be getting a lot of anyway.
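|
| For the curious, step 2 in spirit: an LD_PRELOAD-style
| interposition sketch (the real backdoor reportedly hijacks
| the symbol with an IFUNC resolver during relocation, but the
| effect is similar):
|
|     #define _GNU_SOURCE
|     #include <dlfcn.h>
|     #include <openssl/rsa.h>
|
|     typedef int (*rpd_fn)(int, const unsigned char *,
|                           unsigned char *, RSA *, int);
|
|     int RSA_public_decrypt(int flen, const unsigned char *from,
|                            unsigned char *to, RSA *rsa,
|                            int padding) {
|         static rpd_fn real;
|         if (!real)
|             real = (rpd_fn)dlsym(RTLD_NEXT,
|                                  "RSA_public_decrypt");
|         /* a backdoor would inspect `from` here for a signed
|            command before delegating */
|         return real(flen, from, to, rsa, padding);
|     }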
| ghostpepper wrote:
| > beyond the (presumably failed) login attempt
|
| There is some evidence it's scrubbing logs so we might not
| even have that.
| plg94 wrote:
| I don't think we know what exactly this does yet. I can only
| answer one of those questions; as far as I understand, the
| "unreplayable" part is referring to this:
|
| > Apparently the backdoor reverts back to regular operation if
| the payload is malformed or *the signature from the attacker's
| key doesn't verify*.
|
| emphasis mine, note the "signature of the attacker's key". So
| unless that key is leaked, or someone breaks the RSA
| algorithm (in which case we have _far_ bigger problems), it's
| impossible for someone else (researcher or third party) to
| exploit this backdoor.
| q3k wrote:
| > The OP makes it sound like an attacker can send a malicious
| payload in the pre-auth phase of an SSH session - but why does
| he say that an exploit might never be available? Surely if we
| can reverse the code we can write a PoC?
|
| Not if public-key cryptography was used correctly, and if there
| are no exploitable bugs.
| ajross wrote:
| > The OP makes it sound like an attacker can send a malicious
| payload in the pre-auth phase of an SSH session - but why does
| he say that an exploit might never be available?
|
| The exploit as shipped is a binary (cleverly hidden in the
| test data), not source. And it validates the payload against
| an embedded public key whose private counterpart isn't known
| to the public. Only the attacker can exercise the exploit
| currently, making it impossible to scan for (well, absent
| second-order effects like performance, which is how it was
| discovered).
| dolmen wrote:
| git.tukaani.org runs sshd. If that sshd was upgraded with
| the xz backdoor, we cannot exclude that the host was
| compromised, as it would have been an obvious target for the
| backdoor author.
| SubiculumCode wrote:
| So is this backdoor active in Ubuntu distributions?
| justinsaccount wrote:
| > Apparently the backdoor reverts back to regular operation if
| the payload is malformed or the signature from the attacker's key
| doesn't verify.
|
| Does this mean it's possible to send every ssh server on the
| internet a malformed payload to get it to disable the backdoor if
| it was vulnerable?
| pcthrowaway wrote:
| Has anyone proposed a name for this exploit yet?
| kstrauser wrote:
| Dragon Gate. There's my contribution.
| martinohansen wrote:
| Imagine a future where state actors have hundreds of AI agents
| fixing bugs, gaining reputation while they slowly introduce
| backdoors. I really hope open source models succeed.
___________________________________________________________________
(page generated 2024-03-30 23:00 UTC)