[HN Gopher] Xzbot: Notes, honeypot, and exploit demo for the xz ...
       ___________________________________________________________________
        
       Xzbot: Notes, honeypot, and exploit demo for the xz backdoor
        
       Author : q3k
       Score  : 526 points
       Date   : 2024-04-01 15:40 UTC (7 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | miduil wrote:
       | Super impressed how quickly the community and in particular
        | amlweems were able to implement and document a POC. If the
        | cryptographic or payload-loading functionality has no further
        | vulnerabilities, then at least this wouldn't also have handed
        | an exploitable security flaw to all the other attackers until
        | the key is broken or something.
       | 
       | Edit: I think what's next for anyone is to figure out a way to
       | probe for vulnerable deployments (which seems non-trivial) and
        | also perhaps upstreaming a way to monitor whether someone
        | actively probes ssh servers with the hardcoded key.
       | 
       | Kudos!
        
         | rst wrote:
         | Well, it's a POC against a re-keyed version of the exploit; a
         | POC against the original version would require the attacker's
         | private key, which is undisclosed.
        
           | miduil wrote:
            | It's a POC nevertheless; it's a complete implementation of
            | the RCE minus, obviously, the private key.
        
             | nindalf wrote:
             | It doesn't matter. The people with the private key already
             | knew all of this because they implemented it. The script
             | kiddies without the private key can't do anything without
             | it. A POC doesn't help them in any way.
             | 
             | A way to check if servers are vulnerable is probably by
             | querying the package manager for the installed version of
             | xz. Not very sophisticated, but it'll work.
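              | 
              | For example (sketch, untested; 5.6.0 and 5.6.1 are the
              | affected versions named in the advisory):
              | 
              |     # Debian/Ubuntu
              |     dpkg -l 2>/dev/null | grep -E 'xz-utils|liblzma'
              |     # Fedora/RHEL
              |     rpm -qa 2>/dev/null | grep -E '^xz|liblzma'
              |     # Either; 5.6.0 or 5.6.1 => assume affected
              |     xz --version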
        
               | miduil wrote:
               | > It doesn't matter.
               | 
                | To understand the exact behavior and extent of the
               | backdoor, this does matter. An end to end proof of how it
               | works is exactly what was needed.
               | 
               | > A way to check if servers are vulnerable is probably by
               | querying the package manager
               | 
                | Yes, this has been known since the initial report + later
               | discovering what exact strings are present for the
               | payload.
               | 
                | https://github.com/Neo23x0/signature-base/blob/master/yara/b...
               | 
               | > Not very sophisticated, but it'll work.
               | 
                | Unfortunately, we live in a world with closed servers and
                | appliances - being able, as a customer or pen tester, to
                | rule out a certain class of security issues without having
                | the source/insights available is usually desirable.
        
               | nindalf wrote:
               | > we live in a world with closed-servers and appliances
               | 
               | Yeah but these servers and appliances aren't running
               | Debian unstable are they? I'd understand if it affected
               | LTS versions of distros, but these were people living on
               | the bleeding edge anyway. Folks managing such servers are
               | going to be fine running `apt-get update`.
               | 
               | We got lucky with this one, tbh.
        
               | doakes wrote:
               | Are you saying POCs are pointless unless a script kiddie
               | can use it?
        
               | nindalf wrote:
               | The _context_ of the conversation, which you seem to have
               | missed, is that now that we have a POC, we need a way to
               | check for vulnerable servers. The link being that a POC
                | makes it easier for script kiddies to use it, meaning
                | we're in a race against them. But we aren't, because only
               | one group in the whole world can use this exploit.
        
               | miduil wrote:
               | > is that now that we have a POC, we need a way to check
               | for vulnerable servers.
               | 
               | You misunderstand me, the "need to check for vulnerable
               | servers" has nothing to do with the PoC in itself. You
               | want to know whether you're vulnerable against this
                | mysterious unknown attacker that went through all the
               | hoops for a sophisticated supply chain attack. I never
               | said that we need a way to detect it because there is a
                | POC out; at least I didn't mean to imply that either.
               | 
               | > script kiddies to use it, meaning we're in a race
               | against them
               | 
                | This is something you and the other person suddenly came
                | up with; I never said this in the first place.
        
           | misswaterfairy wrote:
           | Could the provided honeypot print out keys used in successful
           | and unsuccessful attempts?
        
         | cjbprime wrote:
         | Probing for vulnerable deployments over the network (without
         | the attacker's private key) seems impossible, not non-trivial.
         | 
         | The best one could do is more micro-benchmarking, but for an
         | arbitrary Internet host you aren't going to know whether it's
         | slow because it's vulnerable, or because it's far away, or
         | because the computer's slow in general -- you don't have access
         | to how long connection attempts to that host took historically.
         | (And of course, there are also routing fluctuations.)
        
           | anonymous-panda wrote:
           | Should be able to do it by having the scanner take multiple
           | samples. As long as you don't need a valid login and the
            | performance issue is still observable, you should be able to
            | scan for it with minimal cost.
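            | 
            | Rough sketch of what I mean (untested; target and sample
            | count are placeholders):
            | 
            |     # Time 50 pre-auth SSH attempts, print the median;
            |     # the backdoor added noticeable per-connection delay
            |     for i in $(seq 1 50); do
            |       /usr/bin/time -f '%e' ssh -o BatchMode=yes \
            |         -o ConnectTimeout=5 nobody@target true 2>&1 \
            |         | tail -n1
            |     done | sort -n | sed -n '25p'
            | 
            | You'd still need a baseline from a comparable host to call
            | it either way.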
        
       | supriyo-biswas wrote:
       | All these efforts are appreciated, but I don't think the attacker
       | is going to leak their payload after the disclosure and
       | remediation of the vuln.
        
         | chonkerz wrote:
         | These efforts have deterrence value
        
       | acdha wrote:
       | Has anyone tried the PoC against one of the anomalous process
       | behavior tools? (Carbon Black, AWS GuardDuty, SysDig, etc.) I'm
       | curious how likely it is that someone would have noticed
       | relatively quickly had this rolled forward and this seems like a
       | perfect test case for that product category.
        
         | knoxa2511 wrote:
         | Sysdig released a blog on friday. "For runtime detection, one
         | way to go about it is to watch for the loading of the malicious
         | library by SSHD. These shared libraries often include the
         | version in their filename."
         | 
         | The blog has the actual rule content which I haven't seen from
         | other security vendors
         | 
         | https://sysdig.com/blog/cve-2024-3094-detecting-the-sshd-bac...
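          | 
          | A crude runtime check in the same spirit (sketch; needs root
          | and procfs):
          | 
          |     # List liblzma objects mapped into running sshd processes
          |     for pid in $(pgrep -x sshd); do
          |       grep -ho '/[^ ]*liblzma[^ ]*' /proc/$pid/maps | sort -u
          |     done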
        
           | acdha wrote:
           | Thanks! That's a little disappointing since I would have
           | thought that the way it hooked those functions could've been
           | caught by a generic heuristic but perhaps that's more common
           | than I thought.
        
           | RamRodification wrote:
           | That relies on knowing what to look for. I.e. "the malicious
           | library". The question is whether any of these solutions
           | could catch it without knowing about it beforehand and having
           | a detection rule specifically made for it.
        
         | dogman144 wrote:
         | Depends how closely the exploit mirrors and/or masks itself
         | within normal compression behavior imo.
         | 
         | I don't think GuardDuty would catch it as it doesn't look at
         | processes like an EDR does (CrowdStrike, Carbon black), I don't
          | think sysdig would catch it as it looks at containers and cloud
         | infra. Handwaving some complexity here, as GD and sysdig could
         | prob catch something odd via privileges gained and follow-on
         | efforts by the threat actor via this exploit.
         | 
         | So imo means only EDRs (monitoring processes on endpoints) or
         | software supply chain evaluations (monitoring sec problems in
         | upstream FOSS) are most likely to catch the exploit itself.
         | 
         | Leads into another fairly large security theme interestingly -
         | dev teams can dislike putting EDRs on boxes bc of the hit on
          | compute and UX issues if a containment happens, and can dislike
          | policies and limits around FOSS use. So this exploit hits at
          | the heart of an org-driven "vulnerability" where there's a lot
          | of logic to stay exposed or to fix, depending on where you sit.
         | Security industry's problem set in a nutshell.
        
           | acdha wrote:
            | Guard Duty does have some process-level monitoring with some
            | recent additions:
            | https://aws.amazon.com/blogs/aws/amazon-guardduty-ec2-runtim...
           | 
           | The main thing I was thinking is that the audit hooking and
           | especially runtime patching across modules (liblzma5 patching
           | functions in the main sshd code block) seems like the kind of
            | thing a generic behavioral profile could catch, especially
            | one driven by the fact that sshd does not do any of that
            | normally.
           | 
           | And, yes, performance and reliability issues are a big
           | problem here. When CarbonBlack takes down production again,
           | you probably end up with a bunch of exclusions which mean an
           | actual attacker might be missed.
        
       | arnaudsm wrote:
       | Have we seen exploitation in the wild yet?
        
         | winkelmann wrote:
         | I assume the operation has most likely been called off. Their
         | goal was probably to wait until it got into stable distros. I
         | doubt there is a large number of unstable Debian or Fedora
         | Rawhide servers with open SSH in the wild.
        
         | rwmj wrote:
         | If it hadn't been discovered for another month or so, then it
          | would have appeared in stable Fedora 40, Ubuntu 24.04 and
          | Debian, and then it definitely would have been exploited. In
          | another year it would have been in RHEL 10. Very lucky escape.
        
       | asveikau wrote:
       | It's pretty interesting that they didn't just introduce an RCE
        | that anyone can exploit; it requires the attacker's private key.
        | It's ironically a very security-conscious vulnerability.
        
         | haswell wrote:
         | I suspect the original rationale is about preserving the
         | longevity of the backdoor. If you blow a hole wide open that
         | anyone can enter, it's going to be found and shut down quickly.
         | 
         | If this hadn't had the performance impact that brought it
         | quickly to the surface, it's possible that this would have
         | lived quietly for a long time exactly because it's not widely
         | exploitable.
        
           | declan_roberts wrote:
           | I agree that this is probably about persistence. Initially I
           | thought the developer was playing the long-con to dump some
           | crypto exchange and make off with literally a billion dollars
           | or more.
           | 
           | But if that was the case they wouldn't bother with the key.
           | It'd be a one-and-done situation. It would be a stop-the-
           | world event.
           | 
           | Now it looks more like nation-state spycraft.
        
             | meowface wrote:
             | >But if that was the case they wouldn't bother with the
             | key. It'd be a one-and-done situation. It would be a stop-
             | the-world event.
             | 
             | Why not? It's possible someone else could've discovered the
             | exploit before the big attack but decided not to disclose
             | it. Or that they could've disclosed it and caused a lot of
             | damage the attacker didn't necessarily want. And they
             | easily could've been planning both a long-term way to
             | access a huge swath of machines and also biding their time
             | for a huge heist.
             | 
             | They have no reason to not restrict the backdoor to their
             | personal use. And it probably is spycraft of some sort, and
             | I think more likely than not it's a nation-state, but not
             | necessarily. I could see a talented individual or group
             | wanting to pull this off.
        
               | varenc wrote:
               | I think we need to consider the context. The attacker
               | ultimately only had control over the lzma library. I'm
                | skeptical that there's an innocent-looking way that lzma
                | could have introduced, in the open, an "accidental" RCE
                | vuln that'd affect sshd. Of course I agree that they also
               | wanted an explicit stealth backdoor for all the other
               | reasons, but I don't think a plausibly deniable RCE or
               | authentication bypass vuln would have even been possible.
        
             | btown wrote:
             | It's worth also noting that the spycraft involved a
             | coordinated harassment campaign of the original maintainer,
             | with multiple writing styles, to accelerate a transition of
             | maintainership to the attacker:
             | 
              | https://www.mail-archive.com/xz-devel@tukaani.org/msg00566.h...
              | 
              | https://www.mail-archive.com/xz-devel@tukaani.org/msg00568.h...
              | 
              | https://www.mail-archive.com/xz-devel@tukaani.org/msg00569.h...
             | 
             | While this doesn't prove nation-state involvement, it
             | certainly borrows from a wide-ranging playbook of
             | techniques.
        
               | allanbreyes wrote:
               | Ugh, that this psyops sockpuppetry may have started or
               | contributed to the maintainer's mental health issues
               | seems like the most depressing part of all this.
               | Maintaining OSS is hard enough.
        
               | __MatrixMan__ wrote:
               | No good deed goes unpunished, what a shame.
        
               | btown wrote:
               | I hope that one takeaway from this entire situation is
               | that if you're a maintainer and your users are pushing
               | you outside your levels of comfort, that's a reflection
               | on them, not on you - and that it could be reflective of
               | something far, far worse than just their impatience.
               | 
               | If you, as a maintainer, value stability of not only your
               | software but also your own mental health, it is
               | _entirely_ something you can be proud of to resist calls
               | for new features, scope increases, and rapid team
               | additions.
        
               | Delk wrote:
               | Probably didn't start them, considering that he already
               | mentioned (long-term) mental health issues in the mailing
               | list discussion in which the (likely) sock puppets
               | started making demands.
               | 
               | But it's hard to see the whole thing helping, and it is
               | some combination of depressing and infuriating. I hope
               | he's doing ok.
        
               | mrkramer wrote:
                | At first I thought the guy who did this was a lone wolf
                | but now I believe it was indeed a state actor. They
                | coordinated and harassed the original maintainer into
                | giving them access to the project; basically they
                | hijacked the open source project. The poor guy (the
                | original maintainer) was alone against a state actor who
                | was persistent, with the goal of hijacking and then
                | backdooring the open source project.
                | 
                | It seems like they were actively looking[0] for an open
                | source compression library they could inject with
                | vulnerable code and then exploit and backdoor afterwards.
               | 
               | [0] https://lwn.net/Articles/967763/
        
               | asveikau wrote:
               | Reading that link, it seems like the vulnerability is
               | that a file name gets printed, so you can add terminal
               | control characters in a file name and have it printed.
               | 
                | https://github.com/libarchive/libarchive/pull/1609#issuecomm...
        
               | Voultapher wrote:
               | I mean one person can use sock puppet accounts to write
               | emails.
        
               | yread wrote:
               | Maybe they weren't all sockpuppets. Here Jigar Kumar was
               | nitpicking Jia Tan's changes:
               | 
                | https://www.mail-archive.com/xz-devel@tukaani.org/msg00556.h...
                | 
                | That was not necessary to gain trust. The writing style is
                | different, too. Later, when Jia gained commit access, Jigar
                | reminds him to merge it.
        
               | barkingcat wrote:
               | that's precisely what sock puppetry does ... talk/write
               | in a different writing style to make others believe it's
               | different people.
        
               | asveikau wrote:
               | This looks like a very phony "debate".
               | 
               | I think the most convincing case made about the sock
               | puppets is around account creation dates, and also people
               | disappearing after they get what they need. Like Jigar
               | disappearing after Jia becomes maintainer. Or the guy
               | "misoeater19" who creates his debian bug tracker account
               | to say that his work is totally blocked on needing xz
               | 5.6.1 to be in debian unstable.
        
               | furstenheim wrote:
                | I actually wondered how many packages they harassed
                | before they got access to one like this.
        
             | juitpykyk wrote:
              | You're talking as if securing a backdoor with public-key
              | cryptography is some unimaginable feat of technology.
             | 
             | It's literally a couple hours work.
        
               | kbenson wrote:
               | I don't think they were using complexity as the reason
               | for that assumption, but instead goals. Adding security
               | doesn't require a nation state's level of resources, but
               | it is a more attractive feature for a nation state that
               | wants to preserve it over time and prevent adversaries
               | from making use of it.
        
               | neodymiumphish wrote:
               | And on the contrary, creating a vulnerability that's not
               | identifiable to a limited attack group provides for a bit
                | more deniability and anonymity. It's hard to say which is
                | more favorable for a nation-state actor.
        
           | eli wrote:
           | More to the point it prevents your enemies from using the
           | exploit against friendly targets.
           | 
           | The tradeoff is that, once you find it, it's very clearly a
           | backdoor. No way you can pretend this was an innocent bug.
        
           | chpatrick wrote:
           | Also people can come and immediately undo whatever you did if
           | it's not authenticated.
        
           | Beijinger wrote:
            | This is just a safety measure so that it does not blow up in
            | your own face (country).
        
         | Alifatisk wrote:
         | For real, it's almost like a state-sponsored exploit. It's
          | crafted and executed incredibly well; it feels like pure luck
          | that it got found via the performance issue.
        
           | jdewerd wrote:
           | Yeah, we should probably expect that there are roughly
           | 1/p(found) more of these lurking out there. Not a pleasant
           | thought.
        
           | andersa wrote:
           | It'd make total sense if it was. This way you get to have the
           | backdoor without your enemies being able to use it against
           | your own companies.
        
           | hnthrowaway0328 wrote:
            | Do we have a detailed technical analysis of the code? I read
            | a few analyses but they all seem preliminary. It would be
            | very useful to learn from the code.
        
             | coldpie wrote:
             | There's a few links down at the bottom of the OP to quite
             | detailed analysis. From there you could join a Discord
             | where discussion is ongoing.
        
               | hnthrowaway0328 wrote:
               | Thanks coldpie.
        
           | stingraycharles wrote:
           | I like the theory that actually, it wasn't luck but was
           | picked up on by detection tools of a large entity (Google /
           | Microsoft / NSA / whatever), and they're just presenting the
           | story like this to keep their detection methods a secret.
           | It's what I would do.
        
             | jsmith99 wrote:
              | The attacker changed the project's contact details at OSS-
              | Fuzz (an automated detection tool). There's an interesting
              | discussion as to whether that would have picked up the
              | vulnerability:
              | https://github.com/google/oss-fuzz/issues/11760
        
               | meowface wrote:
               | That's a fascinating extra detail. They really tried to
               | cover all their bases.
               | 
               | There's some plausible evidence here that they may've
               | tried to use alter egos to encourage Debian to update the
               | package:
               | https://twitter.com/f0wlsec/status/1773824841331740708
        
               | metzmanj wrote:
               | I work on oss-fuzz.
               | 
               | I don't think it's plausible OSS-Fuzz could have found
               | this. The backdoor required a build configuration that
               | was not used in OSS-Fuzz.
               | 
               | I'm guessing "Jia Tan" knew this and made changes to XZ's
               | use of OSS-Fuzz for the purposes of cementing their
               | position as the new maintainer of XZ, rather than out of
               | worry OSS-Fuzz would find the backdoor as people have
               | speculated.
        
             | est31 wrote:
             | I doubt that if Google detected it with some internal tool,
             | they'd reach out to Microsoft to hide their contribution.
             | 
             | It was reported by an MS engineer who happens to be
             | involved in another OSS project. MS is doing business with
             | the US intelligence community, for example there is the
             | Skype story: First, rumors that NSA offers a lot of money
             | for people who can break Skype's E2E encryption, then MS
             | buys Skype, then MS changes Skype's client to not be E2E
             | encrypted any more and to use MS servers instead of peer to
             | peer, allowing undetectable wiretapping of arbitrary
             | connections.
             | 
             | But it's a quite credible story too that it was just a
             | random discovery. Even if it was the NSA, why would they
              | hide that capability? It doesn't take much to run a script
             | to compare git state with uploaded source tarballs in
             | distros like Debian (Debian has separate tarballs for the
             | source and the source with Debian patches applied).
        
               | anarazel wrote:
               | > It was reported by an MS engineer who happens to be
               | involved in another OSS project.
               | 
                | I view it as being an OSS/PostgreSQL dev who happens to
                | work at Microsoft. I've been doing the former for much
                | longer (starting somewhere between 2005 and 2008,
                | depending on how you count) than the latter (2019-12).
        
               | est31 wrote:
               | Thanks for the explanation. Also thanks for catching this
               | and protecting us all; I think in the end it's way more
               | believable that you indeed found it on your own, above
               | was just brainless blathering into the ether =). Lastly,
               | thanks for your Postgres contributions.
        
               | bevekspldnw wrote:
               | "Even if it was the NSA, why would they hide that
               | capability"
               | 
               | Perhaps you're not familiar with what NSA historically
               | stood for: Never Say Anything.
        
               | t0mas88 wrote:
               | Intelligence agencies are very careful about sharing
               | their findings, even with "friends", because the findings
               | will disclose some information about their capabilities
               | and possibly methods.
               | 
               | Let's say agency A has some scanning capability on open
               | source software that detected this backdoor attempt by
               | agency B. If they had gone public, agency B now knows
               | they have this ability. So agency B will adjust their
               | ways the next time and the scanning capability becomes
               | less useful. While if agency A had told Microsoft to
               | "find" this by accident, nobody would know about their
               | scanning capability. And the next attempt by agency B
               | would only try to avoid having the performance impact
               | this first attempt had, probably leaving it visible to
               | agency A.
        
               | mistrial9 wrote:
               | > an MS engineer
               | 
                | No, this engineer is world-known for being a core
                | PostgreSQL developer, a team with high standards...
                | unlike that company you mention.
        
             | edflsafoiewq wrote:
             | They could announce it without revealing their detection
             | method. I don't see what the extra indirection buys them.
        
               | MOARDONGZPLZ wrote:
               | Shutting down 99.9% of speculation on how the
               | vulnerability was found in the first place.
        
               | VMG wrote:
               | (1) obviously not
               | 
               | (2) they could just say they found it during some routine
               | dependency review or whatever
        
             | meowface wrote:
             | It's really interesting to think what might've happened if
             | they could've implemented this with much less performance
             | overhead. How long might it have lasted for? Years?
        
             | TacticalCoder wrote:
              | > ...and they're just presenting the story like this to
              | keep their detection methods a secret. It's what I would
              | do.
             | 
             | Basically "parallel construction". It's very possible it's
             | what happened.
             | 
             | https://en.wikipedia.org/wiki/Parallel_construction
        
             | takeda wrote:
              | Sorry for the ugly comparison, but that explanation
              | reminds me of the theories when covid started, that it was
              | created by a secret organization that is actually ruling
              | the world.
              | 
              | People love it when there's some explanation that doesn't
              | involve randomness, because with randomness it looks like
              | we don't have a grasp on things.
              | 
              | Google actually had tooling that could detect it, but the
              | attacker disabled the check that would have shown it.
              | 
              | Google/Microsoft/NSA could just say they detected it with
              | internal tooling and not disclose how exactly. Google and
              | Microsoft would love to have the credit.
        
           | webmaven wrote:
           | Was the performance issue pure luck? Or was it a subtle bit
           | of sabotage by someone inside the attacking group worried
           | about the implications of the capability?
           | 
           | If it had been successfully and secretly deployed, this is
           | the sort of thing that could make your leaders _much_ more
           | comfortable with starting a  "limited war".
           | 
           | There are shades of "Setec Astronomy" here.
        
           | lyu07282 wrote:
            | One question I still have: what exactly was the performance
            | issue? I heard it might be related to enumeration of shared
            | libraries, decoding of the scrambled strings[1], etc. Does
            | anyone know for sure yet?
            | 
            | One other point for investigation is whether the code is
            | similar to any other known implants - the way it obfuscates
            | strings, the way it detects debuggers, the way it sets up a
            | vtable. There might be code fragments shared across
            | projects, which might give clues about its origin.
           | 
           | [1]
           | https://gist.github.com/q3k/af3d93b6a1f399de28fe194add452d01
        
           | timmytokyo wrote:
           | I'm not sure why everyone is 100% sure this was a state-
           | sponsored security breach. I agree that it's more likely than
           | not state-sponsored, but I can imagine all sorts of other
           | groups who would have an interest in something like this,
           | organized crime in particular. Imagine how many banks or
            | crypto wallets they could break into with an RCE this
           | pervasive.
        
             | blablabla123 wrote:
             | Especially considering this introduced a 500ms waiting
             | time. But surely this was quite a risky time investment, 2
             | years. How likely is it that this was the only attempt if
             | this was done by a group? (And maybe there were failed
             | attempts after trying to take over maintenance of other
             | packages?) Maybe really a very well-funded cybercrime group
             | that can afford such moonshot endeavours or a state group
             | that doesn't completely know yet what it's doing or isn't
             | that well equipped (anymore?). I'm definitely curious about
              | analysis of attribution.
        
             | bevekspldnw wrote:
             | Motive and patience. Motive as you point out is shared by
             | many parties.
             | 
              | Typically it's only state agencies that will fund an
              | operation with an uncertain payoff over long periods of
              | time. That type of patience is expensive.
              | 
              | Online criminals are beholden to changing market pressures
              | and short-term investment pressures like any other
              | start-up.
        
           | 7373737373 wrote:
           | I read someone speculating that the performance issue was
           | intentional, so infected machines could be easily identified
            | by an internet-wide scan without arousing further
            | suspicion.
           | 
           | If this is or becomes a widespread method, then anti-malware
           | groups should perhaps conduct these scans themselves.
        
             | zarzavat wrote:
             | Very small differences in performance can be detected over
             | the network as long as you have enough samples. Given that
             | every port 22 is being hit by a gazillion attempts per day
             | already, sample count shouldn't be an issue.
             | 
             | So if distinguishing infected machines was their intention
             | they definitely over-egged it.
        
           | IshKebab wrote:
           | I don't think it was executed incredibly well. There were
           | definitely very clever aspects but they made multiple
           | mistakes - triggering Valgrind, the performance issue, using
           | a `.` to break the Landlock test, not giving the author a
           | proper background identity.
           | 
           | I guess you could also include the fact that they made it a
           | very obvious back door rather than an exploitable bug, but
           | that has the advantage of only letting you exploit it so it
           | was probably an intentional trade-off.
           | 
           | Just think how many back doors / intentional bugs there are
           | that we don't know about because they didn't make any of
           | these mistakes.
        
             | richardfey wrote:
             | Maybe it's the first successful attempt of a state which
             | nobody would right now suspect as capable of carrying this
             | out. Everyone is looking at the big guys but a new player
             | has entered the game.
        
         | linsomniac wrote:
         | Am I reading it correctly that the payload signature includes
         | the target SSH host key? So you can't just spray it around to
          | servers; it's fairly computationally expensive to send it to a
         | host.
        
           | miduil wrote:
            | *host key fingerprint, but I assume that's what you meant.
            | 
            | It's practically a good backdoor then, cryptographically
            | protected and safe against replay attacks.
        
             | amluto wrote:
             | Not quite. It still looks vulnerable: an attacker A without
             | the private key impersonates a victim server V and reports
             | their host key. A careless attacker B with the key tries to
              | attack A, but A ends up recovering a valid payload
             | targeting V.
        
               | Denvercoder9 wrote:
                | I'm not too familiar with the SSH protocol, but is it
               | possible to impersonate a victim server V without having
               | the private key to their host key?
        
               | pxx wrote:
               | This stuff is pre-auth.
               | 
               | You can just treat the entire thing as opaque and proxy
               | everything to the host you're trying to compromise; as
               | soon as you have an exploit string for a given host you
               | can just replay it.
        
         | nialv7 wrote:
         | how are you going to sell it if anyone can get in?
        
         | cesarb wrote:
         | This is called NOBUS: https://en.wikipedia.org/wiki/NOBUS
        
           | halJordan wrote:
            | This is not that concept. That concept is that no one but us
            | can technically complete the exploit - technical feasibility
            | in the sense that you need a supercomputer to do it - not
            | protecting a backdoor with the normal CIA triad.
        
         | takeda wrote:
          | If you think of it as a state-sponsored attack it makes a lot
          | of sense to have a "secure" vulnerability in a system that
          | your own citizens might use.
         | 
         | It looks like the whole contribution to xz was an effort to
         | just inject that backdoor. For example the author created the
         | whole test framework where he could hide the malicious payload.
         | 
          | Before he started work on xz, he made a contribution to
          | libarchive in BSD which created a vulnerability.
        
           | pxx wrote:
           | The libarchive diff didn't create any vulnerability. The
           | fprintf calls were consistent with others in the same
           | repository.
        
             | takeda wrote:
             | They still preferred to revert it as it looks very
             | suspicious.
             | 
             | https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1068047
             | 
             | https://github.com/libarchive/libarchive/pull/2101
        
               | asveikau wrote:
                | If I read that correctly, the problem is that a filename
                | gets printed, and it might include terminal control
                | sequences that come from an attacker-controlled file
                | name.
                | 
                | Comment in your second link:
                | 
                | https://github.com/libarchive/libarchive/pull/1609#issuecomm...
        
             | jhugo wrote:
              | It did, actually: the filename can contain terminal control
              | characters which, thanks to the change from safe_fprintf to
              | fprintf, were printed without escaping. That allows the
              | creator of the archive being extracted to control the
              | terminal of the user extracting the archive.
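              | 
              | The bug class is easy to demo locally (harmless escape
              | sequence; GNU ls shown because it can print names
              | unescaped):
              | 
              |     # Create a file whose name embeds a terminal escape
              |     touch "$(printf 'evil \033[31mred\033[0m name')"
              |     # Raw escapes reach the terminal, like the old fprintf
              |     ls --quoting-style=literal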
        
       | jobs_throwaway wrote:
       | Imagine how frustrating it has to be for the attacker to
       | meticulously plan and execute this and get foiled so late in the
        | game, and so publicly.
        
         | Alifatisk wrote:
         | Must be punching the air right now
        
           | cellis wrote:
           | Or...falling back on less noticed contingency plans...
        
             | Ekaros wrote:
              | My pet theory is that this was just one project they have
              | been running for years. They are likely doing many more at
              | the same time: slowly inserting parts into various projects
              | and getting their contributors inside the projects.
        
               | 0cf8612b2e1e wrote:
               | That seems like a safe bet. If you are planning a multi
               | year operation, it would be silly to do it all under a
               | single account. Best to minimize the blast radius if any
               | single exploit gets discovered.
        
               | guyomes wrote:
               | Some of this activity was actually documented through the
               | alleged data breach of a hacking company [1].
               | 
               | [1]: https://en.wikipedia.org/wiki/Hacking_Team
        
               | SAI_Peregrinus wrote:
               | If it's an intelligence agency exploit, this is nearly
               | certain. Getting agents hired as employees of foreign
               | companies to provide intelligence is an ancient practice.
               | Getting agents to be open source maintainers is a
               | continuation of the same thing.
        
           | noobermin wrote:
            | Or, they must be strapped into a chair having their teeth
            | pulled out by whoever directed them.
        
           | TechDebtDevin wrote:
           | Likely a team of people at a three letter agency.
        
             | morkalork wrote:
             | I wonder if they have OKRs too.
        
             | TheBlight wrote:
             | Everyone keeps saying this but it seems unlikely to me that
             | they'd do this for a relatively short window of opportunity
             | and leave their methods for all to see.
        
               | avidiax wrote:
               | You are judging this by the outcome, as though it were
               | pre-ordained, and also assuming that this is the only
               | method this agency has.
               | 
                | It is much more likely that this backdoor would have gone
                | unnoticed for months or years. The access this backdoor
                | provides would be used only once per system, to install
                | other APTs (advanced persistent threats), probably layers
                | of them. Use a typical software RAT or rootkit as the
                | first layer. If that is discovered, fall back to the
                | private keys you stole, or social-engineer your way in
                | with the company directory you copied. If that fails,
                | rely on the firmware rootkit that only runs if its timer
                | hasn't been reset in 6 months. Failing that, re-use this
                | backdoor if it's still available.
        
               | TheBlight wrote:
               | It was found in a few weeks so why is it more likely it
               | wouldn't have been noticed for months/years with more
               | people running the backdoored version of the code?
        
               | ufo wrote:
               | We were lucky that the backdoor called attention to
                | itself, because it impacted the performance of ssh and
                | introduced valgrind warnings.
        
               | TheBlight wrote:
               | Doesn't that further suggest non-state actor(s)?
        
               | AtNightWeCode wrote:
               | My guess is that a ransomware group is behind this. Even
               | if the backdoor had gone into production servers it would
               | have been found fairly quickly if used at some scale.
        
               | TheBlight wrote:
               | >My guess is that a ransomware group is behind this.
               | 
               | My bet would be that they were after a crypto exchange(s)
               | where they've already compromised some level of access
               | and want to get deeper into the backend.
               | 
               | >Even if the backdoor had gone into production servers it
               | would have been found fairly quickly if used at some
               | scale.
               | 
               | I agree. Yes it's possible the backdoor could've gone
               | unnoticed for months/years but I think the perp would've
               | had to assume not.
        
         | pixl97 wrote:
         | So who's started scanning open source lib test cases for more
         | stuff like this?
        
           | beefnugs wrote:
            | Like everything else in this world, it's utterly and
            | completely, unfixably broken: there is no way you can manage
            | huge, complex dependencies properly.
           | 
           | The best most paranoid thing is to have multiple networking
           | layers hoping they have to exploit multiple things at once,
           | and whitelist only networking for exactly what you are
           | expecting to happen. (which is completely incompatible with
           | the idea of ssl, we would need a new type of firewall that
           | sits between ALL applications before encryption, like a
           | firewall between applications and the crypto library itself,
           | which breaks a bunch of other things people want to do)
        
           | slaymaker1907 wrote:
           | I think it would help to try and secure signing
           | infrastructure as much as possible. First of all, try to have
           | 3 devs for any given project (this is the most difficult
           | step). Make sure logging into or changing any of the signing
           | stuff requires at least 2 of the devs, one to initiate the
           | JIT and one to monitor.
           | 
           | Additionally, take supply chain steps like requiring
           | independent approval for PRs and requiring the signing
           | process to only work with automated builds. Don't allow for
           | signing bits that were built on a dev machine.
           | 
           | Finally, I think it would help to start implementing signing
           | executables and scripts which get checked at runtime. Windows
           | obviously does executable signing, but it largely doesn't
           | sign stuff like Python scripts. JS in the browser is kind of
           | signed given that the whole site is signed via https. It's
           | not perfect, but it would help in preventing one program from
           | modifying another for very sensitive contexts like web
           | servers.
        
             | bawolff wrote:
             | I do not think any of that would have prevented this
             | situation.
        
               | slaymaker1907 wrote:
               | Upon further reading, I think you might be correct. I
               | initially thought a good signing process would be
               | sufficient since it sounded like this malicious blob was
               | secretly being included in the tarball by the build
               | server, but it instead seems to be the case that the
               | malicious binary was included in the repo as a test file.
               | 
               | You could probably still protect against this sort of
               | attack using signing, but it would be much more laborious
               | and annoying to get working. The idea is that you would
               | somehow declare that OpenSSH binaries must be signed by a
               | *particular* key/authority, that VS Code is signed by
               | Microsoft, Chrome signed by Google, etc. Additionally,
               | the config declaring all of this obviously needs to be
               | secured so you'd need to lock down access to those files
               | (changing key associations would need to require more
               | permissions than just updating old software or installing
               | new software to be useful).
        
               | Aloisius wrote:
               | It was both. The binary object file was obfuscated in two
               | "test" xz files in the repo itself. The code to extract
               | the object file and inject it into the build process was
               | only in the GitHub release tarball.
               | 
               | The code in the tarball could have been prevented if only
               | automated tarballs were permitted (for instance, GitHub's
               | branch/tag source tarballs) or caught after the fact by
               | verifying the file hashes in the tarball against those in
               | the repo.
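                | 
                | The after-the-fact check is simple to script (sketch;
                | version and paths are illustrative, and expect some
                | noise from generated autotools files):
                | 
                |     # Diff a release tarball against its git tag
                |     git clone https://github.com/tukaani-project/xz
                |     git -C xz checkout v5.6.1
                |     tar -xf xz-5.6.1.tar.gz
                |     diff -r xz xz-5.6.1 | grep -v 'Only in xz: .git'
                | 
                | In this case the tarball-only build-to-host.m4 change
                | would have stood out.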
        
             | j-krieger wrote:
             | All this riffraff and OSS maintainers basically work for
             | free.
        
         | zoeysmithe wrote:
         | This actor, and others like them, may have dozens or hundreds
         | of these things out there. We don't know. This was only found
         | accidentally, not via code review.
        
       | herpderperator wrote:
       | > Note: successful exploitation does not generate any log
       | entries.
       | 
       | Does this mean, had this exploit gone unnoticed, the attacker
       | could have executed arbitrary commands as root without even a
       | single sshd log entry on the compromised host regarding the
       | 'connection'?
        
         | sureglymop wrote:
          | Yes. The RCE happens at the connection stage, before anything
          | is logged.
        
           | udev4096 wrote:
           | That's insane. How exactly does this happen? Are there no
            | EDR/IDS that can detect an RCE at the connection stage?
        
             | bawolff wrote:
             | An IDS may detect something depending on what it is looking
             | for. The grandparent is saying that sshd doesn't log
              | anything, which is not that surprising since sshd is
              | attacker controlled.
        
         | gitfan86 wrote:
         | Yeah, but then you would have ssh traffic without a matching
         | login.
         | 
         | Wonder if any anomaly detection would work on that
        
           | FergusArgyll wrote:
           | Interesting... Though you can edit whatever log file you want
        
             | jmb99 wrote:
             | Any log that root on that box has write access to. It's
             | theoretically possible to have an anomaly detection service
                | running on a vulnerable machine dumping all of its data to
             | an append-only service on some other non-compromised box.
             | In that case, (in this ideal world) the attacker would not
             | be able to disable the detection service before it had
             | logged the anomalous traffic, and wouldn't be able to purge
             | those logs since they were on another machine.
             | 
             | I'm not aware of any services that a) work like this, or b)
             | would be able to detect this class of attack earlier than
             | last week. If someone does though, please share.
        
               | fubar9463 wrote:
                | You would be sending logs to a log collector (a SIEM, in
                | security terms), and then you could join your firewall
                | logs against your SSH auth logs.
               | 
               | This kind of anomaly detection is possible. Not sure how
               | common it is. I doubt it is common.
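                | 
                | The shipping part is the easy bit, e.g. with rsyslog
                | (collector hostname is a placeholder):
                | 
                |     # /etc/rsyslog.d/90-forward.conf
                |     # forward auth logs to a remote collector over TCP
                |     auth,authpriv.* @@siem.example.internal:514
                | 
                | The correlation/alerting logic on the collector is where
                | the real work is.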
        
               | fubar9463 wrote:
                | In any case, correlating SSH logs against network
                | traffic is potentially error-prone and the ROI may be
                | more noise than useful signal (can you differentiate in
                | logs between SSH logins from a private IP and a public
                | one?).
               | 
               | An EDR tool would be much better to look for an
               | attacker's _next_ steps. But if you're trying to catch a
               | nation state they probably already have a plan for hiding
               | their tracks.
        
               | juitpykyk wrote:
               | You can do it on a single machine if you use the TPM to
               | create log hashes which can't be rolled back.
        
       | mrob wrote:
       | Do we know if this exploit only did something if a SSH connection
       | was made? There's a list of strings from it on Github that
       | includes "DISPLAY" and "WAYLAND_DISPLAY":
       | 
       | https://gist.github.com/q3k/af3d93b6a1f399de28fe194add452d01
       | 
       | These don't have any obvious connection to SSH, so maybe it did
       | things even if there was no connection. This could be important
       | to people who ran the code but never exposed their SSH server to
       | the Internet, which some people seem to be assuming was safe.
        
         | cma wrote:
          | Could that be related to X11 session forwarding? (A common
          | security hole on the connector's side if they don't turn it
          | off when connecting to an untrusted machine.)
        
         | rdtsc wrote:
          | Those are probably kill switches to prevent the exploit from
          | working if there is a terminal open or it runs in a GUI
          | session - in other words, someone trying to detect, reproduce
          | or debug it.
        
       | Joel_Mckay wrote:
       | Too bad, for a minute I thought it was something useful like
       | adding a rule to clamav to find the private key-pair signature ID
       | of the servers/users that up-streamed the exploit.
       | 
       | "Play stupid games, win stupid prizes"
        
       | dec0dedab0de wrote:
       | Stuff like this is why I like port knocking, and limiting access
       | to specific client IPs/networks when possible.
       | 
       | 20 years ago, I was working at an ISP/Telco and one of our
       | vendors had a permanent admin account hardcoded on their gear,
       | you couldn't change the password and it didn't log access, or
       | show up as an active user session.
       | 
       | Always limit traffic to just what is necessary, does the entire
       | internet really need to be able to SSH to your box?
        
         | herpderperator wrote:
         | The thing about port knocking is that if you're on a host where
         | you don't have the ability to port-knock, then you're not able
         | to connect.
         | 
         | This can turn into a footgun: you're away from your usual
         | device, something happens and you desperately need to connect,
         | but now you can't because all the devices in your vicinity
         | don't have the ability to perform $SECURITY_FEATURE_X so that
         | you can connect, and you're screaming at yourself for adding so
         | much security at the expense of convenience.
         | 
         | This could happen as easily as restricting logins to SSH keys
         | only, and not being able to use your SSH key on whatever host
         | you have available at the time, wishing you'd have enabled
         | password authentication.
        
           | pajko wrote:
           | https://github.com/CERN-CERT/pam_2fa
        
             | herpderperator wrote:
             | What does this have to do with port knocking?
        
             | nequo wrote:
             | This is great but would this neutralize the xz backdoor?
             | The backdoor circumvents authentication, doesn't it?
        
           | Joel_Mckay wrote:
           | Port knocking with ssh over https using client certs.
           | 
            | And port knocking is one of the few effective methods against
            | the standard slow, distributed brute-force attacks.
            | 
            | Note if you blanket-ban IN/RU/UK/CN/HK/TW/IR/MX/BZ/BA +
            | tor/proxies lists, then 99.998% of your nuisance traffic
            | issues disappear overnight. =)
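            | 
            | The blanket ban is a few lines with ipset (sketch; the CIDR
            | list file is whatever your GeoIP provider exports):
            | 
            |     # Black-hole every network in a country CIDR list
            |     ipset create geoban hash:net
            |     while read -r net; do
            |       ipset add geoban "$net"
            |     done < country.cidr
            |     iptables -I INPUT -m set --match-set geoban src -j DROP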
        
             | deknos wrote:
             | can you give an example for an implementation of
             | portnocking/ssh/over https and client certs?
        
               | Joel_Mckay wrote:
                | In general, it is a standard Shorewall firewall rule in
                | perl, and the standard ssh protocol wrapper mod.
                | 
                | These are very well documented tricks, and when combined
                | with a standard port 22 and interleaved knock-port
                | tripwire 5-day ban rules... are quite effective against
                | scanners too.
               | 
               | I am currently on the clock, so can't write up a detailed
               | tutorial right now.
               | 
               | Best regards, =)
        
             | oarsinsync wrote:
             | > if you blanket-ban IN/RU/UK/CN/HK/TW/IR/MX/BZ/BA
             | 
             | The list of countries there: India, Russia, United Kingdom,
             | China, Hong Kong, Taiwan, Iran, Mexico, Belize, Bosnia and
             | Herzegovina
             | 
             | I'm amused that the UK is in that group.
        
               | Joel_Mckay wrote:
               | I was too, but the past few years their government server
               | blocks have been pen-testing servers without
               | authorization.
               | 
               | It is apparently a free service they offer people even
               | when told to get stuffed.
               | 
               | =)
               | 
                | My spastic ban-hammer active-response list (i.e. the
                | entire block gets temporarily black-holed after a
                | single violation): AU BG BR BZ CN ES EE FR GB HR IL
                | IN ID IR IQ JP KG KR KP KW LV MM MX NI NL PA PE PK PL
                | RO RU RS SE SG TW TH TR YE VN UA ZA ZZ
        
           | dec0dedab0de wrote:
           | Absolutely, everything in security is a tradeoff. I guess the
           | real point is that there should be layers, and even though
           | you should never rely on security through obscurity, you
           | should still probably have a bit of obscurity in the mix.
        
           | jolmg wrote:
           | > The thing about port knocking is that if you're on a host
           | where you don't have the ability to port-knock, then you're
           | not able to connect.
           | 
            | If that's important, one should be able to set up port
            | knocking such that you can do the knocks even by entering
            | the ports of the sequence by hand in e.g. a web browser's
            | address bar.
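            | 
            | Or from a shell, a minimal manual knock (assuming a
            | hypothetical three-port sequence of 7000 7001 7002):
            | 
            |     # attempt a bare TCP connection to each knock port in
            |     # order; -z sends no data, -w1 gives up after 1s
            |     for p in 7000 7001 7002; do
            |       nc -z -w1 server.example.com "$p"
            |     done
            |     ssh user@server.example.com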
        
             | mr_mitm wrote:
             | Note that port knocking is vulnerable to replay attacks.
              | Single Packet Authorization is better, but requires a
              | private key (it can be your SSH key).
             | 
             | https://www.cipherdyne.org/fwknop/
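              | 
              | For illustration, a typical fwknop client invocation
              | looks something like this (hedged: exact flags depend
              | on your fwknop version and configuration):
              | 
              |     # send one encrypted SPA packet asking the server
              |     # to open tcp/22 for our resolved external IP (-R)
              |     fwknop -A tcp/22 -R -D server.example.com
              | 
              | followed by a normal ssh to the server.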
        
               | jolmg wrote:
               | Not if you set it up such that each knocking sequence can
               | only be used once. Port knocking is a flexible concept.
        
               | squigz wrote:
               | I wonder if one could combine port knocking and TOTP in
               | some way, so the sequence is determined by the TOTP?
               | 
               | (Security is not my thing; don't judge me!)
        
               | noman-land wrote:
               | Was just having the same thought reading this thread.
        
               | ametrau wrote:
                | Yeah, you could, but wouldn't it defeat the purpose
                | of being basically a secret knock before you can give
                | the password? The real password should be the ssh
                | password.
        
               | cesnja wrote:
               | This would be just to allow you to connect to the server.
               | If there was a vulnerable sshd on port 22, an adversary
               | would have to know the port knocking sequence to connect
               | to sshd and run the exploit.
        
               | ulrikrasmussen wrote:
               | What if the knocking sequence was derived from a TOTP
               | secret?
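                | 
                | A rough sketch of the idea (assuming oathtool is
                | installed, a shared base32 secret, and a server that
                | derives the same ports from the same code):
                | 
                |     # current 6-digit TOTP code, e.g. 492817
                |     code=$(oathtool --totp -b "$(cat ~/.ssh/totp_secret)")
                |     # knock on three ports derived from the code;
                |     # 10# forces base-10 so leading zeros don't break
                |     for pair in "${code:0:2}" "${code:2:2}" "${code:4:2}"; do
                |       nc -z -w1 server.example.com $((7000 + 10#$pair))
                |     done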
        
               | swsieber wrote:
               | Ooo, that's a fun idea
        
               | discite wrote:
               | Definitely some fun project to try
        
               | deepbreath wrote:
               | Don't have to do anything too complicated. Here's the
               | knocker code in a short Bash script, produced by GPT4:
               | 
               | ~ % gpt4 'write a very short bash script that takes the
               | number stored in ~/.ssh/knock_seq, increments it by 1 and
               | saves it to the file. It then takes the new number and
               | concatenates it with the value stored in the file
               | ~/.ssh/secret. It pipes the resulting string to sha1sum,
               | spitting out binary. It then takes both the resulting
               | sha1sum and the number used and pipes their concatenation
               | to "nc -u $host $(cat ~/.ssh/knocking_port)". be brief'
                |     # read and bump the sequence number (gives each
                |     # knock a fresh value, for replay protection)
                |     knock_seq=$(cat ~/.ssh/knock_seq)
                |     let knock_seq++
                |     echo $knock_seq > ~/.ssh/knock_seq
                |     # hash seq||secret, keep only the hex digest
                |     concat_seq_secret="${knock_seq}$(cat ~/.ssh/secret)"
                |     sha1_output=$(echo -n "$concat_seq_secret" | sha1sum -b | awk '{print $1}')
                |     final_output="${sha1_output}${knock_seq}"
                |     # send digest||seq as a single UDP datagram
                |     host=localhost
                |     knocking_port=$(cat ~/.ssh/knocking_port)
                |     echo -n "$final_output" | nc -u "$host" "$knocking_port"
        
               | deepbreath wrote:
                | The knockee PoC should also be straightforward: use
                | socat + udp-listen + fork with a script that checks
                | that the input matches `sha1sum(num||secret)||num`
                | (matching the knocker above) and
                | `num>previously_seen_num`, and if so, adds an iptables
                | rule.
               | 
               | This should prevent against replays. Throw in some rate
               | limits somewhere maybe to not get DDoSed, especially if
               | you let socat `fork`.
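                | 
                | A minimal untested sketch of that knockee side
                | (hypothetical paths; socat exports the sender's
                | address to the child as SOCAT_PEERADDR):
                | 
                |     # listener: one checker process per datagram
                |     socat -u UDP-LISTEN:4000,fork EXEC:/usr/local/bin/check-knock
                | 
                |     # /usr/local/bin/check-knock
                |     #!/bin/bash
                |     input=$(cat)              # the whole datagram
                |     digest=${input:0:40}      # sha1 hex is 40 chars
                |     num=${input:40}           # the hashed seq number
                |     last=$(cat /var/lib/knock/last_seq)
                |     want=$(printf '%s' "${num}$(cat /etc/knock/secret)" | sha1sum -b | awk '{print $1}')
                |     # nb: a real version must sanity-check that $num is numeric
                |     if [ "$digest" = "$want" ] && [ "$num" -gt "$last" ]; then
                |       echo "$num" > /var/lib/knock/last_seq
                |       iptables -I INPUT -s "$SOCAT_PEERADDR" -p tcp --dport 22 -j ACCEPT
                |     fi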
        
               | vikarti wrote:
                | This looks related to another problem:
                | 
                | - There is Alice's server, which provides service X.
                | - There are clients like Bob who need this service.
                | - There is Mallory, who thinks clients don't need
                | such a service. Mallory has significant resources
                | (more than Alice or Bob).
                | - Mallory thinks it's ok to block access to Alice's
                | server IF it's known that it's Alice's server and not
                | some random site. Mallory sometimes also thinks it's
                | ok to block if the protocol is unknown.
                | 
                | This problem is solved by Xray in all of its versions.
                | It could be possible (if overkill) to use mostly the
                | same methods to authenticate the correct user and
                | provide their access.
        
               | mlyle wrote:
               | Naive knocking isn't good as a primary security
               | mechanism, but it lowers your attack surface and adds
               | defense in depth.
               | 
               | It means that people who can't intercept traffic can't
               | talk to the ssh server-- and that's most attackers at the
               | beginning phases of an attack. And even someone who can
               | intercept traffic needs to wait for actual administrative
               | activity.
        
               | gnramires wrote:
               | Defense in depth has value I agree, but I think it can
               | also be counterproductive in some cases. Every layer can
               | also be buggy and have vulnerabilities, which can often
               | leak (e.g. into code execution) and compromise the whole
                | system (bypassing layers). What happened in this case
                | seems to be maintainer hijacking and deliberately
                | introduced vulnerabilities. Adding an additional
               | dependency (of say a port-knocking library) doesn't look
               | great in that regard, if the dependency can be hijacked
               | to add remote code execution capabilities. And that
               | library is likely a lot less scrutinized than OpenSSH!
               | 
               | Also underrated I think is security by simplicity.
               | OpenSSH should be extremely simple and easy to
               | understand, such that every proposal and change could be
                | easily scrutinized. Cryptographic constructions
                | themselves are all but mathematically proven
                | invulnerable, so a small codebase can go most of the
                | way to mathematically provable security (bonus points
                | for formal verification).
               | 
               | But for this kind of system there's usually some kind of
               | human vulnerability (e.g. system updates for your distro)
               | in the loop such that the community needs to remain
                | watchful. (It's fun to consider an application that's
                | proven correct and never needs updating again, but
                | usually that's not practical.)
        
               | mlyle wrote:
               | > Adding an additional dependency (of say a port-knocking
               | library) doesn't look great in that regard, if the
               | dependency can be hijacked to add remote code execution
               | capabilities.
               | 
                | Port knocking infrastructure can be _minimal_, knowing
                | nothing but the addresses that knock. It can also sit
                | completely outside the protected service, on a gateway.
               | 
               | Indeed, it can even be no-code, e.g.
               | https://www.digitalocean.com/community/tutorials/how-to-
               | conf...
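                | 
                | For instance, a classic iptables "recent"-module
                | two-knock chain in the spirit of that tutorial (an
                | untested sketch; the knock ports are arbitrary):
                | 
                |     # first knock tags the source IP; a second knock
                |     # within 10s earns 10s of access to port 22
                |     iptables -A INPUT -p tcp --dport 7000 \
                |       -m recent --name K1 --set -j DROP
                |     iptables -A INPUT -p tcp --dport 7001 \
                |       -m recent --name K1 --rcheck --seconds 10 \
                |       -m recent --name K2 --set -j DROP
                |     iptables -A INPUT -p tcp --dport 22 \
                |       -m recent --name K2 --rcheck --seconds 10 -j ACCEPT
                |     iptables -A INPUT -p tcp --dport 22 -j DROP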
               | 
               | > OpenSSH should be extremely simple and easy to
               | understand, such that every proposal and change could be
               | easily scrutinized.
               | 
               | But OpenSSH intrinsically is going to have a much larger
               | attack surface.
               | 
               | > then a small codebase can go most of the way to
               | mathematically provable security (bonus points for formal
               | verification).
               | 
               | It's worth noting this would not have helped against this
               | attack:
               | 
               | * It was against another dependency, not openssh
               | 
               | * The actual vulnerability didn't occur in the code you'd
                | inspect as part of verification processes today. (I
                | don't think anyone is formally verifying build
                | processes.)
        
           | nottorp wrote:
            | You just described the problem with using keys for any
            | login, as the latest fad prescribes?
           | 
           | And generally with depending on a device (phone, xxxkey or
           | whatever) for access.
        
             | Dwedit wrote:
              | At least TOTP is just a long, random, server-assigned
              | password, where you don't type the whole thing in during
              | login attempts.
             | 
             | You can write down the full TOTP secret, and make new 'TOTP
             | keys' whenever you want. Suggested apps: Firefox Extension
             | named "TOTP", and Android App named "Secur". You don't need
             | the heavier apps that want you to create accounts with
             | their company.
             | 
             | Using a TOTP key only provides a little more protection
             | than any other complex password, since you're not typing in
             | the whole thing during a login attempt.
        
           | tamimio wrote:
           | > The thing about port knocking is that if you're on a host
           | where you don't have the ability to port-knock, then you're
           | not able to connect.
           | 
           | Then you attach a device that can have port knocking to that
           | unsupported host. Also, I remember it was called port
           | punching not knocking.
        
           | mlyle wrote:
           | > The thing about port knocking is that if you're on a host
           | where you don't have the ability to port-knock, then you're
           | not able to connect.
           | 
           | You can type http://hostname:porttoknock in a browser.
           | 
           | As long as you're not behind a super restrictive gateway that
           | doesn't let you connect to arbitrary ports, you're golden.
        
             | webmaven wrote:
             | TIL another interesting browser feature. Thank you.
        
               | Dwedit wrote:
               | That's just the usual way of attempting to connect to an
               | HTTP server running on a different port. Sometimes you
               | see websites hosted on port 8080 or something like that.
        
               | webmaven wrote:
               | Oh. Duh.
               | 
               | I suppose I should have said it was a new-to-me use case
               | for that feature.
        
             | bee_rider wrote:
             | It seems like there'd be a pretty big overlap between those
             | kinds of hosts.
        
             | noman-land wrote:
             | I'm a bit of a noob about this. Can you explain what this
             | means?
        
               | shanemhansen wrote:
               | Port knocking involves sending a packet to certain ports
               | on a host. It's overkill but typing http://host:port/ in
               | your browser will, as part of trying to make a TCP
               | connection, send a packet to that port.
        
               | noman-land wrote:
               | Thanks, I didn't realize port knocking could be done
               | manually like this as a way to "unlock" an eventual
               | regular ssh attempt outside the browser. This makes sense
               | now and is super clever!
        
             | Dwedit wrote:
             | I've actually been behind a firewall that blocked outgoing
             | connections except on several well-known ports. Had to run
             | my SSH server over the port usually used for HTTPS just to
             | get it unblocked.
        
             | AlexCoventry wrote:
             | Some versions of port knocking require a specific type of
             | packet.
             | 
             | https://github.com/moxie0/knockknock
        
             | toast0 wrote:
             | Likely won't be enough if you're behind CGNAT and you get a
             | different public IP on different connections.
        
           | codedokode wrote:
            | You can just as well find yourself on a host that doesn't
            | have SSH, or on a network that filters SSH traffic for
            | security reasons.
        
           | godman_8 wrote:
            | My solution to this has been to create a public bastion
            | server and use WireGuard. WireGuard listens on a random
            | UDP port (port knocking is more difficult here). The
            | client is set up to have a dynamic endpoint so I don't
            | need to worry about
           | whitelisting. The key and port information are stored in a
           | password manager like Vaultwarden with the appropriate
           | documentation to connect. Firewall rules are set to reject on
           | all other ports and it doesn't respond to ICMP packets
           | either. A lot of that is security through obscurity but I
           | found this to be a good balance of security and practicality.
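            | 
            | Roughly, the server side of such a setup, as an untested
            | sketch (port, key paths and addresses are placeholders):
            | 
            |     # WireGuard on an arbitrary high UDP port
            |     ip link add wg0 type wireguard
            |     wg set wg0 listen-port 48213 \
            |       private-key /etc/wireguard/server.key
            |     ip addr add 10.8.0.1/24 dev wg0
            |     ip link set wg0 up
            |     wg set wg0 peer <client-public-key> \
            |       allowed-ips 10.8.0.2/32
            |     # sshd then binds only the tunnel address, i.e.
            |     # "ListenAddress 10.8.0.1" in sshd_config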
        
             | trelane wrote:
             | I've seen this discussed a fair bit, and always the
              | recommendation is to use WireGuard and expose ssh only to
             | the "local network" e.g. https://bugs.gentoo.org/928134#c38
             | 
             | First, I don't see how this works where there's a single
             | server (e.g. colocation).
             | 
             | Second, doesn't that just make Wireguard the new hack
             | target? How does this actually mitigate the risk?
        
           | GabeIsko wrote:
           | I wouldn't really consider port knocking to be that effective
           | of a way to block connections. It is really only obscure, but
           | port knocking software is very accessible. So if you know a
           | port needs to be knocked, it's not hard for an attacker to
           | get to it.
           | 
           | The main benefit of port knocking is that it allows you to
           | present your ports as normally closed. If you have a system
           | and you are worried about it getting port scanned for
           | whatever reason, it makes sense to have a scheme where the
           | ports are closed unless it receives a knock. So if someone
           | gets access they shouldn't for whatever reason and pulls a
           | portscan off, the information they get about your system is
           | somewhat limited.
           | 
           | In this scheme, it would be much better to have an access
           | point you authenticate with and then that handles the port
           | knocking for the other devices. So it is a kind of obscure
           | method that is really only useful in a specific use case.
           | 
           | As far as SSH keys go, I would argue that SSH support is so
           | ubiquitous, and SSH access is so powerful that it is a
           | reasonable security tradeoff. I also don't think that SSH
           | ports should be exposed to the internet, unless it is a
           | specific use case where that is the whole point, like GitHub.
            | I'm very down on connecting directly through SSH without
            | network-level access controls. The xz situation has
            | validated this opinion.
           | 
           | I personally don't know any application where you really need
           | supreme admin access to every device, from any device, from
           | anywhere in the world, while at the same time it has extreme
           | security requirements. That's a pretty big task. At that
           | point, constructing a dedicated, hardened access point that
           | faces the internet and grants access to other devices is
           | probably the way to go.
        
             | delusional wrote:
              | > I'm very down on connecting directly through SSH
              | without network-level access controls. The xz situation
              | has validated this opinion.
             | 
             | Did it? At the point an attacker has remote code execution,
             | couldn't they just as easily pivot into an outgoing
             | connection to some command and control server? I don't see
             | how some intermediary access point would have alleviated
              | this problem. If the call is coming from inside the
              | house, the jig is already up.
        
               | deepbreath wrote:
               | > At the point an attacker has remote code execution
               | 
               | The attacker doesn't have remote code execution in the xz
               | case unless they can speak to your port 22. Port knocking
               | prevents them from doing so, provided they don't know how
               | to knock.
        
         | nijave wrote:
         | Jump host running a different SSH server implementation or SSH
         | over VPN seems a little more reliable.
         | 
          | There are a lot of solutions now where the host has an agent
          | that reaches out instead of allowing incoming connections,
          | which can be useful (assuming you trust the proxy
          | service/software).
         | 
          | One place I worked, we ran our jumphost on GCP with
          | Identity-Aware Proxy and on AWS with SSM sessions, so you
          | had to authenticate to the cloud provider API and the hosts
          | weren't directly listening for connections from the
          | internet. Similar setup to ZeroTier/Tailscale + SSH.
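          | 
          | For reference, the client side of those looks roughly like
          | this (instance names are placeholders; the AWS variant also
          | needs the Session Manager plugin):
          | 
          |     # GCP: tunnel ssh through Identity-Aware Proxy
          |     gcloud compute ssh my-vm --tunnel-through-iap
          |     # AWS: interactive shell via SSM, no open port 22
          |     aws ssm start-session --target i-0123456789abcdef0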
        
         | Terr_ wrote:
          | My gut-feel is that it rides near the line between "defense
          | in depth" and "security through obscurity".
        
           | mrguyorama wrote:
           | The reality is that security through obscurity works really
           | well as a layer in an otherwise already good security model.
           | You make sure to test it with some red teaming, and if they
           | fail to get through, you give them all the details about your
           | tricks so they can also test the actual security.
           | 
           | The "obscurity" part mostly serves to limit the noise of
           | drive by and bot'd attacks, such that each attack attempt
           | that you end up stopping and catching is a more serious
           | signal, and more likely to be directed. It's about short
           | circuiting much of the "chaff" in the signal such that you
           | are less warning fatigued and more likely to seriously
           | respond to incidents.
           | 
           | The obscurity is not meant to prevent targeted attacks.
        
           | Attummm wrote:
           | While 'security through obscurity' shouldn't be your only
           | defense, it still plays a crucial role within 'defense in
           | depth' strategies.
           | 
           | In the past, sensitive networks relied heavily on obscurity
           | alone. Just dialing the right phone number could grant access
           | to highly sensitive networks. However, there's a cautionary
           | tale of a security researcher who took the phrase 'don't do
           | security through obscurity' to heart and dared hackers,
           | leading to disastrous consequences.
           | 
           | Obscurity, when used appropriately, complements broader
           | security measures. Take SSH, for example. Most bots target
           | its default port. Simply changing that port removes the easy
            | targets, forcing attackers to use more sophisticated
            | methods, which, in turn, leaves your logs with only the
            | more concerning activity.
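            | 
            | E.g., a one-line sshd_config change (any free high port
            | works; 51022 is arbitrary):
            | 
            |     # /etc/ssh/sshd_config
            |     Port 51022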
        
         | TacticalCoder wrote:
         | > Stuff like this is why I like port knocking, and limiting
         | access to specific client IPs/networks when possible.
         | 
         | Indeed: I whitelist hosts/IP blocks allowed to SSH in. I don't
         | use port-knocking but I never ever _criticized_ those using
         | port knocking.
         | 
         | I do really wonder if people are still going to say that port
         | knocking is pointless and security theatre: we now have a clear
          | example where people who were using port-knocking were
          | protected while those who weren't were potentially wide open
          | to the biggest backdoor discovered to date (even if it
          | wasn't yet in all the major distros).
         | 
         | > ... does the entire internet really need to be able to SSH to
         | your box?
         | 
          | Nope, and I never ever understood the argument saying: _"Port-
         | knocking is security theatre, it doesn't bring any added
         | security"_.
         | 
         | To me port-knocking didn't _lower_ the security of a system.
         | 
          | It seems that we now have clear proof that it actually helps
          | against certain types of attacks (including source-code
          | supply-chain attacks).
        
           | AlexCoventry wrote:
           | > say that port knocking is pointless and security theatre
           | 
           | Who was saying that?
        
             | password4321 wrote:
             | https://hn.algolia.com/?query=port%20knocking%20obscurity&t
             | y...
             | 
             | 20200515 https://news.ycombinator.com/item?id=23187662
             | 
             | etc.
        
               | AlexCoventry wrote:
               | Thanks.
        
             | rdtsc wrote:
              | I've heard it many times in the form of "security
              | through obscurity, lol! You obviously don't know what
              | you're doing".
             | 
             | Yeah, it's a "straw man" pretending the person they are
             | addressing was just planning on running telnet with port
             | knocking.
        
           | bpfrh wrote:
            | > It seems that we now have clear proof that it actually
            | helps against certain types of attacks (including
            | source-code supply-chain attacks).
           | 
            | So would a VPN, or a bastion host with a custom,
            | non-standard ssh implementation...
            | 
            | At some point you have to choose not to implement yet
            | another security measure, and I would argue that line
            | should be drawn at VPN + standard software for secure
            | access.
           | 
           | If you are a bigger company, probably add SSO and network
           | segmentation with bastion hosts and good logging.
           | 
            | Port knocking doesn't add any security benefit, in the
            | sense that it carries a known, unavoidable security risk:
            | you transmit your password (the knock) in cleartext over
            | the network.
           | 
           | You also add another program with potential vulnerabilities,
           | and as port knocking is not as popular as e.g. sshd,
           | wireguard, maybe it gets less scrutiny and it leads to a
           | supply chain attack?
           | 
            | Security measures are also not free, in the sense that
            | somebody has to distribute them and keep the configuration
            | up to date. Even if that person is you, that means syncing
            | that connect-to-server script and keeping it in a secure
            | location on your devices.
        
             | tredre3 wrote:
              | > You also add another program with potential
              | vulnerabilities, and as port knocking is not as popular
              | as e.g. sshd, wireguard, maybe it gets less scrutiny and
              | it leads to a supply chain attack?
             | 
             | That other program is just a stateful firewall, aka the
             | Linux Kernel itself. If you can't trust your kernel then
             | nothing you do matters.
        
               | bpfrh wrote:
                | That other program is knockd, which needs to listen to
                | all traffic and look for the specified packets.
                | 
                | Granted, that program is really small and could be
                | easily audited, but the same time could have been
                | spent on AppArmor/SELinux + a good VPN and 2FA.
        
               | rhaps0dy wrote:
               | I much prefer the approach I read about in
               | https://github.com/moxie0/knockknock (use a safe
               | language, trust basically only the program you write and
                | the language), to a random port-knocking daemon
                | written in C which pulls in libpcap to sniff
                | everything.
               | 
               | To some extent knockknock also trusts the Python
               | interpreter which is not ideal (but maybe OK)
        
               | Aloisius wrote:
               | In Linux, simple knocking (fixed sequences of ports) can
               | be done entirely in the kernel with nftables rules.
               | Probably could even have different knock ports based on
               | day of the week or hour or source IP.
               | 
               | https://wiki.nftables.org/wiki-
               | nftables/index.php/Port_knock...
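                | 
                | An untested sketch of a two-knock variant along those
                | lines (nftables syntax varies by version; newer
                | kernels may want "flags dynamic,timeout" on sets
                | updated from the packet path):
                | 
                |     nft add table inet knock
                |     nft add set inet knock s1 '{ type ipv4_addr; flags dynamic,timeout; }'
                |     nft add set inet knock s2 '{ type ipv4_addr; flags dynamic,timeout; }'
                |     nft add chain inet knock input '{ type filter hook input priority -10; }'
                |     nft add rule inet knock input tcp dport 7000 add @s1 '{ ip saddr timeout 10s }'
                |     nft add rule inet knock input tcp dport 7001 ip saddr @s1 add @s2 '{ ip saddr timeout 10s }'
                |     nft add rule inet knock input tcp dport 22 ip saddr @s2 accept
                |     nft add rule inet knock input tcp dport 22 drop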
        
             | marcus0x62 wrote:
             | > Port Knocking doesn't add any security benefit in the
             | sense that there are known non avoidable security risk aka
             | your transmit your password(knocking) in clear text over
             | the network.
             | 
             | This take is bordering on not even wrong territory. The
             | point of port knocking isn't to increase entropy of your
             | password or authentication keys per se, it is to control
             | who can send packets to your SSH daemon, either to limit
             | the noise in your logs or to mitigate an RCE in the SSH
             | daemon. The vast majority of potential attackers in the
             | real world are off-path and aren't going to be in a
             | position to observe someone's port-knocking sequence.
             | 
             | Is VPN a better solution? Maybe, but VPNs, especially
             | commercial VPNs have their own set of challenges with
             | regard to auditability and attack surface.
        
               | bpfrh wrote:
               | >This take is bordering on not even wrong territory. The
               | point of port knocking isn't to increase entropy of your
               | password or authentication keys per se, it is to control
               | who can send packets to your SSH daemon, either to limit
               | the noise in your logs or to mitigate an RCE in the SSH
               | daemon. The vast majority of potential attackers in the
               | real world are off-path and aren't going to be in a
               | position to observe someone's port-knocking sequence.
               | 
               | If you read my full sentence in the context it stands, I
               | argue that authorizing access to your openssh instance is
               | done by sending an authentication code in cleartext.
               | 
                | It does not matter whether that authentication code is
                | in the form of bits, tcp packets, colors or
                | horoscopes: as long as you transmit it in cleartext,
                | it is in fact not a secure mechanism.
               | 
                | Yeah, but now you basically have to always run a VPN
                | that only exists between your server and your client,
                | because otherwise your cleartext authentication code
                | is visible; at that point, just use wireguard and make
                | a 1:1 tunnel only for ssh, with no known attacks even
                | if the attacker is on the same network.
                | 
                | Yes, a VPN with no known reliable attack vector is
                | definitely better than a protocol with a known working
                | attack vector.
        
           | nick238 wrote:
           | The advantage of port knocking to me is just reducing the
           | amount of garbage script-kiddie scans. IMHO the design of
           | `sshd` needs to just assume it will be slammed by garbage
           | attempts and minimize the logging. I've heard of `fail2ban`,
           | but banning does nothing as the bots have an unlimited number
           | of IPs.
        
         | noman-land wrote:
         | Got any advice to easily set up port knocking?
        
           | AlexCoventry wrote:
           | It's old and there are probably friendlier options out there
           | now, but
           | 
           | https://github.com/moxie0/knockknock/blob/master/INSTALL
        
         | Dwedit wrote:
         | Going in through the main Internet connection might not be the
         | only way in. Someone surfing on Wifi who visits the wrong
         | website can also become a second way into your internal
         | network.
        
         | k8svet wrote:
         | Over the weekend, I added this to a common bit included in all
         | my NixOS systems:
         | -networking.firewall.allowedTCPPorts = [ 22 ];
         | +networking.firewall.interfaces."tailscale0".allowedTCPPorts =
         | [ 22 ];
         | 
         | I probably should have done this ages ago.
        
         | zoeysmithe wrote:
          | Port knocking is a bit like a lazy person's VPN. You might as
          | well get off your butt and install a vpn solution and use ssh
          | via vpn. The time and effort are almost the same nowadays
          | anyway. The chances of both the vpn and ssh being exploited
          | like this at once must be near zero.
         | 
         | Worse, most corporate, public wifi, etc networks block all
          | sorts of ports. So at home, sure, you can open random ports,
          | but nearly everywhere else it's just 80 and 443. Now you
          | can't knock. But your https vpn works fine.
         | 
         | Also a lot of scary stuff here about identity and code
          | checkins. If someone is a contributor, how do we know their
          | creds haven't been stolen, or that they haven't been forced
          | via blackmail or whatever to do this? Or how many
          | contributors are actually intelligence agents? Then who is
          | validating their code? This person's code went through just
          | fine, and this was only caught because someone noticed a lag
          | in logins, by which point it's a running binary.
         | 
         | FOSS works on the concept of x amount of developer trust, both
         | in code and identity. You can't verify everyone all the time
         | (creds and certs get stolen, blackmail, etc), nor can you audit
         | every line of code all the time. Especially if the exploit is
         | submitted piecemeal over the years or months. That trust is now
         | being exploited it seems. Scary times. I wonder if how FOSS
         | works will change after this. I assume the radio silence for
         | Theo and Linus and others means there's a lot of brainstroming
         | to get to the root of this problem. Addressing the symptom of
         | this one attack probably won't be enough. I imagine some very
         | powerful people want some clarity and fixes here and this is
         | probably going to be a big deal.
         | 
         | I wouldn't be surprised if a big identity trust initiative
         | comes out of this and some AI stuff to go over an entire
         | submitter's history to spot any potentially malicious pattern
         | like this that's hard for human beings to detect.
        
           | codedokode wrote:
           | > nor can you audit every line of code all the time
           | 
            | You can if you distribute this job among volunteers or
            | hire people to do it. There are millions of developers
            | around the world capable of doing this. But the reality is
            | that almost nobody wants to contribute time or money to
            | free software.
        
         | belorn wrote:
         | Rather than port knocking, I prefer IP knocking. The server has
         | several ip addresses and once a correct sequence of connection
         | is made, the ssh port opens. Since so few know about IP
         | knocking, it much safer than port knocking.
         | 
         | /s
        
         | teddyh wrote:
          | Port knocking is stupid. It violates Kerckhoffs's principle
          | [1].
         | If you want more secret bits which users need to know in order
         | to access your system, increase your password lengths, or
         | cryptographic key sizes. If you want to keep log sizes
         | manageable, adjust your logging levels.
         | 
         | The security of port knocking is also woefully inadequate:
         | 
         | * It adds _very little_ security. How many bits are in a
         | "secret knock"?
         | 
          | * The security it _does_ add is _bad_: It's sent in cleartext,
         | and easily brute-forced.
         | 
         | * It complicates access, since it's non-standard.
         | 
          | [1] <https://en.wikipedia.org/w/index.php?title=Kerckhoffs%27s_p
         | r...>
        
           | VWWHFSfQ wrote:
           | Port knocking is definitely dumb, but "increase your password
           | lengths, or cryptographic key sizes" does nothing if your ssh
           | binary is compromised and anyone can send a magic packet to
           | let themselves in.
           | 
              | Strict firewalls, VPNs, and defense-in-depth are really
              | the only answer here.
           | 
           | Of course, those things all go out the window too if your TCP
           | stack itself is also compromised. Better to just air-gap.
        
             | teddyh wrote:
             | Many people argue that a VPN+SSH is a reasonable solution,
             | since it uses two separate implementations, where both are
             | unlikely to be compromised at the same time. I would argue
             | that the more reasonable option would be to split the SSH
             | project in two; both of which validates the credentials of
             | an incoming connection. This would be the same as a VPN+SSH
              | but would not convolute the network topology, and would
             | eliminate the need for two keys to be used by every
             | connecting user.
             | 
              | _However_, in this case, the two-layer approach would not
             | be a protection. Sure, in this case the SSH daemon was
             | compromised, and a VPN before SSH would have protected SSH.
             | But what if the VPN server itself was compromised? Remember
             | that the SSH server was altered, not to allow normal
             | logins, but to call system() directly. What if a VPN server
             | had been similarly altered? This would not have protected
             | SSH, since a direct system() call by the VPN daemon would
             | have ignored SSH completely.
             | 
             | It is a mistake to look at this case and assume that since
             | SSH was compromised this time, SSH must always be protected
             | by another layer. That other layer might be the next thing
             | to be compromised.
        
               | PhilipRoman wrote:
               | >This would not have protected SSH, since a direct
               | system() call by the VPN daemon would have ignored SSH
               | completely.
               | 
               | I don't know how VPNs are implemented on Linux, but in
               | principle it should be possible to sandbox a VPN server
               | to the point where it can only make connections but
               | nothing else. If capabilities are not enough, ebpf should
               | be able to contain it. I suspect it will have full
               | control over networking but that's still very different
               | from arbitrary code execution.
        
               | teddyh wrote:
               | That would be possible, yes, but it's not the current
               | situation. And since we can choose how we proceed, I
               | would prefer my proposal, i.e. that the SSH daemon be
               | split into two separate projects, one daemon handling the
               | initial connection, locked-down like you describe, then
               | handing off the authenticated connection to the second,
               | "inner", SSH daemon, which _also does the authentication_
               | , using the same credentials as submitted by the
               | connecting user. This way, the connecting user only has
               | to have one key, and the network topology does not become
               | unduly twisted.
        
               | PhilipRoman wrote:
               | Huh, I briefly looked at the documentation for
               | UsePrivilegeSeparation option and it looks very similar
                | to what you're describing. I wonder why it didn't
                | prevent this attack.
        
               | mikeocool wrote:
                | Best practice would have your internet-exposed daemon
                | (vpn or ssh) running on a fairly locked-down box that
               | doesn't also have your valuable "stuff" (whatever that
               | is) on it.
               | 
               | So if someone cracks that box, their access is limited,
               | and they still need to make a lateral move to access
               | actual data.
               | 
               | In the case of this backdoor, if you have SSH exposed to
               | the internet on a locked down jump box, AND use it as
               | your internal mechanism for accessing your valuable
               | boxes, you are owned, since the attacker can access your
               | jump box and then immediately use the same vulnerability
               | to move to an internal box.
               | 
               | In the case of a hypothetical VPN daemon vulnerability,
               | they can use that to crack your jump box, but then still
               | need another vulnerability to move beyond that. Not
               | great, but a lot better than being fully owned.
               | 
               | You could certainly also accomplish a similar topology
               | with two different SSH implementations.
        
           | kelnos wrote:
            | I don't think I agree. This backdoor means that it doesn't
            | matter how long your passwords or cryptographic keys are;
            | that's kinda the point of a backdoor. But an automated
            | attempt to find servers to exploit is not going to attempt
            | something like port knocking, or likely even look for an
            | sshd on a non-standard port.
           | 
           | Kerckhoff's law goes (from your link):
           | 
           | > _The principle holds that a cryptosystem should be secure,
           | even if everything about the system, except the key, is
           | public knowledge._
           | 
           | With this backdoor, that principle does not hold; it's
           | utterly irrelevant. Obscuring the fact that there even is a
           | ssh server to exploit increases safety.
           | 
           | (I dislike port knocking because of your third point, that it
           | complicates access. But I don't think your assertions about
           | its security principles hold water, at least not in this
           | case.)
           | 
           | (Ultimately, though, instead of something like port knocking
           | or even running sshd on a non-standard port, if I wanted to
           | protect against attacks like these, I would just keep sshd on
           | a private network only accessible via a VPN.)
        
             | teddyh wrote:
              | > _This backdoor means that it doesn't matter how long
              | your passwords or cryptographic keys are; that's kinda
              | the point of a backdoor._
             | 
              | If we're talking about this specific backdoor, consider
             | this: If the attacker had successfully identified a target
             | SSH server they could reasonably assume had the backdoor,
             | would they be completely halted by a port knocker? No, they
             | would brute-force it easily.
             | 
             | Port knocking is very bad security.
             | 
             | (It's "Kerckhoffs's <law/principle/etc.>", by the way.)
        
           | reaperman wrote:
           | Security-by-obscurity is dumb, yes. But in the context of
           | supply-chain exploits in the theme of this xz backdoor, this
           | statement is also myopic:
           | 
           | > _If you want more secret bits which users need to know in
           | order to access your system, increase your password lengths,
           | or cryptographic key sizes._
           | 
           | If your _sshd_ (or _any other_ exposed service) is
           | backdoored, then the  "effective bits" of any cryptographic
           | key size is reduced to nil. You personally cannot know
           | whether or not your exposed service is backdoored.
           | 
           | Bottom-line is: adding a defense-in-depth like port knocking
           | is unlikely to cause harm _unless_ you use it as
           | justification for not following best-practices in the rest of
           | your security posture.
        
             | belorn wrote:
              | Chaining multiple different login systems can make
              | sense. A more sensible solution than port knocking would
              | be an alternative sshd implementation with a tunnel to
              | the second sshd implementation. Naturally the first one
              | should not run as root (similar to the port knocking
              | daemon).
              | 
              | That way nothing would be in cleartext, and the number
              | of bits of security would be orders of magnitude larger
              | even with a very simple password. The public-facing sshd
              | can also run more lightweight algorithms and disable
              | logging for lower resource usage.
              | 
              | Regardless of whether one uses two sshds or port
              | knocking software, the public-facing daemon can have
              | backdoors and security bugs. If we want to avoid xz-like
              | problems then this first layer needs to be significantly
              | hardened (with SELinux as one solution). Its only
              | capability should be to open the second layer.
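              | 
              | Client-side, chaining two daemons like that is easy
              | with OpenSSH's ProxyJump; e.g., with a hypothetical
              | outer daemon on port 2222 and the inner one bound to
              | loopback:
              | 
              |     # authenticate to the outer sshd, then again to the
              |     # inner sshd reachable only on 127.0.0.1 of the box
              |     ssh -J user@gate.example.com:2222 user@localhost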
        
           | e_y_ wrote:
           | Security through obscurity is only a problem if obscurity is
            | the _main_ defense mechanism. It's perfectly fine as a
           | defense-in-depth. This would only be an issue if someone did
           | something stupid like set up a passwordless rlogin or
           | database server expecting port knocking alone to handle
           | security.
           | 
           | Also as pointed out elsewhere, modern port knocking uses
           | Single Packet Authorization which allows for more bits. It's
           | also _simpler_ and uses a _different_ mechanism than ssh
           | (which due to its age, has historically supported a bunch of
           | different login and cryptography techniques), which reduces
           | the chance that an attacker would be able to break both.
        
       | yodsanklai wrote:
        | Do we know who the attacker is?
        
         | toasteros wrote:
         | The name that keeps coming up is Jia Tan
         | (https://github.com/JiaT75/) but we have no way of knowing if
         | this is a real name, pseudonym, or even a collective of people.
        
           | pphysch wrote:
           | Given the sophistication of this attack it would indeed be
           | downright negligent to presume that it's the attackers' legal
           | name and that they have zero OPSEC.
        
             | xvector wrote:
             | He used ProtonMail. I wonder if ProtonMail can pull IP logs
             | for this guy and share them.
        
               | eklitzke wrote:
               | It might be worth looking into, but:
               | 
               | 1) Probably by design protonmail doesn't keep these kinds
               | of logs around for very long
               | 
               | 2) Hacking groups pretty much always proxy their
               | connection through multiple layers of machines they've
               | rooted, making it very difficult or impossible to
               | actually trace back to the original IP
        
           | ajross wrote:
           | It's also worth pointing out, given the almost two years of
           | seemingly valuable contribution, that this could be a real
           | person who was compromised or coerced into pushing the
           | exploit.
        
             | stefan_ wrote:
             | It's also worth pointing out that parts of the RCE were
              | prepared almost two years ago, which makes this entirely
             | implausible.
        
               | ajross wrote:
               | Were they? The attacker has had commit rights for 1.5
               | years or so, but my understanding is that all the exploit
               | components were recent commits. Is that wrong?
        
         | cellis wrote:
         | I'd say Ned Stark Associates, Fancy Bear, etc.
        
         | FergusArgyll wrote:
         | This Guy
         | 
         | https://news.ycombinator.com/item?id=39877604
        
       | faxmeyourcode wrote:
       | Edit: I misunderstood what I was reading in the link below, my
       | original comment is here for posterity. :)
       | 
       | > From down in the same mail thread: it looks like the individual
       | who committed the backdoor has made some recent contributions to
       | the kernel as well... Ouch.
       | 
       | https://www.openwall.com/lists/oss-security/2024/03/29/10
       | 
        | The OP is such a great analysis; I love reading this kind of
        | stuff!
        
         | davikr wrote:
         | Lasse Collin is not Jia Tan until proven otherwise.
        
           | robocat wrote:
           | Passive aggressive accusation.
           | 
           | This style of fake doubt is really not appropriate anywhere.
        
         | Denvercoder9 wrote:
         | The referenced patch series had not made it into the kernel
         | yet.
        
         | ibotty wrote:
          | No, that patch series is from Lasse. He said himself that it's
         | not urgent in any way and it won't be merged this merge window,
         | but nobody (sane) is accusing Lasse of being the bad actor.
        
       | gghffguhvc wrote:
        | Is there anything actually illegal here? Like, is it a
        | plausible "business" model for talented and morally
        | compromised developers to do this and then sell the private
        | key to state actors, without actually breaking in themselves
        | or allowing anyone else to break in?
       | 
        | Edit: the MIT license provides a pretty broad disclaimer
        | saying the software isn't fit for any purpose, implied or
        | otherwise.
        
         | kstrauser wrote:
         | Yes. This would surely be prosecutable under the CFAA.
         | 
         | Honestly, if I were involved in this, I'd _hope_ that it was,
          | say, the FBI that caught me. I think that'd be the best chance
         | of staying out of the Guantanamo Bay Hilton, laws be damned.
        
           | gghffguhvc wrote:
            | Even with the MIT disclaimer, and the author not being the
            | distributor or having any relationship with the
            | distributor? Publishing vulnerable open source software to
            | GitHub under a disclaimer that says it isn't fit for any
            | purpose makes distros' use of MIT-licensed software seem
            | like a bit of an oversight to me.
        
             | gghffguhvc wrote:
             | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
             | KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
             | WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
             | PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
             | OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
             | OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
             | OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
             | SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
        
             | gowld wrote:
             | A software license has never been a protection against
             | malicious criminal activity. They'd have to prove that the
             | "feature" had a legitimate non-nefarious purpose, or was
             | accidental, neither of which apply here.
        
             | bawolff wrote:
             | That is not how disclaimers work. You cannot disclaim
             | liability for intentionally harming someone.
             | 
             | You also cannot avoid criminal charges for a crime simply
             | by shouting "don't blame me"
        
               | kstrauser wrote:
               | That's exactly right. Imagine a license that said "...and
               | I can come to your house and kill you if I want to." Even
               | if someone signed it in ink and mailed a copy back, the
               | licensor still can't go to their house and kill them even
               | though the agreement says they can.
               | 
               | I can imagine the case of maybe a "King of the Hill"-type
               | game played on bare hardware, where you're actively
               | trying to hack into and destroy other players' systems.
               | Such a thing might have a license saying "you agree we
               | may wipe your drive after downloading all your data", and
               | that _might_ be acceptable in that specific situation.
                | You knew you were signing up for a risky endeavor that
                | might harm your system. If/when it happens, you'd have
                | a hard time complaining about it doing the thing it
                | advertised it would do.
               | 
               |  _Maybe._ Get a jury involved and who knows?
               | 
               | But somewhere between those 2 examples is the xz case.
               | There's no way a user of xz could think that it was
               | designed to hack their system, and no amount of licensing
               | can just wave that away.
               | 
                | For a real-world analogy: if you go skydiving, and you
                | sign an injury waiver, and you get hurt out of
                | pure dumb luck and not negligence, good luck suing anyone
               | for that. You jumped out of a plane. What did you think
               | might happen? But if you walk into a McDonald's and fall
               | through the floor into a basement and break your leg, no
               | number of "not responsible for accidents" signs on the
               | walls would keep them from being liable.
        
               | bawolff wrote:
               | > For a real world analogy, if you go skydiving, and you
               | sign an injury against waiver, and you get hurt out of
               | pure dumb luck and not negligence, good luck suing anyone
               | for that. You jumped out of a plane. What did you think
               | might happen? But if you walk into a McDonald's and fall
               | through the floor into a basement and break your leg, no
               | number of "not responsible for accidents" signs on the
               | walls would keep them from being liable.
               | 
               | Even this is a bad example, since it is just gross
               | negligence and not intentional. A better analogy would be
                | if McDonald's shoots you.
        
               | kstrauser wrote:
                | I used to go to the In-N-Out in Oakland that just closed.
               | That was a possibility, believe me.
        
           | shp0ngle wrote:
            | The CIA hasn't put anyone new into Gitmo for years.
           | 
           | The 30 remaining gitmo prisoners are all W Bush holdovers
           | that all subsequent administrations forgot.
        
             | kstrauser wrote:
              | Conspiracy theorist: _That's what they want you to
              | believe._
             | 
             | And in fairness, the whole nature of their secrecy means
             | there's no way to know for sure. It might be just a
             | boogeyman kept around as a useful tool for scaring people
             | into not breaking national security-level laws. I mean,
             | it's not as though I want to go around hacking the planet,
             | but the idea of ending up at a CIA "black site", assuming
             | such things even exist, would be enough to keep me from
             | trying it.
        
             | aftbit wrote:
             | Sure but that's because the world knows about Gitmo now.
             | What about the other quieter black sites?
        
         | alfanick wrote:
          | Legality depends on jurisdiction, which may or may not
          | depend on:
          | 
          | * where the authors of the code were,
          | * where the code is stored,
          | * who is attacked,
          | * where their servers are,
          | * who is attacking,
          | * where they are based,
          | * where they attacked from,
          | * ...
          | 
          | IANAL, but it seems very complicated from a legal
          | perspective (we, humanity, don't have a global law).
         | 
         | Edit2: making a bullet point list is hard
        
         | jcranmer wrote:
        | There are things you can't contractually waive away, especially
         | in form contracts that the other side has no ability to
         | negotiate (which is what software licenses amount to).
         | 
         | One of those things is going to be fraud: if the maintainer is
         | intentionally installing backdoors into their software and not
         | telling the user, there's going to be some fraud-like statute
         | that they'll be liable for.
        
           | dannyw wrote:
           | That said, if you're doing this for your jurisdiction's
           | security agency, you'll certainly be protected.
        
         | greyface- wrote:
         | I brought this up in an earlier thread and got heavily
         | downvoted. https://news.ycombinator.com/item?id=39878227
        
         | pama wrote:
          | You are talking about the greatest exposed hack of the
          | computer supply chain so far, by a big margin. Laws can be
          | made retroactively for this type of thing. It has
          | implications beyond the legal system, as the threat level
          | is way beyond what typically passes the sniff test for
          | justifying military action. This was not an RCE based on
          | identifying negligent code; this was a carefully designed
          | trap that could reverse the power dynamics during a
          | military conflict.
        
           | greyface- wrote:
           | > Laws can be made retroactively
           | 
            | Not in the United States.
            | https://constitution.congress.gov/browse/article-1/section-9...
        
       | MuffinFlavored wrote:
       | Instead of needing the honeypot's openssh.patch at compile
       | time (https://github.com/amlweems/xzbot/blob/main/openssh.patch),
       | how did the exploit do this at runtime?
       | 
       | I know the chain was:
       | 
       | sshd -> libsystemd for notifications -> xz included as a
       | transitive dependency
       | 
       | How did liblzma.so.5.6.1 hook/patch all the way back to
       | openssh_RSA_verify when it was loaded into memory?
        
         | jeffrallen wrote:
         | ifunc
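          | 
          | GNU indirect functions: the resolver runs inside the
          | dynamic linker while relocations are being processed,
          | before main(), which is what gave the payload its early
          | foothold. A minimal sketch of the mechanism itself,
          | assuming glibc and gcc - not the backdoor's actual code:
          | 
          |     /* ifunc.c - build and run: gcc ifunc.c && ./a.out */
          |     #include <stdio.h>
          |     
          |     static int real_impl(void) { return 42; }
          |     
          |     /* ld.so calls the resolver at load time to pick an
          |        implementation; a malicious resolver can run
          |        arbitrary code at that point. */
          |     static int (*resolve_entry(void))(void)
          |     {
          |         return real_impl;
          |     }
          |     
          |     int entry(void) __attribute__((ifunc("resolve_entry")));
          |     
          |     int main(void)
          |     {
          |         printf("%d\n", entry());  /* prints 42 */
          |         return 0;
          |     }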
        
         | tadfisher wrote:
         | When loading liblzma, it patches the ELF GOT (global offset
         | table) with the address of the malicious code. In case it's
         | loaded before libcrypto, it registers a symbol audit handler (a
         | glibc-specific feature, IIUC) to get notified when libcrypto's
         | symbols are resolved so it can defer patching the GOT.
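          | 
          | The public cousin of that mechanism is the rtld-audit(7)
          | interface. A minimal sketch of a symbol-bind hook, assuming
          | glibc; note the backdoor wired its handler in through glibc
          | internals rather than the LD_AUDIT environment variable:
          | 
          |     /* audit.c
          |        build: gcc -shared -fPIC -o audit.so audit.c
          |        run:   LD_AUDIT=./audit.so some_program */
          |     #define _GNU_SOURCE
          |     #include <link.h>
          |     #include <stdint.h>
          |     #include <stdio.h>
          |     #include <string.h>
          |     
          |     unsigned int la_version(unsigned int version)
          |     {
          |         return LAV_CURRENT;  /* handshake with ld.so */
          |     }
          |     
          |     /* audit every object's bindings, in both directions */
          |     unsigned int la_objopen(struct link_map *map,
          |                             Lmid_t lmid, uintptr_t *cookie)
          |     {
          |         return LA_FLG_BINDTO | LA_FLG_BINDFROM;
          |     }
          |     
          |     /* called as each symbol is bound; returning a
          |        different address redirects every call, e.g. to
          |        RSA_public_decrypt, which the backdoor hooked */
          |     uintptr_t la_symbind64(Elf64_Sym *sym, unsigned int ndx,
          |                            uintptr_t *refcook,
          |                            uintptr_t *defcook,
          |                            unsigned int *flags,
          |                            const char *symname)
          |     {
          |         if (strcmp(symname, "RSA_public_decrypt") == 0)
          |             fprintf(stderr, "bound %s\n", symname);
          |         return sym->st_value;
          |     }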
        
           | MuffinFlavored wrote:
           | > When loading liblzma, it patches the ELF GOT (global offset
           | table) with the address of the malicious code.
           | 
           | How was this part obfuscated/undetected?
        
             | bewaretheirs wrote:
              | It was part of the binary malware payload, hidden in a
              | binary blob of "test data".
              | 
              | In a compression/decompression test suite, a subtly
              | broken, allegedly compressed binary blob is not out of
              | place.
              | 
              | This suggests we need to audit information flow during
              | builds - the shipping production binary package should
              | be reproducibly buildable without reading test data or
              | test code.
        
               | MuffinFlavored wrote:
               | How/why did the test data get bundled into the final
               | library output?
        
               | matthew-wegner wrote:
               | "xz/liblzma: Bash-stage Obfuscation Explained" covers it
               | well
               | 
               | https://gynvael.coldwind.pl/?lang=en&id=782
        
               | acdha wrote:
               | That's what the compromised build stage did. It's really
               | interesting to read if you want to see the details of how
               | a sophisticated attacker works:
               | 
                | https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78b...
               | 
               | https://gynvael.coldwind.pl/?lang=en&id=782
        
               | MuffinFlavored wrote:
               | > build-to-host.m4
               | 
                | I wasn't aware that the rogue maintainer was able to
                | commit rogue build steps himself without any PR
                | review (or sneak them through review), steps that
                | went unnoticed and let him bundle decompressed xz
                | streams from the test data that patched the output
                | .so files well enough to add hooking code to them.
                | 
                | How many of the "process failures" described there
                | exist in every OSS repo with volunteer, unknown,
                | untrusted maintainers?
        
               | belthesar wrote:
                | That's kind of the rub here.
                | 
                | > volunteer
                | 
                | That's the majority of OSS. Only a handful of the
                | projects we use today as a part of the core set of
                | systems in the OSS world actually have corporate
                | sponsorship, by virtue of maintainers/contributors on
                | the payroll.
                | 
                | > unknown
                | 
                | The actor built up a positive reputation by assisting
                | with maintaining the repo at a time when the lead dev
                | was unable to take an active role. In this sense,
                | although we did not have some kind of full chain of
                | authentication that "Jia Tan" was a real human that
                | existed, that's about as good as it gets, and there's
                | plenty of real-world examples of espionage in both
                | the open and closed source software worlds telling us
                | that identity verification may not have prevented
                | anything.
                | 
                | > untrusted
                | 
                | The actor gained trust. The barrier to gaining trust
                | may have been low due to the mental health of the
                | lead maintainer, but trust was earned and received.
                | The lead maintainer communicated to distros that they
                | should be added.
                | 
                | It's _really easy_ to say this is a process problem.
                | It's not. This was a social engineering attack first
                | and foremost, before anything else. It unlocked the
                | way forward for the threat actor to take many actions
                | unilaterally.
        
               | bawolff wrote:
               | > How many "process failures" are described in that
               | process that exist in every OSS repo with volunteer
               | unknown untrusted maintainers?
               | 
               | What process failures actually happened here? What
               | changes in process do you think would have stopped this?
        
               | acdha wrote:
                | This guy was pretty trusted after a couple of years
                | of working on the project, so I think it's a category
                | error to say process improvements could have fixed
                | it. The use of autoconf detritus was a canny move,
                | since I'd bet long odds that even if your process
                | said three other people had to review every commit,
                | they would have skimmed right over that to the
                | "important" changes.
        
       | declan_roberts wrote:
       | One thing I notice about state-level espionage and backdoors:
       | the USA seems to have an affinity for hardware interdiction,
       | as opposed to software backdoors. Hardware interdiction makes
       | sense, since much of the world's hardware passes through the
       | USA.
       | 
       | Other countries, such as Israel, are playing the long con
       | with very well engineered, multi-year software backdoors - a
       | much harder game to play.
        
         | wolverine876 wrote:
         | > The USA seems to have an affinity for hardware interdiction
         | as opposed to software backdoors.
         | 
         | What are some examples?
        
           | nick238 wrote:
           | The Clipper chip? Not sure if that's what they had in mind.
            | Nowadays it's maybe just how the NSA has rooms attached
            | to backbone providers, like
            | https://theintercept.com/2018/06/25/att-internet-nsa-spy-hub...
        
           | AlexCoventry wrote:
           | https://en.wikipedia.org/wiki/Tailored_Access_Operations
        
             | acid__ wrote:
             | Be sure to check out the mentioned catalog. [1]
             | 
              | The NSA's capabilities back in 2008 were pretty
              | astonishing. "RAGEMASTER": a $30 device that taps a VGA
              | cable and transmits the contents of your screen to the
              | NSA van sitting outside! Crazy stuff. Makes you wonder
              | what they've built in the last 15 years.
             | 
             | [1] https://en.wikipedia.org/wiki/ANT_catalog
        
         | gowld wrote:
          | This goes back to WWII. The USA solves problems with
          | manufacturing and money. Europeans relatively lack both,
          | so they solve problems with their brains.
        
           | hybridtupel wrote:
           | Israel is not in Europe
        
             | Alezz wrote:
              | The perfect definition of Europe is whether they take
              | part in the Eurovision Song Contest.
        
               | nmat wrote:
               | Australia participates in the Eurovision Song Contest.
        
         | bawolff wrote:
         | > Other countries such as Israel are playing the long-con with
         | very well engineered, multi-year software backdoors
         | 
         | What is this in reference to?
        
           | markus92 wrote:
           | Stuxnet?
        
           | ginko wrote:
           | Probably Stuxnet.
        
             | bawolff wrote:
             | Stuxnet was not a backdoor.
        
           | hammock wrote:
            | Robert Maxwell (Ghislaine's father) sold backdoored
            | software to corporations and governments all around the
            | world, including US targets, on behalf of Israel's
            | Mossad.
           | 
           | "The Maxwell-Mossad team steals spy software PROMIS from the
           | United States, Mossad puts an undetectable trap door in it so
           | Mossad can track the activities of anyone using it, then
           | Maxwell sells it around the world (including back to the U.S.
           | -- with the trap door)."
           | 
           | https://kclibrary.bibliocommons.com/v2/record/S120C257904
        
           | Voultapher wrote:
            | NSO Group.
            | 
            | They are an Israel-based company that sells zero-click
            | RCEs for phones and more. Such malicious software was
            | involved in the murder of the journalist Jamal Khashoggi.
           | 
           | Their exploits, developed in-house as well as presumably
           | partially bought on the black market, are some of the most
           | sophisticated exploits found in the wild, e.g.
            | https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
           | 
           | > JBIG2 doesn't have scripting capabilities, but when
           | combined with a vulnerability, it does have the ability to
           | emulate circuits of arbitrary logic gates operating on
           | arbitrary memory. So why not just use that to build your own
           | computer architecture and script that!? That's exactly what
           | this exploit does. Using over 70,000 segment commands
           | defining logical bit operations, they define a small computer
           | architecture with features such as registers and a full
           | 64-bit adder and comparator which they use to search memory
           | and perform arithmetic operations. It's not as fast as
           | Javascript, but it's fundamentally computationally
           | equivalent.
        
             | bawolff wrote:
             | > that sell zero-click RCEs
             | 
             | Exactly my point. They do not sell backdoors.
             | 
              | Don't get me wrong, still icky, but definitely not a
              | "very well engineered, multi-year software backdoor".
        
               | xvector wrote:
               | They implant backdoors and sell zero-click RCEs that
               | exploit those backdoors.
        
         | greggsy wrote:
         | I mean, they're just the high profile ones. China makes and
         | ships a lot of hardware, and the US makes and ships a lot of
         | software.
        
         | fpgaminer wrote:
          | My completely inexpert opinion, informed by listening to all
         | the episodes of Darknet Diaries, agrees with this. US
         | intelligence likes to just bully/bribe/blackmail the supply
         | chain. They've got crypto chops, but I don't recall any
         | terribly sophisticated implants like this one (except Stuxnet,
         | which was likely Israel's work funded/assisted by the US). NK
         | isn't terribly sophisticated, and their goal is money, so that
         | doesn't match either. Russia is all over the place in terms of
         | targets/sophistication/etc because of their laws (AFAIK it's
         | legal for any citizen to wage cyberwarfare on anything and
         | everything except domestically), but this feels a bit beyond
         | anything I recall them accomplishing at a state level. Israeli
         | organizations have a long history of highly sophisticated
         | cyberwarfare (Stuxnet, NSO group, etc), and they're good about
         | protecting their access to exploits. That seems to fit the
         | best. That said, saying "Israeli organization" casts a wide net
         | since it's such a boiling hotspot for cybersecurity
         | professionals. Could be the work of the government, could be
         | co-sponsored by the US, or could just be a group of smart
         | people building another NSO group.
        
           | formerly_proven wrote:
           | > Russia is all over the place in terms of
           | targets/sophistication/etc ... but this feels a bit beyond
           | anything I recall them accomplishing at a state level.
           | 
           | https://en.wikipedia.org/wiki/Triton_(malware)
            | (https://www.justice.gov/opa/pr/four-russian-government-emplo...)
        
       | nix0n wrote:
       | "Yo dawg, I heard you like exploits, so I exploited your exploit"
       | -Xzibit
        
       | wolverine876 wrote:
       | Have the heads of the targeted projects - including xz (Lasse
       | Collin?), OpenSSH (Theo?), and Linux (Linus) - commented on it?
       | 
       | I'm especially interested in how such exploits can be prevented
       | in the future.
        
         | strunz wrote:
         | https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78b...
         | :
         | 
         | >Lasse regularly has internet breaks and is on one at the
         | moment, started before this all kicked off. He has posted an
         | update at https://tukaani.org/xz-backdoor/ and is working with
         | the community.
        
         | gowld wrote:
         | OpenSSH and Linux were not targeted/affected.
         | 
          | xz and the Debian distribution of OpenSSH were targeted.
        
           | cpach wrote:
           | Fedora too.
        
           | pama wrote:
           | The core source of the vulnerability (symbol lookup order
           | allowing a dependency to preempt a function) might
           | theoretically be fixed at the Linux+OpenSSH level.
        
             | wolverine876 wrote:
             | It's in their ecosystem; they should be concerned about
             | other similar attacks and about addressing the fears of
             | many users, developers, etc.
        
             | formerly_proven wrote:
             | Damien Miller (OpenSSH maintainer, OpenBSD committer) has
             | written a patch that implements the relevant libsystemd
             | functionality without libsystemd:
             | https://bugzilla.mindrot.org/show_bug.cgi?id=2641#c13
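              | 
              | The protocol itself is tiny: one datagram to the socket
              | named in $NOTIFY_SOCKET. A rough sketch of the idea
              | (not Miller's actual patch), ignoring abstract-
              | namespace sockets:
              | 
              |     /* notify.c - minimal sd_notify(3)-style readiness
              |        message without linking libsystemd */
              |     #include <stdlib.h>
              |     #include <string.h>
              |     #include <sys/socket.h>
              |     #include <sys/types.h>
              |     #include <sys/un.h>
              |     #include <unistd.h>
              |     
              |     int notify_ready(void)
              |     {
              |         const char *path = getenv("NOTIFY_SOCKET");
              |         if (path == NULL || path[0] != '/')
              |             return 0;  /* unset or abstract: skip */
              |     
              |         int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
              |         if (fd < 0)
              |             return -1;
              |     
              |         struct sockaddr_un sa = { .sun_family = AF_UNIX };
              |         strncpy(sa.sun_path, path,
              |                 sizeof(sa.sun_path) - 1);
              |     
              |         ssize_t r = sendto(fd, "READY=1", 7, 0,
              |                            (struct sockaddr *)&sa,
              |                            sizeof(sa));
              |         close(fd);
              |         return r < 0 ? -1 : 0;
              |     }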
        
       | loeg wrote:
       | > The ciphertext is encrypted with chacha20 using the first 32
       | bytes of the ED448 public key as a symmetric key. As a result, we
       | can decrypt any exploit attempt using the following key:
       | 
       | Isn't this wild? Shouldn't the ciphertext be encrypted with an
       | ephemeral symmetric key signed by the privkey? I guess anyone
       | with the public key can still read any payload, so what's the
       | point?
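        | 
        | Concretely, anyone who captures a payload can decrypt it
        | offline. A sketch with libsodium, assuming the IETF chacha20
        | variant and an all-zero nonce (both assumptions on my part),
        | with KEY standing in for the first 32 bytes of the embedded
        | ED448 public key:
        | 
        |     /* decrypt.c - build: cc decrypt.c -lsodium */
        |     #include <sodium.h>
        |     #include <stdio.h>
        |     
        |     /* placeholder: fill in the 32 public-key bytes */
        |     static const unsigned char KEY[32] = { 0 };
        |     
        |     int main(void)
        |     {
        |         if (sodium_init() < 0)
        |             return 1;
        |     
        |         unsigned char buf[4096];
        |         size_t n = fread(buf, 1, sizeof buf, stdin);
        |     
        |         unsigned char nonce[12] = { 0 };  /* assumed */
        |         /* XOR stream cipher: applying it again decrypts */
        |         crypto_stream_chacha20_ietf_xor(buf, buf, n,
        |                                         nonce, KEY);
        |         fwrite(buf, 1, n, stdout);
        |         return 0;
        |     }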
        
         | rcaught wrote:
         | That action would create extra noise.
        
         | thenewwazoo wrote:
         | This is a NOBUS attack - Nobody But Us.
         | 
         | By tying it to a particular key owned by the attacker, no other
         | party can trigger the exploit.
        
           | loeg wrote:
           | I don't think this is responsive to my comment.
        
             | jhugo wrote:
             | I think it is? They were not trying to hide the content,
             | but rather to ensure that nobody else could encrypt valid
             | payloads.
        
         | kevincox wrote:
          | Encrypting the payload lets you get past more scanners and
          | in general makes the traffic harder to notice. Since the
          | publicly available server code needs to be able to decrypt
          | the payload, there is no way to make it completely secure,
          | so this seems like a good tradeoff: it prevents passive,
          | naive monitoring from triggering while not being more
          | complicated than necessary.
          | 
          | The only real improvement I can see would be adding
          | perfect forward secrecy, so that a logged session couldn't
          | be decrypted after the fact. But that would likely add a
          | lot of complexity (I think you need bidirectional
          | communication?).
        
       | aborsy wrote:
       | Why ED448?
       | 
        | It's almost never recommended, in favor of Curve25519.
        
         | stock_toaster wrote:
          | I believe ed25519 offers 128 bits of security, while ed448
          | offers 224 bits. ed448 has larger keys too, which does
          | seem like an odd choice in this case. Maybe it was chosen
          | for obscurity's sake (being less commonly used)?
        
         | throitallaway wrote:
          | As I understand it, Ed448 was only recently added to
          | OpenSSH, so maybe it was chosen in order to evade
          | detection by analysis tools that scan for keys (if such a
          | thing is possible).
        
           | juitpykyk wrote:
            | Some tools scan for crypto algorithms, typically by
            | searching for their magic numbers; Ed448 is so new that
            | many tools probably don't recognize it.
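            | 
            | For instance, two easy magics to grep for are the
            | chacha20 constant "expand 32-byte k" and the DER-encoded
            | Ed448 OID 1.3.101.113 (hex 06 03 2b 65 71). A naive
            | scanner sketch:
            | 
            |     /* scan.c - naive magic-number scan over stdin */
            |     #include <stdio.h>
            |     #include <string.h>
            |     
            |     static void find(const unsigned char *buf, size_t n,
            |                      const unsigned char *pat, size_t m,
            |                      const char *name)
            |     {
            |         for (size_t i = 0; i + m <= n; i++)
            |             if (memcmp(buf + i, pat, m) == 0)
            |                 printf("%s at offset %zu\n", name, i);
            |     }
            |     
            |     int main(void)
            |     {
            |         static const unsigned char oid[] =
            |             { 0x06, 0x03, 0x2b, 0x65, 0x71 };
            |         static unsigned char buf[1 << 20];
            |     
            |         size_t n = fread(buf, 1, sizeof buf, stdin);
            |         find(buf, n, oid, sizeof oid, "Ed448 OID");
            |         find(buf, n, (const unsigned char *)
            |                      "expand 32-byte k", 16, "chacha20");
            |         return 0;
            |     }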
        
       ___________________________________________________________________
       (page generated 2024-04-01 23:00 UTC)