[HN Gopher] OpenSSH Backdoors
       ___________________________________________________________________
        
       OpenSSH Backdoors
        
       Author : benhawkes
       Score  : 121 points
       Date   : 2024-08-23 16:14 UTC (6 hours ago)
        
 (HTM) web link (blog.isosceles.com)
 (TXT) w3m dump (blog.isosceles.com)
        
       | DonHopkins wrote:
       | > The "many eyes" theory of open source security isn't popular
       | right now, but it certainly seems like bigger targets have
       | smaller margins for error.
       | 
       | https://news.ycombinator.com/item?id=20383529
       | 
       | DonHopkins on July 8, 2019 | parent | context | favorite | on:
       | Contributor Agreements Considered Harmful
       | 
       | And then there's Linus's Law, which he made up, then tried to
       | blame on Linus.
       | 
       | "Given enough eyeballs, all bugs are shallow." -Eric S Raymond
       | 
       | "My favorite part of the "many eyes" argument is how few bugs
       | were found by the two eyes of Eric (the originator of the
       | statement). All the many eyes are apparently attached to a lot of
       | hands that type lots of words about many eyes, and never actually
       | audit code." -Theo De Raadt
       | 
       | https://en.wikipedia.org/wiki/Linus%27s_Law
       | 
       | >In Facts and Fallacies about Software Engineering, Robert Glass
       | refers to the law as a "mantra" of the open source movement, but
       | calls it a fallacy due to the lack of supporting evidence and
       | because research has indicated that the rate at which additional
       | bugs are uncovered does not scale linearly with the number of
       | reviewers; rather, there is a small maximum number of useful
       | reviewers, between two and four, and additional reviewers above
       | this number uncover bugs at a much lower rate.[4] While closed-
       | source practitioners also promote stringent, independent code
       | analysis during a software project's development, they focus on
       | in-depth review by a few and not primarily the number of
       | "eyeballs".[5][6]
       | 
       | >Although detection of even deliberately inserted flaws[7][8] can
       | be attributed to Raymond's claim, the persistence of the
       | Heartbleed security bug in a critical piece of code for two years
       | has been considered as a refutation of Raymond's
       | dictum.[9][10][11][12] Larry Seltzer suspects that the
       | availability of source code may cause some developers and
       | researchers to perform less extensive tests than they would with
       | closed source software, making it easier for bugs to remain.[12]
       | In 2015, the Linux Foundation's executive director Jim Zemlin
       | argued that the complexity of modern software has increased to
       | such levels that specific resource allocation is desirable to
       | improve its security. Regarding some of 2014's largest global
       | open source software vulnerabilities, he says, "In these cases,
       | the eyeballs weren't really looking".[11] Large scale experiments
       | or peer-reviewed surveys to test how well the mantra holds in
       | practice have not been performed.
       | 
       | The little experience Raymond DOES have auditing code has been a
       | total fiasco and embarrassing failure, since his understanding of
       | the code was incompetent and deeply tainted by his preconceived
       | political ideology and conspiracy theories about global warming,
       | which was his only motivation for auditing the code in the first
       | place. His sole quest was to discredit the scientists who warned
       | about global warming. The code he found and highlighted was
       | actually COMMENTED OUT, and he never addressed the fact that the
       | scientists were vindicated.
       | 
       | http://rationalwiki.org/wiki/Eric_S._Raymond
       | 
       | >During the Climategate fiasco, Raymond's ability to read other
       | peoples' source code (or at least his honesty about it) was
       | called into question when he was caught quote-mining analysis
       | software written by the CRU researchers, presenting a commented-
       | out section of source code used for analyzing counterfactuals as
       | evidence of deliberate data manipulation. When confronted with
       | the fact that scientists as a general rule are scrupulously
       | honest, Raymond claimed it was a case of an "error cascade," a
       | concept that makes sense in computer science and other places
       | where all data goes through a single potential failure point, but
       | in areas where outside data and multiple lines of evidence are
       | used for verification, doesn't entirely make sense. (He was
       | curiously silent when all the researchers involved were
       | exonerated of scientific misconduct.)
       | 
       | More context:
       | 
       | https://news.ycombinator.com/item?id=20382640
        
         | toast0 wrote:
         | > Regarding some of 2014's largest global open source software
         | vulnerabilities, he says, "In these cases, the eyeballs weren't
         | really looking".
         | 
         | This makes a lot of sense, because for the most part, you only
         | go looking for bugs when you've run into a problem.
         | 
          | Looking for bugs you haven't run into is a lot harder
          | (especially in complex software like OpenSSL); you might get
          | lucky and someone sees a bug while looking for something else,
          | but mostly things go unlooked at until they cause a problem
          | that attracts attention.
         | 
         | Even when you pay for a professional audit, things can be
         | missed; but you'll likely get better results for security with
         | organized and focused reviews than by hoping your user base
         | finds everything.
        
           | cbsmith wrote:
           | Large open source projects are regularly subjected to
           | security audits.
           | 
           | I think the reality is that closed source software is
           | vulnerable to the same attack, the only difference is fewer
           | eyes to see it and more likely a profit motive will keep
           | those eyes directed in other ways.
        
         | poikroequ wrote:
          | It's not a complete fallacy. In events like this, after the
          | news hits, there is a flurry of eyeballs looking, at least for
          | a little while. The Heartbleed bug got people to look at the
          | OpenSSL code and realize what a mess that code is. Spectre and
          | Meltdown have led to the discovery of many more CPU
          | vulnerabilities. After ChatGPT hit the market, there has been
          | lots of new research on AI security, such as into prompt
          | injection attacks.
        
       | davidfiala wrote:
       | Even with the best intentions, can a volunteer-driven project
       | like OpenSSH truly guarantee the same level of security as a
       | commercial solution with dedicated resources and a financial
       | stake in preventing backdoors?
        
         | chgs wrote:
         | Better.
         | 
          | Imagine a closed-source company with cost pressures employing a
          | random developer who can commit code, perhaps without any peer
          | review, or at best with limited peer review from harried
          | employees.
         | 
         | Now imagine why a nation state would want to get staff working
         | in such a company.
         | 
         | Now if companies like Microsoft or Amazon or Google want to pay
         | people to work on these open source projects that's a different
         | thing, and a great thing for them to do given how much they
         | rely on the code.
        
           | wannacboatmovie wrote:
            | Your argument is that a model which does no vetting of
            | contributors whatsoever (the very model that produced the
            | catastrophe under discussion) is better than a hypothetical
            | company full of compromised developers with free rein to
            | commit to the source tree with no oversight? That sounds
            | extremely contrived.
        
             | adolph wrote:
             | If you are positing that government infiltration of
             | companies is hypothetical and not a real threat, here is an
             | example of compromised corporate staff:
             | 
             | https://en.wikipedia.org/wiki/Saudi_infiltration_of_Twitter
        
             | chgs wrote:
              | This wasn't a contributor to OpenSSH, it was a deep
              | supply-chain attack - something that closed-source
              | commercial companies are not immune to.
              | 
              | Given how much closed-source companies love BSD/Apache/etc.
              | licenses, where they can simply use these low-level
              | libraries and charge for stuff on top, I'm not sure how
              | they would be immune from such an attack.
             | 
             | The risk from this was highlighted in xkcd back in 2020
             | 
             | https://xkcd.com/2347/
        
           | davidfiala wrote:
           | There's a ton of great truth here. It's hard to bite the
           | bullet and believe that insiders already exist (everywhere),
           | but I can share that from my experience working in big tech:
           | 
            | - There 100% will be bad actors. Many of them.
            | 
            | - But not always nation-state. Instead, they do it for (dumb)
            | personal reasons, too. Also, don't forget LulzSec as a great
            | example of just doing it for fun. So we cannot presume to
            | know anything about the 'why'. The bad guys I caught did it
            | for the most asinine reasons...
           | 
           | But the good news is that we have options:
           | 
           | - Strategic: Develop processes and systems that account for
           | the perpetual existence of unknown bad actors and allow for
           | successful business operation even when humans are
           | compromised.
           | 
            | - Reactive: Structured logging that makes sense in the
            | context of the action, plus alerting and detection systems.
           | 
           | - Reduction: Reduce access to only what is needed, when it is
           | needed.
           | 
            | - Proactive (not always necessary): Multi-party approvals (a
            | la code review, production flag changes, or ACL changes)
           | 
            | - Social: Build a culture of security practices and awareness
            | of bad actors. Don't make people feel guilty or accused;
            | just empower them to make good design and process decisions.
            | It's a team sport.
           | 
           | Bonus: By guarding against evil actors, you've also got some
           | freebie good coverage for when an innocent employee gets
           | compromised too!
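            | 
            | The "Reduction" and "Proactive" points above can be sketched
            | as a toy access check. This is only an illustration; all
            | names and structure here are hypothetical, not any real
            | system's API:

```python
from datetime import datetime, timedelta

# Toy model of time-bounded, multi-party-approved access.
# Illustrative only: class and field names are hypothetical.
class AccessGrant:
    def __init__(self, user, resource, expires_at, approvers):
        self.user = user
        self.resource = resource
        self.expires_at = expires_at  # time-based: access expires
        self.approvers = approvers    # multi-party: who signed off

def is_allowed(grant, user, resource, now, min_approvals=2):
    """Allow only a matching, unexpired grant that was approved by at
    least min_approvals distinct people."""
    return (grant.user == user
            and grant.resource == resource
            and now < grant.expires_at
            and len(set(grant.approvers)) >= min_approvals)

now = datetime(2024, 8, 23, 12, 0)
grant = AccessGrant("alice", "prod-db", now + timedelta(hours=4),
                    approvers=["bob", "carol"])
print(is_allowed(grant, "alice", "prod-db", now))   # True
late = now + timedelta(hours=5)
print(is_allowed(grant, "alice", "prod-db", late))  # False: grant expired
```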
           | 
           | ---
           | 
            | Companies like Google and Amazon use the techniques above.
            | And they don't generally rely on antiquated technology that
            | cannot and will not change to meet modern standards.
           | 
            | I know because I built Google's first time-based and
            | rationale-based access systems, and multi-party approval
            | systems for access. (Fun fact: the organizational challenge
            | is harder than the technical one.)
           | 
           | And, those strategies work. And they increase SRE resilience
           | too!
           | 
           | ---
           | 
            | But even with the best UX, the best security tooling, the
            | best everything, etc., there's no guarantee that any of it
            | matters if we just reject anything except the old system
            | we're used to.
           | 
           | It's like a motorcycle helmet: Only works if you use it.
        
         | jasonjayr wrote:
         | How can a commercial solution prevent backdoors?
         | 
          | A sensitive product like this would have to defend against
          | well-funded, patient, well-resourced threats, including but not
          | limited to infiltrating an organization in order to plant code
          | that only a few people may even be able to notice.
        
           | 2OEH8eoCRo0 wrote:
           | Well for one they need to show up in person. They can't be
           | some anon anime character who hides their identity for
           | totally legitimate reasons.
        
             | dijit wrote:
             | It's extremely easy for a three-letter agency or similar to
             | plant a new employee.
             | 
             | Corporate espionage may not be talked about very much, but
             | it is still very fashionable. Even without state sponsored
             | attackers.
        
             | toast0 wrote:
             | As an employee, I've typically needed to show up in person,
             | but I've worked with contractors who never showed up in
             | person. I've even been such a contractor at times.
             | 
             | Lots of commercial products use contractors and licensed
             | code in the final product.
             | 
              | At least with most open source projects, a lot of the
              | contribution process is in the open, so you could watch it
              | if you wanted to. As DonHopkins writes elsewhere, few
              | people do, but it's possible. Not a lot of commercial
              | projects offer that level of transparency into changes.
        
             | kstrauser wrote:
             | I worked at my current job for 3 months before I met a
             | coworker in person. That might slightly help at a legacy
             | butts-in-seats factory, but doesn't do a lot for remote
             | jobs. I could be proxying in from Romania for all they'd
             | know.
        
         | yjftsjthsd-h wrote:
         | Thankfully, we aren't limited to asking leading questions and
         | then hand waving at it; we have a rather lot of empirical
         | evidence. OpenSSH is 24 years old; has it _ever_ been
         | successfully backdoored?
        
           | davidfiala wrote:
           | We don't know. We won't know the negative case, but we may
           | someday in some circumstance find out the positive (bugged)
           | case.
           | 
           | But we do know some sane things:
           | 
           | - The stakes couldn't be higher.
           | 
           | - Good: Don't allow inbound SSH connections, even through a
           | fancy $100k firewall.
           | 
           | - Best: Don't let people login with SSH (treat SSH like we
           | treat the serial port: a debugging option of last resort)
        
             | tptacek wrote:
             | Who's running OpenSSH through "fancy $100k firewalls"?
        
               | davidfiala wrote:
               | It's off topic, but in my consulting and networking,
               | security/firewall appliances are an easy first line
               | approach I see companies buy in to. The security sales
               | pitch sounds good and makes you feel good. Cannot name
               | names.
        
             | aflukasz wrote:
             | Re "good"/"best": you are thinking about air gapping, then?
             | Pull based systems are susceptible as any other software.
        
               | davidfiala wrote:
               | tldr; purpose built tools
               | 
                | SSH is kind of a Swiss Army knife. But 1000x sharper ;)
               | The delta I'm speaking of would be to have bespoke
               | tooling for different needs. And the tooling for each
               | purpose could have appropriate, structured logging and
               | access controls.
               | 
               | With SSH you can do almost anything. But you can imagine
               | a better tool might exist for specific high-value
               | activities.
               | 
               | Case study:
               | 
               | Today: engineering team monitors production errors that
               | might contain sensitive user data with SSH access and
               | running `tail -f /var/log/apache...`.
               | 
               | Better: Think of how your favorite cloud provider has a
               | dedicated log viewing tool with filtering, paging, access
               | control, and even logs of what you looked at all built
               | in. No SSH. Better UX. And even better security, since
               | you know who looked at what.
               | 
               | ---
               | 
                | There are times when terminal access is needed, though.
                | SSH kinda fits that use case, but lacks a lot, including
                | true audit logging, permissioned access to servers, and
                | maybe even restricting some users to a rigid set of pre-
                | ordained commands they are allowed to run. In those
                | cases, a purpose-built tool can still let you run
                | commands, but with a great audit log, mediated rather
                | than direct network access to servers or internal
                | networks, flexible ACLs, and so on.
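                | 
                | For the narrow "rigid set of pre-ordained commands"
                | case, stock OpenSSH can already get partway there: a key
                | in authorized_keys can be pinned to a single forced
                | command. (The key material and paths below are
                | placeholders, for illustration only.)

```
# ~/.ssh/authorized_keys -- pin this key to one pre-ordained command.
# "restrict" disables forwarding and PTY allocation; "command=" forces
# the given command regardless of what the client requests.
restrict,command="/usr/bin/tail -n 100 /var/log/apache2/error.log" ssh-ed25519 AAAA... log-viewer-key
```

This covers command restriction, but not the structured audit logging or mediated network access described above.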
        
             | yjftsjthsd-h wrote:
             | > We don't know. We won't know the negative case, but we
             | may someday in some circumstance find out the positive
             | (bugged) case.
             | 
              | But that's the same with any tool, whether it's
              | commercially supported, FOSS, or made by anonymous devs.
              | If anything, FOSS is easier to audit.
        
             | HeatrayEnjoyer wrote:
             | How would you login without ssh?
        
         | dijit wrote:
         | Reality has shown that the least secure systems tend to be:
         | 
         | A) The ones with financial stakes in the game.
         | 
         | combined with:
         | 
         | B) Completely closed systems.
         | 
         | Contrarily, the most secure seem to be the ones volunteer led,
         | with no financial stakes.
         | 
         | It doesn't matter what you think is true, this is clearly what
         | is consistently happening.
        
           | davidfiala wrote:
           | > tldr; your statement overlooks the reality of businesses
           | with high ethical and financial obligations, like Google,
           | Amazon, and Azure.
           | 
           | - These companies underpin much of the internet's
           | infrastructure.
           | 
            | - Their security practices are far more advanced than typical
            | businesses', with SSH being a heavily restricted last resort.
            | That's not to imply that everyone else shouldn't strive to
            | meet that (modern) bar too.
           | 
           | - Dedicated teams focus on minimizing access through time-
           | based, role-based, and purpose-based controls.
           | 
           | - They actively develop new security methodologies, often
           | closed-source, but with public evidence of their impact
           | (e.g., https://cloud.google.com/docs/security/production-
           | services-p... ).
           | 
           | - They rarely experience conventional hacks due to reduced
           | blast radius from attacks and insider threats.
           | 
           | - Leading security experts in both major tech companies and
           | niche organizations are driving new strategies and ways to
           | think about security... their focus includes access
           | reduction, resilience, and reliability, regardless of whether
           | the solutions are closed or commercial for them. The ideas
           | spread. (looking at you, Snapchat, for some odd reason)
           | 
           | - This is key: This evolution may not be obvious unless you
           | actively engage with those at the forefront. I think it's
           | what makes people think like the comment above. We cannot see
           | everything.
           | 
           | - It's crucial to recognize that security is a dynamic
           | field... with both open-source and closed-source solutions
           | contributing.
           | 
            | So, the notion that volunteer-led projects are inherently
            | more secure overlooks the significant investments in security
            | made by major corporations that host the internet, and their
            | relative success in doing so. Their advancements are coming
            | to the rest of the world (eventually).
        
             | dijit wrote:
              | Volunteer-led still seems to be more secure, even with a
              | lot of corporate investment.
              | 
              | Corporate-led endeavors, by contrast, are very hit and
              | miss (mostly miss), especially when the product itself
              | claims security as a core principle.
             | 
             | It might not make sense to you, but the evidence points to
             | this.
        
               | davidfiala wrote:
               | Couldn't agree more on this one:
               | 
               | > especially when the product itself claims security as a
               | core principle
               | 
               | My thought is that _both_ volunteers and corporations
               | contribute. In different ways, too.
               | 
               | One example is how a YC company made an open source
               | version of Zanzibar. Zanzibar was an open paper to the
               | world from Google that describes a flexible, resilient,
               | fast access control system. It powers 99% of ACLs at
               | Google. It's /damn/ good for the whole world and /damn
               | good/ for developers' sanity and company security.
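                | 
                | For readers unfamiliar with the model: Zanzibar reduces
                | authorization to relation tuples of the form
                | object#relation@user. A minimal sketch of the core idea
                | (real systems add userset rewrite rules, indirection,
                | and consistency tokens; the names here are illustrative):

```python
# Zanzibar-style relation tuples: (object, relation, user),
# conventionally written "object#relation@user".
tuples = {
    ("doc:readme", "owner", "alice"),
    ("doc:readme", "viewer", "bob"),
}

def check(obj, relation, user):
    """Is the user related to the object? Direct membership, plus a
    tiny stand-in for rewrite rules: owners also count as viewers."""
    if (obj, relation, user) in tuples:
        return True
    if relation == "viewer" and (obj, "owner", user) in tuples:
        return True
    return False

print(check("doc:readme", "viewer", "alice"))  # True: owner implies viewer
print(check("doc:readme", "viewer", "eve"))    # False: no tuple grants it
```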
               | 
               | Corporate endeavors may fail, but they are often intense
               | in direction and can raise the bar in terms of UX and
               | security. Even if it's just a whitepaper, it still cannot
               | be discounted. Besides, the larger places focusing on
               | security aren't getting a big blast radius hack all that
               | often, yeah?
               | 
               | I'm curious though, you've intrigued me. What kind of
               | evidence or just lightweight ideas are you thinking of
               | wrt volunteer led being more secure? No need to dig up
               | sources if it's hard to find, but the general direction
               | of what makes you feel that would be useful.
        
         | toast0 wrote:
         | Of course not. OpenSSH comes with no warranty, read the
         | license.
         | 
         | Historically, it's been pretty good though.
         | 
         | If you would consider a commercial alternative, consider how
         | much you would need to pay to get an actionable warranty, and
         | consider if you could find someone to do a warrantied audit of
         | OpenSSH (or especially of OpenSSH in your environment) for that
         | amount. It might be a better use of your money.
        
         | dns_snek wrote:
         | Did you forget to disclose that you're a founder of a YC-backed
         | commercial solution that wants to compete with SSH?
        
           | cirrus3 wrote:
           | David Fiala CEO & Founder at Teclada Inc | Former Google
           | Security Leader | Supercomputing PhD
           | 
           | https://www.linkedin.com/company/teclada?trk=public_profile_.
           | ..
        
             | davidfiala wrote:
             | @dns_snek, it's right there in my real name username,
             | comment history, and profile. :)
             | 
             | My entire youth and professional life I've seen nothing but
             | footguns with actual practical use of SSH. The HN community
             | loves to hate, but the reality is that almost no one uses
             | SSH safely. It's near impossible. Especially when it comes
             | to configuration, keys, passwords, and network factors.
             | 
             | I observed the common SSH failure patterns, and I made the
             | most obvious footguns less than lethal. Looking a step
             | further, I made remote terminal access a pleasure to use
             | safely even for absolute novices.
             | 
             | So to your point about being in YC: In doing so, I thought
             | it would be beneficial to join a community that supports
             | one another (YC) so that an option (Teclada) can scale to
             | make a real impact in the world WRT the warts and footguns
             | of SSH.
        
               | kstrauser wrote:
               | Asking genuinely, what common footguns do you see? What
               | are the usual failure patterns?
        
           | tptacek wrote:
            | _Please don't post insinuations about astroturfing,
            | shilling, brigading, foreign agents, and the like. It
            | degrades discussion and is usually mistaken. If you're
            | worried about abuse, email hn@ycombinator.com and we'll look
            | at the data._
            | 
            | https://news.ycombinator.com/newsguidelines.html
        
             | dns_snek wrote:
             | It's not an insinuation, or mistaken - it's stated on their
             | HN profile.
             | 
             | You should verify claims before mistakenly downvoting and
             | linking to site's guidelines.
        
               | tptacek wrote:
                | The point of the guideline is that accusing people of
                | commenting in bad faith --- _another guideline here_ ---
                | makes for bad, boring conversation. What's worse, the
                | comment you responded to made a bad argument that I think
                | is easy to knock down on its merits, and by writing like
                | this you weakened those rebuttals. Don't write like this
                | here.
        
               | dns_snek wrote:
               | I'm sorry you feel like I don't live up to your
               | standards, I believe that transparency about affiliations
               | and conflicts of interest is paramount to healthy and
               | productive discussion. Disclosing these things when
               | criticizing competitors is really basic etiquette.
               | 
               | And look, it was just a simple nudge for them to disclose
               | their affiliations more plainly in the future, while also
               | providing relevant (and apparently appreciated) context
               | to other readers. It was a footnote, not an argument.
        
               | DiggyJohnson wrote:
                | It's still an insinuation even if it's true. You should
                | consider rephrasing your comment and not starting it with
                | "did you forget to say...". Presumably they did not
                | forget to mention this.
        
               | HeatrayEnjoyer wrote:
               | If we can't point something out even if it's true then
               | the rules are bad and we have an obligation to disregard
               | them.
        
         | whydoyoucare wrote:
          | We must first precisely define the "level of security" that is
          | expected from OpenSSH and a commercial version. Only then would
          | the discussion about who can guarantee what make sense.
        
         | dessimus wrote:
          | Has any open source project taken down the majority of a
          | single OS's install base as quickly as CrowdStrike did? Seems
          | like they would have the "dedicated resources and a financial
          | stake" to prevent such a situation.
        
         | tptacek wrote:
         | OpenSSH, as load-bearing infrastructure for much of the
         | Internet, is heavily sponsored by tech companies. It
         | empirically has one of the best security records in all of
         | general-purpose off-the-shelf software. If for some benighted
         | reason I found myself competing directly with OpenSSH, the very
         | last thing I would pick as a USP would be security.
        
           | SoftTalker wrote:
           | USP is Unique Selling Proposition?
        
             | tptacek wrote:
             | Yep. I would not attempt to differentiate against OpenSSH
             | based on security track records. It's one of the most
             | trusted pieces of software in the industry.
        
         | GJim wrote:
         | Why the hell is a genuine question being downvoted?
         | 
         | Downvoters, what are you trying to achieve?
        
         | gmuslera wrote:
          | Sometimes commercial companies have "incentives" to put in
          | backdoors, e.g. secret orders from intelligence agencies. The
          | Snowden papers and all related information from that time set a
          | baseline on what you may consider safe.
        
         | mmsc wrote:
          | Juniper had a backdoor in their firmware in 2015 which gave SSH
          | access, inserted by hackers:
          | https://blog.cryptographyengineering.com/2015/12/22/on-junip...
          | 
          | There were some updates:
          | https://www.zdnet.com/article/congress-asks-juniper-for-the-...
          | and according to https://www.reuters.com/article/world/spy-
          | agency-ducks-quest... China was behind it (though without
          | public evidence).
        
       | Vecr wrote:
       | Was there ever a writeup of exactly how the XZ exploit worked? I
       | mean _exactly_ , I get the general overview and even quite a few
       | of the specifics, but last time I checked no one had credibly
       | figured out exactly how all the obfuscated components went
       | together.
        
         | 4llan wrote:
         | Gynvael Coldwind made a great analysis about it:
         | https://gynvael.coldwind.pl/?lang=en&id=782
         | 
         | https://news.ycombinator.com/item?id=39878681
         | 
         | xz/liblzma: Bash-stage Obfuscation Explained
        
           | mananaysiempre wrote:
           | That is, as it says in the title, about the Bash-stage
           | obfuscation. That's fun but it'd also be interesting to know
           | what capabilities the exploit payload actually provided to
           | the attacker. Last I looked into that a month or so ago there
           | were at least two separate endpoints already discovered, and
           | the investigation was still in progress.
        
         | kva-gad-fly wrote:
         | https://www.openwall.com/lists/oss-security/2024/03/29/4
         | 
         | ?
        
           | Vecr wrote:
           | Yeah, what's posted by you and other users so far is stuff I
           | know, build scripts, injection, obfuscation. I'm more looking
           | for a careful reverse engineering of the actual payload.
        
             | EvanAnderson wrote:
             | I haven't looked again in months, but I'd be interested in
             | the same thing you're looking for. I poked at the payload
             | with Ghidra for a little bit, realized it was miles above
             | my pay grade, and stepped away. Everybody was wowed by the
             | method of delivery but the payload itself seems to have
             | proved fairly inscrutable.
        
               | Vecr wrote:
               | I'd also like to see the timeline of XZ's landlock
               | implementation, I haven't seen that discussed much.
        
             | rwmj wrote:
             | https://www.youtube.com/watch?v=Q6ovtLdSbEA
             | 
             | This talk by Denzel Farmer at Columbia isn't a complete
             | disassembly of the payload but it's the best I've seen so
             | far.
             | 
             | Slides if you don't want to watch the video:
             | https://cs4157.github.io/www/2024-1/lect/21-xz-utils.pdf
        
               | EvanAnderson wrote:
               | Thanks for posting that. A quick perusal of those slides
               | looks good. I know what I'm going to be reading and
               | watching this evening!
        
             | deathanatos wrote:
             | https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78
             | b...
             | 
             | The link you want from that is this https://bsky.app/profil
             | e/filippo.abyssdomain.expert/post/3ko... ; that set of
             | tweets has the high level overview.
             | 
              | That in turn links to https://github.com/amlweems/xzbot,
              | which has more details.
             | 
              | The TL;DR is that it hooks the RSA code to look for an
              | RSA cert whose public key isn't really an RSA public
              | key; the key material contains a request from the
              | attacker, signed and encrypted with an Ed448 key. If the
              | signature checks out, system() is called, i.e., RCE-as-
              | a-service for the attacker.
        
         | toast0 wrote:
         | I think this is sufficiently detailed?
         | 
         | https://lwn.net/Articles/967192/
         | 
         | But if there's a part that's still unclear, maybe there's
         | another writeup somewhere that addresses it?
        
         | Lammy wrote:
         | The social-engineering aspect of pressuring the old maintainer
         | is way more interesting than the actual software IMHO
         | https://securelist.com/xz-backdoor-story-part-2-social-engin...
        
           | Vecr wrote:
           | I already got all that. Yes, I think it's interesting, but I
           | wanted to see a final (non-interim) analysis of the payload
           | going byte-by-byte.
        
           | pdonis wrote:
           | I agree 1000% with this. One thing I don't see addressed in
           | the article you reference, though, is whether any OpenSSH
           | maintainers spotted the addition of a co-maintainer to xz
           | utils and did any due diligence about it.
        
             | toast0 wrote:
             | Seems unlikely. xz is not a dependency of OpenSSH.
             | 
             | It's only a transitive dependency of sshd on Linux
             | distributions that patch OpenSSH to include libsystemd
             | which depends on xz.
             | 
              | It's wholly unreasonable to expect OpenSSH maintainers to
             | vet contributors of transitive dependencies added by
             | distribution patches that the OpenSSH maintainers clearly
             | don't support.
        
               | pdonis wrote:
               | _> It 's only a transitive dependency of sshd on Linux
               | distributions that patch OpenSSH to include libsystemd
               | which depends on xz._
               | 
               | Ah, ok. Then my question should really be about the
               | distros--did any of _them_ spot the co-maintainer being
               | added and do due diligence?
               | 
               | As for the "libsystemd" part, there's another reason for
               | me to migrate to non-systemd distros.
        
       | rwmj wrote:
       | > However, it's interesting to note that in both 2002 and 2024 we
       | got a backdoor rather than a bugdoor.
       | 
       |  _As far as we know_.
       | 
       | Related, there was a pretty interesting backdoor-by-bug attempt
       | on the Linux kernel (at least, _one that we know of_ ) back in
       | 2003: https://lwn.net/Articles/57135/
       | 
       | The Linux "bug" was unsophisticated by modern standards, but you
       | could imagine a modern equivalent that's harder to spot:
       | 
       | Make the "bug" happen across several lines of code, especially if
       | some of those lines are part of existing code (so don't appear in
       | the patch being reviewed). Ensure the compiler doesn't warn about
       | it. Make the set of triggering events very unlikely unless you
       | know the attack. It would be very surprising to me if three
       | letter agencies hadn't done or attempted this.
        
         | mmsc wrote:
          | And in 2010, a similar backdoor appeared in UnrealIRCd:
          | https://lwn.net/Articles/392201/. Also in proftpd:
          | https://www.aldeid.com/wiki/Exploits/proftpd-1.3.3c-backdoor.
          | Both were done by ac1db1tch3z, from whom the author of OP's
          | post, Ben Hawkes, got a shoutout for another local privilege
          | escalation vulnerability over a decade ago :-).
         | 
         | Anyways, in response to the backdoor in unrealircd, Core
         | Security came up with a "hiding backdoors in plain sight"
         | challenge: https://seclists.org/fulldisclosure/2010/Jul/66
         | 
         | "Bugdoors" are not new, and I'm sure some have been patched
         | without anybody realizing they were introduced maliciously.
        
           | baby wrote:
           | And there was the socat backdoor
        
         | fouronnes3 wrote:
         | To think that there's a safe somewhere in a TLA basement with a
         | "how to get root anywhere" tutorial.
        
         | ehhthing wrote:
         | The problem with these is that bugdoors require you to target
         | way more valuable stuff compared to backdoors. With a backdoor
         | you can target practically any library or binary that is being
         | run with root privileges, while with a bugdoor you can only
         | really target code that is directly interacting with a network
         | connection.
         | 
         | Direct network facing code is much more likely to have
         | stringent testing and code review for all changes, so as of now
         | it seems a bit easier to target codebases with very little
         | support and maintenance compared to an attack that would target
         | higher value code like OpenSSH or zlib.
        
       | egberts1 wrote:
        | You'd think that the tests subdirectory would contain
        | sufficient "integrity" test cases to ensure that it doesn't
        | fail, but alas ... nooooooo.
        
         | tedunangst wrote:
         | What does this mean?
        
       | jmakov wrote:
        | At this point, what makes us think all major contributors are
        | not on the payroll of one state agency or another? The attack
        | surface of the whole software supply chain is huge.
        
         | Sesse__ wrote:
         | All major, and not a single one of them has leaked it?
        
         | tedunangst wrote:
         | Sounds like a good social experiment. Work your way up to major
         | contributor for a project, see how long until you're approached
         | by the MIB.
        
       | mmh0000 wrote:
       | Whenever these stories come up, I like to remind people about the
       | Underhanded C Code Contest[1] and the IOCCC[2]
       | 
       | TL;DR: Clever C programmers can hide very evil shit in plain
       | sight.
       | 
       | [1] https://www.underhanded-c.org/ [2] https://www.ioccc.org/
        
         | dkga wrote:
         | Wow, I was definitely not aware of this.
        
       | daghamm wrote:
        | Even further back, someone claimed that a three letter agency
        | had paid some developers to introduce a backdoor into OpenBSD
        | (or possibly OpenSSH).
        | 
        | Theo did not believe this; he publicly disputed the claim and
        | even revealed the name of the whistle-blower. But I have
        | always felt the story rang true, and that Theo should not have
        | been so dismissive.
        | 
        | Can't find the story, but it should be on the mailing lists
        | somewhere.
        
         | jmclnx wrote:
          | I remember this; it was the FBI. The OpenBSD people did a
          | huge audit and nothing was found. That was also about 20
          | years ago.
          | 
          | Also, other articles stated it never happened.
          | 
          | Plus, the xz "backdoor" in OpenSSH was a Linux-only thing,
          | reached via libsystemd. It never affected OpenBSD, because
          | it came from Linux distributions patching OpenSSH into
          | "dependency hell". I believe the systemd people are doing
          | something about these dependency chains.
        
           | Arch-TK wrote:
            | The "thing to do" about the dependencies is not to have
            | them in the first place. Distributions were patching
            | OpenSSH to add a libsystemd dependency instead of adding
            | 15 lines of code.
        
       ___________________________________________________________________
       (page generated 2024-08-23 23:00 UTC)