[HN Gopher] RegreSSHion: RCE in OpenSSH's server, on glibc-based...
       ___________________________________________________________________
        
       RegreSSHion: RCE in OpenSSH's server, on glibc-based Linux systems
        
       Author : robinhoodexe
       Score  : 632 points
       Date   : 2024-07-01 08:40 UTC (14 hours ago)
        
 (HTM) web link (www.qualys.com)
 (TXT) w3m dump (www.qualys.com)
        
       | megous wrote:
        | In our experiments, it takes ~10,000 tries on average to win this
        | race condition, so ~3-4 hours with 100 connections (MaxStartups)
        | accepted per 120 seconds (LoginGraceTime). Ultimately, it takes
        | ~6-8 hours on average to obtain a remote root shell, because we
        | can only guess the glibc's address correctly half of the time
        | (because of ASLR).
       | 
       | MaxStartups default is 10
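The quoted figures can be sketched as a quick back-of-envelope model (my arithmetic from the numbers above, not code from the advisory; the variable names are made up):

```shell
# ~10,000 race attempts, 100 connections (MaxStartups) accepted per
# 120-second LoginGraceTime window, then the total roughly doubles
# because the glibc ASLR guess lands only ~half the time.
tries=10000
conns_per_window=100
window_secs=120
race_secs=$(( tries * window_secs / conns_per_window ))  # 12000 s
total_secs=$(( race_secs * 2 ))                          # ASLR guess ~50%
echo "race: ~$(( race_secs / 3600 ))h, root shell: ~$(( total_secs / 3600 ))h"
```

With these inputs it comes out at the low end (integer hours) of the advisory's ~3-4 h and ~6-8 h estimates.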
        
         | Haemm0r wrote:
          | Questions regarding this from a non-guru:
          | 
          | - Is it correct that this only works for user root if login
          | with password/key for root is allowed?
          | 
          | - Is it correct that this only works if the attacker knows a
          | valid ssh login name?
        
           | aflukasz wrote:
            | I believe it does not matter whether the attacker knows an
            | existing user name or uses a host-dependent value.
            | 
            | The exploit races the signal handler that runs when the
            | login grace period times out - so we are already at a point
            | where the authentication workflow has ended without all the
            | credentials being supplied.
            | 
            | Plus, in the "Practice" section, they discuss using the user
            | name value as a way to manipulate memory at a certain
            | address, so they want/need to control this value.
        
         | vesinisa wrote:
          | Even if it means the attack takes 10x as long, it doesn't
          | seem to be limited by bandwidth, only time. It might not take
          | long before bots appear that try to automatically exploit
          | this at scale.
        
         | ale42 wrote:
         | Such an amount of connections should anyway trigger all
         | possible logging & IDS systems, right?
        
           | megous wrote:
           | I doubt most servers use any such thing.
        
           | stefan_ wrote:
           | If you don't value your time, sure? There's thousands of
           | systems trying to log into publicly accessible SSH servers
           | all the time.
        
             | XorNot wrote:
             | Yeah slow bruteforces are running all over the net all the
             | time. This means there's no reason not to throw this attack
             | into the mix.
        
           | Piskvorrr wrote:
           | It should trigger fail2ban, that's for sure.
           | 
           | Alerting is useless, with the volume of automated exploits
           | attempted.
        
             | TacticalCoder wrote:
             | > It should trigger fail2ban, that's for sure.
             | 
             | But people here are going to explain that fail2ban is
             | security theater...
        
               | Piskvorrr wrote:
               | It's a doorstop, not a fix. Useful nonetheless.
        
               | DEADMINCE wrote:
               | Can you link to any comment in this thread of someone
               | actually claiming that?
        
               | BrandoElFollito wrote:
                | I am one of the people who see fail2ban as a nuisance
                | _for the average administrator_. Average means they have
                | average knowledge, and sooner or later fail2ban will
                | block something unexpectedly. Usually when you are away
                | canoeing in the wilderness.
                | 
                | This is all a matter of threat and risk management. If
                | you know what you are doing, then fail2ban or
                | portknocking is another layer of your security.
               | 
               | Security theater in my opinion is something else:
               | nonsense password policies, hiding your SSID,
               | whitelisting MACs, ...
        
           | johnklos wrote:
            | If you have a public-facing Internet server, you're probably
            | already running something like blocklistd or fail2ban. They
            | reduce abuse, but they don't do anything to avoid an issue
            | like this except against naive attackers.
            | 
            | More resourceful attackers could automate exploit attempts
            | using a huge botnet, and it'd likely look similar to the
            | background of ssh brute force bots that we already see
            | 24/7/365.
        
         | JackSlateur wrote:
         | Default is 100: https://github.com/openssh/openssh-
         | portable/blob/master/serv...
        
           | JeremyNT wrote:
           | The config option called MaxStartups accepts a tuple to set 3
           | associated variables in the code. It wasn't clear to me which
           | value people were referring to.
        
         | jmclnx wrote:
         | The default for MaxStartups is 10:30:100
         | 
         | 10:30:60 is mentioned in the man for start:rate:full, so I set
         | mine to that value.
         | 
         | Thanks for the quote
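For reference, the start:rate:full tuple being discussed maps onto sshd_config like this (semantics paraphrased from sshd_config(5); 10:30:100 are the defaults quoted above):

```
# sshd_config
# MaxStartups start:rate:full
#   start - begin refusing unauthenticated connections once 10 are pending
#   rate  - refuse with probability 30% at that point, rising linearly
#   full  - refuse all new unauthenticated connections at 100 pending
MaxStartups 10:30:100
```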
        
       | djmdjm wrote:
       | OpenSSH release notes: https://www.openssh.com/txt/release-9.8
       | 
        | Minimal patches for those who can't/don't want to upgrade:
       | https://marc.info/?l=oss-security&m=171982317624594&w=2
        
         | morsch wrote:
         | > Exploitation on 64-bit systems is believed to be possible but
         | has not been demonstrated at this time.
        
           | djmdjm wrote:
           | I'm confident that someone will make a workable exploit
           | against 64-bit systems.
        
             | runjake wrote:
             | Context here: djmdjm is Daniel Miller, an OpenSSH/OpenBSD
             | developer.
        
               | l9i wrote:
               | _Damien_ Miller
        
           | aaronmdjones wrote:
           | Exploits only ever get better. Today's possible is next
           | month's done.
        
       | 0x0 wrote:
       | Patch out for Debian 12; Debian 11 not affected.
       | 
       | https://security-tracker.debian.org/tracker/CVE-2024-6387
        
         | nubinetwork wrote:
         | Can confirm, Pi OS bullseye also has the updated openssh.
        
         | wiredfool wrote:
         | Looks like Focal (20.04) isn't on an affected version. Jammy
         | (22.04) looks like it is.
        
           | feurio wrote:
           | My procrastination pays off ...
        
           | metadat wrote:
           | What about, uh, 18.04?
           | 
           | Edit: 18.04 Bionic is unaffected, the ssh version is 7.6
           | which is too old.
        
             | creshal wrote:
             | If you have extended support: Just update (if it's not so
             | old that it's not even affected in the first place)
             | 
             | If you don't have extended support: You're vulnerable to
             | worse, easier to exploit bugs :)
        
           | urza wrote:
           | On 22.04 apt update && upgrade doesn't help.. yet?
        
         | theandrewbailey wrote:
         | Just ran an apt update and upgrade on my Debian 12 server.
         | OpenSSH packages were the only ones upgraded.
        
           | hgs3 wrote:
            | Yes, the Debian 12 fix is out. You can check that you're
            | patched by running 'ssh -V' and confirming you see
            | 'deb12u3'. If you see 'deb12u2', you're vulnerable [1].
           | 
           | [1] https://security-tracker.debian.org/tracker/CVE-2024-6387
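To check the installed server package rather than the client banner, something like this works (a sketch: the version strings are the Debian ones from the tracker, and `sort -V` stands in for dpkg's version ordering, which differs in edge cases like epochs):

```shell
# Decide patched/vulnerable by comparing the installed openssh-server
# revision against the fixed Debian 12 revision. 'have' would normally
# come from: dpkg-query -W -f='${Version}' openssh-server
fixed="1:9.2p1-2+deb12u3"
have="1:9.2p1-2+deb12u2"    # example of a still-vulnerable revision
latest=$(printf '%s\n%s\n' "$have" "$fixed" | sort -V | tail -n1)
if [ "$latest" = "$fixed" ] && [ "$have" != "$fixed" ]; then
  state="vulnerable"
else
  state="patched"
fi
echo "$state"
```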
        
       | nubinetwork wrote:
       | I haven't seen an increase of ssh traffic yet, but the alert only
       | went out a couple hours ago... hopefully distros will ship the
       | patches quickly.
        
         | booi wrote:
         | i would assume all the distros have patches ready to go
         | awaiting the embargo lift.
        
         | cperciva wrote:
        | This is the sort of bug which pre-announcement coordination is
        | designed for. Anyone who doesn't have patches ready was either
        | forgotten (I've seen a few instances of "I thought _you_ were
        | going to tell them!") or isn't on the ball.
        
           | nubinetwork wrote:
           | Gentoo announced it at the same time as qualys, but they're
           | currently trying to backport and bump users to a patched
           | version. https://bugs.gentoo.org/935271
        
             | nubinetwork wrote:
             | Gentoo has pushed the patched version now.
        
       | cperciva wrote:
        | Patch out for FreeBSD. Not clear if it's affected (it is only
        | known to be exploitable with glibc, which we don't use), but
        | best to be safe.
       | 
       | https://www.freebsd.org/security/advisories/FreeBSD-SA-24:04...
        
       | jesprenj wrote:
       | > Finally, if sshd cannot be updated or recompiled, this signal
       | handler race condition can be fixed by simply setting
       | LoginGraceTime to 0 in the configuration file. This makes sshd
       | vulnerable to a denial of service (the exhaustion of all
       | MaxStartups connections), but it makes it safe from the remote
       | code execution presented in this advisory.
        
       | letters90 wrote:
       | > In our experiments, it takes ~10,000 tries on average to win
       | this race condition, so ~3-4 hours with 100 connections
       | (MaxStartups) accepted per 120 seconds (LoginGraceTime).
       | Ultimately, it takes ~6-8 hours on average to obtain a remote
       | root shell, because we can only guess the glibc's address
       | correctly half of the time (because of ASLR).
       | 
       | Mitigate by using fail2ban?
       | 
       | Nice to see that Ubuntu isn't affected at all
        
         | simonjgreen wrote:
         | For servers you have control over, as an emergency bandaid,
         | sure. Assumes you are not on an embedded system though like a
         | router.
        
           | letters90 wrote:
           | I didn't consider embedded, probably the biggest target for
           | this.
        
         | ulrikrasmussen wrote:
         | Where do you see that Ubuntu isn't affected?
        
           | rs_rs_rs_rs_rs wrote:
           | >Side note: we discovered that Ubuntu 24.04 does not re-
           | randomize the ASLR of its sshd children (it is randomized
           | only once, at boot time); we tracked this down to the patch
           | below, which turns off sshd's rexec_flag. This is generally a
           | bad idea, but in the particular case of this signal handler
           | race condition, it prevents sshd from being exploitable: the
           | syslog() inside the SIGALRM handler does not call any of the
           | malloc functions, because it is never the very first call to
           | syslog().
           | 
           | No mention on 22.04 yet.
        
         | djmdjm wrote:
         | Ubuntu isn't affected _by this exploit_
        
           | jgalt212 wrote:
           | as opposed to the other exploits not being discussed.
        
         | mmsc wrote:
         | >Mitigate by using fail2ban?
         | 
          | In theory, this could be used (much quicker than the
          | mentioned days/weeks) to get local privilege escalation to
          | root, if you already have some type of shell on the system. I
          | would assume that fail2ban doesn't block localhost.
        
           | udev4096 wrote:
           | How is local privilege escalation relevant here? Fail2ban
           | should be able to block the RCE
        
             | mmsc wrote:
             | How is it not?
             | 
             | If fail2ban isn't going to blocklist localhost, then it
             | isn't a mitigation for this vulnerability because RCE
             | implies LPE.
        
               | DEADMINCE wrote:
               | People are generally not trying to get root via an SSH
               | RCE over localhost. That's going to be a pretty small
               | sample of people that applies to.
               | 
               | But, sure, in that case fail2ban won't mitigate, but
               | that's pretty damn obviously implied. For 99% of people
               | and situations, it will.
        
               | mmsc wrote:
               | >People are generally not trying to get root via an SSH
               | RCE over localhost. That's going to be a pretty small
               | sample of people that applies to
               | 
                | It's going to apply to however many servers an attacker
                | has low-privileged access on (think: www-data) alongside
                | an unpatched sshd. Attackers don't care whether it's an
                | RCE or not: if a public sshd exploit can be used on a
                | system running a Linux version without a public Linux
                | LPE, it will be used. Being local also greatly increases
                | the exploitability.
               | 
               | Then consider the networks where port 22 is blocked from
               | the internet but sshd is running in some internal network
               | (or just locally for some reason).
        
               | DEADMINCE wrote:
               | > It's going to apply to the amount of servers that an
               | attacker has low-privileged access (think: www-data) and
               | an unpatched sshd.
               | 
               | Right, which is almost none. www-data should be set to
               | noshell 99% of the time.
               | 
               | > or just locally for some reason).
               | 
               | This is all that would be relevant, and this is also very
               | rare.
        
               | infotogivenm wrote:
               | Think "illegitimate" access to www-data. It's very common
               | on linux pentests to need to privesc from some lower-
               | privileged foothold (like a command injection in an httpd
               | cgi script). Most linux servers run openssh. So yes I
               | would expect this turns out to be a useful privesc in
               | practice.
        
               | DEADMINCE wrote:
               | > Think "illegitimate" access to www-data.
               | 
               | I get the point.
               | 
               | My point was the example being given is less than 1% of
               | affected cases.
               | 
               | > It's very common on linux pentests to need to privesc
               | from some lower-privileged foothold
               | 
               | Sure. Been doing pentests for 20+ years :)
               | 
               | > So yes I would expect this turns out to be a useful
               | privesc in practice.
               | 
               | Nah.
        
               | infotogivenm wrote:
               | > Nah
               | 
               | I don't get it then... Do you never end up having to
               | privesc in your pentests on linux systems? No doubt it
               | depends on customer profile but I would guess personally
               | on at least 25% of engagements in Linux environments I
               | have had to find a local path to root.
        
               | DEADMINCE wrote:
               | > Do you never end up having to privesc in your pentests
               | on linux systems?
               | 
               | Of course I do.
               | 
                | I'm not saying privesc isn't useful, I'm saying the
                | cases where you will ssh to localhost to get root are
                | very rare.
               | 
                | Maybe you test different environments or something, but
                | on most corporate networks I test, the linux machines
                | are dev machines just used for compiling/testing that
                | basically have shared passwords, or they're servers for
                | webapps or something else where normal users, most of
                | whom have a windows machine, won't have a shell account.
               | 
               | If there's a server where I only have a local account and
               | I'm trying to get root and it's running an ssh server
               | vulnerable to this attack, of course I'd try it. I just
               | don't expect to be in that situation any time soon, if
               | ever.
        
               | mmsc wrote:
               | >I test the linux machines are dev machines just used for
               | compiling/testing and basically have shared passwords, or
               | they're servers for webapps or something else where
               | normal users most who have a windows machine won't have a
               | shell account.
               | 
               | And you don't actually pentest the software which those
               | users on the windows machine are using on the Linux
               | systems? So you find a Jenkins server which can be used
               | to execute Groovy scripts to execute arbitrary commands,
               | the firewall doesn't allow connections through port 22,
               | and it's just a "well, I got access, nothing more to
               | see!"?
        
               | DEADMINCE wrote:
               | > And you don't actually pentest the software which those
               | users on the windows machine are using on the Linux
               | systems?
               | 
               | You really love your assumptions, huh?
               | 
               | > it's just a "well, I got access, nothing more to see!"?
               | 
               | I said nothing like that, and besides that, if you were
               | not just focused on arguing for the sake of it, you would
               | see MY point was about the infrequency of the situation
               | you were talking about (and even then your original point
               | seemed to be contrarian in nature more than anything).
        
               | mmsc wrote:
               | >www-data should be set to noshell 99% of the time.
               | 
                | Huh? execve(2), of course, lets you execute arbitrary
                | files. No need to spawn a tty at all. https://swisskyrepo.github.io/InternalAllTheThings/cheatshee...
               | 
               | >This is all that would be relevant, and this is also
               | very rare.
               | 
               | Huh? Exploiting an unpatched vulnerability on a server to
               | get access to a user account is.. very rare? That's
               | exactly what lateral movement is about.
        
               | DEADMINCE wrote:
               | Instead of taking the time to reply 'huh' multiple times,
               | you should make sure you read what you're replying to.
               | 
               | For example:
               | 
               | > Huh? Exploiting an unpatched vulnerability on a server
               | to get access to a user account is.. very rare?
               | 
                | The 'this' I refer to is _very clearly not_ what you've
                | decided to map it to here. The 'this' I refer to, if you
                | follow the comment chain, refers to a subset of
                | something you said which was relevant to your point -
                | the rest was not.
        
           | sgt wrote:
           | Confirmed - fail2ban doesn't block localhost.
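For completeness, a minimal jail along the lines discussed above (illustrative values, not a recommendation; fail2ban's ignoreself default, plus any ignoreip entries, is what exempts 127.0.0.1):

```
# /etc/fail2ban/jail.local (sketch; thresholds are placeholders)
# Connections from localhost are never banned unless ignoreself is
# disabled, which is why this is no mitigation against a local attacker.
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```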
        
         | nubinetwork wrote:
         | Ubuntu has pushed an updated openssh.
        
         | paulmd wrote:
         | > Ultimately, it takes ~6-8 hours on average to obtain a remote
         | root shell, because we can only guess the glibc's address
         | correctly half of the time (because of ASLR).
         | 
         | AMD to the rescue - fortunately they decided to leave the take-
         | a-way and prefetch-type-3 vulnerability unpatched, and continue
         | to recommend that the KPTI mitigations be disabled by default
         | due to performance costs. This breaks ASLR on all these
         | systems, so these systems can be exploited in a much shorter
         | time ;)
         | 
         | AMD's handling of these issues is WONTFIX, despite (contrary to
         | their assertion) the latter even providing actual kernel data
         | leakage at a higher rate than meltdown itself...
         | 
         | (This one they've outright pulled down their security bulletin
         | on) https://pcper.com/2020/03/amd-comments-on-take-a-way-
         | vulnera...
         | 
         | (This one remains unpatched in the third variant with
         | prefetch+TLB) https://www.amd.com/en/resources/product-
         | security/bulletin/a...
         | 
          | edit: there is a third one now, building on the first, with
          | an unpatched vulnerability in all zen1/zen2 as well... so
          | this one is WONTFIX too it seems, like most of the defects TU
          | Graz has turned up.
         | 
         | https://www.tomshardware.com/news/amd-cachewarp-vulnerabilit...
         | 
         | Seriously I don't know why the community just tolerates these
         | defenses being known-broken on the most popular brand of CPUs
         | within the enthusiast market, while allowing them to knowingly
         | disable the defense that's already implemented that would
         | prevent this leakage. Is defense-in-depth not a thing anymore?
         | 
         | Nobody in the world would ever tell you to explicitly turn off
         | ASLR on an intel system that is exposed to untrusted
         | attackers... yet that's exactly the spec AMD continues to
         | recommend and everyone goes along without a peep. It's
         | literally a kernel option that is already running and tested
         | and hardens you against ASLR leakage.
         | 
          | The "it's only metadata" line is so tired. Metadata is more
          | important than regular data, in many cases. We kill people,
          | convict people, and control all our security and access
          | control via metadata. Like yeah, it's just your ASLR layouts
          | leaking, what's the worst that could happen? And real data
          | leaks in several of these exploits too, but that's not a big
          | deal either... not like those ssh keys are important, right?
        
           | JackSlateur wrote:
            | What are you talking about? My early-2022 ryzen 5625U
            | shows:
            | 
            |   Vulnerabilities:
            |     Gather data sampling:    Not affected
            |     Itlb multihit:           Not affected
            |     L1tf:                    Not affected
            |     Mds:                     Not affected
            |     Meltdown:                Not affected
            |     Mmio stale data:         Not affected
            |     Reg file data sampling:  Not affected
            |     Retbleed:                Not affected
            |     Spec rstack overflow:    Vulnerable: Safe RET, no microcode
            |     Spec store bypass:       Mitigation; Speculative Store Bypass disabled via prctl
            |     Spectre v1:              Mitigation; usercopy/swapgs barriers and __user pointer sanitization
            |     Spectre v2:              Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
            |     Srbds:                   Not affected
            |     Tsx async abort:         Not affected
            | 
            | Only regular stuff
        
             | SubzeroCarnage wrote:
              | The issue here is that KPTI won't be enabled by default
              | on Linux on AMD CPUs.
              | 
              | Yet it provides valuable separation between kernel and
              | userspace address ranges.
              | 
              | iirc the predecessor to KPTI was created, before these hw
              | flaws were announced, as a general enhancement to ASLR.
             | 
             | AMD aside, Spectre V2 isn't even default mitigated for
             | userspace across the board, you must specify spectre_v2=on
             | for userspace to be protected.
             | 
             | https://www.kernel.org/doc/html/latest/admin-guide/kernel-
             | pa...
        
               | paulmd wrote:
                | > The issue here is that KPTI won't be enabled by
                | default on Linux on AMD CPUs. Yet it provides valuable
                | separation between kernel and userspace address ranges.
               | 
               | AMD's security bulletin is actually _incredibly_ weaselly
               | and in fact acknowledges this as the reason further
               | mitigation is not necessary, and then goes on to
               | recommend that KPTI remain disabled anyway.
               | 
               | https://www.amd.com/en/resources/product-
               | security/bulletin/a...
               | 
               | > The attacks discussed in the paper do not directly leak
               | data across address space boundaries. As a result, AMD is
               | not recommending any mitigations at this time.
               | 
               | That's literally the entire bulletin, other than naming
               | the author and recommending you follow security best-
               | practices. Two sentences.
               | 
                | Like it's all _very_ carefully worded to avoid
                | acknowledging the CVE in any way, but also to avoid
                | saying anything that's _technically_ false. If you do
                | not enable KPTI then there is no address space boundary,
                | and leakage from the kernel can occur. And specifically
                | that leakage is page-table layouts - which AMD considers
                | "only metadata" and therefore not important.
                | 
                | But it is a building block which amplifies all these
                | _other_ attacks, including Spectre itself. Spectre was
                | tested in the paper itself and - contrary to AMD's
                | statement (one of the actual falsehoods they make
                | despite their weaseling) - does result in actual leakage
                | of kernel data and not just metadata (the author notes
                | that this is a more severe leak than meltdown itself).
                | And leaking metadata is bad enough by itself - leaking
                | page-table layouts is quite important!
               | 
               | AMD's interest is in shoving it under the rug as quietly
               | as possible - the solution is flushing the caches every
               | time you enter/leave kernel space, just like with
               | Meltdown. That's what KPTI is/does, you flush caches to
               | isolate the pages. And AMD has leaned _much_ more heavily
               | on large last-level caches than Intel has, so this hurts
               | correspondingly more.
               | 
               | But I don't know why the kernel team is playing along
               | with this. The sibling commenter is right in the sense
               | that this is not something that is being surfaced to
               | users to let them know they are vulnerable, and that the
               | kernel team continues to follow the AMD recommendation of
               | insecure-by-default and letting the issue go quietly
               | under the rug at the expense of their customers'
               | security. This undercuts something that the kernel team
               | has put _significant_ engineering effort into mitigating
               | - not as important as AMD cheating on benchmarks with an
               | insecure configuration I guess.
               | 
               | There has always been a weird sickly affection for AMD in
               | the enthusiast community, and you can see it every time
               | there's an AMD vulnerability. When the AMD vulns really
               | started to flow a couple years ago, there was basically a
               | collective shrug and we just decided to ignore them
               | instead of mitigating. So much for "these vulnerabilities
               | only exist because [the vendor] decided to cut corners in
               | the name of performance!". Like that's _explicitly_ the
                | decision AMD has made with their customers' security.
               | And everyone's fine with it, same weird sickly affection
               | for AMD as ever among the enthusiast community. This is a
               | billion-dollar company cutting corners on their
               | customers' security so they can win benchmarks. It's bad.
               | It shouldn't need to be said, but it does.
        
             | SubzeroCarnage wrote:
             | Also if you don't have a bios update available for that
             | newer microcode, give my real-ucode package a try:
             | https://github.com/divestedcg/real-ucode
             | 
             | The linux-firmware repo does not provide AMD microcode
             | updates to consumer platforms unlike Intel.
        
             | paulmd wrote:
             | these are the tests you need to run:
             | https://github.com/amdprefetch/amd-prefetch-
             | attacks/blob/mas...
             | 
              | you probably want to do `export WITH_TLB_EVICT=1` before
              | you make, then run ./kaslr. The power stuff is patched (by
              | removing the RAPL power interface) but there are still
              | timing differences visible on my 5700G, and
              | WITH_TLB_EVICT makes this fairly obvious/consistent:
             | 
             | https://pastebin.com/1n0QbHTH
             | 
              | ```csv
              | 452,0xffffffffb8000000,92,82,220
              | 453,0xffffffffb8200000,94,82,835
              | 454,0xffffffffb8400000,110,94,487
              | 455,0xffffffffb8600000,83,75,114
              | 456,0xffffffffb8800000,83,75,131
              | 457,0xffffffffb8a00000,109,92,484
              | 458,0xffffffffb8c00000,92,82,172
              | 459,0xffffffffb8e00000,110,94,499
              | 460,0xffffffffb9000000,92,82,155
              | ```
             | 
              | those timing differences reflect the presence/absence of
              | kernel pages in the TLB: those are the KASLR pages, and
              | they're slower when the TLB eviction happens because of
              | the extra bookkeeping.
              | 
              | then we have the stack protector canary on the last
              | couple pages of course:
             | 
              | ```csv
              | 512,0xffffffffbf800000,91,82,155
              | 513,0xffffffffbfa00000,92,82,147
              | 514,0xffffffffbfc00000,92,82,151
              | 515,0xffffffffbfe00000,91,82,137
              | 516,0xffffffffc0000000,112,94,598
              | 517,0xffffffffc0200000,110,94,544
              | 518,0xffffffffc0400000,110,94,260
              | 519,0xffffffffc0600000,110,94,638
              | ```
             | 
             | edit: the 4 pages at the end of the memory space are very
             | consistent between tests and across reboots, and the higher
             | lookup time goes away if you set the kernel boot option
             | "pti=on" manually at startup; that timing difference is the
             | insecure behavior described in the paper.
             | 
             | log with pti=on kernel option:
             | https://pastebin.com/GK5KfsYd
             | 
             | ```csv
             | 513,0xffffffffbfa00000,92,82,147
             | 514,0xffffffffbfc00000,92,82,123
             | 515,0xffffffffbfe00000,92,82,141
             | 516,0xffffffffc0000000,91,82,134
             | 517,0xffffffffc0200000,91,82,140
             | 518,0xffffffffc0400000,91,82,151
             | 519,0xffffffffc0600000,91,82,141
             | ```
             | 
             | environment: ubuntu 22.04.4 live-usb, 5700G, b550i aorus
             | pro ax latest bios
        
         | skeetmtp wrote:
         | Ubuntu released patches though
         | 
         | https://ubuntu.com/security/notices/USN-6859-1
        
       | rfmoz wrote:
       | From the report:
       | 
       | > Finally, if sshd cannot be updated or recompiled, this signal
       | handler race condition can be fixed by simply setting
       | LoginGraceTime to 0 in the configuration file. This makes sshd
       | vulnerable to a denial of service (the exhaustion of all
       | MaxStartups connections), but it makes it safe from the remote
       | code execution presented in this advisory.
       | 
       | Setting 'LoginGraceTime 0' in sshd_config file seems to mitigate
       | the issue.
        
         | yjftsjthsd-h wrote:
         | Hang on, https://www.man7.org/linux/man-
         | pages/man5/sshd_config.5.html says
         | 
         | > If the value is 0, there is no time limit.
         | 
         | Isn't that _worse_?
        
           | sodality2 wrote:
            | It sounds like the race condition occurs at the end of the
            | default 600 seconds. Set no limit, and there is no end
           | 
           | > In our experiments, it takes ~10,000 tries on average to
           | win this race condition; i.e., with 10 connections
           | (MaxStartups) accepted per 600 seconds (LoginGraceTime), it
           | takes ~1 week on average to obtain a remote root shell.
        
           | pm215 wrote:
           | The bug is a race condition which is triggered by code which
           | runs when the timeout expires and the SIGALRM handler is run.
           | If there is no time limit, then the SIGALRM handler will
           | never run, and the race doesn't happen.
           | 
           | (As the advisory notes, you do then have to deal with the DoS
           | which the timeout setting is intended to avoid, where N
           | clients all connect and then never disconnect, and they
           | aren't timed-out and forcibly disconnected on the server end
           | any more.)
        
             | yjftsjthsd-h wrote:
             | Thanks for the explanation; I'd skimmed a little too fast
             | and assumed that this was the more traditional "how many
             | attempts can we squeeze in each connection" rather than
             | something at the end. I guess this makes the hardening
             | advice about lowering that time limit kind of unfortunate.
        
           | toast0 wrote:
           | If you can turn on TCP keepalive for the server connections,
           | you would still have a timeout, even if it's typically 2
           | hours. Then if someone wants to keep connections open and run
           | you out of sockets and processes, they've got to keep sockets
           | open on their end (but they might run a less expensive
           | userspace tcp)
           | 
           | You can belt and suspenders with an external tool that
           | watches for sshd in pre-auth for your real timeout and kills
           | it or drops the tcp connection [1] (which will make the sshd
           | exit in a more orderly fashion)
           | 
           | [1] https://man.freebsd.org/cgi/man.cgi?query=tcpdrop
        
       | fanf2 wrote:
       | It's also worth reading the release notes
       | https://www.openssh.com/releasenotes.html
       | 
       | This is actually an interesting variant of a signal race bug. The
       | vulnerability report says, "OpenBSD is notably not vulnerable,
       | because its SIGALRM handler calls syslog_r(), an async-signal-
       | safer version of syslog() that was invented by OpenBSD in 2001."
       | So a signal-safety mitigation encouraged OpenBSD developers to
       | put non-trivial code inside signal handlers, which becomes unsafe
       | when ported to other systems. They would have avoided this bug if
       | they had done one of their refactoring sweeps to minimize the
       | amount of code in signal handlers, according to the usual wisdom
       | and common unix code guidelines.
        
         | djmdjm wrote:
         | Theo de Raadt made an, I think, cogent observation about this
         | bug and how to prevent similar ones: no signal handler should
         | call any function that isn't a signal-safe syscall. The
         | rationale is that, over time, it's way too easy for any
         | transitive call (where it's not always clear that it can be
         | reached in signal context) to pick up some call that isn't
         | async signal safe.
        
           | fanf2 wrote:
           | Exactly, yes :-) Signal handlers have so many hazards it's
           | vital to keep them as simple as possible.
        
             | growse wrote:
             | I'm not overly familiar with the language and tooling
             | ecosystem, but how trivial is this to detect with static
             | analysis?
        
             | rwmj wrote:
             | A rule I try to follow: either set a global variable or
             | write to a self pipe (using the write syscall), and handle
             | the signal in the main loop.
        
               | cesarb wrote:
               | > either set a global variable
               | 
               | IIRC, the rule is also that said global variable must
               | have the type "volatile sig_atomic_t".
        
           | ralferoo wrote:
            | I'm kind of surprised at advocating calling any syscall
            | other than signal to add the handler back again. It's been a
            | long
           | time since I looked at example code, but back in the mid 90s,
           | everything I saw (and so informed my habits) just set a flag,
           | listened to the signal again if it was something like SIGUSR1
           | and then you'd pick up the flag on the next iteration of your
           | main loop. Maybe that's also because I think of a signal like
           | an interrupt, and something you want to get done as soon as
           | possible to not cause any stalls to the main program.
           | 
           | I notice that nowadays signalfd() looks like a much better
           | solution to the signal problem, but I've never tried using
           | it. I think I'll give it a go in my next project.
        
             | qhwudbebd wrote:
             | In practice when I tried it, I wasn't sold on signalfd's
             | benefits over the 90s style self-pipe, which is reliably
             | portable too. Either way, being able to handle signals in a
             | poll loop is much nicer than trying to do any real work in
             | an async context.
        
             | formerly_proven wrote:
             | This isn't the case for OpenSSH, but because a lot of
             | environments (essentially all managed runtimes) do this
             | transparently for you when you register a signal "handler",
             | fewer people may be aware that actual signal handlers
             | require a ton of care. On the other hand, "you can't even
             | call strcmp in a signal handler or you'll randomly corrupt
             | program state" used to be a favorite among practicing C
             | lawyers.
        
               | lilyball wrote:
               | Why can't you call strcmp? I think a general practice of
               | "only call functions that are explicitly blessed as
               | async-signal-safe" is a good idea, which means not
               | calling strcmp as it hasn't been blessed, but surely it
               | doesn't touch any global (or per-thread) state so how can
               | it corrupt program state?
               | 
               | Update: according to https://man7.org/linux/man-
               | pages/man7/signal-safety.7.html strcmp() actually is
               | async-signal-safe as of POSIX.1-2008 TC2.
        
         | INTPenis wrote:
         | So it's very likely that some young sysadmin or intern who
         | will have to patch for this vuln was not even born when
         | OpenBSD implemented the solution.
        
           | creshal wrote:
           | n>=1, one of our juniors is indeed younger than the OpenBSD
           | fix and dealing with this bug.
        
       | FiloSottile wrote:
       | Interestingly, the RCE fix was "smuggled" in public almost a
       | month ago:
       | 
       |   When PerSourcePenalties are enabled, sshd(8) will monitor
       |   the exit status of its child pre-auth session processes.
       |   Through the exit status, it can observe situations where
       |   the session did not authenticate as expected. These
       |   conditions include when the client repeatedly attempted
       |   authentication unsuccessfully (possibly indicating an
       |   attack against one or more accounts, e.g. password
       |   guessing), or when client behaviour caused sshd to crash
       |   (possibly indicating attempts to exploit sshd). When such
       |   a condition is observed, sshd will record a penalty of
       |   some duration (e.g. 30 seconds) against the client's
       |   address.
       | 
       | https://github.com/openssh/openssh-portable/commit/81c1099d2...
       | 
       | It's not really a reversible patch that gives anything away to
       | attackers: it changes the binary architecture in a way that has
       | the side-effect of removing the specific vulnerability _and also_
       | mitigates the whole exploit class, if I understand it correctly.
       | Very clever.
        
         | fanf2 wrote:
         | That's not the RCE fix, this is the RCE fix
         | https://news.ycombinator.com/item?id=40843865
         | 
         | That's a previously-announced feature for dealing with junk
         | connections that also happens to mitigate this vulnerability
         | because it makes it harder to win the race. Discussed
         | previously https://news.ycombinator.com/item?id=40610621
        
           | djmdjm wrote:
           | No, it's a fix. It completely removes the signal race as well
           | as introducing a mitigation for similar future bugs
        
           | FiloSottile wrote:
           | The ones you link are the "minimal patches for those who
           | can't/don't want to upgrade". The commit I am linking to is
           | taken straight from the advisory:
           | 
           |   On June 6, 2024, this signal handler race condition was
           |   fixed by commit 81c1099 ("Add a facility to sshd(8) to
           |   penalise particular problematic client behaviours"),
           |   which moved the async-signal-unsafe code from sshd's
           |   SIGALRM handler to sshd's listener process, where it can
           |   be handled synchronously:
           |   https://github.com/openssh/openssh-portable/commit/81c1099d22b81ebfd20a334ce986c4f753b0db29
           |   Because this fix is part of a large commit (81c1099), on
           |   top of an even larger defense-in-depth commit (03e3de4,
           |   "Start the process of splitting sshd into separate
           |   binaries"), it might prove difficult to backport. In
           |   that case, the signal handler race condition itself can
           |   be fixed by removing or commenting out the async-signal-
           |   unsafe code from the sshsigdie() function.
           | The cleverness here is that this commit is _both_ "a
           | previously-announced feature for dealing with junk
           | connections", _and_ a mitigation for the exploit class
           | against similar but unknown vulnerabilities, _and_ a patch
           | for the specific vulnerability because it  "moved the async-
           | signal-unsafe code from sshd's SIGALRM handler to sshd's
           | listener process, where it can be handled synchronously".
           | 
           | The cleverness is that it fixes the vulnerability as part of
           | doing something that makes sense on its own, so you wouldn't
           | know it's the patch even looking at it.
        
           | unixpickle wrote:
           | These lines from the diff linked above are the fix:
           | 
           |   - /* Log error and exit. */
           |   - sigdie("Timeout before authentication for %s port %d",
           |   -     ssh_remote_ipaddr(the_active_state),
           |   -     ssh_remote_port(the_active_state));
           |   + _exit(EXIT_LOGIN_GRACE);
        
         | loeg wrote:
         | Has this fix been pushed to / pulled by distributions yet?
        
           | loeg wrote:
           | Fedora: not yet.
           | 
           | https://bugzilla.redhat.com/show_bug.cgi?id=CVE-2024-6387
           | (tracking task)
           | 
           | https://bugzilla.redhat.com/show_bug.cgi?id=2294905 (Fedora
           | 39 issue)
        
       | CJefferson wrote:
       | This is a really good find.
       | 
       | One thing I notice (as an independent person, who isn't doing
       | any of the work!) is that it often feels like, in order to
       | 'win', people are expected to find a full chain which gives
       | them remote access, rather than just finding one issue and
       | getting it fixed / getting paid for it.
       | 
       | It feels to me like finding a single hole should be sufficient --
       | one memory corruption, one sandbox escape. Maybe at the moment
       | there are just too many little issues, that you need a full end-
       | to-end hack to really convince people to take you seriously, or
       | pay out bounties?
        
         | rlpb wrote:
         | There are many wannabe security researchers who find issues
         | that are definitely not exploitable, and then demand CVE
         | numbers and other forms of recognition or even a bounty. For
         | example, there might be an app that crashes when accepting
         | malformed _trusted_ input, but the nature of the app is that
         | it's never intended to, and realistically never will be,
         | exposed to an adversary. In most people's eyes these are
         | simply bugs, not security bugs, and while they are nice to
         | fix, they aren't on the same level. It's not very difficult
         | to find one of these!
         | 
         | So there is a need to differentiate between "real" security
         | bugs [like this one] and non-security-impacting bugs, and
         | demonstrating how an issue is exploitable is therefore very
         | important.
         | 
         | I don't see the need to demonstrate this going away any time
         | soon, because there will always be no end of non-security-
         | impacting bugs.
        
           | nubinetwork wrote:
           | > There are many wannabe security researchers who find issues
           | that are definitely not exploitable, and then demand CVE
           | numbers and other forms of recognition or even a bounty
           | 
           | I believe this has happened to curl several times recently.
        
             | umanwizard wrote:
             | It happens constantly to any startup with a security@ email
             | address.
        
           | PhilipRoman wrote:
           | Agreed, I've seen all kinds of insane stuff, like "setting
           | this public field of a java class to a garbage value will
           | cause a null pointer exception"
        
           | tetha wrote:
           | So many "Security Researchers" are just throwing ZAP at
           | websites and dumping the result into the security@ mail,
           | because there might be minor security improvements by setting
           | yet another obscure browser security header for cases that
           | might not even be applicable.
           | 
           | Or there is no real consideration if that's actually an
           | escalation of context. Like, "Oh if I can change these
           | postgres configuration parameters, I can cause a problem", or
           | "Oh if I can change values in this file I can cause huge
           | trouble". Except, modifying that file or that config
           | parameter requires root/supervisor access, so there is no
           | escalation because you have full access already anyhow?
           | 
           | I probably wouldn't have to look at documentation too much to
           | get postgres to load arbitrary code from disk if I have
           | supervisor access to the postgres already. Some COPY into
           | some preload plugin, some COPY / ALTER SYSTEM, some query to
           | crash the node, and off we probably go.
           | 
           | But yeah, I'm frustrated that we were forced to route our
           | security@ domain to support to filter out this nonsense. I
           | wouldn't be surprised if we miss some actually important
           | issue unless demonstrated like this, but it costs too much
           | time otherwise.
        
         | leftcenterright wrote:
         | Having been on the reporting side, "an exploitable
         | vulnerability" and "security weakness which could eventually
         | result in an exploitable vulnerability" are two very different
         | things. Bounties always get paid for the first category.
         | Reports falling in the second category might even cause
         | reputation/signal damage for a lack of proof of
         | concept/exploitability.
         | 
         | There are almost always various weaknesses which do not become
         | exploitable until and unless certain conditions are met. This
         | also becomes evident in contests like Pwn2Own where multiple
         | vulnerabilities are often chained to eventually take the device
         | over and remain un-patched for years. Researchers often sit on
         | such weaknesses for a long time to eventually maximize the
         | impact.
         | 
         | Sad but that is how it is.
        
         | lenlorijn wrote:
         | As the security maxim goes: POC || GTFO
        
         | michaelt wrote:
         | _> Maybe at the moment there are just too many little issues,
         | that you need a full end-to-end hack to really convince people
         | to take you seriously, or pay out bounties?_
         | 
         | Let me give you a different perspective.
         | 
         | Imagine I make a serialisation/deserialisation library which
         | would be vulnerable _if_ you fed it untrusted data. This is by
         | design, users can serialise and deserialise anything, including
         | lambda functions. My library is only intended for processing
         | data from trusted sources.
         | 
         | To my knowledge, nobody uses my library to process data from
         | untrusted sources. One popular library does use mine to load
         | configuration files, they consider those a trusted data source.
         | And it's not my job to police other people's use of my library
         | anyway.
         | 
         | Is it correct to file a CVE of the highest priority against my
         | project, saying my code has a Remote Code Execution
         | vulnerability?
        
           | mtrantalainen wrote:
           | I think that if the documented interface of your library is
           | "trusted data only", then one shouldn't even file a bug
           | report against your library if somebody passes it untrusted
           | data.
           | 
           | However, if you (or anybody else) catch a program passing
           | untrusted data to any library that says "trusted data only",
           | that's definitely CVE worthy in my books even if you cannot
           | demonstrate full attack chain. However, that CVE should be
           | targeted at the program that passes untrusted data to trusted
           | interface.
           | 
           | That said, if you're looking for bounty instead of just some
           | publicity in reward for publishing the vulnerability, you
           | must fulfil the requirements of the bounty, and those
           | typically say that bounty will be paid for complete attack
           | chain only.
           | 
           | I guess that's because companies paying bounties are
           | typically interested in real world attacks and are not
           | willing to pay bounties for theoretical vulnerabilities.
           | 
           | I think this is problematic because it causes bounty hunters
           | to keep theoretical vulnerabilities secret and wait for
           | possible future combination of new code that can be used to
           | attack the currently-theoretical vulnerability.
           | 
           | I would argue that it's much better to fix issues while they
           | are still only theoretical. Maybe pay a smaller bounty for
           | a theoretical vulnerability, and a reduced payment for a
           | full attack chain that builds on a publicly known
           | theoretical vulnerability; just make sure the combination
           | pays at least as well as publishing a full attack chain
           | for a 0day. That way there would be an incentive to
           | publish theoretical vulnerabilities immediately for
           | maximum pay, because otherwise somebody else might catch
           | the theoretical part and publish faster than you can.
        
           | brazzy wrote:
           | That sounds... familiar. Are you perchance the maintainer of
           | SnakeYAML?
           | 
           | Yes, it is correct to file a CVE of the highest priority
           | against your project, because "only intended for processing
           | data from trusted sources" is a frankly ridiculous policy for
           | a serialization/deserialization library.
           | 
           | If it's your toy project that you never expected anyone to
           | use anyway, you don't care about CVEs. If you want to be
           | taken seriously, you cannot play pass-the-blame and ignore
           | the fact that your policy turns the entire project into a
           | security footgun.
        
             | michaelt wrote:
             | _> "only intended for processing data from trusted sources"
             | is a frankly ridiculous policy for a
             | serialization/deserialization library._
             | 
             | Truly, it's a design decision so ridiculous nobody else has
             | made it. Except Python's pickle, Java's serialization,
             | Ruby's Marshal and PHP's unserialize of course. But other
             | than that, nobody!
        
           | Elucalidavah wrote:
           | > Imagine I make a serialisation/deserialisation library
           | which would be vulnerable if you fed it untrusted data
           | 
           | No need to imagine, the PyYAML has that situation. There have
           | been attempts to use the safe deserialization by default,
           | with an attempt to release a new major version (rolled back),
           | and it settled on having a required argument of which mode /
           | loader to use. See: https://cve.mitre.org/cgi-
           | bin/cvekey.cgi?keyword=PyYAML
        
         | bobmcnamara wrote:
         | > It feels to me like finding a single hole should be
         | sufficient -- one memory corruption, one sandbox escape.
         | 
         | It should be.
         | 
         | > Maybe at the moment there are just too many little issues...
         | 
         | There are so many.
        
         | tptacek wrote:
         | Buyers pay for outcomes. Vendors do pay for individual links in
         | the chain.
        
       | INTPenis wrote:
       | Correct me if I'm wrong but it seems like sshd on RHEL-based
       | systems is safe because they never call syslog.
       | 
       | They run sshd with the -D option already, logging everything to
       | stdout and stderr, as their systemd already catches this output
       | and sends it to journal for logging.
       | 
       | So I don't see anywhere they would be calling syslog, unless sshd
       | does it on its own.
       | 
       | At most maybe add OPTIONS=-e into /etc/sysconfig/sshd.
        
         | betaby wrote:
         | Same question. Don't all systemd-based distros use
         | stdin/out/err for logging, and so never call syslog?
        
       | j16sdiz wrote:
       | Now, how many remote exploits do we have in openbsd?
        
         | cyberpunk wrote:
         | No more than before this; openbsd is not vulnerable to this
         | exploit due to a different syslog() implementation.
        
           | ZiiS wrote:
           | Worth being explicit here. The OpenBSD syslog is not just
           | 'different' enough that it was luckily uneffected. It was
           | intentionally designed to avoid this situation more than 20
           | years ago.
        
           | fulafel wrote:
           | Also there's no publicly known exploit for this one yet even
           | for Linux. The advisory says Qualys put exploit development
           | on hold to coordinate the fix.
        
         | yjftsjthsd-h wrote:
         | Two in living memory? If you know something with a better track
         | record do speak up.
        
           | DEADMINCE wrote:
           | SEL4 and derivatives.
           | 
           | For starters.
           | 
           | And if you want to simply go by vulnerability counts, as
           | though that meant something, let's throw in MenuetOS and
           | TempleOS.
        
             | yjftsjthsd-h wrote:
             | Okay, let's say if you know something _useful_ with a
              | better record. TempleOS doesn't have network, so while
             | it's genuinely cool it's not useful to most people.
             | MenuetOS does have network but poor software compatibility.
             | I would actually love to see a seL4 distro but AFAIK it's
             | pretty much only ever used as a hypervisor with a "real"
             | (normal) OS under it, often (usually?) Linux-based. We can
             | certainly consider to what degree OpenBSD is useful with
             | just the base system, but it _does_ include everything out
             | of the box to be a web server with zero extra software
             | added, including sshd in its native environment.
        
               | DEADMINCE wrote:
               | > Okay, let's say if you know something useful with a
               | better record.
               | 
               | Oh, SEL4 is without any doubt _useful_ , it wouldn't be
               | as popular and coveted if it wasn't, but I think you are
               | trying to say _widespread_.
               | 
               | However, you seem to have taken my examples literally and
               | missed my point, which is trying to judge the security of
               | an OS by its vulnerabilities is a terrible, terrible
               | approach.
               | 
               | > but it does include everything out of the box to be a
               | web server
               | 
               | Sure, and so do plenty of minimal linux distros, and if
               | you use the same metrics and config as OpenBSD then
               | they'll have a similar security track record.
               | 
               | And honestly, Linux with one of the RBAC solutions puts
               | OpenBSD's security _to shame_.
               | 
               | Do yourself a favor and watch the CCC talk someone else
               | linked in the thread.
        
               | yjftsjthsd-h wrote:
               | > Oh, SEL4 is without any doubt useful, it wouldn't be as
               | popular and coveted if it wasn't, but I think you are
               | trying to say widespread.
               | 
               | There is a laptop running OpenIndiana illumos on my desk.
               | I mean useful, though through the lens of my usecases
               | (read: if it can't run a web browser or server, I don't
               | generally find it useful). I've only really heard of seL4
               | being popular in embedded contexts (mostly cars?), not
               | general-purpose computers.
               | 
               | > However, you seem to have taken my examples literally
               | and missed my point, which is trying to judge the
               | security of an OS by its vulnerabilities is a terrible,
               | terrible approach.
               | 
               | No, I think your examples were excellent for illustrating
               | the differences in systems; you can get a more secure
               | system by severely limiting how much it can do (seL4 is a
               | good choice for embedded systems, but in itself currently
               | useless as a server OS), or a more useful system that has
               | more attack surface, but OpenBSD is a weirdly good ratio
               | of high utility for low security exposure. And yes of
               | course I judge security in terms of realized exploits;
               | theory and design is fine, but at some point the rubber
               | has to hit the road.
               | 
               | > Sure, and so do plenty of minimal linux distros, and if
               | you use the same metrics and config as OpenBSD then
               | they'll have a similar security track record.
               | 
               | Well no, that's the point - they'll be better than "fat"
               | distros, but they absolutely will not match OpenBSD. See,
               | for example, this specific sshd vuln, which will affect
               | any GNU/Linux distro and not OpenBSD, because OpenBSD's
               | libc goes out of its way to solve this problem and glibc
               | didn't.
               | 
               | > Do yourself a favor and watch the CCC talk someone else
               | linked in the thread.
               | 
               | I don't really do youtube - is it the one that handwaves
               | at allegedly bad design without ever actually showing a
               | single exploit? Because I've gotten really tired of
               | people loudly proclaiming that this thing is _so_ easy to
                | exploit, but they just don't have time to actually do it
               | just now but trust them it's definitely easy and a real
               | thing that they could do even though somehow it never
               | seems to actually happen.
        
               | DEADMINCE wrote:
               | > I mean useful, though through the lens of my usecases
               | 
               | Better to stick to standard definitions in the future so
               | you won't have to explain your personal definitions later
               | on.
               | 
               | > No, I think your examples were excellent for
               | illustrating the differences in systems; you can get a
               | more secure system by severely limiting how much it can
               | do
               | 
               | So you not only missed the point but decided to take away
               | an entirely different message. Interesting.
               | 
               | Yes, limiting attack surface is a basic security
               | principle. The examples I gave were not to demonstrate
               | this basic principle, but to show that trying to gauge
               | security by amount of vulnerabilities is foolish.
               | 
               | > seL4 is a good choice for embedded systems, but in
               | itself currently useless as a server OS
               | 
                | Plan 9 then. Or any of the other numerous OS projects that
                | have fewer vulns than OpenBSD and can meet your arbitrary
               | definition of 'useful'. The point is that trying to
               | measure security by vuln disclosures is a terrible,
               | terrible method and only something someone with no clue
               | about security would use.
               | 
               | > but OpenBSD is a weirdly good ratio of high utility for
               | low security exposure.
               | 
               | OpenBSD is just niche, that's it. Creating OpenSSH
               | brought a lot of good marketing, but if you really look
               | at the OS from a security perspective and look at
               | features, it's lacking.
               | 
               | > Well no, that's the point - they'll be better than
               | "fat" distros, but they absolutely will not match
               | OpenBSD.
               | 
               | They absolutely will be better than OpenBSD, because they
               | have capabilities to limit what an attacker can do in the
               | event they get access, as opposed to putting all the eggs
               | in the 'find all the bugs before they get exploited'
                | basket. OpenBSD _isn't anything special_ when it comes
                | to security. That, really, is the point. Anything
                | otherwise is marketing or people who have fallen for
                | marketing, IMO.
               | 
               | > I don't really do youtube
               | 
               | There's a lot of good content only on that platform.
               | Surely you can use yt-dlp or freetube or something.
               | 
               | > is it the one that handwaves at allegedly bad design
               | without ever actually showing a single exploit?
               | 
               | That summary isn't remotely accurate, so I'd have to say
               | no.
               | 
               | > Because I've gotten really tired of people loudly
               | proclaiming that this thing is so easy to exploit but
               | they just don't have time to actually do it just now but
               | trust them it's definitely easy and a real thing that
               | they could do even though somehow it never seems to
               | actually happen.
               | 
               | They have remote holes listed on their homepage. Both
               | those cases led to remote root and this supposedly secure
               | OS had nothing to offer, while most Linux distros did.
               | Let's make this simple. Linux allows you to contain a
               | remote root exploit with tools like RBAC and MAC
               | extensions. OpenBSD offers nothing. In the event both
                | systems have the same vulnerability (of which this
                | titular instance is not an example) allowing remote
               | root, Linux will be the safer system if set up correctly.
               | 
               | But honestly, I've gotten really tired of OpenBSD stans
               | regurgitating that it's supposedly secure and thinking
               | that being able to point to a lack of vulnerabilities in
               | a barebones default install is some kind of proof of
               | that.
        
               | cbrozefsky wrote:
               | I'm an OpenBSD fanboi, and the review of mitigations,
               | their origins, efficacy, and history is well worth the
                | time to watch or just review the slides. It's not about
                | some claim of vulz.
        
       | djernie wrote:
       | RedHat put an 8.1 score on it:
       | https://access.redhat.com/security/cve/cve-2024-6387
        
         | zshrc wrote:
         | Doesn't affect RHEL7 or RHEL8.
        
           | chasil wrote:
            | Or RHEL9.
            | 
            |     $ rpm -q openssh
            |     openssh-8.7p1-38.0.1.el9.x86_64
        
             | Ianvdl wrote:
             | > Statement
             | 
             | > The flaw affects RHEL9 as the regression was introduced
             | after the OpenSSH version shipped with RHEL8 was published.
        
               | chasil wrote:
                | However, we see the -D option on the listening parent:
                | 
                |     $ ps ax | grep sshd | head -1
                |     1306 ?  Ss  0:01 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
               | 
               | As mentioned elsewhere here, is -D sufficient to avoid
               | exploitation, or is -e necessary as well?
                | 
                |     $ man sshd | sed -n '/ -[De]/,/^$/p'
                |     -D   When this option is specified, sshd will not
                |          detach and does not become a daemon.  This
                |          allows easy monitoring of sshd.
                |     -e   Write debug logs to standard error instead
                |          of the system log.
               | 
               | RHEL9 is also 64-bit only, and we see from the notice:
               | 
               | "we have started to work on an amd64 exploit, which is
               | much harder because of the stronger ASLR."
               | 
               | On top of writing the exploit to target 32-bit
               | environments, this also requires a DSA key that
               | implements multiple calls to free().
               | 
               | There is a section on "Rocky Linux 9" near the end of the
               | linked advisory where unsuccessful exploit attempts are
               | discussed.
        
               | Arnavion wrote:
               | >As mentioned elsewhere here, is -D sufficient to avoid
               | exploitation, or is -e necessary as well?
               | 
               | https://github.com/openssh/openssh-
               | portable/blob/V_9_8_P1/ss...
               | 
               | sshd.c handles no_daemon (-D) and log_stderr (-e)
               | independently. log_stderr is what is given to log_init in
               | log.c that gates the call to syslog functions. There is a
               | special case to set log_stderr to true if debug_flag (-d)
               | is set, but nothing for no_daemon.
               | 
               | I can't test it right now though so I may be missing
               | something.
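
       The gating described above can be sketched in miniature. This is a
       hypothetical, simplified sketch, not the actual OpenSSH log.c: the
       `log_init`/`do_log` names and bodies here are stand-ins, kept only to
       show that an -e style flag diverts logging away from syslog entirely,
       while -D never touches the logging path.

       ```c
       /* Hypothetical, simplified sketch of log.c-style gating (NOT OpenSSH
        * code): -e (log_stderr) routes logging away from syslog entirely,
        * while -D (no_daemon) never touches the logging path. */
       #define _DEFAULT_SOURCE
       #include <stdarg.h>
       #include <stdio.h>
       #include <syslog.h>

       static int log_to_stderr = 0;   /* set by -e, or implied by -d */

       static void log_init(int stderr_flag)
       {
           log_to_stderr = stderr_flag;
       }

       static void do_log(int level, const char *fmt, ...)
       {
           va_list ap;
           va_start(ap, fmt);
           if (log_to_stderr) {
               vfprintf(stderr, fmt, ap);   /* stderr path: no syslog call at all */
               fputc('\n', stderr);
           } else {
               vsyslog(level, fmt, ap);     /* syslog path: may allocate inside glibc */
           }
           va_end(ap);
       }

       int main(void)
       {
           log_init(1);   /* as if sshd had been started with -e */
           do_log(LOG_INFO, "timeout before authentication for %s", "connection");
           return 0;
       }
       ```

       With the flag left at 0 the same message goes through vsyslog()
       instead, which is why -e (and not -D alone) changes which code runs
       inside the SIGALRM handler.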
        
               | chasil wrote:
               | I'm on Oracle Linux, and they appear to have already
               | issued a patch for this problem:
                | 
                |     openssh-8.7p1-38.0.2.el9.x86_64.rpm
                |     openssh-server-8.7p1-38.0.2.el9.x86_64.rpm
                |     openssh-clients-8.7p1-38.0.2.el9.x86_64.rpm
               | 
               | The changelog addresses the CVE directly. It does not
               | appear that adding the -e directive is necessary with
                | this patch.
                | 
                |     $ rpm -q --changelog openssh-server | head -3
                |     * Wed Jun 26 2024 Alex Burmashev <alexander.burmashev@oracle.com> - 8.7p1-38.0.2
                |     - Restore dropped earlier ifdef condition for safe _exit(1) call in sshsigdie() [Orabug: 36783468]
                |     Resolves CVE-2024-6387
        
             | indigodaddy wrote:
             | Versions from 4.4p1 up to, but not including, 8.5p1 are not
             | vulnerable.
             | 
             | The vulnerability resurfaces in versions from 8.5p1 up to,
              | but not including, 9.8p1.
             | 
             | https://blog.qualys.com/vulnerabilities-threat-
             | research/2024...
        
       | nj5rq wrote:
        | OpenBSD is notably not vulnerable, because its SIGALRM
        | handler calls syslog_r(), an async-signal-safer version of
        | syslog() that was invented by OpenBSD in 2001.
       | 
       | Saving the day once again.
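
       For illustration, here is the unsafe pattern next to an
       async-signal-safe one. This is a minimal sketch, not OpenBSD's
       syslog_r() or OpenSSH's handler; the handler body and messages are
       invented, and the unsafe variant is left as a comment on purpose.

       ```c
       /* Sketch: why calling syslog() from SIGALRM is dangerous, and the
        * async-signal-safe alternative. Illustrative only. */
       #include <signal.h>
       #include <string.h>
       #include <unistd.h>

       static volatile sig_atomic_t alarm_fired = 0;

       /* UNSAFE (the vulnerable pattern): syslog() may call malloc(), which
        * is not async-signal-safe. If SIGALRM interrupts malloc() midway,
        * the handler re-enters the allocator on an inconsistent heap -- the
        * root of CVE-2024-6387.
        *
        *   static void bad_handler(int sig) { syslog(LOG_INFO, "timeout"); }
        */

       /* SAFE: only async-signal-safe calls (write(2) is on the POSIX list),
        * plus setting a sig_atomic_t flag the main loop inspects later. */
       static void safe_handler(int sig)
       {
           (void)sig;
           static const char msg[] = "login grace time exceeded\n";
           write(STDERR_FILENO, msg, sizeof(msg) - 1);
           alarm_fired = 1;
       }

       int main(void)
       {
           struct sigaction sa;
           memset(&sa, 0, sizeof(sa));
           sa.sa_handler = safe_handler;
           sigaction(SIGALRM, &sa, NULL);

           raise(SIGALRM);          /* simulate the grace-time alarm */
           if (alarm_fired)
               write(STDOUT_FILENO, "handled safely\n", 15);
           return 0;
       }
       ```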
        
         | apache101 wrote:
         | Theo and team way ahead of their time like always.
        
           | pjmlp wrote:
           | Not always,
           | 
           | 36C3 - A systematic evaluation of OpenBSD's mitigations
           | 
           | https://www.youtube.com/watch?v=3E9ga-CylWQ
        
             | zshrc wrote:
             | Not always, but they make it their goal to be.
             | 
             | Code standards are very strict in OpenBSD and security is
             | always a primary thought...
        
             | daneel_w wrote:
             | Wouldn't a _good_ systematic evaluation need (or at least
              | benefit from) a few actual working exploits/PoCs? I keep
             | asking this as a long-time OpenBSD user who is genuinely
             | interested in seeing it done, but so far everyone who has
             | said "it's flawed" also reserved themselves the convenience
             | of not having to prove their point in a practical sense.
        
               | DEADMINCE wrote:
               | > Wouldn't a good systematic evaluation need (or at least
               | benefit from) a few actual working exploits/PoCs?
               | 
               | Sure, see any of the previous exploits for sshd, or any
               | other software shipped in the OpenBSD default install.
               | 
               | > I keep asking this as a long-time OpenBSD user who is
               | genuinely interested in seeing it done, but so far
               | everyone who has said "it's flawed" also reserved
               | themselves the convenience of not having to prove their
               | point in a practical sense.
               | 
                | The point is they have very little in the way of
                | containing attackers and restricting what they can do.
                | Until pledge and unveil, almost all their focus was on
                | eliminating bugs, which, hey, great, but let's have a
               | little more in case you miss a bug and someone breaks in,
               | eh?
               | 
               | An insecure dockerized webserver protected with SELinux
               | is safer than Apache on a default OpenBSD install.
        
               | daneel_w wrote:
               | _> Sure, see any of the previous exploits for sshd, or
               | any other software shipped in the OpenBSD default
               | install._
               | 
               | Would you like to point to one that successfully utilizes
                | a weakness in _OpenBSD itself_, which is the topic and
               | implied statement of the video, rather than a weakness in
               | some application running under the superuser?
               | 
               | Just to underline, I'm not interested in discussing the
               | hows and whys of containing arbitrary applications where
               | one or more portions are running under euid 0. I'm
               | interested in seeing OpenBSD successfully attacked by an
               | unprivileged process/user.
        
               | yjftsjthsd-h wrote:
               | Now to be fair, sshd on OpenBSD is part of OpenBSD rather
               | than an add-on application and I think it would be fair
               | to count exploits in it against the OS, _if it had
               | vulnerabilities there._
        
               | DEADMINCE wrote:
               | Any vulns in any package in OpenBSD's package
               | repositories that they audited should count as a vuln
               | against OpenBSD itself.
               | 
               | If OpenBSD users installed it through OpenBSD
                | repositories and are running it, will they be affected?
               | Yes? Then it counts against the system itself.
        
               | yjftsjthsd-h wrote:
               | I'm not sure that's fair; was log4j a vulnerability in
               | Ubuntu itself? How about libwebp (
               | https://news.ycombinator.com/item?id=37657746 )?
        
               | DEADMINCE wrote:
               | > Would you like to point to one that successfully
               | utilizes a weakness in OpenBSD itself, which is the topic
               | and implied statement of the video, rather than a
               | weakness in some application running under the superuser?
               | 
               | I'm sorry, what? What kind of nonsense distinction is
               | this?
               | 
               | Are you trying to _very disingenuously_ try and claim
               | only kernel exploits count as attacks against OpenBSD?
               | 
               | Why the hell wouldn't a webserver zero-day count? If an
               | OS that claims to be security focused can't constrain a
               | misbehaving web server running as root then it's sure as
               | hell not any type of secure OS.
               | 
               | > I'm interested in seeing OpenBSD successfully attacked
               | by an unprivileged process/user.
               | 
               | You realize there is very little that OpenBSD does to
               | protect against LPE if there is any LPE vuln on their
               | system, right? Surely you're not just advocating for
               | OpenBSD based on their own marketing? If you want to
               | limit the goalposts to kernel vulns or LPE's that already
               | require an account you're free to do so, but that's
               | rather silly and not remotely indicative of real world
               | security needs.
               | 
               | If it's a security focused OS, it should provide ways to
                | limit the damage an attacker can do. OpenBSD had very,
               | very little in that regard and still does, although
               | things are slightly better now and they have a few toys.
               | 
               | And hey, fun fact, if you apply the same OpenBSD
               | methodology and config of having a barebones install,
               | you'll suddenly find at least dozens of other operating
               | systems with equivalent or better track records.
               | 
                | Plan 9 has had fewer vulnerabilities than OpenBSD and has
               | had more thought put into its security architecture[0],
               | so by your metric it's the more secure OS, yeah?
               | 
               | [0] http://9p.io/sys/doc/auth.html
        
               | daneel_w wrote:
                | _> I'm sorry, what? What kind of nonsense distinction is
               | this?
               | 
               | _> Are you trying to very disingenuously try and claim
               | only kernel exploits count as attacks against OpenBSD?
               | 
               | Not at all. I clearly underlined that I'm not looking for
               | cases fitting that specific scenario. The only moving of
               | goalposts is entirely on your behalf by _very
                | disingenuously_ misrepresenting my question in a poor
               | attempt to try make your answer or whatever point fit.
               | And on top of that, the tasteless pretending to be
               | baffled...
        
               | DEADMINCE wrote:
               | > Not at all. I clearly underlined that I'm not looking
               | for cases fitting that specific scenario
               | 
               | The thing is, we're trying to talk about the security of
               | OpenBSD compared to its competition.
               | 
               | But you're trying to avoid letting anyone do that by
               | saying only an attack against something in the default
               | install you can do with a user account counts, which is
               | absolutely ridiculous.
               | 
               | I'm not moving the goalposts nor am I pretending in any
               | sense. Your approach just doesn't make sense, measure or
               | indicate anything useful or relevant about the security
               | of OpenBSD. I stated so and explained why.
               | 
               | But hey, keep believing whatever you want buddy.
        
               | daneel_w wrote:
               | _> "The thing is, we're trying to talk about the security
               | of OpenBSD compared to its competition."_
               | 
               |  _> "But you're trying to avoid letting anyone do that by
               | saying only an attack against something in the default
               | install you can do with a user account counts, which is
               | absolutely ridiculous."_
               | 
               | I don't know who "we" are. The question _I_ asked another
               | poster, where _you_ decided to butt in, regarded
               | escalation from an unprivileged position and nothing
               | else.
               | 
               | Nobody but yourself said anything along the lines of
               | "only attacks against things in the default install
                | 'count'", nor drew up comparisons against "the
               | competition". You clearly have some larger axe to grind,
               | but you're doing it in a discourse playing out only in
               | your head, without reading what others actually wrote.
        
         | fmbb wrote:
         | "async-signal-safer"
         | 
         | Just this morning was the first time I read the words MT-Safe,
         | AS-Safe, AC-Safe. But I did not know there were "safer"
         | functions as well.
         | 
         | Is there also a "safest" syslog?
        
           | OJFord wrote:
           | For a word like 'safe', or at least in CS, I would assume
           | that the 'safe' one actually _is_ 'safest'; that 'safer' is
            | ehh, it's not _safe_ but it's an improvement on the unsafe
            | one. It's saf_er_.
        
             | ralferoo wrote:
              | Similarly, safest in normal English means not completely
             | safe, but more safe than the other options. So safe >
             | safest > safer > safe-ish > unsafe.
        
               | p51-remorse wrote:
               | Wait, that seems backwards to me as a native English
               | speaker. The superlative version feels more safe. Safest
               | > Safe > (...)
        
               | nj5rq wrote:
               | I would assume he refers to "safe" being absolutely safe,
               | while "safest" refers to the safest of the existing
               | alternatives?
        
               | DEADMINCE wrote:
                | Nah. If something is 'safe', it's safe, period. If
                | something is 'safest', it's only the best of the
                | available options and not necessarily 'safe'.
        
               | ralferoo wrote:
               | One example to help think about this. Say you have 3
               | friends. Your friend Bob has a net worth of $1 - he is
               | the least rich. Your friend Alex has a net worth $10 - he
               | is richer. Another friend Ben has a net worth of $100 -
               | he is the richest. Richest here is comparative against
               | all 3 of them, but none of them are actually rich. Bill
               | Gates is rich. Bezos is rich. Musk is rich. Someone with
               | a net worth of $100 isn't.
               | 
               | You can still have comparisons between the rich too, so
               | Bezos is richer than Gates and he's also the richest if
               | you're just considering the pair. But add Musk to the
               | mix, and he's no longer the richest.
               | 
               | I guess that last example looks like you have two
               | attributes - rich as some objective "has a lot of money"
               | and comparatively rich (richer, richest). For safe, it's
               | kind of similar, except that as soon as you are saying
               | one thing is safer than the other, then you are
               | implicitly acknowledging that there are areas where the
               | thing isn't safe, and if you're admitting that you can't
               | also call it safe without contradicting yourself.
        
               | ralferoo wrote:
                | A better example is "pure water". By its definition,
               | that's just H2O molecules floating around with nothing
               | else.
               | 
               | If you add a single grain of salt to a glass of that
               | water, it's no longer pure. Drinking it you probably
               | wouldn't notice, and some people might colloquially call
                | it "pure", but we _know_ it isn't because we added some
               | salt to it.
               | 
                | If you add a teaspoon of salt to a different glass of
               | pure water, it's also no longer pure, and now most people
               | would probably notice the salt and recognise it's not
               | pure.
               | 
                | If you add a tablespoon of salt to a different glass
               | of pure water, it's definitely not pure and you probably
               | wouldn't want to drink it either.
               | 
               | You could say the teaspoon of salt glass is purer than
               | the tablespoon of salt glass, the grain of salt glass is
               | purer than both of them and so the purest of the three.
               | And yet, we know that it isn't pure water, because we
               | added something else to it.
               | 
               | So pure > purest > purer > less pure. Also note that I
               | was required to use "less pure" for the last one, because
                | all of them except pure are "impure" or "not pure", even
                | though those were what I originally thought of writing.
        
             | fmbb wrote:
              | I would assume the same. Hence my question.
        
       | marcus0x62 wrote:
       | Patch out for Arch Linux
       | 
       | https://archlinux.org/packages/core/x86_64/openssh/
       | 
       |  _edit_ be sure to manually restart sshd after upgrading; my
       | systems fail during key exchange after package upgrade until
       | restarting the sshd service:
       | 
       | % ssh -v 192.168.1.254
       | 
       | OpenSSH_9.8p1, OpenSSL 3.3.1 4 Jun 2024
       | 
       | ... output elided ...
       | 
       | debug1: Local version string SSH-2.0-OpenSSH_9.8
       | 
       | kex_exchange_identification: read: Connection reset by peer
       | 
       | Connection reset by 192.168.1.254 port 22
        
         | jiripospisil wrote:
         | Same here. It's caused by the sshd daemon being split into
         | multiple binaries. In fact, the commit which introduced the
         | change mentions this explicitly:
         | 
         | > NB. if you're updating via source, please restart sshd after
         | installing, otherwise you run the risk of locking yourself out.
         | 
         | https://github.com/openssh/openssh-portable/commit/03e3de416...
         | 
         | Edit: Already reported at
         | https://gitlab.archlinux.org/archlinux/packaging/packages/op...
        
       | qhwudbebd wrote:
       | Once I'd finished upgrading my openssh instances (which are
       | linked against musl not glibc) I thought it'd be interesting to
       | have a poke at musl's syslog(3) and see if it allocates too and
       | so is easily exploitable in the same way. But as far as I can
       | see, it doesn't:
       | 
       | https://github.com/bminor/musl/blob/master/src/misc/syslog.c
       | 
       | Everything there is either on stack or in static variables
       | protected from reentrancy by the lock. The {d,sn,vsn}printf()
       | calls there don't allocate in musl, although they might in glibc.
       | Have I missed anything here?
        
         | singron wrote:
         | If you are right about the allocations, then I think the worst
         | it can do is deadlock since the locks aren't recursive.
         | Deadlock in sigalrm could still lead to a DOS since that might
         | prevent it from cleaning up connections.
        
           | mananaysiempre wrote:
           | Heretical opinion: signal handler activations should count as
           | separate threads for the purposes of recursive locking.
        
             | bhawks wrote:
                | How would that be done without introducing deadlock?
        
               | mananaysiempre wrote:
               | You'd get a deadlock, absolutely. But I'm fine with that:
               | if the thread wants to access some state protected by a
               | mutex, then while holding it (effectively) spawns a
               | signal handler activation and waits for it to complete,
                | and the signal handler tries to access some state
               | protected by the same mutex, then the program has just
               | deadlocked (mutex - signal handler - mutex) and deserves
               | to hang (or die, as this is a very simple situation as
               | far as deadlock detection goes). That's in any case
               | better than corrupted state.
        
           | qhwudbebd wrote:
           | Yes, true: if the alarm signal arrives right in the middle of
            | another syslog() call, this should deadlock. (Safer than it
            | not deadlocking and accessing the static state in the signal
            | handler, of course!)
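
       That scenario can be sketched as follows. This is illustrative only:
       pthread_mutex_trylock stands in for the blocking lock a real syslog()
       call would take, so the demo prints a diagnostic instead of actually
       hanging.

       ```c
       /* Sketch of the deadlock scenario: a non-recursive lock held by the
        * interrupted code is re-requested from the signal handler. trylock
        * lets the demo report EBUSY instead of hanging forever. */
       #include <errno.h>
       #include <pthread.h>
       #include <signal.h>
       #include <string.h>
       #include <unistd.h>

       static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

       static void handler(int sig)
       {
           (void)sig;
           /* A real syslog() here would call pthread_mutex_lock() and block
            * forever: the interrupted thread already holds the lock and can
            * never release it while the handler is running. */
           if (pthread_mutex_trylock(&log_lock) == EBUSY) {
               static const char msg[] = "lock already held: would deadlock\n";
               write(STDOUT_FILENO, msg, sizeof(msg) - 1);
           } else {
               pthread_mutex_unlock(&log_lock);
           }
       }

       int main(void)
       {
           struct sigaction sa;
           memset(&sa, 0, sizeof(sa));
           sa.sa_handler = handler;
           sigaction(SIGALRM, &sa, NULL);

           pthread_mutex_lock(&log_lock);   /* "inside" a syslog() call */
           raise(SIGALRM);                  /* alarm arrives mid-call */
           pthread_mutex_unlock(&log_lock);
           return 0;
       }
       ```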
        
         | qhwudbebd wrote:
         | Confirmation from Rich:
         | https://fosstodon.org/@musl/112711796005712271
        
       | yjftsjthsd-h wrote:
       | > Exploitation on non-glibc systems is conceivable but has not
       | been examined.
       | 
       | ( https://www.openssh.com/txt/release-9.8 )
       | 
       | Darn - here I was hoping Alpine was properly immune, but it
       | sounds more like "nobody's checked if it works on musl" at this
       | point.
        
         | _ikke_ wrote:
         | > OpenSSH sshd on musl-based systems is not vulnerable to RCE
         | via CVE-2024-6387 (regreSSHion).
         | 
         | https://fosstodon.org/@musl/112711796005712271
        
       | TacticalCoder wrote:
       | And who was notoriously _not_ exploitable? The ones hiding sshd
       | behind port knocks. And fail2ban: would work too. And a
       | restrictive firewall: would help too.
       | 
       | I don't use port-knocking but I really just don't get all those
        | saying: _"It's security theater"_.
       | 
        | We had not one but two major OpenSSH "near fiascos" (this RCE and
       | the xz lib thing) that were both rendered unusable for attackers
       | by using port knocking.
       | 
       | To me port-knocking is not "security theater": it _adds_ one
        | layer of defense. It's defense-in-depth. Not theater.
       | 
       | And the port-knocking sequence doesn't have to be always the
       | same: it can, say, change every 30 seconds, using TOTP style
       | secret sequence generation.
       | 
       | How many exploits rendered cold dead in their tracks by port-
       | knocking shall we need before people stop saying port-knocking is
       | security theater?
       | 
       | Other measures do also help... Like restrictive firewalling
       | rules, which many criticize as "it only helps keep the logs
       | smaller": no, they don't just help keep the logs smaller. I'm
        | whitelisting the three ISPs' IP blocks anyone could reasonably
        | need to SSH from: now the attacker needs not only the zero-day,
        | he also needs to be on one of those three ISPs' IPs.
       | 
        | The argument that consists in saying: _"sshd is unexploitable,
        | so nothing else must be done to protect the server"_ is...
       | 
       | Dead.
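
       The rotating, TOTP-style knock sequence mentioned above could be
       sketched like this. It is a toy under stated assumptions: FNV-1a is
       used purely for illustration where a real scheme would use an HMAC
       as in RFC 6238, and the secret and 30-second window are made up.

       ```c
       /* Toy sketch of a time-varying knock sequence: derive three ports
        * from a shared secret and the current 30-second window. FNV-1a is
        * NOT cryptographically secure; it only keeps the example short. */
       #include <stdint.h>
       #include <stdio.h>
       #include <string.h>
       #include <time.h>

       static uint64_t fnv1a(const void *data, size_t len, uint64_t h)
       {
           const unsigned char *p = data;
           for (size_t i = 0; i < len; i++) {
               h ^= p[i];
               h *= 1099511628211ULL;   /* FNV-1a 64-bit prime */
           }
           return h;
       }

       /* Fill ports[0..n-1] with knock ports in 1024..65535 for one step. */
       static void knock_sequence(const char *secret, uint64_t step,
                                  uint16_t *ports, int n)
       {
           uint64_t h = fnv1a(secret, strlen(secret), 14695981039346656037ULL);
           for (int i = 0; i < n; i++) {
               h = fnv1a(&step, sizeof(step), h);   /* mix in time window */
               h = fnv1a(&i, sizeof(i), h);         /* mix in knock index */
               ports[i] = (uint16_t)(1024 + (h % (65535 - 1024 + 1)));
           }
       }

       int main(void)
       {
           uint16_t ports[3];
           uint64_t step = (uint64_t)time(NULL) / 30;   /* 30-second window */

           knock_sequence("shared-secret", step, ports, 3);
           printf("knock: %u %u %u\n",
                  (unsigned)ports[0], (unsigned)ports[1], (unsigned)ports[2]);
           return 0;
       }
       ```

       Client and server sharing the secret compute the same three ports for
       the current window; the server opens port 22 to an IP only after
       seeing SYNs on those ports in order.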
        
         | _joel wrote:
         | Those not notoriously exploitable were those using gated ssh
         | access only via known IPs or connecting via tailnets/vpn.
        
         | growse wrote:
         | What benefits does port knocking give over and above a simple
         | VPN? They're both additional layers of authentication, except a
         | VPN seems much more rigorous and brings potentially other
         | benefits.
         | 
         | In a world where tailscale etc. have made quality VPNs trivial
          | to implement, why would I bother with port knocking?
        
           | jgalt212 wrote:
           | VPNs drop your bandwidth speeds by 50% on average. And if
           | tailscale has to use a relay server, instead of a direct
           | connection, bandwidth will drop by 70-80%.
        
             | gruez wrote:
             | >VPNs drop your bandwidth speeds by 50% on average
             | 
              | Source? Wireguard can do ~1 Gbps on decade-old processors[1].
             | Even openvpn can do 258 Mb/s, which realistically can
             | saturate the average home internet connection. Also, if
             | we're talking about SSH connections, why does throughput
             | matter? Why do you need 1 gigabit of bandwidth to transfer
             | a few keystrokes a second?
             | 
             | [1] https://www.wireguard.com/performance/
        
               | jgalt212 wrote:
               | We ran iperf on our multi-cloud and on prem network.
               | 
               | > Also, if we're talking about SSH connections, why does
               | throughput matter?
               | 
               | scp, among other things, runs over ssh.
        
               | gruez wrote:
               | >scp, among other things, runs over ssh.
               | 
               | Ironically scp/sftp caused me more bandwidth headaches
               | than wireguard/openvpn. I frequently experienced cases
               | where scp/sftp would get 10% or even less of the transfer
               | speed compared to a plain http(s) connection. Maybe it
               | was due to packet loss, buffer size, or qos/throttling,
               | but I wasn't able to figure out a definitive solution.
        
           | noname120 wrote:
           | Port-knocking is way simpler and relies on extremely basic
           | network primitives. As such the attack surface is
           | considerably smaller than OpenSSH or OpenVPN and their
           | authentication mechanisms.
        
           | johnklos wrote:
           | Are you assuming that VPNs are more secure than ssh?
        
             | growse wrote:
             | No?
        
             | INTPenis wrote:
             | Yes, inherently yes. Because they have a lot less features
             | than SSH.
             | 
             | It's in the name; Secure Shell, vs. Virtual Private
             | Network. One of them has to deal with users,
             | authentication, shells, chroots. The other mostly deals
             | with the network stack and encryption.
        
         | benterix wrote:
         | Sure, if you can't afford a more robust access control to your
         | SSH server and for some reason need to make it publicly
         | available then port knocking etc. can be a deterring feature
         | that reduces the attack rate.
        
         | cies wrote:
          | I use port knocking to keep my ssh logs clean. I don't think it
         | adds security (I even brag about using it in public). It allows
         | me to read ssh's logs without having to remove all the script
         | kiddie login attempt spam.
        
           | boxed wrote:
           | Saying you use it publicly doesn't defeat the security it
           | gives though. Unless you publicly say the port knocking
           | sequence. Which would be crazy.
        
             | cies wrote:
              | I meant to say: I don't use it for security, I use it for
             | convenience.
        
         | abofh wrote:
         | Port knocking works if you have to ssh to your servers, there
         | are many solutions that obviate even that, and leave you with
         | no open ports, but a fully manageable server. I'm guilty of ssm
          | in aws, but the principle applies - the cattle phone home, you
         | only have to call pets.
        
         | ugjka wrote:
         | I put all my jank behind wireguard
        
         | philodeon wrote:
        | Port knocking is a ludicrous security measure compared to the
        | combination of:
        | 
        | * configuring sshd to only listen over a Wireguard tunnel
        | under your control (or letting something like Tailscale set up
        | the tunnel for you)
        | 
        | * switching to ssh certificate authn instead of passwords or
        | keys
        
           | kazinator wrote:
           | Does Wireguard work in such a way that there is no trace of
           | its existence to an unauthorized contacting entity?
           | 
           | I used port knocking for a while many years ago, but it was
           | just too fiddly and flaky. I would run the port knocking
            | program, only to see the port fail to open or close.
           | 
           | If I were to use a similar solution today (for whatever
           | reason), I'd probably go for web knocking.
           | 
           | In my case, I didn't see it as a security measure, but just
           | as a way to cut the crap out of sshd logs. Log monitoring and
           | banning does a reasonable job of reducing the crap.
        
             | zekica wrote:
             | That's one of the original design requirements for
             | wireguard. Unless a packet is signed with the correct key,
             | it won't respond at all.
        
         | DEADMINCE wrote:
         | > I don't use port-knocking but I really just don't get all
         | those saying: "It's security theater".
         | 
         | It's not security theater but it's kind of outdated. Single
         | Packet Authentication[0] is a significant improvement.
         | 
         | > How many exploits rendered cold dead in their tracks by port-
         | knocking shall we need before people stop saying port-knocking
         | is security theater?
         | 
         | Port knocking is one layer, but it shouldn't be the only one,
         | or even a heavily relied upon one. Plenty of people might be in
         | a position to see the sequence of ports you knock, for example.
         | 
         | Personally, I think if more people bothered to learn tools like
         | SELinux instead of disabling it due to laziness or fear, that
         | is what would stop most exploits dead. Containers are the
          | middle ground everyone latched onto instead, though.
         | 
         | [0] https://www.cipherdyne.org/fwknop/docs/SPA.html
        
         | throw0101b wrote:
         | > [...] _rendered unusable for attackers by using port
         | knocking._
         | 
         | Port knocking renders SSH unusable: I'm not going to tell my
         | users " _do this magic network incantation before running ssh_
         | ". They want to open a terminal and simply run _ssh_.
         | 
         | See the _A_ in the CIA triad, as well as _U_ in the Parkerian
         | hexad.
        
         | daneel_w wrote:
         | Camouflage is after all one of nature's most common defenses.
         | Always be quick with patching, though.
        
         | JackSlateur wrote:
          | Port-knocking is a PITA in theory and even worse in the real
          | world: people have neither the time nor the will to perform
          | wild invocations before getting the job done.
         | 
         | Unless you are talking about your own personal use-case, in
         | which case, feel free to follow your deepest wishes
         | 
          | Firewalls are a joke, too. Who can manage hundreds or
          | thousands of ever-changing IPs? Nobody. Again, I'm not
          | talking about your personal use-case (yet I enjoy connecting
          | to my server over 4G, wherever I am)
         | 
          | Fail2ban, on the other hand, is nice: every system that relies
         | on some known secret benefits from an anti-bruteforce
         | mechanism. Also, and this is important: fail2ban is quick to
         | deploy, and not a PITA for users. Good stuff.
        
           | fullspectrumdev wrote:
           | I've written a few toy port knocking implementations, and
           | doing it right is _hard_.
           | 
            | If your connection's crap and there's packet loss, part of
            | the sequence may be lost.
           | 
           | Avoiding replay attacks is another whole problem - you want
           | the sequence to change based on a shared secret and time or
            | something similar (e.g. TOTP to agree on the sequence).
           | 
           | Then you have to consider things like NAT...
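A TOTP-style knock schedule like the one described above can be sketched as follows. This is a toy illustration, not code from the thread: FNV-1a stands in for a proper keyed hash (a real implementation should use HMAC-SHA256), and every name here is invented.

```c
#include <stdint.h>
#include <string.h>
#include <time.h>

/* Toy hash for illustration only -- use HMAC-SHA256 in practice. */
static uint64_t fnv1a(const void *data, size_t len, uint64_t h)
{
        const unsigned char *p = data;

        for (size_t i = 0; i < len; i++) {
                h ^= p[i];
                h *= 1099511628211ULL;
        }
        return h;
}

/*
 * i-th port of the knock sequence for the 30-second window containing
 * `now`, derived from a shared secret and mapped into the dynamic port
 * range 49152-65535. Client and server compute the same sequence, so a
 * sniffed sequence replays only within the current time window.
 */
uint16_t knock_port(const char *secret, time_t now, int i)
{
        uint64_t step = (uint64_t)now / 30;     /* TOTP-style time step */
        uint64_t h = 14695981039346656037ULL;   /* FNV-1a offset basis */

        h = fnv1a(secret, strlen(secret), h);
        h = fnv1a(&step, sizeof step, h);
        h = fnv1a(&i, sizeof i, h);
        return (uint16_t)(49152 + (h % 16384));
}
```

To tolerate clock skew (and the packet loss and NAT issues mentioned above), the server would typically also accept the sequences for the adjacent time windows and require the knocks in order.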
        
           | password4321 wrote:
           | I am interested in any tools for managing IP allow lists on
           | Azure and AWS. It seems like there should be something fairly
           | polished, perhaps with an enrollment flow for self-management
           | and a few reports/warnings...
        
       | MaximilianEmel wrote:
       | Was this abused in the wild?
        
       | matthewcroughan wrote:
       | For my own setup, I'm looking into Path Aware Networking (PAN)
       | architectures like SCION to avoid exposing paths to my sshd,
       | without having to set up a VPN or port knocking.
       | 
       | https://scion-architecture.net
        
         | pgraf wrote:
         | Genuinely curious, how would you block an attacker from getting
         | to your SSH port without knowing the path you will connect from
         | (which is the case for remote access) at configuration time? I
         | don't see how Path-Aware Networking would replace a VPN
         | solution
        
           | matthewcroughan wrote:
           | The SCION Book goes over a lot of potential solutions that
           | are possible because of the architecture, but my favorite is
           | hidden paths.
           | https://scion.docs.anapaya.net/en/latest/hidden-paths.html
           | 
           | > Hidden path communication enables the hiding of specific
           | path segments, i.e. certain path segments are only available
           | for authorized ASes. In the common case, path segments are
           | publicly available to any network entity. They are fetched
           | from the control service and used to construct forwarding
           | paths.
        
       | ttul wrote:
       | TLDR: this vulnerability does appear to allow an attacker to
       | potentially gain remote root access on vulnerable Linux systems
       | running OpenSSH, with some important caveats:
       | 
       | 1. It affects OpenSSH versions 8.5p1 to 9.7p1 on glibc-based
       | Linux systems.
       | 
       | 2. The exploit is not 100% reliable - it requires winning a race
       | condition.
       | 
        | 3. On a modern system (Debian 12.5.0 from 2024), the researchers
        | estimate it takes:
        | 
        |    - ~3-4 hours on average to win the race condition
        |    - ~6-8 hours on average to obtain a remote root shell (due
        |      to ASLR)
        | 
        | 4. It requires certain conditions:
        | 
        |    - The system must be using glibc (not other libc
        |      implementations)
        |    - 100 simultaneous SSH connections must be allowed
        |      (MaxStartups setting)
        |    - LoginGraceTime must be set to a non-zero value (default is
        |      120 seconds)
       | 
       | 5. The researchers demonstrated working exploits on i386 systems.
       | They believe it's likely exploitable on amd64 systems as well,
       | but hadn't completed that work yet.
       | 
       | 6. It's been patched in OpenSSH 9.8p1 released in June 2024.
        
         | jonaslejon wrote:
         | OpenSSH 9.8p1 was released July 1, 2024 according to
         | https://www.openssh.com/releasenotes.html#9.8p1
        
       | lostmsu wrote:
       | Out of curiosity, does Windows have anything as disruptive as
        | signals? I assume it is also not vulnerable, because the SSH
        | server there does not use glibc.
        
       | acatton wrote:
       | Yearly reminder to run your ssh server behind spiped.[1] [2] [3]
       | 
       | [1] https://www.tarsnap.com/spiped.html
       | 
       | [2] https://news.ycombinator.com/item?id=29483092
       | 
       | [3] https://news.ycombinator.com/item?id=28538750
        
         | gruez wrote:
         | What's the advantage of this relatively obscure tool compared
         | to something standard like wireguard or stunnel?
        
           | acatton wrote:
           | * The tool is not obscure, it's packaged in most
           | distributions.[1][2][3] It was written and maintained by
            | Colin Percival, aka "the tarsnap guy" or "the guy who
           | invented scrypt". He is the security officer for FreeBSD.
           | 
           | * spiped can be used transparently by just putting a
           | "ProxyCommand" in your ssh_config. This means you can connect
           | to a server just by using "ssh", normally. (as opposed to
           | wireguard where you need to always be on your VPN, otherwise
            | connect to your VPN manually before running ssh)
           | 
           | * As opposed to wireguard which runs in the kernel, spiped
            | can easily be set up to run as a user, and be fully hardened
           | by using the correct systemd .service configuration [4]
           | 
           | * The protocol is much more lightweight than TLS (used by
           | stunnel), it's just AES, padded to 1024 bytes with a 32 bit
           | checksum. [5]
           | 
           | * The private key is much easier to set up than stunnel's TLS
           | certificate, "dd if=/dev/urandom count=4 bs=1k of=key" and
           | you're good to go.
           | 
           | [1] https://packages.debian.org/bookworm/spiped
           | 
           | [2] https://www.freshports.org/sysutils/spiped/
           | 
           | [3] https://archlinux.org/packages/extra/x86_64/spiped/
           | 
           | [4] https://ruderich.org/simon/notes/systemd-service-
           | hardening
           | 
           | [5] https://github.com/Tarsnap/spiped/blob/master/DESIGN.md
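For the ProxyCommand point above, the ssh_config entry might look something like this (a sketch: the spiped listening port 8022 and the key path are assumptions; `spipe` is spiped's client program, and ssh expands `%h` to the host name):

```
Host myserver.example.com
    # spipe connects stdin/stdout to the spiped daemon on the server,
    # which decrypts and forwards the traffic to the real sshd.
    ProxyCommand spipe -t %h:8022 -k /etc/spiped/ssh.key
```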
        
             | SparkyMcUnicorn wrote:
             | Wireguard can also run in userspace (e.g. boringtun[0],
             | wireguard-go[1], Tailscale).
             | 
             | [0] https://github.com/cloudflare/boringtun
             | 
             | [1] https://git.zx2c4.com/wireguard-go/about/
        
             | sisk wrote:
             | > The private key is much easier to set up than stunnel's
             | TLS certificate, "dd if=/dev/urandom count=4 bs=1k of=key"
             | and you're good to go.
             | 
              | The spiped documentation recommends a key with a minimum
              | of 256 bits of entropy. I'm curious why you've chosen
              | such a large key (4096 bytes) here? Is there anything to
              | suggest 256 bits is no longer sufficient for the general
              | case?
        
         | betaby wrote:
         | I run sshd behind the HAProxy
         | https://www.haproxy.com/blog/route-ssh-connections-with-hapr...
        
       | poikroequ wrote:
       | After the xz backdoor a few months ago, I decided to turn off SSH
       | everywhere I don't need it, either by disabling it or
       | uninstalling it entirely. While SSH is quite secure, it's too
       | lucrative a target, so it will always pose a risk.
        
         | lupusreal wrote:
         | I now only bind services to wireguard interfaces. The bet is
         | that a compromise in both the service _and_ wireguard at the
         | same time is unlikely (and I have relatively high confidence in
         | wireguard.)
        
           | devsda wrote:
           | I'm confident in making ssh changes while logged in via ssh.
           | 
           | Compared to ssh, wireguard configs feel too easy to mess up
            | and risk getting locked out if it's the only way of accessing
           | the device.
        
           | someplaceguy wrote:
           | > The bet is that a compromise in both the service _and_
           | wireguard at the same time is unlikely
           | 
           | An RCE in wireguard would be enough -- no need to compromise
           | both.
        
         | rwmj wrote:
         | So .. how do you handle remote logins?
        
           | daneel_w wrote:
           | "everywhere I don't need it" likely implies computers he or
           | she only accesses directly on the console.
        
         | kraftverk_ wrote:
         | What do you use in place of it?
        
           | daneel_w wrote:
           | A keyboard and a display, aka "the console". For example when
           | using one's laptop or sitting at their stationary PC.
        
           | password4321 wrote:
           | https://www.aurga.com/ ?
           | 
           | Because an $80 black box wireless KVM from a foreign country
            | is way more secure! (Just kidding, though it is not
            | internet-accessible by default.)
        
       | nfriedly wrote:
       | I stopped exposing SSH to the internet years ago. Now I connect
       | over WireGuard, and then run SSH through that when I need to
       | remotely admin something.
        
       | gavinhoward wrote:
       | As someone who does unspeakable, but safe, things in signal
       | handlers, I can confirm that it is easy to stray off the path of
       | async-signal-safety.
        
         | guerby wrote:
          | I agree, and I'm surprised OpenSSH developers did not remove
          | the use of SIGALRM and replace it with a select/poll timeout
          | and an explicitly managed list of future events. That would
          | likely be more portable, and safe by default against this
          | class of bug, which has bitten ssh code more than once
          | now...
          | 
          | Defensive programming tells us to minimize code in signal
          | handlers, and the safest approach is to avoid using the
          | signal at all when possible :).
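A sketch of that select/poll approach (hypothetical names, not OpenSSH code): compute how long remains until the login-grace deadline and hand it to poll() as its timeout, so expiry is handled synchronously in the main loop rather than in signal context.

```c
#include <poll.h>
#include <time.h>

/*
 * Milliseconds left until `deadline`, clamped at 0 once it has passed
 * so poll() returns immediately instead of blocking.
 */
int grace_timeout_ms(time_t deadline, time_t now)
{
        if (now >= deadline)
                return 0;
        return (int)(deadline - now) * 1000;
}

/*
 * The accept loop would then look roughly like:
 *
 *      struct pollfd pfd = { .fd = client_fd, .events = POLLIN };
 *      int n = poll(&pfd, 1, grace_timeout_ms(deadline, time(NULL)));
 *      if (n == 0) {
 *              // grace period expired: log and drop the connection,
 *              // in normal context, with no async-signal-safety rules
 *      }
 */
```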
        
       | NelsonMinar wrote:
       | One interesting comment in the OpenSSH release notes
       | 
       | > Successful exploitation has been demonstrated on 32-bit
       | Linux/glibc systems with ASLR. Under lab conditions, the attack
       | requires on average 6-8 hours of continuous connections up to the
       | maximum the server will accept. Exploitation on 64-bit systems is
       | believed to be possible but has not been demonstrated at this
       | time. It's likely that these attacks will be improved upon.
       | 
       | https://www.openssh.com/releasenotes.html
        
       | sneak wrote:
       | Why are we all still running an ssh server written in an unsafe
       | language in 2024?
        
         | bauruine wrote:
         | Because nobody has written an sshd in a memory safe language
         | with the same track record of safety as OpenSSH. I personally
         | wouldn't trust a new sshd for a few years at least.
        
           | fullspectrumdev wrote:
           | There's a Rust library that implements most of the protocol,
           | but I've not found a "drop in replacement" using said library
           | yet.
           | 
            | Might actually make for a fun side project to build an SSH
           | server using that library and see how well it performs.
        
             | SoftTalker wrote:
             | Does Rust have some invulnerability to race conditions?
        
               | pornel wrote:
               | It does have invulnerability to data races. However, that
               | guarantee applies only to data types and code in Rust.
               | 
               | The dangerous interaction between signals and other
               | functions is outside of what Rust can help with.
        
               | xmodem wrote:
               | There are several crates available which implement the
               | dangerous parts of signal handling safely for you.
        
               | pornel wrote:
               | There are, but safety of their implementation is not
               | checked by the language.
               | 
               | Rust doesn't have an effect system nor a similar facility
               | to flag what code is not signal-handler-safe. A Rust
               | implementation could just as likely call something
               | incompatible.
               | 
               | Rust has many useful guarantees, and is a significant
               | improvement over C in most cases, but let's be precise
               | about what Rust can and can't do.
        
               | ekimekim wrote:
               | > Rust doesn't have an effect system nor a similar
               | facility to flag what code is not signal-handler-safe.
               | 
               | My understanding is that a sound implementation of signal
               | handling in Rust will require the signal handler to be
               | Send, requiring it only has access to shared data that is
               | Sync (safe to share between threads). I guess thread-safe
                | does not necessarily imply signal-safe, though.
               | 
               | And of course you could still call to a signal-unsafe C
               | function but that requires an unsafe block, explicitly
                | acknowledging that Rust's guarantees do not apply.
        
               | px43 wrote:
               | This bug seems to be exploitable due to a memory
                | corruption triggered by the race condition; it's the
                | memory corruption that Rust would protect against.
        
         | hosteur wrote:
         | What is the better working alternative?
        
       | maxmalkav wrote:
        | Quoting some ska tune in an SSH vulnerability report really
        | caught me off guard, but I loved it.
        
       | betaby wrote:
       | In some setups I decided to have jumphost via HAproxy ssl as
       | described there https://www.haproxy.com/blog/route-ssh-
       | connections-with-hapr... so no ssh directly exposed at all.
        
         | aflukasz wrote:
         | So this is effectively like ProxyJump, just with the jump node
         | exposed over SSL and backed by HAProxy binary instead of
         | OpenSSH?
         | 
         | What benefits do you see? I mean, you still expose some binary
         | that implements authentication and authorization using
         | cryptography.
         | 
         | I think that even RBAC scenarios described in the link above
         | should be achievable with OpenSSH, right?
        
       | ementally wrote:
       | https://dustri.org/b/notes-on-regresshion-on-musl.html
        
       | jamilbk wrote:
        | From the diff introducing the bug [1], the issue according to the
        | analysis is that the function was refactored from this:
        | 
        |       void
        |       sigdie(const char *fmt,...)
        |       {
        |       #ifdef DO_LOG_SAFE_IN_SIGHAND
        |               va_list args;
        | 
        |               va_start(args, fmt);
        |               do_log(SYSLOG_LEVEL_FATAL, fmt, args);
        |               va_end(args);
        |       #endif
        |               _exit(1);
        |       }
        | 
        | to this:
        | 
        |       void
        |       sshsigdie(const char *file, const char *func, int line,
        |           const char *fmt, ...)
        |       {
        |               va_list args;
        | 
        |               va_start(args, fmt);
        |               sshlogv(file, func, line, 0, SYSLOG_LEVEL_FATAL,
        |                   fmt, args);
        |               va_end(args);
        |               _exit(1);
        |       }
        | 
        | which lacks the #ifdef.
       | 
       | What could have prevented this? More eyes on the pull request?
       | It's wild that software nearly the entire world relies on for
       | secure access is maintained by seemingly just two people [2].
       | 
       | [1] https://github.com/openssh/openssh-
       | portable/commit/752250caa...
       | 
       | [2] https://github.com/openssh/openssh-
       | portable/graphs/contribut...
        
         | ghostpepper wrote:
         | > It's wild that software nearly the entire world relies on for
         | secure access is maintained by seemingly just two people
         | 
         | obligatory xkcd https://xkcd.com/2347/
        
         | unilynx wrote:
          | It's always easy with hindsight to tell how to prevent
          | something. In this case, a comment might have explained why
          | the #ifdef was needed, e.g.
          | 
          |       void
          |       CloseAllFromTheHardWay(int firstfd)
          |       {
          |               // Code here must be async-signal-safe!
          |               // Locks may be in an indeterminate state.
          |               struct rlimit lim;
          | 
          |               getrlimit(RLIMIT_NOFILE, &lim);
          |               for (int fd = (lim.rlim_cur == RLIM_INFINITY
          |                   ? 1024 : lim.rlim_cur); fd >= firstfd; --fd)
          |                       close(fd);
          |       }
         | 
         | Although to be honest, getrlimit isn't actually on the list
         | here: https://man7.org/linux/man-pages/man7/signal-
         | safety.7.html
         | 
         | But I hope that removing the comment or modifying code with a
         | comment about async-signal-safe might have been noticed in
         | review. The code you quoted only has the mention
         | SAFE_IN_SIGHAND to suggest that this code might need to be
          | async-signal-safe.
        
           | loeg wrote:
           | The ifdef name was a big clue! "SIGHAND" is short for signal
           | handler. Sure, there is an implicit connection here from
           | "signal handler" to "all code must be async signal safe," but
           | that association is pretty well known to most openssh authors
           | and code reviewers. Oh well, mistakes happen.
        
       | sharpshadow wrote:
       | How refreshing to read a pure txt on the phone. It displays text
       | better than a dozen websites.
        
       | thenickdude wrote:
       | There's a purported PoC exploit that delivers shellcode available
       | on GitHub, but I saw someone comment the link here, and then
       | their comment disappeared on the next refresh.
        
       | Wowfunhappy wrote:
       | They note that OpenBSD is not vulnerable. Is macOS also
       | (probably?) safe then?
        
       ___________________________________________________________________
       (page generated 2024-07-01 23:00 UTC)