[HN Gopher] This shouldn't have happened: A vulnerability postmo...
       ___________________________________________________________________
        
       This shouldn't have happened: A vulnerability postmortem
        
       Author : trulyrandom
       Score  : 465 points
       Date   : 2021-12-01 18:56 UTC (4 hours ago)
        
 (HTM) web link (googleprojectzero.blogspot.com)
 (TXT) w3m dump (googleprojectzero.blogspot.com)
        
       | yjftsjthsd-h wrote:
       | Ah; the title means "this shouldn't have happened [because the
        | vendor was in fact doing everything right]", not "this shouldn't
       | have happened [because it's so stupid]".
        
         | [deleted]
        
         | zelon88 wrote:
         | Don't think for a minute this wasn't on purpose.
         | 
         | Project Zero exists for the sole purpose of trashing and
         | defacing Google competition.
         | 
         | In the absence of actual process failure to report on they just
         | resort to a disparagingly memorable title.
        
           | ziddoap wrote:
           | > _This wasn't a process failure, the vendor did everything
           | right. Mozilla has a mature, world-class security team. They
           | pioneered bug bounties, invest in memory safety, fuzzing and
           | test coverage._
           | 
           | Yep, definitely sounds like Project Zero is trashing Mozilla
           | in this blog post.
        
             | zelon88 wrote:
              | They checked for process failure, didn't they?
             | 
             | Nobody will remember that line. Everyone is going to
             | remember the title.
        
               | cjbprime wrote:
               | There's probably a disconnect between safety culture and
               | PR culture here. It is true that this shouldn't have
               | happened. It's extremely important to work out how to
                | avoid it happening in a general sense. No one here is
                | angry at NSS or Mozilla. The safety-culture framing reads
                | to you as an attack because you're reading it through a
                | PR-culture lens.
        
               | ziddoap wrote:
               | The title doesn't name a vendor... So you'd have to read
               | the article to see the vendor, where you would presumably
               | read the line where they say they have a "world-class
               | security team" among other praise.
               | 
               | I don't like Google one bit, but my god these are some
               | extraordinary hoops people are jumping through just so
               | they can yell "Google's evil!".
        
               | zelon88 wrote:
               | I mean Google has this blog specifically to report on
               | security vulnerabilities.
               | 
               | That is literally like Volvo running a YT channel where
               | they crash test cars from other companies and assess the
               | damage to the dummy. "In the name of safety."
               | 
               | I'm not the one stretching here.
        
             | [deleted]
        
           | tehlike wrote:
           | Ex googler.
           | 
           | Project zero is amazing and well respected. I don't get why
           | the hate.
        
           | schmichael wrote:
           | This is baseless. As per the article Google Chrome used NSS
           | by default for years during which this vulnerability existed,
           | so they're admitting their own product was affected. The
           | article goes into detail about how Google's oss-fuzz project
            | failed to find this bug.
           | 
           | The author was even so kind as to boldface the first sentence
           | here saying "the vendor did everything right":
           | 
           | > This wasn't a process failure, the vendor did everything
           | right. Mozilla has a mature, world-class security team. They
           | pioneered bug bounties, invest in memory safety, fuzzing and
           | test coverage.
           | 
            | I don't know how anyone could find a more gracious way to
            | report and publish a vulnerability.
        
           | sp332 wrote:
           | > Until 2015, Google Chrome used NSS, and maintained their
           | own testsuite and fuzzing infrastructure independent of
           | Mozilla. Today, Chrome platforms use BoringSSL, but the NSS
           | port is still maintained...
           | 
           | > Did Mozilla/chrome/oss-fuzz have relevant inputs in their
           | fuzz corpus? YES.
        
       | edoceo wrote:
        | Buffer overflow is a classic, right? (cue Rust enthusiasts)
        
         | [deleted]
        
         | petters wrote:
         | They are not wrong
        
         | howdydoo wrote:
         | "This shouldn't have happened," says user of the only language
         | where this regularly happens.
         | 
         | https://www.theonion.com/no-way-to-prevent-this-says-only-na...
        
           | fulafel wrote:
           | Yep, Rust at best eliminates some already weak excuses to
            | keep doing security-critical parsing in the chainsaw-juggling
            | tradition, when we've known better for 20+ years.
        
           | tialaramex wrote:
           | I've been meaning for some time to write one of these (with
           | auto-generation whereas I believe The Onion actually has
           | staff write a new one each time they run this article) for
           | password database loss. [Because if you use Security Keys
           | this entire problem dissipates. Your "password database" is
           | just a bunch of public data, stealing it is maybe mildly
           | embarrassing but has no impact on your users and is of no
           | value to the "thieves"]
           | 
           | But a Rust one for memory unsafety would be good too.
        
         | 2OEH8eoCRo0 wrote:
          | It's wild how ahead of its time Ada was.
        
       | JulianMorrison wrote:
       | Why isn't static analysis taint-checking the boundedness of data?
       | Unbounded data should be flagged as unbounded and that flag
       | should propagate through checking until it can be proven to be
       | bounded.
        
       | mjw1007 wrote:
       | I think the main surprising thing here is that people are putting
       | smallish arbitrary limits on the sizes of inputs that they let
       | their fuzzer generate.
       | 
       | With the benefit of a little hindsight, that does feel rather
       | like saying "please try not to find any problems involving
       | overflows".
        
       | galadran wrote:
       | The disclosure and test cases:
       | https://www.openwall.com/lists/oss-security/2021/12/01/4
        
       | fulafel wrote:
        | Good lesson also in how much our security relies on these largish
        | ongoing fuzzing efforts, and it makes you wonder what's going on
        | in even larger fuzzing efforts that are less public.
        
       | DantesKite wrote:
       | This sounds like a very good argument for switching over to Rust.
        
         | donkarma wrote:
         | More than just one memory safe language
        
           | paavohtl wrote:
           | Not very many with 1) no garbage collection and 2) any
           | meaningful open source adoption.
        
             | fulafel wrote:
             | This component (NSS) would work fine with GC.
        
           | graton wrote:
           | But Mozilla is already using Rust. They are a major proponent
           | of Rust, so if they did switch to a memory safe language it
           | would seem like Rust would be the most likely choice for
           | them.
        
         | criddell wrote:
         | Rust has been around for more than a decade now and is still
         | very niche. Evidently, it isn't a good enough argument.
        
           | gostsamo wrote:
            | Actually, we are talking about the creators of Rust here,
            | the same people who were championing the idea of rewriting
            | the entire browser in it. The more plausible reason might be
            | that the rewrite to Rust hasn't reached this component yet.
        
             | criddell wrote:
             | Yeah, that could be. I was speaking about the wider
             | development ecosystem. Rust is doing well in a few places
             | and that's enough for it to survive and exist long term, or
             | at least as long as Mozilla is relevant.
        
               | Arnavion wrote:
               | Mozilla's relevance hasn't mattered to Rust for a while.
        
               | criddell wrote:
               | So if Mozilla decided to step back from Rust it wouldn't
               | be a major blow to Rust? I was under the impression that
               | they were still important.
        
               | varajelle wrote:
               | They already stepped back a year ago. They fired most of
               | the rust team.
               | https://blog.mozilla.org/en/mozilla/changing-world-
               | changing-...
        
           | lucb1e wrote:
           | I guess the ecosystem that might make a language attractive
           | was not built overnight. I'm not sure looking at the
           | popularity since an initial release is the best way to
           | measure how good a language is for a particular purpose.
        
           | Gigachad wrote:
           | A decade seems like an appropriate amount of time for a
           | language to mature and take off. Ruby was very niche for 10
           | years until rails came out. Rust now seems to be spreading
           | pretty steadily and smaller companies are trying it out.
        
           | sophacles wrote:
           | I don't understand your argument.
           | 
           | In 1982 C was a decade old and still very niche.
           | 
           | In 1992 C++ was a decade old and still very niche.
           | 
            | In 2002 Python was about a decade old and still very niche.
           | 
           | In 2005 Javascript was a decade old and still very niche
           | (only used on some webpages, the web was usable without
           | javascript for the most part).
           | 
           | I think it's safe to say that all of them went on to enjoy
           | quite a bit of success/widespread use.
           | 
           | Some languages take off really fast and go strong for a long
           | time (php and java come to mind).
           | 
           | Some languages take off really fast and disappear just as
           | fast (scala, clojure).
           | 
           | Some languages get big and have a long tail, fading into
           | obscurity (tcl, perl).
           | 
           | Some languages go through cycles of ascendancy and
           | descendancy (FP languages come to mind for that).
           | 
            | Dismissing a language because of its adoption rate seems
           | kinda silly - no one says "don't use python because it wasn't
           | really popular til it had existed for over a decade".
        
             | ebruchez wrote:
             | > Some languages take off really fast and disappear just as
             | fast (scala, clojure).
             | 
             | I don't know about Clojure but I don't think that Scala has
             | "disappeared". The hype has subsided, certainly. I for one
             | certainly hope that one of the best programming languages
             | in existence doesn't disappear.
        
             | criddell wrote:
              | Languages take off broadly because there's something
              | compelling about them (and it isn't necessarily a technical
              | reason). One of the most compelling reasons for adopting
              | Rust is memory safety, and that may not be terribly
              | compelling.
              | 
              | The comment about it being over a decade old was mostly to
              | say that it isn't some new thing that people are unsure
              | where to use. It's mature and has been successful in some
              | niches (and keep in mind that niches can be large).
        
         | rfoo wrote:
          | Genuine question: how do you switch code written in 2003 to
          | Rust?
        
           | [deleted]
        
           | masklinn wrote:
           | librsvg 1.0 was in 2001. Federico Mena-Quintero started
            | switching it to Rust in 2017 (well, technically October
            | 2016); the rewrite of the core was finished in early 2019,
            | though the test-suite conversion (aside from the C API tests)
            | was only finished in late 2020.
           | 
           | So... carefully and slowly.
        
             | rfoo wrote:
              | Thanks, that's exactly what I was looking for.
              | 
              | Before this I'd never heard of a project that successfully
              | did a C-to-Rust transition, keeping its C API intact so it
              | could be used as a drop-in replacement. Glad to hear that
              | there are already some success stories.
        
           | nix23 wrote:
           | By work? It's pure risk-management, do you need it? Is it
           | worth the potential risk/work?
        
           | howdydoo wrote:
           | Slowly and steadily.
        
           | varajelle wrote:
           | Maybe this is not so much about switching individual existing
           | projects to Rust, but about switching the "industry". Some
           | new projects are still written in C.
        
             | rfoo wrote:
             | Yeah, this makes sense. I'm optimistic and would like to
             | say we're already half-way there. From my PoV very few new
             | projects were written in C in recent years, except those
             | inherently-C (their purpose was to call some other
             | libraries written in C, or libc) and/or embedded (code size
             | and portability requirements).
        
           | jandrese wrote:
            | The same way you would move code written in 2021 over to
            | Rust: by rewriting it from the ground up.
            | 
            | Auto-translation won't work because Rust won't allow you to
            | build it in the same way you would have built it in C. It
            | requires a full redesign of the code to follow the Rust
            | development model.
        
             | masklinn wrote:
             | > Auto-translation won't work because Rust won't allow you
             | to build it in the same way you would have built it in C.
             | 
             | That is not entirely true, but if you translate the C code
             | to Rust, you get C code, in Rust, with similar issues (or
             | possibly worse).
             | 
             | Of course the purpose would be to clean it up from there
             | on, but it's unclear whether that's a better path than
             | doing the conversion piecemeal by hand. The C2Rust people
             | certainly seem to think so, but I don't know if there are
             | good "client stories" about that path so far, whereas the
             | manual approach does have some (e.g. librsvg), though it's
             | not for the faint of heart.
        
               | lucb1e wrote:
               | > That is not entirely true, if you translate the C code
               | to Rust, you get C code, in Rust, with similar issues (or
               | possibly worse).
               | 
               | thus it was basically true after all? Like, sure, Rust is
                | Turing-complete so you can simulate whatever C did and
                | thus _technically_ you can translate anything that C can
                | do into Rust. But if it doesn't fix any problems, then
               | have you really translated it into Rust?
        
               | masklinn wrote:
               | > thus it was basically true after all?
               | 
               | No?
               | 
               | > Like, sure, Rust is turing-complete so you can simulate
               | whatever C
               | 
               | It's not simulating anything, and has nothing to do with
               | turing completeness.
               | 
               | > But if it doesn't fix any problems, then have you
               | really translated it into Rust?
               | 
               | Yeees? Unless your definition of "translated" has nothing
               | to do with the word or the concept.
               | 
               | You end up with a project full of rust code which builds
               | using rust's toolchains. That sounds like a translation
               | to me.
        
           | [deleted]
        
       | oleganza wrote:
        | Usually people say "oh, it's just another typical failure of
        | writing in memory-unsafe C", but here's a slightly different
        | angle: why is this common error not happening under a single
        | abstraction like a "data structure that knows its size"? If C
        | allowed for such things, then 100,000 programs would be using
        | the same 5-10 standard structures, where the copy-and-overflow
        | bug would have been fixed already.
        | 
        | Languages like Rust, of course, provide basic memory safety out
        | of the box, but most importantly they also provide the means to
        | package unsafe code under a safe API and debug it once and for
        | all. And an ecosystem of easy-to-use packages helps people reuse
        | good code instead of reinventing their own binary buffers every
        | single damn time, as is usually done in C.
        | 
        | So maybe it's not the unsafeness itself, but rather the inability
        | to build powerful reusable abstractions, that plagues C? Everyone
        | has to step on the same rake again and again and again.
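        | 
        | For illustration, a minimal sketch of such a "buffer that knows
        | its size" in plain C (hypothetical names, not any real NSS API):
        | 
        |   #include <stdbool.h>
        |   #include <string.h>
        | 
        |   typedef struct {
        |       unsigned char *data;
        |       size_t len;  /* bytes currently used */
        |       size_t cap;  /* bytes allocated */
        |   } SizedBuf;
        | 
        |   /* Copy n bytes into dst, refusing instead of overflowing. */
        |   static bool sizedbuf_copy(SizedBuf *dst, const void *src,
        |                             size_t n) {
        |       if (n > dst->cap)
        |           return false;  /* caller must handle the error */
        |       memcpy(dst->data, src, n);
        |       dst->len = n;
        |       return true;
        |   }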
        
         | spullara wrote:
         | But performance! Rust and other languages with bounds checking
         | go out of their way to not do it once it is proven that they
         | don't need to. It would be hard to do that as a data structure.
        
           | oleganza wrote:
           | Well, here comes the type system, so your fancy data
           | structure has zero cost. Rust recently got more support for
           | const generics, so you could encode size bounds right in the
           | types and skip unnecessary checks.
        
       | thrdbndndn wrote:
       | Kinda tangent, but when I was browsing NSS' repo (
       | https://hg.mozilla.org/projects/nss or mirror:
       | https://github.com/nss-dev/nss/commits/master ) I found that the
       | latest commit has a much older date (7 weeks ago) than the
       | following ones. Why is that? (Sorry I don't know much about git
       | other than push/pull.)
        
         | spullara wrote:
         | Committed locally long ago and recently pushed?
        
         | er4hn wrote:
         | The date of the commit is metadata which can be pushed later
         | than it was made or even altered.
         | 
         | If you look around you can find cute tools to alter your repo
         | history and have the github commit history graph act as a
         | pixelated billboard.
        
       | rurban wrote:
        | Now take a deep look at Annex K of the C standard: the bounds-
        | checking interfaces. Using these would definitely have avoided
        | the problem, since memcpy_s requires the size of the destination
        | to be passed. The applause goes to the glibc maintainers, who
        | still think they are above that.
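        | 
        | A rough sketch of the difference, assuming an implementation
        | that actually ships Annex K (glibc does not, as noted above):
        | 
        |   #define __STDC_WANT_LIB_EXT1__ 1
        |   #include <string.h>
        | 
        |   /* Hypothetical helper: copy an untrusted signature into a
        |      fixed-size buffer without being able to overflow it. */
        |   static int copy_sig(unsigned char buf[2048],
        |                       const unsigned char *sig, size_t n) {
        |       /* memcpy(buf, sig, n) would trust n blindly; memcpy_s
        |          takes the destination size and fails instead of
        |          writing past the buffer when n > 2048. */
        |       return memcpy_s(buf, 2048, sig, n) == 0 ? 0 : -1;
        |   }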
        
       | albntomat0 wrote:
        | Since this comes up whenever there is a Project Zero article,
        | here is a summary I made in summer 2020 on the distribution of
        | the bugs they find/report. All counts are rough numbers.
        | 
        | Project Zero posts:
       | 
       | Google: 24
       | 
       | Apple: 28
       | 
       | Microsoft: 36
       | 
       | I was curious, so I poked around the project zero bug tracker to
       | try to find ground truth about their bug reporting:
        | https://bugs.chromium.org/p/project-zero/issues/list
        | 
        | For all issues, including closed:
       | 
       | product=Android returns 81 results
       | 
       | product=iOS returns 58
       | 
       | vendor=Apple returns 380
       | 
        | vendor=Google returns 145 (bugs in Samsung's Android kernel,
        | etc. are tracked separately)
        | 
        | vendor=Linux returns 54
       | 
       | To be fair, a huge number of things make this not an even
       | comparison, including the underlying bug rate, different products
       | and downstream Android vendors being tracked separately. Also, #
       | bugs found != which ones they choose to write about.
        
       | Jyaif wrote:
       | For at least a decade, and in the teams I was in, C"++" written
       | like this would not pass code review precisely because it is
       | incredibly brittle.
        
         | tialaramex wrote:
         | Uhuh. On cue a C++ programmer arrives to tell us that a _true_
          | Scotsman wouldn't have introduced this bug... Where can we see
         | "at least a decade" of this code you and your teams wrote?
        
       | [deleted]
        
       | nielsole wrote:
       | > Issue #2 Arbitrary size limits.[...]
       | 
       | > A reasonable choice might be 2^24-1 bytes, the largest possible
       | certificate
       | 
       | How does one treat untrusted input whose length might exceed
       | available memory? I am working on a patch for a jwks
       | implementation which does not even have upper bounds in the spec.
       | Accepting any valid input until OOMing seems like a suboptimal
       | solution.
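        | 
        | One common pattern is to pick an explicit cap and reject anything
        | larger before allocating. A sketch (the 1 MiB cap here is a
        | made-up assumption, not something from the JWKS spec):
        | 
        |   #include <stdint.h>
        |   #include <stdlib.h>
        | 
        |   #define MAX_INPUT_LEN (1u << 20)  /* assumed application cap */
        | 
        |   void *alloc_for_input(uint32_t claimed_len) {
        |       /* Refuse oversized inputs up front instead of trying to
        |          malloc whatever length the untrusted input claims. */
        |       if (claimed_len > MAX_INPUT_LEN)
        |           return NULL;  /* reject, don't OOM */
        |       return malloc(claimed_len);
        |   }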
        
         | duped wrote:
         | In a sense, reducing the error case to the physical limitations
         | of a device is a perfectly "optimal" solution
        
       | lucb1e wrote:
       | A title that actually describes the post, mostly paraphrasing the
       | first paragraph:
       | 
        |  _Reasons why this buffer overflow wasn't caught earlier despite
       | doing all the right things_
       | 
       | And then to give those reasons:
       | 
       | - "each component is fuzzed independently" ... "This fuzzer might
       | have produced a SECKEYPublicKey that could have reached the
       | vulnerable code, but as the result was never used to verify a
       | signature, the bug could never be discovered."
       | 
       | - "There is an arbitrary limit of 10000 bytes placed on fuzzed
       | input. There is no such limit within NSS; many structures can
       | exceed this size. This vulnerability demonstrates that errors
       | happen at extremes"
       | 
       | - "combined [fuzzer] coverage metrics [...]. This data proved
       | misleading, as the vulnerable code is fuzzed extensively but by
       | fuzzers that could not possibly generate a relevant input."
       | 
       | The conclusion is, of course, to fix those problems if your code
       | base also has them, but also "even extremely well-maintained
       | C/C++ can have fatal, trivial mistakes".
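        | 
        | To make the arbitrary-limit point concrete, this is roughly what
        | such a cap looks like in a libFuzzer harness (a generic sketch,
        | not NSS's actual harness):
        | 
        |   #include <stddef.h>
        |   #include <stdint.h>
        | 
        |   /* Standard libFuzzer entry point. A check like this (or a
        |      -max_len=10000 flag) silently drops larger inputs, so code
        |      paths that only misbehave on big structures are never
        |      exercised. */
        |   int LLVMFuzzerTestOneInput(const uint8_t *data,
        |                              size_t size) {
        |       if (size > 10000)
        |           return 0;
        |       /* ... hand (data, size) to the parser under test ... */
        |       return 0;
        |   }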
        
         | jandrese wrote:
         | > - "There is an arbitrary limit of 10000 bytes placed on
         | fuzzed input. There is no such limit within NSS; many
         | structures can exceed this size. This vulnerability
         | demonstrates that errors happen at extremes"
         | 
         | This is the one that seemed short sighted to me. It's a
         | completely arbitrary (and small!) limit that blinded the fuzzer
         | to this very modest sized buffer overflow.
        
           | js2 wrote:
            | The buffer holds 2K, so this limit alone (which exceeds the
            | buffer size by roughly 8K) didn't blind the fuzzer. It's not
            | clear a larger input would've caught anything, due to the
            | other "what went wrong" items, specifically "each component
            | is fuzzed independently."
        
           | a-priori wrote:
           | The problem is that the search space grows (exponentially?)
           | as you increase the fuzzer's limit. So there's a cost, and
           | likely diminishing returns, to raising that limit.
        
             | UncleMeat wrote:
             | Coverage-guided fuzzing dramatically mitigates the
             | exponential nature of the search space. It used to be that
             | searching for magic bits was impossible with fuzzing but
             | now it is nearly trivial.
        
             | jandrese wrote:
             | Are they checking every possible overflow up to the max?
              | Like no overflow at 7377 bytes, let's try 7378...
             | 
             | While I can see targeting near natural boundaries (1025
             | bytes for example), you should be able to skip over most of
             | the search space and verify that it doesn't blow up on
             | enormous values like 16777216 bytes.
        
               | lucb1e wrote:
               | It's not that simple though. Single-variable integer
               | overflows can be checked like that, but when the critical
               | byte in a buffer might be at positions 1 through
               | {bufferlength}, you have to do a shotgun approach and see
               | if anything sticks, and at some point the number of
               | possible combinations grows too big even for that.
               | 
               | I'm not an expert on fuzzing myself, but generally I do
               | see the point of having a limit here. Why, then, that
               | limit was not chosen to be the max size for each of the
               | length-checked inputs, I don't know. That does seem a bit
               | more obvious, but also I just read this article so I
               | can't prove that I wouldn't have made the same mistake.
        
           | jjoonathan wrote:
           | Oh, the fools! If only they'd built it with 6001 hulls! When
           | will they learn?
        
             | 6chars wrote:
             | Thank you! So many hindsight fortune tellers here.
        
         | [deleted]
        
         | rfoo wrote:
          | What's special here is that the bug is a memory corruption, and
          | memory-corruption bugs in such libraries are usually instantly
          | security bugs.
          | 
          | Otherwise, the same story could be told as a generic software
          | testing joke: "unit tests are short-sighted and coverage lies",
          | i.e. an "extremely well-maintained codebase, with extensive
          | unit tests, >98% test coverage and constantly scanned by
          | all-the-static-analyzers-you-can-come-up-with" can have fatal,
          | trivial bugs.
        
           | bluejekyll wrote:
            | Ah, brings to mind one of my favorite Dijkstra quotes,
           | "Program testing can be used to show the presence of bugs,
           | but never to show their absence!"
           | 
           | I've never understood that to mean that he wasn't in favor of
           | automated testing, only that it's got its limits. In this
           | case, they now know a test case that was missing.
        
           | lucb1e wrote:
           | > What's special here is the bug is a memory corruption, and
           | memory corruption bugs in such libraries are usually
           | instantly security bugs.
           | 
           | Is that special? Are there buffer overflow bugs that are not
           | security bugs? It could be just my bubble as a security
           | consultant, since (to me) "buffer overflow" assumes remote
           | code execution is a given. It's not my area of expertise,
           | though, so perhaps indeed not all reachable buffer overflows
           | are security issues. (Presuming no mitigations, of course,
           | since those are separate from the bug itself.)
        
             | rfoo wrote:
             | Sorry, I mean the special part is "the bug itself is a
             | memory corruption". The second sentence is a quick
             | explanation for those not in our bubble.
        
             | PeterisP wrote:
             | As a crude example, there sometimes are off-by-one bugs
             | which allow the buffer to overflow by one byte, and that
              | single overflowing byte is always 0 (the last byte in a
             | zero-terminated string), and it overwrites data in a
             | variable that doesn't affect anything meaningful, giving
             | you a buffer overflow with no security impact.
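              | 
              | A contrived sketch of that kind of harmless overflow
              | (illustrative only):
              | 
              |   #include <string.h>
              | 
              |   struct record {
              |       char name[15];
              |       int  flags;  /* on common ABIs int is 4-aligned,
              |                       so a padding byte follows name */
              |   };
              | 
              |   void set_name(struct record *r, const char *s) {
              |       /* Off-by-one: '<=' should be '<'. When strlen(s)
              |          is exactly 15, the terminating '\0' is written
              |          one byte past name[], but it typically lands in
              |          the padding byte, so nothing observable breaks. */
              |       if (strlen(s) <= sizeof(r->name))
              |           strcpy(r->name, s);
              |   }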
        
           | Lhiw wrote:
           | Unit tests aren't really for bug catching, they're to ensure
           | you haven't changed behavior when you don't expect to.
           | 
           | They enable refactoring code in ways not possible without
           | them.
        
         | Veserv wrote:
         | How do you know that it was actually "extremely well-
         | maintained"? Everybody thought OpenSSL was well-maintained
         | since it was used as critical infrastructure by multi-billion
         | dollar megacorporations, but it was actually maintained by two
         | full-time employees and a few part-time volunteers with maybe a
         | quick once-over before a commit if they were lucky. How about
         | sudo, a blindly trusted extremely sensitive program [1], which
         | is maintained by basically one person who has nearly
         | 3,000,000(!) changes[2] over 30 or so years?
         | 
         | Assuming that something is well-maintained because it is
         | important is pure wishful thinking. Absent a specific detailed
         | high quality process, or an audit that they conform to a well-
         | established process that has demonstrably produced objectively
         | high-quality output in a large percentage of audited
         | implementations of that process (thus establishing nearly every
         | instance of the audited process -> high quality output) all
         | evidence indicates that you should assume that these code bases
         | are poorly maintained until proven otherwise[3]. And, even the
         | ones that are demonstrably maintained usually use very low
         | quality processes as demonstrated by the fact that almost
         | nobody working on those projects would be comfortable using
         | their processes on safety-critical systems [4] which is the
         | minimum bar for a high quality process (note the bar is
         | "believe it is okay for safety-critical"). In fact, most would
         | be terrified of the thought and comfort themselves knowing that
         | their systems are not being used in safety-critical systems
         | because they are absolutely not taking adequate precautions,
         | which is a completely reasonable and moral choice of action as
         | they are not designing for those requirements, so it is totally
          | reasonable to use different standards on less important things.
         | 
         | [1] https://news.ycombinator.com/item?id=25919235
         | 
         | [2] https://github.com/sudo-project/sudo/graphs/contributors
         | 
         | [3] https://xkcd.com/2347/
         | 
         | [4] https://xkcd.com/2030/
        
           | jcranmer wrote:
            | OpenSSL is a project which is treated as a "somebody else's
            | problem" dependency by everybody, and to the extent that
            | anybody cares about TLS support, it's basically an "open up
            | a TLS socket, and what do you mean it's more complicated than
            | that" situation.
            | 
            | By contrast, NSS is maintained by Mozilla as part of Firefox,
            | and, furthermore, its level of concern is deep into the "we
            | don't want to enable certain cipher suites, and we have very
            | exacting certificate validation policies that we are part of
            | the effort in _defining_"--that is to say, NSS _isn't_ a
            | "somebody else's problem" dependency for Mozilla but a very
            | "Mozilla's problem" dependency.
           | 
           | That said, this is CERT_VerifyCertificate, not mozilla::pkix,
           | and since this is not used in Firefox's implementation of
           | certificate validation, I would expect that this _particular_
           | code in the library would be less well-maintained than other
            | parts. But the whole library itself wouldn't be in the same
           | camp as OpenSSL.
        
           | antod wrote:
           | _> Everybody thought OpenSSL was well-maintained since it was
           | used as critical infrastructure by multi-billion dollar
           | megacorporations_
           | 
           | I wasn't under the impression anybody who knew the project
           | ever really thought that. Some other people may have assumed
           | that as a default if they hadn't looked into it.
           | 
           | This article spells out a whole bunch of reasoning why this
           | particular library was well maintained though. There's a
           | difference between reasoning based on evidence and
           | assumptions.
        
           | tptacek wrote:
           | I don't know anybody who thought OpenSSL was well-maintained
           | in and before the Heartbleed era (it's a fork of SSLeay,
           | which was Eric Young's personal project). Post-Heartbleed ---
            | a decade ago, longer than the time lapse between SSLeay and
           | OpenSSL --- maintenance of OpenSSL has improved dramatically.
        
           | nullityrofl wrote:
           | Tavis explains clearly why he thinks it's well maintained in
            | the post, complete with links to source code. To
           | paraphrase:
           | 
           | [...] NSS was one of the very first projects included with
           | oss-fuzz [...]
           | 
           | [...] Mozilla has a mature, world-class security team. They
           | pioneered bug bounties, invest in memory safety, fuzzing and
           | test coverage. [... all links to evidence ...]
           | 
           | Did Mozilla have good test coverage for the vulnerable areas?
           | YES.
           | 
           | Did Mozilla/chrome/oss-fuzz have relevant inputs in their
           | fuzz corpus? YES.
           | 
           | Is there a mutator capable of extending ASN1_ITEMs? YES.
           | 
           | I don't think at any point anyone assumed anything.
        
       | oxfeed65261 wrote:
       | I don't understand why the "lessons learned" doesn't recommend
       | always* passing the destination buffer size (using memcpy_s or
       | your own wrapper). It has been a long time since I wrote C++, but
       | when I did this would have been instantly rejected in code
       | review.
       | 
       | *...with, I suppose, potential exceptions in performance-critical
       | code when you control and trust the input; I don't believe that
       | this code qualifies on either count.
        
         | AnimalMuppet wrote:
         | Counterexample: msgrcv(). This expects you to not be passing
         | raw buffers, but messages with a particular structure: a long
         | mtype, to specify what type of message it is, and then a char
         | (byte, since this is C) array that is the buffer that contains
         | the rest of the message. You pass these structures to msgsnd()
          | and msgrcv(), along with a size. But the size is the size of
          | the buffer component of the structure, not the size of the
          | structure as a whole. If you pass the size of the structure, it
          | will read sizeof(long) bytes more than your buffer can hold.
          | Been bitten by that...
         | 
         | So, just passing the size of the destination is something that
         | you can still get wrong, in the case of data more complicated
         | than just a single buffer.
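          | 
          | A sketch of what that looks like (hypothetical struct; the
          | important part is the third argument to msgrcv):
          | 
          |   #include <sys/msg.h>
          |   #include <sys/types.h>
          | 
          |   struct my_msg {
          |       long mtype;        /* required first member */
          |       char mtext[128];   /* payload buffer */
          |   };
          | 
          |   ssize_t receive_one(int msqid, struct my_msg *m) {
          |       /* msgsz is the size of mtext only, NOT sizeof(*m);
          |          passing sizeof(*m) would let the kernel hand back
          |          sizeof(long) more bytes than mtext can hold. */
          |       return msgrcv(msqid, m, sizeof(m->mtext), 0, 0);
          |   }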
        
         | rfoo wrote:
         | That's because these are "lessons learned" for how to catch
         | these bugs, instead of "how to write more secure code".
         | 
         | Because you can't.
        
           | jmull wrote:
           | You catch the bug by flagging the use of memcpy instead of
           | something that takes the dest buffer size (like memcpy_s or
           | whatever).
           | 
           | It seems to me linters have been flagging this kind of thing
           | since forever. This code is using a wrapper, "PORT_memcpy",
           | so a default ruleset isn't going to flag it.
           | 
            | So here I guess no one noticed PORT_memcpy == memcpy (or
            | maybe someone noticed but didn't take the initiative to add a
            | lint rule or a deprecation entry, or at least to create an
            | issue to port the existing code).
        
       | InfiniteRand wrote:
       | Always place char arrays at the end of a struct - rule of thumb I
       | heard somewhere, maybe from CERT-C
       | 
       | That way if you do have memory corruption, the memory following
       | your buffer is less predictable.
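        | 
        | An illustrative sketch of the rule of thumb (hypothetical
        | struct):
        | 
        |   /* Risky layout: overflowing buf tramples the callback
        |      pointer sitting right after it in the same struct. */
        |   struct risky { char buf[64]; void (*cb)(void); };
        | 
        |   /* Rule of thumb applied: an overflow of buf runs off the
        |      end of the struct into whatever happens to follow, which
        |      is harder for an attacker to rely on. */
        |   struct safer { void (*cb)(void); char buf[64]; };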
        
       | aidenn0 wrote:
       | I'm really curious why static analysis didn't catch this. If they
       | weren't doing static analysis, I would probably have asserted
       | that static analysis would catch this fairly easily.
       | 
       | My guess would be too many (false?) positives on bounds-checking
       | causing them to disable that check, but I can't be sure.
        
       | Stampo00 wrote:
       | Why don't we have linters that would complain about copying
       | memory without bounds checking?
        
       | nomoreusernames wrote:
       | so who wants to tell linus to rewrite everything in rust?
        
       | Veserv wrote:
       | This absolutely should have happened. "Mature, world-class
       | security teams" are, as a general rule, objectively terrible at
       | creating products that meet any meaningful, objective definition
       | of security.
       | 
       | Remember a few years ago when Apple, the world's most valuable
       | company, released a version of macOS that not only let you log
       | into root with no password(!), but actually helpfully created a
       | root account with the password supplied for the first person who
       | tried to login to root[1]? Zerodium can purchase a vulnerability
       | of similar severity to the one described in the article in
       | Mozilla's premier product, Firefox, which undoubtedly has the
       | best engineers at Mozilla and has had hundreds of millions if not
       | billions spent on its development for $100k [2]. Even if we
       | lowball the consulting rates for a skilled engineer at ~$500k,
       | that means that we should expect a single, skilled engineer to,
       | on average, find such a vulnerability with ~2 months of fulltime
       | work otherwise the supply would have dried up.
       | 
       | By no objective metric does taking 2 months of a single
       | engineer's time to completely defeat the security of a widely
       | used product constitute a meaningful, objective level of
       | security. Even a two order of magnitude underestimation,
       | literally 100x more than needed, still puts it in the range of a
       | small team working for a year which still does not qualify as
       | meaningful security. And, we can verify that this assessment is
       | fairly consistent with the truth because we can ask basically any
       | security professional if they believe a single person or a small
       | team can completely breach their systems and they will invariably
       | be scared shitless by the thought.
       | 
       | The processes employed by the large, public, commercial tech
       | companies that are viewed as leaders in security systemically
       | produce software with security that is not only imperfect, it is
       | not even good; it is terrible and is completely inadequate for
       | any purpose where even just small scale criminal operations can
       | be expected as seen by the rash of modern ransomware. Even the
       | engineers who made these systems openly admit to this state of
       | affairs [3] and many will even claim that it can not be made
       | materially better. If the people making it are saying it is bad
       | as a general rule, you should run away, fast.
       | 
       | To achieve adequate protection against threat actors who actually
       | act against these products would require not mere 100%
       | improvements, it would require 10,000% or even 100,000%
       | improvements in their processes. To give some perspective on
        | that, people who tout Rust say that if we switch to it we will
        | remove the memory-safety defects, which are 70% of all security
        | defects [4]. If we use quantity of security defects as a proxy
        | for security (which is an okay proxy to first order), that would
       | require 6 successive switches to technologies each as much better
       | than the last as people who like Rust say Rust is better than
       | C++. That is how far away it all is, the security leaders do not
       | need just a silver bullet, they need a whole silver revolver.
       | 
       | In summary, a vulnerability like this is totally expected and not
       | because they failed to have "world-class security" but because
       | that is what "world-class security" actually means.
       | 
       | [1] https://arstechnica.com/information-
       | technology/2017/11/macos...
       | 
       | [2] https://zerodium.com/program.html (ZERODIUM Payouts for
       | Desktops/Servers:Firefox RCE+LPE)
       | 
       | [3] https://xkcd.com/2030/
       | 
       | [4] https://www.zdnet.com/article/microsoft-70-percent-of-all-
       | se...
        
         | 2OEH8eoCRo0 wrote:
         | I mostly agree with you. I think it's going to take some rough
         | years or decades before we re-architect all the things we have
         | grown accustomed to.
         | 
         | https://dwheeler.com/essays/apple-goto-fail.html
        
       | _wldu wrote:
       | The sooner we can rewrite our programs in Go and Rust, the more
       | secure we will be. Our shells, coreutils, mail readers and web
       | browsers have to be written in safer languages.
        
         | throwaway894345 wrote:
         | Also, far, _far_ easier to build than all of these C programs
         | with their own bespoke build systems and implicit dependency
         | management. The more of the software stack that can be built by
         | mere mortals, the better.
        
           | lucb1e wrote:
           | Honestly I don't like the build process of most
           | go/rust/javascript software _any_ better than C++.
           | 
           | It's harder to find the dependencies for building the latter,
           | but the former has its own version of dependency hell. I have
           | real trouble building both types of projects, though
           | admittedly (especially when the building instructions don't
           | work when followed to the letter) C++ a bit more than the
           | strategy of "everything is just pulled from github, you only
           | have to make sure you've got gigabytes of free space in
           | ~/.cache/, a build environment that was released in the past
           | four to nine days, and have appropriate isolation or simply
           | not care about potentially vulnerable or compromised code
           | being run on your system".
           | 
           | On a rare occasion, I will find a nice and small program
           | using only standard libraries that compiles simply with `cc
           | my.c && ./a.out` or runs simply with `python3 my.py`,
           | demonstrating it doesn't depend on the language to have an
           | easy time building it, but in both categories it's the
           | exception for some reason. I see so much software that needs
           | only standard libraries and runs on literally any python
           | version released in the last decade, but to run it you have
           | to setup some environment or globally install it with
           | setuptools or something.
        
             | throwaway894345 wrote:
             | > everything is just pulled from github
             | 
             | I hear this a lot, but I can't divine any substance from
             | it. Why is GitHub a less-secure repository medium than
             | SourceForge + random website downloads + various Linux
             | package managers? Maybe this is a red herring and your real
             | complaint is that the Rust ecosystem is less secure than
             | the C/++ ecosystem?
             | 
             | > you only have to make sure you've got gigabytes of free
             | space in ~/.cache/
             | 
             | I cleared my $GOCACHE relatively recently (I thought maybe
             | I had a cache issue, but I was mistaken), but it's
             | currently at 75M while my .cargo directory weighs 704M. If
             | these ever really got too big I would just `go clean
             | -cache` and move on with life. If this is one of the
             | biggest issues with Go/Rust/etc then I think you're arguing
             | my point for me.
             | 
             | > a build environment that was released in the past four to
             | nine days
             | 
             | What does this even mean? You can compile Go or Rust
             | programs on any Linux machine with the build tools. On the
             | contrary, C/C++ dependencies are _very tightly coupled_ to
             | the build environment.
             | 
             | > have appropriate isolation or simply not care about
             | potentially vulnerable or compromised code being run on
             | your system
             | 
             | Not sure about Rust programs, but Go programs absolutely
             | don't run arbitrary code at compile/install time. C
             | programs on the other hand absolutely _do_ run arbitrary
             | code (e.g., CMake scripts, Makefiles, random bash scripts,
             | etc).
             | 
             | > I see so much software that needs only standard libraries
             | and runs on literally any python version released in the
             | last decade, but to run it you have to setup some
             | environment or globally install it with setuptools or
             | something.
             | 
             | Yeah, Python package management is a trashfire; however,
             | this is _entirely_ because it is so tightly coupled to C
             | dependencies (many Python libraries are thin wrappers
             | around various C programs, each with their own bespoke
             | build system). Python package management tries to paper
             | over the universe of C packages and it kind of works as
              | long as you're on a handful of well-supported
             | distributions and your dependencies have been well-vetted
             | and well-maintained.
        
               | dmz73 wrote:
                | I don't think the exact URL is the problem; it is the
                | fact that it is so easy to include dependencies from an
                | external repository that is the problem.
               | 
               | In Rust every non-trivial library pulls in 10s or even
               | 100s of dependencies.
               | 
               | I don't think anyone can expect that all of these
               | libraries are of good quality but how would one even try
               | to verify that? And you have to verify it every time you
               | update your project.
               | 
                | Then there is the issue of licensing - how to verify that
               | I am not using some library in violation of its licence
               | and what happens if the licence changes down the road and
               | I don't notice it because I am implicitly using 500
               | dependencies due to my 3 main libraries?
               | 
               | Rust and Go have solved memory safety compared to C and
               | C++ but have introduced dependency hell of yet unknown
               | proportions.
               | 
               | Python and other dynamically typed languages are in a
               | league of their own in that on top of the dependency hell
                | they also do not provide compiler checks that would let
                | the user see the problem before the exact conditions occur
               | at runtime. They are good for scripting but people keep
                | pumping out full applications, and to be honest there is
                | not much difference between a giant Python application, a
                | giant maze of Excel VBA and a giant Node.js heap of code.
               | Of those, Excel VBA is most likely to work for 5 years
               | and across 5 versions of the product yet it is also the
               | most likely one to receive the most negative comments.
        
               | throwaway894345 wrote:
               | > I don't think the exact URL is the problem, it is the
               | fact that it is so easy to include dependencies from
               | external repository that is the problem. In Rust every
               | non-trivial library pulls in 10s or even 100s of
               | dependencies.
               | 
               | But it's also quite a lot easier to _audit_ those
               | dependencies, even automatically (incidentally, GitHub
               | provides dependency scanning for free for many
               | languages).
               | 
               | > Then there is the issue of licencing - how to verify
               | that I am not using some library in violation of its
               | licence and what happens if the licence changes down the
               | road and I don't notice it because I am implicitly using
               | 500 dependencies due to my 3 main libraries?
               | 
               | This is also an automated task. For example,
               | https://github.com/google/go-licenses: "go-licenses
               | analyzes the dependency tree of a Go package/binary. It
               | can output a report on the libraries used and under what
               | license they can be used. It can also collect all of the
               | license documents, copyright notices and source code into
               | a directory in order to comply with license terms on
               | redistribution."
               | 
               | > Rust and Go have solved memory safety compared to C and
               | C++ but have introduced dependency hell of yet unknown
               | proportions.
               | 
               | I mean, it's been a decade and things seem to be going
               | pretty well. Also, I don't think anyone who has actually
               | used these languages seriously has ever characterized
               | their dependency management as "dependency hell";
               | however, lots of people talk about the "dependency hell"
               | of managing C and C++ dependencies.
               | 
               | > Python and other dynamically typed languages are in a
               | league of their own in that on top of the dependency hell
               | they also do not provide compiler checks that would allow
               | user to see the problem before the exact conditions occur
               | at runtime.
               | 
               | I won't argue with you there.
        
               | jcranmer wrote:
               | > In Rust every non-trivial library pulls in 10s or even
               | 100s of dependencies.
               | 
               | You're exaggerating here. The most recent project I've
               | been working on pulls in 6 dependencies. The anyhow crate
               | has no dependencies, regex 3 (recursively!), clap and csv
               | each 8. Only handlebars and palette pull in 10s of
               | dependencies, and I can trim a fair few dependencies of
               | palette by opting out of named color support (dropping
               | the phf crate for perfect hash functions).
        
           | rfoo wrote:
           | > far, far easier to build than all of these C programs
           | 
            | One of my friends who works on AIX machines without direct
           | Internet access does not share the same view, though.
        
             | throwaway894345 wrote:
             | Why is indirect Internet access less of a problem for C
             | than Rust/Go/etc? Seems like for modern systems, you just
             | run a pre-populated caching proxy on your target and `cargo
             | install` like you normally would. In C, you're manually
             | checking versions and putting files in the right spot on
             | disk for every stage of the build (this can be alleviated a
             | bit if you can find pre-built binaries and so on, but even
             | in the best case it's far behind "advanced" systems).
        
           | jschwartzi wrote:
           | Autotools is the de facto build system for most of the GNU
           | system programs. The bit about dependency management mostly
           | fits but I would argue that letting us figure out how to
           | build and install the dependencies is fairly UNIXy. It's also
           | unclear to me that centralized package managers are
           | necessarily better for security, though they're easier to
           | use. Also a lot of more modern tools I've tried to build in
           | recent months do not give a crap about cross compilation as a
            | use case. At least with autotools it's supported by default
           | unless the library authors did something egregious like hard
           | coding the sysroot or toolchain paths.
        
             | throwaway894345 wrote:
             | EDIT: Just re-read the below and realized it might sound
             | terse and argumentative; apologies, I was typing quickly
             | and didn't mean to be combative. :)
             | 
             | > I would argue that letting us figure out how to build and
             | install the dependencies is fairly UNIXy
             | 
              | Crummy build systems _force_ you to figure out how to build
             | and install dependencies (or die trying). Modern build
             | systems _allow_ you to figure out how to build and install
              | dependencies. If the former is "more UNIXy" than the
             | latter, then I strongly contend that "UNIXy" is not a
             | desirable property.
             | 
             | > It's also unclear to me that centralized package managers
             | are necessarily better for security, though they're easier
             | to use.
             | 
             | "Centralized" is irrelevant. Go's package manager is
             | decentralized, for example. Moreover, many folks in the C
             | world rely heavily on centralized repositories. Further, I
             | would be _shocked_ if manually managing your dependencies
             | was somehow _less_ error prone (and thus more secure) than
             | having an expert-developed program automatically pull and
             | verify your dependencies.
             | 
             | > Also a lot of more modern tools I've tried to build in
             | recent months do not give a crap about cross compilation as
             | a use case.
             | 
              | I mean, C doesn't care about _anything_, much less cross
             | compilation. It puts the onus on the developer to figure
             | out how to cross compile. Some build system generators
             | (e.g., CMake, Autotools) purport to solve cross
             | compilation, but I've always had problems. Maybe I just
             | don't possess the mental faculties or years of experience
             | required to master these tools, but I think that supports
             | my point. By comparison, cross compilation in Go is trivial
             | (`CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build` works
             | _virtually_ every time from any platform). I haven't done
             | much Rust cross-compilation, but I would be surprised if it
             | were harder than C/C++.
        
         | throwaway984393 wrote:
         | A century ago, houses were very likely to kill you in all sorts
         | of ways. You know what made houses safer? It wasn't using
         | "safer" building materials. It was the development of building
         | codes. Even using flammable building materials, people adapted
         | _the way they built_ to result in safer outcomes. But even
         | using the "safest" materials _and_ a building code, a house
         | can still kill you.
        
       | jmull wrote:
       | To me, PORT_Memcpy is one problem here.
       | 
       | There are two buffers and one size -- the amount of memory to
       | copy.
       | 
       | There should be PORT_Memcpy2(pDest, destSize, pSource,
       | numBytesToCopy) (or whatever you want to call it) which at least
       | prompts the programmer to account for the size of the destination
       | buffer.
       | 
       | Then flag all calls to PORT_Memcpy and at least make a dev look
       | at it. (Same for the various similar functions like strcpy, etc.)
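       | 
       | A rough sketch of the idea (the name and signature here are just
       | my illustration, not actual NSS code):
       | 
       |   #include <string.h>
       | 
       |   /* Copies numBytesToCopy bytes into pDest, refusing any
       |    * copy larger than the destination buffer. */
       |   int PORT_Memcpy2(void *pDest, size_t destSize,
       |                    const void *pSource, size_t numBytesToCopy)
       |   {
       |       if (numBytesToCopy > destSize) {
       |           return -1;  /* caller must handle the failure */
       |       }
       |       memcpy(pDest, pSource, numBytesToCopy);
       |       return 0;
       |   }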
        
         | twodayslate wrote:
         | Of course it would just end up being
         | PORT_Memcpy2(cx->u.buffer, sigLen, sig->data, sigLen);
        
       | [deleted]
        
       | slownews45 wrote:
       | Wow.
       | 
       | We continue to be reminded that it's hard to write fully memory
       | secure code in a language that is not memory secure?
       | 
       | And by hard, I mean, very hard even for folks with lots of money
       | and time and care (which is rare).
       | 
       | My impression is that Apple's iMessage and other stacks also have
       | memory-unsafe languages in the API/attack surface, and this has
       | led to remote one-click / zero-click type exploits.
       | 
       | Is there a point at which someone says, hey, if it's very
       | security sensitive write it in a language with a GC (golang?) or
       | something crazy like rust? Or are C/C++ benefits just too high to
       | ever give up?
       | 
       | And similarly, that simplicity is a benefit (i.e., BoringSSL etc.
       | has some value).
        
         | jandrese wrote:
         | It's hard to fault a project written in 2003 for not using Go,
         | Rust, Haskell, etc... It is also hard to convince people to do
         | a ground up rewrite of code that is seemingly working fine.
        
           | zionic wrote:
           | >seemingly working fine
           | 
           | That's just it though, it never was. That C/C++ code base is
           | like a giant all-brick building on a fault line. It's going
           | to collapse eventually, and your users/the people inside will
           | pay the price.
        
           | hwbehrens wrote:
           | > _It is also hard to convince people to do a ground up
           | rewrite of code that is seemingly working fine._
           | 
           | I think this is an understatement, considering that it's a
           | core cryptographic library. It appears to have gone through
           | at least five audits (though none since 2010), and includes
           | integration with hardware cryptographic accelerators.
           | 
           | Suggesting a tabula rasa rewrite of NSS would more likely be
           | met with genuine concern for your mental well-being than with
           | incredulity or skepticism.
        
             | IshKebab wrote:
             | To be fair you don't need to rewrite the whole thing at
             | once. And clearly the audits are not perfect, so I don't
             | think it's insane to want to write it in a safer language.
             | 
             | It may be too much work to be worth the time, but that's an
             | entirely different matter.
        
             | fulafel wrote:
             | The article says Chromium replaced this in 2015 in their
             | codebase. (With another memory-unsafe component,
             | granted...)
        
               | vlovich123 wrote:
               | BoringSSL started as a stripped down OpenSSL. That's very
               | different from a ground-up replacement. The closest
               | attempt here is https://github.com/briansmith/ring but
               | even that borrows its cryptographic operations heavily
               | from BoringSSL. Those algorithms themselves are generally
               | considered to be more thoroughly vetted than the pieces
               | like ASN.1 validation.
        
               | VWWHFSfQ wrote:
               | nss was also generally considered to be thoroughly vetted
               | though
        
           | slownews45 wrote:
           | What's somewhat interesting is memory safety is not a totally
           | new concept.
           | 
           | I wonder whether, if memory safety had mattered more, other
           | languages might have caught on a bit more, developed further,
           | etc. Rust is the new kid, but memory safety in a language is
           | not a totally new concept.
           | 
           | The iPhone has gone down the memory-unsafe path, including for
           | high-sensitivity services like messaging (2007+). They have
           | enough $ to rewrite some of that if they cared to, but they
           | haven't.
           | 
           | Weren't older languages like Ada or Erlang memory safe way
           | back?
        
             | xiphias2 wrote:
             | A memory-safe language that can compete with C/C++ in
             | performance and resource usage is a new concept.
             | 
             | AFAIK Ada guarantees memory safety only if you statically
             | allocate memory, and other languages have GC overhead.
             | 
             | Rust is really something new.
        
               | openasocket wrote:
               | There are different classes of memory un-safety: buffer
               | overflow, use-after-free, and double-free being the main
               | ones. We haven't seen a mainstream language capable of
               | preventing use-after-free and double-free without GC
               | overhead until Rust. And that's because figuring out when
               | an object is genuinely not in use anymore, at compile
               | time, is a really hard problem. But a buffer overflow
               | like from the article? That's just a matter of saving the
               | length of the array alongside the pointer and doing a
               | bounds check, which a compiler could easily insert if
               | your language had a native array type. Pascal and its
               | descendants have been doing that for decades.
        
               | senderista wrote:
               | > We haven't seen a mainstream language capable of
               | preventing use-after-free and double-free without GC
               | overhead until Rust.
               | 
               | Sorry, that just isn't the case. It is simple to design
               | an allocator that can detect any double-free (by
               | maintaining allocation metadata and checking it on free),
               | and prevent any use-after-free (by just zeroing out the
               | freed memory). (Doing so efficiently is another matter.)
               | It's not a language or GC issue at all.
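               | 
               | A toy sketch of the metadata-checking approach (slow and
               | fixed-size, just to show the idea, not production code):
               | 
               |   #include <stdio.h>
               |   #include <stdlib.h>
               |   #include <string.h>
               | 
               |   #define MAX_LIVE 1024
               |   static void  *live_ptr[MAX_LIVE];  /* live ptrs */
               |   static size_t live_len[MAX_LIVE];
               | 
               |   void *xmalloc(size_t n) {
               |       void *p = malloc(n);
               |       if (!p) return NULL;
               |       for (int i = 0; i < MAX_LIVE; i++) {
               |           if (!live_ptr[i]) {
               |               live_ptr[i] = p;
               |               live_len[i] = n;
               |               return p;
               |           }
               |       }
               |       free(p);        /* metadata table full */
               |       return NULL;
               |   }
               | 
               |   void xfree(void *p) {
               |       for (int i = 0; i < MAX_LIVE; i++) {
               |           if (live_ptr[i] == p) {
               |               memset(p, 0, live_len[i]);  /* scrub */
               |               live_ptr[i] = NULL;
               |               free(p);
               |               return;
               |           }
               |       }
               |       fprintf(stderr, "double or invalid free\n");
               |       abort();
               |   }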
        
               | slownews45 wrote:
               | The trick, in my view, is not that the language supports a
               | safe approach (C++ has smart pointers / "safe" code in
               | various libraries) but simply that you CAN'T cause a
               | problem even being an idiot.
               | 
               | This is where the GC languages did OK.
        
               | shakna wrote:
               | > That's just a matter of saving the length of the array
               | alongside the pointer and doing a bounds check, which a
               | compiler could easily insert if your language had a
               | native array type. Pascal and its descendants have been
               | doing that for decades.
               | 
               | GCC has also had an optional bounds checking branch since
               | 1995. [0]
               | 
               | GCC and Clang's sanitisation switches also support bounds
               | checking, for the main branches, today, unless the
               | sanitiser can't trace the origin or you're doing double-
               | pointer arithmetic or further away from the source.
               | 
               | AddressSanitizer is also used by both Chrome & Firefox,
               | and failed to catch this very simple buffer overflow from
               | the article. It would have caught the bug if the objects
               | created had actually been used rather than just discarded
               | by the testsuite.
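               | 
               | For instance, ASan flags the overflow below at run time,
               | but only if the overflowing path actually executes, which
               | is exactly the testsuite gap mentioned above (file name
               | and build line are just an example):
               | 
               |   /* overflow.c
               |      cc -g -fsanitize=address overflow.c && ./a.out */
               |   #include <stdlib.h>
               |   #include <string.h>
               | 
               |   int main(void) {
               |       char *buf = malloc(16);
               |       memset(buf, 'A', 32);  /* 16 bytes past the end */
               |       free(buf);
               |       return 0;
               |   }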
               | 
               | [0] https://gcc.gnu.org/extensions.html
        
             | dagmx wrote:
             | AFAIK the issue with messaging isn't that the core app
             | itself is written in an unsafe language, but that many
             | components it interacts with are unsafe, e.g. file format
             | parsers that use standard libraries to do the parsing.
             | 
             | Granted, those should also be rewritten in safer languages,
             | but that's often a massive undertaking.
        
             | IshKebab wrote:
             | The issue isn't really that there was a shortage of memory
             | safe languages, it's that there was a shortage of memory
             | safe languages that you can easily use from C/C++ programs.
             | Nobody is going to ship a JVM with their project just so
             | they can have the "fun" experience of using Java FFI to do
             | crypto.
             | 
             | Realistically Rust is still the only memory safe language
             | that you could use, so it's not especially surprising that
             | nobody did it 18 years ago.
        
         | jtchang wrote:
         | I'd like to write go or rust but embedded constraints are
         | tough. I tried and the binaries are just too big!
        
           | jrockway wrote:
           | How big is too big? I haven't run into any size issues
           | writing very unoptimized Go targeting STM32F4 and RP2040
           | microcontrollers, but they do have a ton of flash. And for
           | that, you use tinygo and not regular go, which is technically
           | a slightly different language. (For some perspective, I
           | wanted to make some aspect of the display better, and the
           | strconv was the easiest way to do it. That is like 6k of
           | flash! An unabashed luxury. But still totally fine, I have
           | megabytes of flash. I also have the time zone database in
           | there, for time.Format(time.RFC3339). Again, nobody does that
           | shit on microcontrollers, except for me. And I'm loving it!)
           | 
           | Full disclosure, Python also runs fine on these
           | microcontrollers, but I have pretty easily run out of RAM on
           | every complicated Python project I've done targeting a
           | microcontroller. It's nice to see if some sensor works or
           | whatever, but for production, Go is a nice sweet spot.
        
           | dochtman wrote:
           | This is not true. Lots of people are putting Rust on
           | microcontrollers now - you just have to stick to no_std.
        
           | steveklabnik wrote:
           | The smallest binary rustc has produced is 138 bytes.
           | 
           | It is true that it's not something you just get for free; you
           | have to avoid certain techniques, etc. But Rust can fit just
           | fine.
        
             | cjg wrote:
             | Do you have a link to an article / report about that 138
             | byte program? I'd be interested in how to achieve that.
        
               | eat_veggies wrote:
               | https://github.com/tormol/tiny-rust-executable got it to
               | 137, but here's a blog post explaining the process to get
               | it to 151:
               | http://mainisusuallyafunction.blogspot.com/2015/01/151-byte-...
        
               | haberman wrote:
               | I'd also like to see the smallest Rust binaries that are
               | achieved by real projects. When the most size-conscious
               | users use Rust to solve real problems, what is the
               | result?
        
         | masklinn wrote:
         | > or something crazy like rust?
         | 
         | There's nothing crazy about rust.
         | 
         | If your example was ATS we'd be talking.
        
         | zionic wrote:
         | C/C++ don't really have "benefits", they have inertia. In a
         | hypothetical world where both came into being at the same time
         | as modern languages no one would use them.
         | 
         | Sadly, I'm to the point that I think a lot of people are going
         | to have to die off before C/C++ are fully replaced if ever.
         | It's just too ingrained in the current status quo, and we all
         | have to suffer for it.
        
           | kwertyoowiyop wrote:
           | C/C++ will be around for at least a hundred years. Our
           | descendants will be writing C/C++ code on Mars.
        
             | zionic wrote:
             | I don't know about that, I can see Rust having a certain
             | aesthetic appeal to martians.
        
               | axelf4 wrote:
               | Nah, they definitely use Zig.
        
           | nickelpro wrote:
           | On any given platform, C tends to have the only lingua franca
           | ABI. For that reason it will be around until the sun burns
           | out.
        
             | pornel wrote:
             | The C ABI will outlive C, like the term "lingua franca"
             | outlived the Franks. Pretty much every other language has
             | support for the C ABI.
        
         | jmull wrote:
         | Well, there's no time machine.
         | 
         | Also, as far as I know, a full replacement for C doesn't exist
         | yet.
        
           | Gigachad wrote:
           | Are you suggesting that this crypto library would not be
           | possible or practical to be built with rust? What features of
           | C enable this library which Rust does not?
           | 
           | There is no time machine to bring Rust back to when this was
           | created, but as far as I know, there is no reason it
           | shouldn't be Rust if it were made today.
        
         | JulianMorrison wrote:
         | FWIW, Go absolutely would not stop you writing unbounded data
         | into a bounded struct. Idiomatic Go would be to use byte slices
         | which auto-resize, unlike idiomatic C, but you still have to do
         | it.
        
           | jerf wrote:
           | Go would stop this from being exploitable. You might be able
           | to make a slice larger than it is "supposed" to be, but it
           | won't overwrite anything else because Go will be allocating
           | new memory for your bigger slice.
           | 
           | But this is hardly a big claim for Go. The reality is that of
           | all current languages in use, _only_ C and C++ will let this
           | mistake happen and have the consequence of overwriting
           | whatever happens to be in the way. Everything else is too
           | memory safe for this to happen.
        
           | asdfasgasdgasdg wrote:
           | Is it idiomatic go to memcpy into a struct? I would think
           | that this whole paradigm would be impossible in safe golang
           | code.
        
             | slownews45 wrote:
             | That's what I'm trying to understand.
             | 
             | Let's ignore idiomatic code, people do crazy stuff all the
             | time.
             | 
             | What's the Go example that gets you from, for example, an
             | overflow to an exploit? That's what I'm trying to follow (not
             | being an expert).
        
               | asdfasgasdgasdg wrote:
               | I am skeptical that you could do it without either using
               | the assembler or the unsafe package, but we will see what
               | Julian says.
        
           | slownews45 wrote:
           | What's the exploit path assuming no use of unsafe?
           | 
           | I can see situations where I could probably get Go to crash,
           | but I'm not sure how I'd get Go to act badly.
           | 
           | Note: Not a go / Haskell / C# expert so understanding is
           | light here.
        
             | cjbprime wrote:
             | Go is sometimes considered memory unsafe because of the
             | presence of data races. (This classification is
             | controversial.)
        
               | senderista wrote:
               | Then Java is also unsafe by the same standard.
        
               | cjbprime wrote:
               | Why do you say that? Go's data races can produce memory
               | corruption through bounds checking failures. I'm not
               | aware of Java having that kind of memory corruption.
        
             | JulianMorrison wrote:
             | Go has no "unsafe" keyword and several parts of the
             | language are unsafe; you're thinking of Rust, which has much
             | tighter guarantees.
             | 
             | Go idioms, like accepting data into buffers that are
             | resized by "append", work around the unsafe parts of the
             | language.
        
               | slownews45 wrote:
               | Go has an unsafe package.
               | 
               | Is there an example of even "bad" Go code that gets you
               | from an overflow to an exploit? I'm curious; folks
               | (usually Rust folks) do keep making this claim, so is there
               | a quick example?
        
               | ptaq wrote:
               | You can totally do this with bad concurrency in Go: a racy
               | read of an interface value while it is being written may
               | cause an arbitrarily bad virtual method call, which is
               | somewhat UB. I am not aware of single-goroutine exploits,
               | though.
        
               | slownews45 wrote:
               | Concurrency issues / bad program flow feel a bit
               | different, don't they? I mean, in any language I can store
               | the action to take on a record in a string; if I'm not
               | paying attention to concurrency, someone else can switch it
               | to a different action, and then when that record is
               | processed I end up deleting instead of editing, etc.
               | 
               | I mention this because in SQL, folks who aren't careful end
               | up in all sorts of messed-up situations under high
               | concurrency.
        
               | [deleted]
        
           | foobiekr wrote:
           | Idiomatic go would have you using bounded readers though.
        
             | JulianMorrison wrote:
             | Either you read data into a fixed []byte and stop at its
             | capacity, or you read data into an unbounded []byte by
             | using append and let Go look after the capacity. Either way,
             | you can't go off the end.
        
         | fleventynine wrote:
         | My language selection checklist:
         | 
         | 1. Does the program need to be fast or complicated? If so,
         | don't use a scripting language like Python, Bash, or
         | Javascript.
         | 
         | 2. Does the program handle untrusted input data? If so, don't
         | use a memory-unsafe language like C or C++.
         | 
         | 3. Does the program need to accomplish a task in a
         | deterministic amount of time or with tight memory requirements?
         | If so, don't use anything with a garbage collector, like Go or
         | Java.
         | 
         | 4. Is there anything left besides Rust?
        
           | javajosh wrote:
           | Well, Java, because #3 is worlds better in Java 17. GC perf
           | has improved a lot, but everyone seems stuck on Java 8, so
           | no-one knows about it. (And since Java is "uncool", maybe
           | no-one ever will.)
        
           | throwaway894345 wrote:
           | If your "deterministic amount of time" can tolerate single-
           | digit microsecond pauses, then Go's GC is just fine. If
           | you're building hard real time systems then you probably want
           | to steer clear of GCs. Also, "developer velocity" is an
           | important criteria for a lot of shops, and in my opinion that
           | rules out Rust, C, C++, and every dynamically typed language
           | I've ever used (of course, this is all relative, but in my
           | experience, those languages are an order of magnitude
           | "slower" than Go, et al with respect to velocity for a wide
           | variety of reasons).
        
             | fleventynine wrote:
             | If it can really guarantee single-digit microsecond pauses
             | in my realtime thread no matter what happens in other
             | threads of my application, that is indeed a game changer.
             | But I'll believe it when I see it with my own eyes. I've
             | never even used a garbage collector that can guarantee
             | single-digit millisecond pauses.
        
           | Twisol wrote:
           | I suspect Ada would make the cut, with the number of times
           | it's been referenced in these contexts, but I haven't
           | actually taken the time to learn Ada properly. It seems like
           | a language before its time.
        
             | IshKebab wrote:
             | As I understand it, it's only memory safe if you never free
             | your allocations, which is better than C but not an
             | especially high bar. Basically the same as GC'd languages
             | but without actually running the GC.
             | 
             | It does have support for formal verification though unlike
             | most languages.
        
           | lucb1e wrote:
           | By 'complicated' in point 1, do you mean 'large'? Because a
           | complex algorithm should be fine -- heck, it should be
           | _better_ in something like Python because it's relatively
           | easy to write, so you have an easier time thinking about what
           | you're doing, avoid making a mistake that would lead to an
           | O(n^3) runtime instead of the one you were going for, and it
           | takes less development time, etc.
           | 
           | I assume you meant 'large' because, as software like
           | Wordpress beautifully demonstrates, you can have the simplest
           | program (from a user's perspective) in the fastest language
           | but by using a billion function calls for the default page in
           | a default installation, you can make anything slow. If
           | avoiding a slow language for large software is what you meant,
           | then I agree.
           | 
           | And as another note, point number 2 basically excludes all
           | meaningful software. Not that I necessarily disagree, but
           | it's a bit on the heavy-handed side.
        
             | fleventynine wrote:
             | By complicated I guess I mean "lots of types". Static
             | typing makes up for its cost once I can't keep all the
             | types in my head at the same time.
             | 
             | Point number 2 excludes pretty much all network-connected
             | software, and that's intentional. I suppose single-player
             | games are ok to write in C or C++.
        
         | foobiekr wrote:
         | To provide some context for my answer, I've seen, first hand,
         | plenty of insecure code written in Python, JavaScript and Ruby,
         | and a metric ton of secure code written in C (with a low rate
         | of vulnerabilities per million LoC), dating from the 80s to
         | 2021.
         | 
         | I personally don't like the mental burden of dealing with C any
         | more (and I did it for 20+ years), but once the low-hanging
         | fruit is gone, the real problem with vulnerabilities in code is
         | developer quality, and that problem is not going away with
         | language selection (and in some cases, the pool of developers
         | attached to some languages averages much worse).
         | 
         | Would I ever use C again? No, of course not. I'd use Go or Rust
         | for exactly the reason you give. But to be real about it,
         | that's solving just the bottom most layer.
        
           | nicoburns wrote:
           | C vulnerabilities do have a nasty habit of giving the
           | attacker full code execution though, which doesn't tend to be
           | nearly so much of a problem in other languages (and would
           | likely be even less so if they weren't dependent on
           | foundations written in C)
        
         | duped wrote:
         | > Or are C/C++ benefits just too high to ever give up?
         | 
         | FFI is inherently memory-unsafe. You get to rewrite security
         | critical things from scratch, or accept some potentially
         | memory-unsafe surface area for your security critical things
         | for the benefit that the implementation behind it is sound.
         | 
         | This is true even for memory-safe languages like Rust.
         | 
         | The way around this is through process isolation and
         | serializing/deserializing data manually instead of exchanging
         | pointers across some FFI boundary. But this has non-negligible
         | performance and maintenance costs.
        
         | ransom1538 wrote:
         | Dumb question: Do we need to use C++ anymore? Can we just leave
         | it to die with video games? How many more years of this crap do
         | we need before we stop using that language? Yes, I know, C++
         | gurus are smart, but you are GOING to mess up memory
         | management. You are GOING to inject security issues with C/C++.
        
           | rocqua wrote:
           | If I'm going to be making code that needs to run fast, works
           | on a bit level, and isn't exposed to the world, then I am
           | picking up C++.
           | 
           | It's more convenient than C. It's easier to use (at the cost
           | of safety) compared to Rust.
           | 
           | Perhaps this will change if I know rust better. But for now
           | C++ is where it's at for me for this niche.
        
           | alfalfasprout wrote:
           | C/C++ is great for AI/ML/Scientific computing because at the
           | end of the day, you have tons of extremely optimized
           | libraries for "doing X". But the thing is, in those use cases
           | your data is "trusted" and not publicly accessible.
           | 
           | Similarly in trading, C/C++ abounds since you really do have
           | such fine manual control. But again, you're talking about
           | usage within internal networks rather than publicly
           | accessible services.
           | 
           | For web applications, communications, etc.? I expect we'll
           | see things slowly switch to something like Rust. The issue is
           | building the momentum to have Rust available on various
           | embedded platforms, etc.
        
       | mleonhard wrote:
       | What went wrong - Issue #0: The library was not re-written in a
       | language that prevents undefined behavior (UB).
        
         | tialaramex wrote:
         | I don't think there are any general purpose programming
         | languages with decent performance which outright "prevent
         | undefined behaviour" in something like NSS. Rust, for example,
         | does not.
         | 
         | _safe_ Rust doesn't have undefined behaviour, but of course
         | you can (and a large project like this will) use _unsafe_ Rust,
         | and then you need the same precautions for that code. This
         | sharply reduces your exposure if you're doing a decent job, and
         | is therefore worthwhile, but it is not a silver bullet.
         | 
         | Outright preventing undefined behaviour is hard. Java outlaws
         | it, but probably not successfully (I believe it's a bug if your
         | Java VM exhibits undefined behaviour, but you may find that
         | unless it was trivial to exploit it goes in the pile of known
         | bugs and nobody is jumping up and down to fix it). Go explains
         | in its deeper documentation that concurrent Go is unsafe (this
         | is one of the places where Rust is safer, _safe_ Rust is still
         | safe concurrently).
         | 
         | Something like WUFFS prevents undefined behaviour and has
         | excellent performance but it has a deliberately limited domain
         | rather than being a general purpose language. _Perhaps_ a
         | language like WUFFS should exist for much of the work NSS does.
         | But Rust does exist; it was created at Mozilla, where NSS
         | lives, and NSS wasn't rewritten in Rust, so why should we
         | expect it to
         | be rewritten in this hypothetical Wrangling Untrusted
         | Cryptographic Data Safely language?
        
       ___________________________________________________________________
       (page generated 2021-12-01 23:00 UTC)