[HN Gopher] The world needs a software bill of materials
       ___________________________________________________________________
        
       The world needs a software bill of materials
        
       Author : kiyanwang
       Score  : 113 points
       Date   : 2021-03-21 11:17 UTC (11 hours ago)
        
 (HTM) web link (drrispens.medium.com)
 (TXT) w3m dump (drrispens.medium.com)
        
       | 616c wrote:
       | I think SBOM is interesting but it is totally absurd to think
       | that SBOM is a _complete_ solution and/or comprehensive
       | mitigation to Sunburst.
       | 
        | It is starting to make me mad that people say this.
       | 
       | There is money on the line to create or enhance profitable
       | software, so why not sell it that way?
       | 
        | I do not believe SolarWinds staff manage their dependencies or
        | handle software any better or worse than other companies. One
        | attack vector had software that was properly digitally signed
        | (think about _that_) and required cleverly backdooring developer
        | workstations and infecting pipelines, then wiping traces clean.
        | To mimic "if you can dodge a wrench, you can dodge a ball" you
        | can similarly say "if you can hack a build process to build
        | digitally signed software and wipe your traces away, you can
        | hack a SBOM process in a CD pipeline to say whatever the hell
        | you want."
       | 
        | I am into Rekor and sigstore, but even they must realize, like
        | others, that if you think about the event everyone is talking
        | about and the real threat model, we are advocating for speed
        | bumps as if they were a solid perimeter defense; these are not
        | fortified walls of security design.
       | 
        | Technical people get this nuance; non-technical people do not.
        | So this kind of solution advocacy in these articles, with
        | SolarWinds as the example, really resonates with the latter, who
        | make purchasing decisions and strategies. That is what worries
        | me.
       | 
       | @ris has a key part of it right, imho, but you have to go bigger
        | than that, and not think just about open source (even as an
        | FSFer, I say that).
       | 
       | > The solution to this problem is not bureaucracy. The solution
       | is in the reproducible builds project, Guix and Nix.
       | 
        | My belief is there are 3 pieces; the third and most difficult is
        | missing.
       | 
       | 1. Yes, digitally signed SBOM (in regulation or software
       | contracts, those in USG contracting will know this is coming down
       | the pipe anyway, others will follow).
       | 
        | 2. Requirements for reproducible builds, and _not_ just for open
        | source software (I am thinking a build escrow ecosystem will
        | have to come soon so commercial entities can farm out their
        | pipeline in some way to third parties to build the exact thing
        | they sell and match it identically, or huge flares go up).
        | Again, regulation and contracts will have to push this, but I
        | wonder how crazy I sound when I write this.
       | 
        | 3. So if 2 seems hard: we need more appsec competency not just
        | on the dev side, but on the build/deploy side. Industry security
        | bodies (government, legal, energy, financial) or big employers
        | themselves will _need_ to have people set up test labs with
        | realistic deployments over time, watch how their software
        | behaves, and build a network of people, resources, and
        | information exchanges. They need to build the skillset and learn
        | to find vulns and, most importantly, risky default
        | misconfigurations that arise from combining multiple software
        | packages in ways individual vendors don't think about. Whether
        | software has been in use for 1 month or 8 years across 80% of an
        | industry sector's employers, or 100% of one big employer's
        | network, they need the training and communication to go ask
        | people through these exchanges: "hey, these systems are acting
        | weirdly. Is this weird? Do others see this, or know this
        | misconfiguration could be exploited, and have people seen this
        | before?" I mean that kind of knowledge share.
       | 
        | If that does not happen, certainly re 2 and 3, SBOM will change
        | some things, but not all.
        
       | l8again wrote:
        | The idea of an SBOM, or BOM, is not a new one. Maven has had BOM
        | files for years now [1]. So am I right in assuming that the
        | author's suggestion is to make this BOM file public, like the
        | ingredients list on a food product, as some people have
        | suggested? If so, I can see a quick static analysis tool that
        | can spit out the vulnerabilities just by parsing the BOM. So,
        | really, there's not much to do _technically_ here other than
        | releasing the BOM out in the open. And then displaying ugly
        | warnings about any software's BOM to either shame the vendors or
        | actually hurt their bottom line through revenue lost to those
        | vulnerabilities.
       | 
       | [1]https://maven.apache.org/guides/introduction/introduction-
       | to...
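A minimal sketch of the "quick static analysis tool" described above, assuming a flat list of Maven-style group:artifact:version coordinates and a hypothetical in-memory vulnerability feed (a real tool would query a database such as the NVD):

```python
# Toy sketch: flag known-vulnerable coordinates in a flat BOM.
# The BOM format (group:artifact:version per line) and the
# vulnerability feed below are illustrative, not a real database.

KNOWN_VULNERABLE = {
    ("org.apache.struts", "struts2-core"): ["2.3.31", "2.3.32"],
}

def parse_bom(text):
    """Parse 'group:artifact:version' lines into tuples."""
    deps = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        group, artifact, version = line.split(":")
        deps.append((group, artifact, version))
    return deps

def audit(deps):
    """Return the subset of deps listed in the vulnerability feed."""
    return [
        (g, a, v)
        for (g, a, v) in deps
        if v in KNOWN_VULNERABLE.get((g, a), [])
    ]

bom = """
org.apache.struts:struts2-core:2.3.32
com.fasterxml.jackson.core:jackson-databind:2.12.2
"""
print(audit(parse_bom(bom)))  # flags only the struts2-core entry
```

The parsing really is that trivial, which supports the point that publishing the BOM, not building tooling around it, is the contentious part.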
        
         | l8again wrote:
          | Also, I wonder how we can realistically implement this for
          | SaaS?
        
           | dathinab wrote:
            | Implement it, yes, but will it make a relevant difference?
            | IMHO unlikely.
            | 
            | A BOM no one (of relevance) ever reads is as good as no BOM.
            | 
            | The main positive effect a BOM can have (outside of SaaS) is
            | to more strongly discourage the use (or continued use) of
            | known-to-be-problematic libraries or services.
        
       | exo762 wrote:
        | The solution is different: decriminalize the exploitation of
        | software errors / hacking. Reliance on the justice system is a
        | fool's errand in a global economy anyway. There are two tiers of
        | software operations: global (FAANG etc.) and local. The first
        | are rather good with security - because they pay attention.
        | Local operations live in a fantasy world where the people who
        | should care are afraid to poke.
       | 
       | Incentives for local services (government, municipal, local
       | companies) are misaligned. This is why their security is in such
       | a bad shape and this is why they fail spectacularly.
        
       | mybrid wrote:
       | Bills of Materials work with hardware because origins can be
        | tracked. Good luck tracking the origin of electrons. I worked in
        | manufacturing for a company that dealt solely with MilSpec
        | (Military Specification). Crates sent to nuclear power plants
        | had to be X-rayed on the shipping dock and then again at the
        | receiving dock. If the X-rays differed in any questionable way
        | the shipment was rejected. However, most supply chains are not
        | that paranoid. In the 1980s there was a grade-eight bolt scandal
        | involving a satellite that blew up in space because the
        | manufacturer substituted plain steel to make more money. More
        | recently the BOM did nothing to
       | protect the Bay Area Bridge where once again bolts as well as
       | rods were specified of one quality and delivered as another. The
       | bolts and rods are still in the new span because taking them out
       | would require tearing it down again. But the builder assures us
       | things are fine, wink wink, nod nod.
       | https://www.courthousenews.com/34m-settlement-reached-for-de...
        
       | kohlerm wrote:
        | What is IMHO really needed is https://reproducible-builds.org/
        | plus some way to verify within a company that only allowed
        | packages are used. One way to solve this is to check all
        | software (including open source) into a monorepo and run
        | software that checks for copies of open source code.
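A toy sketch of that "check the monorepo for copies of open source code" idea, matching files against a hypothetical index of known upstream file hashes (the `leftpad` entry is invented for illustration; real scanners also do fuzzy matching, since vendored copies are often patched):

```python
# Minimal sketch: detect exact copies of known open-source files
# by content hash. A real index would be built from upstream
# release tarballs; this one is a hypothetical stand-in.
import hashlib

KNOWN_UPSTREAM = {
    hashlib.sha256(b"def leftpad(s, n): return s.rjust(n)\n").hexdigest():
        "leftpad 1.0 / leftpad.py",
}

def scan(files):
    """Map each in-repo file to the upstream file it duplicates, if any."""
    hits = {}
    for path, content in files.items():
        digest = hashlib.sha256(content).hexdigest()
        if digest in KNOWN_UPSTREAM:
            hits[path] = KNOWN_UPSTREAM[digest]
    return hits

repo = {
    "vendor/pad.py": b"def leftpad(s, n): return s.rjust(n)\n",
    "src/main.py": b"print('hello')\n",
}
print(scan(repo))  # only the vendored copy matches
```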
        
         | dathinab wrote:
          | Reproducible builds require source code access.
         | 
         | Which is the first point I think is necessary:
         | 
          | - Access to source code for at least all entities using the
          | software (including being allowed to hire entities to analyze
          | it). Preferably open access to the source. Even more
          | preferably, open source.
         | 
         | - Combine that with reproducible builds and automatic code
         | analysis and you gain additional trust.
         | 
         | - Naturally this both requires proper code and artifact signing
         | (which was compromised in the supply chain attack this article
         | refers to).
         | 
         | Funny thing is I'm 100% convinced no SBOM, reproducible builds
         | or similar would have prevented this attack. It would just have
         | changed how exactly the attack looks (IMHO).
         | 
         | Still it would be an improvement anyway.
        
           | jacques_chester wrote:
            | Reproducible builds would've had a sporting chance of
            | defeating the SolarWinds attack, because the injection point
            | was on particular build servers that were reached through
            | other vectors. If a second party performs a build to verify
            | it, the attacker now faces ~2x the cost to conceal their
            | attack.
        
             | dathinab wrote:
              | True, but this requires that the source you build has not
              | itself gone through the build server, which is reasonable.
              | 
              | The thing is, you can circumvent this by attacking the
              | version control and/or developer systems.
              | 
              | And at least the latter are often _massively_ vulnerable
              | to certain kinds of supply chain attacks.
             | 
              | Ironically, the permissions and setups commonly used for a
              | nice development flow also make systems vulnerable to many
              | kinds of supply chain attacks.
             | 
             | I'm currently slowly moving to a more secure dev flow, but
             | it adds overhead. Especially if your dev system is also
             | your laptop.
             | 
              | First step is to run any kind of dev tool (especially
              | builds) in a container. Though this often also means
              | running e.g. a language server in the container while
              | running your IDE outside of it, and making sure nothing
              | will trigger your IDE to do things outside of the
              | container...
        
               | jacques_chester wrote:
               | None of these solutions is complete, but that's not a
               | final argument against them. Raising the cost of attack
               | is always beneficial. It reduces the number of attackers
               | and the number of attacks.
        
       | bryanrasmussen wrote:
       | package managers tend to give you something wherein you can read
       | exactly what is in your solution, which I suppose is why people
       | are always reading what their dependencies are and making sure
       | that everything is updated as it should be and security holes
       | plugged.
       | 
       | So, given the existence of modern package managers, surely
       | problem solved.
        
       | dathinab wrote:
       | Let me tell you a secret (not really):
       | 
       | - supply chain attacks are not new
       | 
       | - sophisticated attacks are not new
       | 
        | - what made Microsoft call it the most sophisticated attack was
        | *not* its use of a supply chain attack, or the waiting, or any
        | of the other major bullet points listed when speaking about it,
        | but the combination of them and all the small details usually
        | skipped over
        | 
        | - though it might be the first sophisticated supply chain attack
        | *which we know of and which got a lot of press/success*.
       | 
        | - Nor are attacks possible at massive, potentially global scale
        | new, even without supply chain attacks. So there has long been a
        | lot of potential for people to lose trust. There is a reason why
        | many tech-affine people don't trust governments or large
        | institutions with sensitive data: we know that even the largest
        | institutions will not always manage to keep our data safe.
       | 
       | - "does not take the coordinated effort of thousands of engineers
       | in a nation-state" all sophisticated large scale attacks do *not*
       | need thousands of engineers. Most times it's done by teams with
       | noticeable less then 100 people. Nation-scal agressors are not
       | that because they have so many (evil;) programmers but because
       | they have access to nation-scale resources, like access to
       | internet infrastructure, vulnerabilities known by the secret
       | service or just money they can use for probing or obfuscating
       | DDOS attacks. Lastly governments can have an easier job to bring
       | a lot of expertise together, by bringing very qualified people
       | together, not very many.
       | 
       | - "load external modules" has nothing to do with supply chain
       | attacks, while it sometimes can make thinks simpler if you can
       | affect the source-code (which is what supply chain attacks are
       | about) you can do everything you need without any "module
       | loading".
       | 
       | - "few risks unique to the software domain", I would say more
       | than a few, and they are well known since over a decade
       | 
       | - "others are proprietary", which is a problem as it undermines
       | many ways a user can try to detect/catch supply chain attacks.
       | Non of which are perfect, like non would have cough this attack.
       | But it still makes attacks of this kind harder.
       | 
       | - "Dependency hell" the only real problem, libraries with non-
       | clear and brittle and to much much changing interfaces, at the
       | same time surprisingly language dependent.
       | 
       | - "easier automated than what defenders need to do.", I wouldn't
       | be sure about it, the same way attackers can scan for problem you
       | can do so (and if you find something fix it). Tools like fuzz
       | testing and other kinds automatic software analysis are widely
       | underused.
       | 
       | - "96% of all software products included third-party software
       | components and commercial" it tries to make this look like a
       | problem but actually the only way to avoid security vulnerability
       | (and build modern software) is by using external tools. If
       | everyone would implement e.g. their own crypto there would be
       | more vulnerabilities and automatic scanning for them would
       | *still* work as people tend to make similar mistakes when writing
       | similar code.
       | 
       | - "and now also by nation state actor", not it should be "and
       | since the beginning". Just look at crypto wars, or how currently
       | the US/EU governments again try to undermine software security
       | for questionable reasons.
       | 
       | - "a bill of materials protects best against supply chain
       | vulnerabilities when it is set up as a holistic cross-domain
       | effort"... no it doesn't. There is nothing in a bill of materials
       | which is able to prevent supply chain attacks. I have no idea why
       | the author believes it. At best it can make it easier to catch if
       | a known-to-be-vulnerable dependency is used. But supply chain
       | attacks are about about making a dependency vulnerable without
       | anyone knowing and *by changing it*. Not just by adding code. I
       | guess this is build on the misconception that not having "loading
       | of external modules" would help against supply chain attacks. It
       | doesn't. It just makes it negligible harder.
       | 
       | -by the way open accessible source does imply a bill of materials
       | (being derivable form the source) and allows you to automatically
       | run software analysis tools on the source code etc.
       | 
       | - "medical industry has been targeted for years" and banking
       | systems, operating systems, mail programs and games have been
       | attacked even longer. The medical industry is interested in this
       | to *avoid legal problems by using absurdities like software being
       | "certified" to be secure (e.g. by using SBOM's) and similar*. A
       | concept which is known to not actually work but tends to work to
       | avoid legal responsibility. If they would care they would push
       | for open accessible source, proper bug bounty programs and
       | similar.
       | 
       | - "What needs to be done?" (In my opinion:) Push for at least
       | open accessible source (!= open source). Push for more usage of
       | software analysis tools. Push for proper bug bounty programs
       | (many existing ones are questionable). *Enforce legal liability
       | IF it's clear the software "seller" didn't care for making and
       | keeping the software secure.* (I.e. Liability if negligent). Push
       | for using additional protection which reduce attackers gain even
       | if there are vulnerabilities. E.g. migrations like address layout
       | randomization and shadow stack, sandboxing, dropping privileges,
       | etc. WebASM might play a major role in this. Push for responsible
       | choice of language, libraries and tooling. Push companies to take
       | more responsibility for open source tools they use (!= open
       | accessible source).
       | 
        | Lastly, sometimes (not seldom) the *way* you use a certain
        | dependency, and what you use it for, makes an *extreme*
        | difference wrt. security. One example would be alternative hash
        | algorithms in the Rust ecosystem: use them for certain internal
        | use-cases and all is fine; use them in the wrong place and they
        | enable hash-map-based DoS attacks. Another example would be
        | OpenSSL, which you can use responsibly, but also in horribly
        | insecure ways.
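The hash-map point can be illustrated with a toy chained table and a deliberately weak hash (everything below is invented for illustration): with predictable hashing, an attacker can choose keys that all land in one bucket, turning O(1) lookups into linear scans.

```python
# Toy illustration of hash-map DoS: with a weak, predictable hash,
# an attacker can precompute colliding keys so every lookup scans
# one long bucket instead of finding its entry immediately.

def weak_hash(s, buckets=8):
    # Deliberately bad: only the first character matters.
    return ord(s[0]) % buckets

class ChainedMap:
    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def insert(self, key, value):
        self.buckets[weak_hash(key, len(self.buckets))].append((key, value))

    def probes_for(self, key):
        """Entries scanned to find `key` - the attacker-controlled cost."""
        bucket = self.buckets[weak_hash(key, len(self.buckets))]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                return i + 1
        return len(bucket)

m = ChainedMap()
for i in range(1000):
    m.insert("a%d" % i, i)   # all keys start with 'a' -> one bucket
print(m.probes_for("a999"))  # -> 1000 probes instead of ~1
```

This is why keyed/randomized hashes (like SipHash, Rust's default) matter for maps fed attacker-controlled keys, while faster unkeyed hashes are fine for purely internal use.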
       | 
        | In the end an SBOM is a thing which *should* be cheap to create
        | (potentially automated) but will only yield a slim improvement
        | in general. It is IMHO *by far* not the most important step to
        | take, but I guess it's one of the easiest.
       | 
       | So why not, but don't believe it will make any major difference.
        
       | anoncow wrote:
       | Why does this read like an anti-opensource propaganda piece aimed
       | at big business?
        
       | fmajid wrote:
       | "However, there is an even more worrying effect: if attacks are
       | possible at such a scale that their effect is felt across whole
       | sectors, countries or even globally, as is the case with the
       | "SolarWinds" attack, they have the potential to fundamentally
       | undermine our trust in information technology."
       | 
       | How is that a worrying effect? Trust in IT systems is undeserved,
       | because they are built in an environment of economic incentives
       | that make stability, performance and security secondary concerns
       | to features and time to market, and poor engineering practices
       | that derive from those incentives. A software BOM will do nothing
       | to address those problems.
       | 
        | Healthy skepticism of IT, and adapting with defense in depth and
        | other measures like data minimization (you can't leak what you
        | don't have), are desirable - or even simply using _less_
        | software.
        
       | redleggedfrog wrote:
        | For 95% of software this will never happen unless SBOMs make the
        | company money.
        
       | BrianOnHN wrote:
       | What's with the images?
        
         | the_af wrote:
         | I was going to comment the same. I really like them! They look
         | as if displayed on an old CRT with color artifacts.
        
         | layer8 wrote:
         | My first guess was autostereograms, but that doesn't seem to be
         | the case. Maybe some steganography?
        
       | you_are_naive wrote:
       | Blah blah blockchain blah blah have an alternative compliance
       | questionnaire list which companies outsource already mandatory
       | for every software blah blah as a way to check maturity of the
       | company.
       | 
       | I sometimes wonder if this is how industries end up not
       | innovating or solving obvious problems for decades because they
       | get strangled with bureaucracy which doesn't solve the original
        | problem highlighted in the author's own example (a vendor
        | choosing not to patch a vulnerability).
        
       | LockAndLol wrote:
       | So basically reproducible builds with a dependency list and the
       | CVE list could provide more certainty about the tools we use?
       | 
       | What about websites though? Hash-summed files aren't going to
       | save us, because resources can be loaded dynamically and the
       | client can't know the hash before retrieval.
       | 
          | Reproducible builds would be a great first step. Forcing
          | governments to use open source may be another.
        
         | dane-pgp wrote:
         | > What about websites though?
         | 
         | It is possible for a web page to specify the expected hash of a
         | script file, which the browser will enforce. This is called SRI
         | (Subresource Integrity).[0]
         | 
         | Of course that still leaves the bootstrapping problem of how
         | the page itself can be guaranteed to have a specific hash, but
         | fortunately there is a clever hack that can be done with
         | bookmarklets[1], or the page can just be saved and
         | loaded/served locally.
         | 
         | While that works technically, the UX isn't great because the
         | address bar won't show the domain of the remote server
         | (although browsers seem to be hiding the address bar from the
         | user more and more). A better solution would be for browsers to
         | support Hashlinks[2], which would allow a bookmark to point to
         | a remote page with fixed contents.
         | 
         | [0] https://developer.mozilla.org/en-
         | US/docs/Web/Security/Subres...
         | 
         | [1] https://news.ycombinator.com/item?id=17776456
         | 
         | [2] https://github.com/w3c-ccg/hashlink
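For illustration, an SRI integrity value is just the base64 encoding of a cryptographic digest of the exact bytes served, prefixed with the algorithm name; a sketch of computing one (the script content is a made-up example):

```python
# Sketch: derive the SRI integrity value for a resource, i.e. the
# string that goes in <script src="..." integrity="...">. The value
# is "<alg>-" plus base64 of the digest of the exact served bytes.
import base64
import hashlib

def sri_sha384(resource_bytes):
    digest = hashlib.sha384(resource_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode("ascii")

script = b"console.log('hello');\n"
print(sri_sha384(script))
```

The browser recomputes the digest on the fetched bytes and refuses to execute the resource on mismatch, which is why any change to the file (even whitespace) invalidates the value.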
        
       | SergeAx wrote:
        | Can we please stop calling the SolarWinds hack "the most
       | sophisticated and protracted intrusion attacks of the decade"
       | already? It was nothing more than blunt negligence.
        
       | wpietri wrote:
       | At least on first reading, I find this unpersuasive. He correctly
       | lists a variety of problems. But he doesn't explain how his
       | proposed solution, listing all the components of a technological
       | product, would make a practical difference. Creating a list is
       | valuable only if people a) read the list, b) recognize problems,
       | and c) do something based on that.
       | 
       | And for some of the examples he gives, it seems pretty obvious to
       | me that an SBOM wouldn't help. The Equifax breach, for example.
       | They knew [1] that they needed to upgrade Apache Struts. Somebody
       | was supposed to make the upgrade. They just didn't do it. Who
       | would an SBOM help here? Since it's a consumer-facing website,
       | the only people who weren't informed were consumers. So is he
       | proposing to make public the SBOM for every website? I'm not sure
       | that on balance that helps security.
       | 
       | [1] https://www.csoonline.com/article/3444488/equifax-data-
       | breac...
        
         | njitbew wrote:
         | Without reading the article, I can imagine that listing the
         | components of a technological product (i.e., an SBOM) is a
         | _first step_ towards the goal of solving all those problems.
         | Once you have a standardized way of communicating what a
         | software product is made of, you can start thinking of
         | automatically upgrading dependencies (Maven's pom.xml does this
         | to some extent, and Dependabot and Renovatebot leverage this
         | semi-standard to automatically upgrade your dependencies). If
         | you take this one step (or two steps) further, you can start to
         | automatically rebuild the code, automatically deploy the code,
         | patch running systems, detect when CVEs are actively being
         | abused, and so on. Basically, automate the heck out of this so
         | that the "they just didn't do it" will not happen. And for
         | automation, you need standards.
        
         | jmull wrote:
          | I'm a bit skeptical too. (It didn't seem to me that an SBOM
          | would have helped with SolarWinds either.) But I'll play the
         | devil's advocate:
         | 
         | * Perhaps end-user systems could automatically monitor the
         | SBOMs of all software installed, cross-reference it with a live
         | vulnerabilities database, and produce vulnerability reports and
         | notifications. This increases the visibility of vulnerabilities
         | and the chance they will be resolved quicker.
         | 
         | * Software companies will feel the increased exposure of the
         | SBOM they need to publish causing them to think more carefully
         | about when and how to take on dependencies. Some do this well
         | already, but this would likely cause more companies to do so.
        
           | wpietri wrote:
           | It's certainly possible. But I think it's equally likely that
           | applied naively, we'd see more breaches as public SBOMs make
           | it clearer what attacks will work where.
           | 
           | There's also a real question of net value for effort.
           | Security is one consideration people balance, but it's far
           | from the only one. Starting with SBOMs as the focus assumes
           | too much about what people care about and how much work
           | they'll do.
           | 
            | I'd much rather people start with some user-focused approach
            | and then make use of particular technologies (like SBOMs) as
            | needed to advance people's actual goals.
        
         | pycal wrote:
         | I think on balance it actually hurts more than it helps.
         | 
          | The author lists Equifax as a case where an organization
          | "failed to update a web server in timely fashion (a few
          | months)". But a software bill of materials would not have made
          | it any more or less obvious that they were running vulnerable
          | web software an attacker could get a foothold in, and it could
          | have made it easier for an attacker to exploit that foothold,
          | pivot, and exfiltrate, knowing what other software is
          | available to exploit.
         | 
         | Equifax didn't "fail" to manage that particular vulnerability,
         | as the author describes, and protect customer data. They
         | _neglected_ to manage the vulnerability and protect customer
         | data.
         | 
          | It's my opinion that what would actually be valuable (and
          | would have been valuable) in the case of Equifax is compliance
          | legislation that places liability on the custodian of PII.
          | This compliance
         | should require companies which are custodians of PII or
         | financial data, or which operate critical infrastructure to
         | have a vulnerability management practice.
        
           | arrosenberg wrote:
           | FYI - Compliance regulation in the US government almost never
           | works, our government sucks at it. If you want to regulate a
           | company like EquiFax, you have to stick to investigations and
           | prosecutions, which the US government is quite good at.
           | Companies can take the risk, but if they violate the law it
           | should be big fines and jail time for the executives.
        
           | Jgrubb wrote:
           | PII?
        
             | vlovich123 wrote:
             | Personally identifiable information. It's a term of art
             | that is extremely common, at least in any large software
              | company, that deals with customer data in any way. I'm not
              | sure of its usage in the broader industry/common speech
              | (although I swear I've occasionally seen it in news
              | reports).
        
         | rixrax wrote:
         | In my mind SBOM is similar to food ingredients being listed on
          | the packaging. The FDA or someone requires them; very few read
          | them or care what is in there. BUT now that they are listed on
         | every food product, those who care can read them and make
         | informed decisions. And raise alarm when it is found that
         | someone uses unhealthy amounts of whatever in their cakes or
         | sausages.
         | 
          | As for software, if I had up-to-date reliable SBOMs for
          | everything I run, it would certainly give me peace of mind.
          | And maybe, even if unlikely, I might be able to make
          | purchasing decisions based on the components used, their
          | CVE/etc. history, or their sheer number (with fewer being
          | generally better, unless there is a reason to suspect the
          | vendor e.g. rolled their own TLS instead of using one of the
          | usual suspects).
        
           | daniellarusso wrote:
           | Like 'natural flavors' and 'artificial flavors' are just
           | different uses of 'git rebase'?
        
           | pessimizer wrote:
           | > In my mind SBOM is similar to food ingredients being listed
           | on the packaging. FDA or someone requires them, very few read
           | them or cares what is in there. BUT now that they are listed
           | on every food product, those who care can read them and make
           | informed decisions. And raise alarm when it is found that
           | someone uses unhealthy amounts of whatever in their cakes or
           | sausages.
           | 
           | And sue them if they lie about it. I think a lot of the
           | benefit of these types of regulations is to force businesses
           | to commit active frauds instead of passive frauds. Not doing
           | something you were supposed to do is incompetence. Lying on a
           | form about doing something that you haven't is deceit.
           | 
           | The profits from incompetence and deceit are equal until one
           | gets caught, then the lesser punishment for incompetence as
           | compared to deceit makes deceit more expensive. Smart
           | businesses will choose incompetence every time, and engineer
           | it into the system everywhere where fraud would be
           | profitable.
           | 
           | Of course, they can also hire temps to sign forms, like the
           | banks did in 2008[1], but the current administration has to
           | really want you to get away with it for that to work.
           | 
           | [1] https://www.nolo.com/legal-encyclopedia/false-affidavits-
           | for... _Note: it was strangely difficult to find information
           | on this still on the web._
           | 
           | -----
           | 
           | edit: https://news.ycombinator.com/item?id=26530786
        
           | indymike wrote:
           | The whole idea of SBOM is a bad one because of the rate of
           | change in software. For example, a simple Python web app will
           | aggregate changes all the way from the OS, to the language
           | ecosystem, to the application code. What was in the product
           | when you installed it will change dramatically. Bonus: much of
           | that change is driven by security issues in your software's
           | supply chain. This idea is just paperwork for the sake of
           | paperwork and will just make vendors like SolarWinds more
           | entrenched.
        
             | AlphaSite wrote:
             | Why can't your web app serve its BOM on an API, maybe
             | unioning its BOM with the OS's BOM to get the full system?
             | 
             | I guess with a deep service graph this could get very
             | complex very fast.
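             | 
             | A minimal sketch of that union, assuming a simple
             | {name: version} shape for both BOMs (the component names
             | and versions below are made up for illustration):

```python
# Hypothetical sketch: union an application's own BOM with the OS
# package list so one endpoint could report the full system. All names,
# versions, and the app-level BOM dict shape are assumptions.

def merge_boms(app_bom, os_bom):
    """Union two {name: version} BOMs; record any component that
    appears in both layers with different versions (a sign of a
    vendored or forked copy)."""
    merged = dict(os_bom)
    conflicts = {}
    for name, version in app_bom.items():
        if name in merged and merged[name] != version:
            conflicts[name] = (merged[name], version)
        merged[name] = version  # app layer wins; conflict is recorded
    return merged, conflicts

app_bom = {"requests": "2.25.1", "openssl": "1.1.1j"}  # bundled copy
os_bom = {"openssl": "1.1.1d", "glibc": "2.31"}

merged, conflicts = merge_boms(app_bom, os_bom)
print(merged)
print(conflicts)  # {'openssl': ('1.1.1d', '1.1.1j')} - two copies in play
```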
        
               | indymike wrote:
               | So now every web app has to encapsulate an equivalent of
               | the entire OS repository tooling + your entire build
               | system + whatever devops tooling is needed to deploy.
               | Bonus... a lot of build tooling exists to allow for
               | faster upgrades than the OS provides... especially with
               | dynamic languages.
        
           | wpietri wrote:
           | An SBOM as part of a contractual requirement when purchasing
           | software seems totally reasonable to me if the receiving
           | organization already has the practice of checking a lot of
           | versions and making sure they're sufficiently up to date. But
           | the hard part there isn't the creation of the SBOM, it's a)
           | actually using the SBOM, b) having enough contract power that
           | if the SBOM turns out to be incomplete, out of date, or a
           | lie, the purchaser can do something about it, and c) the
           | purchaser doing something about it.
           | 
           | Nutritional labels only work in practice because a) plenty of
           | people read and care about them, b) there are regulatory
           | agencies that set standards and enforce compliance, and c) if
           | they are too far off, an expensive class action suit is a
           | real possibility.
           | 
           | My concern with starting with SBOMs is that since they're
           | orders of magnitude harder to read and evaluate, and since
           | many, many companies are already bad at tracking their own
           | software patch status, approximately nobody will actually use
           | them. Again, I look at the Equifax breach: it happened not
           | because they didn't know what a vendor was up to, but because
           | their internal processes weren't sufficient to turn knowledge
           | into results.
        
           | jacques_chester wrote:
           | > _might be able to do purchasing decisions based on used
           | components, their CVE/etc. history, or sheer amount (in less
           | being generally better, unless there is a reason to suspect
           | the vendor e.g. rolled their own TLS instead of using one of
           | the usual suspects)._
           | 
           | Counting CVEs is a poor indicator. It's not a pure function
           | of how many vulnerabilities _exist_, it's a function of how
           | many exist, are _found_ and _reported_. Those latter two
           | components have a strongly economic nature. It's cheaper to
           | not search and report than to be fastidious.
           | 
           | If anything, more CVE reports from a given company is a
           | positive signal that they give a damn.
           | 
           | (There's also the problem that CVSSv3 is not a very sound
           | measurement of risk. It's sorta-kinda just made up without
           | derivation from a sound theoretical foundation, nor is it
           | based on data about actual impacts. The scores don't move
           | smoothly as a continuous function but jump around a fair
           | amount. It's very easy to swing between widely-separated
           | named categories with a bit of argumentation.)
        
           | Natsu wrote:
           | Scanners already effectively give this, finding the
           | vulnerable components and a list of CVEs. But it may be
           | difficult, expensive, or too time consuming to upgrade the
           | affected components. Or there may be blackout periods (e.g.
           | during open enrollment for many healthcare companies) where
           | they basically can't make any changes to the production
           | stack.
           | 
           | The problems with upgrades are usually centered around
           | testing and understanding the changes and ensuring that
           | things still work. It often requires more resources,
           | especially time & developers, than may be available at any
           | given time. And some companies treat all IT functions as cost
           | centers and you can see this from how they run the place: the
           | internal people don't know their own setup very well and may
           | not have much experience in general, things are run by a tiny
           | number of people who may have multiple roles to fill, etc.
           | 
           | Source: I've helped many people in many industries upgrade
           | complex, security-sensitive enterprise software that
           | interfaces with large amounts of their infrastructure.
        
         | TeMPOraL wrote:
         | I wonder how many companies already have SBOM internally for
         | legal reasons? I know I recently participated in building a
         | partial one, to help the company ensure we comply with exports
         | regulations of multiple countries.
         | 
         | After a casual inspection, we thought we had it all covered,
         | but I felt a bit uneasy, so I dug deeper. Only after I actually
         | _read_ the build scripts of the transitive dependencies, one by
         | one, cover to cover, did I discover we were actually pulling in
         | some extra libraries and features we weren't aware of.
         | 
         | I've spent several days manually digging through build scripts
         | of our dependencies, and manually[0] inspecting all the dynamic
         | libraries we ship, to provide a complete list of artifacts that
         | include components subject to legal requirements of interest.
         | And the _only_ reason I could complete this work to my
         | satisfaction is that there was a select set of things we were
         | looking for. Even after this, I don't know what _all_ the stuff
         | our project depends on does - I only know the stuff the legal
         | team cared about is accounted for.
         | 
         | What this experience made me wish for is better tooling for
         | figuring out what exactly goes into a software product. I'd
         | love to have a tool I could attach to our build system, that
         | would be able to track every single library and library feature
         | that's actually being used. It's a tough job, given how many
         | ways there are for some seemingly innocent piece of code to
         | pull in some other innocent piece of code. Such a tool would
         | probably have to be launched on a freshly configured VM and
         | intercept all network traffic, just to be sure.
         | 
         | --
         | 
         | [0] - Well, I quickly scripted that part away. Thank God for
         | people who provide CLI interfaces for GUI tools they write. And
         | yes, inspecting the build output was very useful too - that's
         | how we learned a binary-only commercial dependency we ship is
         | also subject to legal requirements. This wasn't at all visible
         | in the build system - the only way to know was to read the
         | vendor's documentation thoroughly, or audit the symbols in the
         | export tables.
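         | 
         | The export-table audit in [0] can be sketched roughly like
         | this (the watchlist patterns are made-up examples of
         | legally-relevant exports, not the real list):

```python
# Hypothetical sketch of the kind of audit described above: given the
# export table of a shared library, flag symbols matching a watchlist
# of legally-relevant components. The crypto prefixes below are
# illustrative assumptions only.

import re

WATCHLIST = [r"^EVP_", r"^AES_", r"^SSL_"]  # assumed export-control markers

def flag_exports(symbols, patterns=WATCHLIST):
    """Return the exported symbols that match any watchlist pattern."""
    compiled = [re.compile(p) for p in patterns]
    return sorted(s for s in symbols if any(c.search(s) for c in compiled))

# In practice the symbol list would come from something like:
#   nm -D --defined-only libvendor.so | awk '{print $3}'
exports = ["malloc", "AES_encrypt", "vendor_init", "SSL_connect"]
print(flag_exports(exports))  # ['AES_encrypt', 'SSL_connect']
```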
        
           | jacques_chester wrote:
           | I had a similar experience several years ago.
           | 
           | It got worse when I began to consider dependencies in the
           | supply chain itself. What version of our CI system are we
           | using? What OS base image? What version are our worker VMs
           | on? What packages are installed on them? And on and on and
           | on. When I began writing these sources of upstream
           | variability down I began to find dozens of them, for what
           | was, in dependency terms, a fairly unremarkable application.
        
         | Ericson2314 wrote:
         | This government-adjacent people rediscovery Nix / Guix. So yes
         | the current phrasing is bit vague in that they are just
         | grasping at the concept via draft requirements. But you can't
         | fault their intuition, as those tools do exact and are
         | absolutely revolutionary.
         | 
         | The one thing I wish they mentioned is
         | https://docs.softwareheritage.org/devel/swh-
         | model/persistent..., which are the right idea and actually used
         | in practice.
        
         | perlgeek wrote:
         | At my employer, we have a company-wide database of which
         | package is installed in which version on each machine (several
         | tens of thousands of them).
         | 
         | This allows the compliance department to follow known security
         | issues, and they can then open tickets to the affected
         | operating teams stating on which machines the software needs to
         | be upgraded (or mitigations implemented), and they set
         | deadlines based on vulnerability ratings. If the deadlines
         | aren't met, there's a hierarchical escalation.
         | 
         | In the case of the Equifax breach, such a mechanism might have
         | helped. If the developers knew they had to update, but didn't,
         | maybe the ticket from compliance would have given them the
         | right nudge to actually do it.
        
           | [deleted]
        
         | netflixandkill wrote:
         | We're already living with dedicated software companies having
         | serious issues with their internal lifecycles and secure build
         | processes. The concept of a SBOM isn't bad but any nontrivial
         | end product is going to be pulling in orders of magnitude more
         | component software than even large nested BOMs do, and no one
         | is willing to pay to maintain what they have internally, much
         | less read and act on that.
         | 
         | In principle, sure, but in immediate practice it would be like
         | California forcing the labeling of basically everything as
         | carcinogenic -- a step sort of in the right direction but
         | mostly useless in practice.
         | 
         | The one thing that absolutely needs to be considered is not
         | constructing it in a way that encourages private and
         | unmaintained forks or requires business contractual liability.
         | Most software only works as well as it does because there is
         | so much really good open source to draw on.
        
         | mikepurvis wrote:
         | Wouldn't having to advertise your out of date dependencies help
         | to shame companies into upgrading on a reasonable schedule? So
         | that upgrades are actually a priority and not just a thing that
         | happens when literally everything else is already done?
        
           | zvr wrote:
           | It's not about "shaming", as these SBOMs might not be
           | publicly available. But serious customers might have
           | something to say when they realize that they are getting
           | obsolete versions of components full of security issues.
        
           | sokoloff wrote:
           | If that became a problem, companies intending to skirt the
           | disclosure would fork and "maintain" private branches of
           | dependencies such that it couldn't be determined if they were
           | out of date.
        
             | spacemanmatt wrote:
             | True story: They already do. I know one company that forked
             | Ruby, and would likely claim every library they run under
             | it is thereby forked for the sake of reporting.
        
               | TeMPOraL wrote:
               | It's a natural next step after pinning versions and
               | keeping all dependencies cached in-house. If the rolling
               | disaster that is NPM has taught us anything, it's that
               | it's critical to have control over the update process of
               | the code that goes into your product. Not to mention, CI
               | runs faster if you don't have to redownload everything
               | from GitHub on each build :).
               | 
               | (Though then developers don't get a day off when GitHub
               | goes down, as it does every couple months.)
        
               | jacques_chester wrote:
               | A good asset graph / asset transparency log / whathaveyou
               | would help a lot. If an SBOM is asserted and timestamped,
               | you can compare assets you download to the earlier
               | assertion to see if they are the same.
               | 
               | When I worked on buildpacks we added something like this.
               | Each buildpack carried a simple BOM of the binaries it
               | referred to and digests for them. When it fetched
               | dependencies it compared the digests and bombed out if
               | there were any mismatches.
               | 
               | This led us to capture far more of our dependency graph
               | than before. It is surprising how many folks will replace
               | binaries in-place without changing version numbers. We
               | also managed to catch bugs in our own CI/CD process.
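               | 
               | The digest check described above can be sketched like
               | this (not the actual buildpacks code; the names and the
               | sha256-pinning shape are assumptions):

```python
# Hedged sketch of a BOM digest check: each BOM entry pins a sha256,
# and a fetched artifact is rejected if its digest doesn't match the
# earlier assertion. Names and payloads are illustrative.

import hashlib

def verify_artifact(name, data, bom):
    """Raise if `data` doesn't match the digest the BOM asserts for `name`."""
    actual = hashlib.sha256(data).hexdigest()
    expected = bom[name]
    if actual != expected:
        raise ValueError(f"{name}: digest mismatch "
                         f"(expected {expected[:12]}..., got {actual[:12]}...)")
    return True

payload = b"ruby-2.7.2.tar.gz contents"
bom = {"ruby": hashlib.sha256(payload).hexdigest()}

print(verify_artifact("ruby", payload, bom))  # True
# A binary replaced in-place without a version bump now fails loudly:
#   verify_artifact("ruby", b"tampered", bom)  -> ValueError
```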
        
               | mattcwilson wrote:
               | Ok, but there would still be some point of contact where
               | an SBOM would show "company X fork of Ruby, company X
               | fork of package Z" etc, right?
               | 
               | And then the choice gets back to how much to trust
               | company X and package Z, weighed against alternative
               | solutions.
        
               | mannykannot wrote:
               | Unless penalties for doing so are legislated, they need
               | not claim that the fork they are actually using is a fork
               | of anything; they could treat it as if part of their
               | proprietary code (though they might be in violation of
               | license agreements if they do this.)
        
               | sokoloff wrote:
               | It seems like it could equally be "internal package UUID1
               | version X, internal package UUID2 version Y" if malicious
               | compliance and/or industrious laziness was the goal.
        
       | Roark66 wrote:
       | If the author was serious about promoting the idea the article
       | would be published in an open manner (not behind a pay wall).
       | 
       | The concept is good, but good luck enforcing it with closed
       | source software companies.
       | 
       | Anyone that is really interested can already find that info for
       | OS software but where it would really be useful is with closed
       | source software. Where I personally would really love to see it
       | implemented is with embedded devices.
       | 
       | I've been recently hacking a not so old IP cam in my spare time.
       | The hardware is great... It has a 600 MHz 32-bit CPU with 64 MB
       | RAM, hardware H.264 encoding (1080p 30fps, close to real-time),
       | bi-directional audio, WiFi, USB host, PTZ, free GPIO, and
       | Ethernet, all for around $20 (indoor version), but the software
       | is abysmal. It runs Linux kernel v3 (almost a decade old). Upon
       | startup it immediately starts streaming video/audio to a server
       | in China, while the mobile app requires you to "register" for an
       | account with a phone number. The only way it can receive the
       | video is from the Chinese server, and it displays ads on 20% of
       | its screen. Ridiculous.
       | Thankfully it is pretty easy to hack, but what about all non
       | technical people who buy it?
        
       | euph0ria wrote:
       | Are people publishing on Medium in order to earn money now, or
       | is the large migration to a free platform just yet to occur?
        
       | pdimitar wrote:
       | No SBOM will help you if the people know they have to act but
       | they don't -- out of malice, bureaucratic slowdown, policy
       | restriction and what-have-you.
       | 
       | If you don't have hardware and software that can't be tampered
       | with and that automatically apply / enforce the SBOM, then it is
       | essentially worthless.
        
         | jacques_chester wrote:
         | I disagree. SBOMs will help to create useful pressure on
         | upstream providers to show their work. In particular, as
         | another commenter pointed out, failing to provide an SBOM and
         | providing a deliberately inaccurate or incomplete SBOM are
         | quite different. The former is presumably mere incompetence;
         | the latter opens the door to consequences for fraud.
        
       | varispeed wrote:
       | I find it obscene that there are some important software tools
       | that are often developed by one guy in his rented basement while
       | big corporations make billions on the back of it without sharing
       | a penny.
       | 
       | We need a royalty system for open source software, so that these
       | companies have to start paying a fair share to the developers
       | they exploit. This will also ensure overall safety, as
       | developers will have funds to do audits or hire staff to fix
       | security issues, and there will be an incentive for people to
       | contribute, as successful PRs would be eligible for royalties.
       | 
       | Such a system could be embedded in GitHub and other hosting
       | platforms. Individuals and companies with revenue below a
       | certain threshold would be exempt from paying. There should be
       | no opt-out - in many countries a worker cannot legally work for
       | free, even if they agree to it; they have to be paid at least a
       | minimum wage, and in the same vein, companies using open source
       | software without payment are circumventing this rule.
       | 
       | This will also level the playing field for developers from poor
       | backgrounds - some developers cannot afford to work for free, so
       | they cannot contribute to open source even if they wanted to,
       | because they have bills to pay. This way open source wouldn't be
       | reserved only for privileged developers who can afford to commit
       | their spare time.
        
         | gonzo41 wrote:
         | As usual, https://xkcd.com/2347/
         | 
         | The contributors to OSS are in a tricky spot. Look at what
         | happened with AWS forking Elasticsearch; sure, there were
         | reasons, but it seems like there's a gap in the licenses at
         | the moment, one that doesn't account for the scale that things
         | like SSL play in modern life. Whatever legal terms you'd use,
         | you'd want to aim not to scare off small companies, in the
         | hope of anchoring an income stream when they scale and the OSS
         | clause kicks in.
        
         | numpad0 wrote:
         | The fact is, technical sophistication and commercial value do
         | not correlate well. Currency is a medium for solving meat-world
         | conflicts, and there is none for well-written pieces of
         | software.
        
         | rjknight wrote:
         | A different approach might be insurance markets. If I want to
         | use component X, but the use of that component creates a risk
         | (however small) that my business will be hacked, then my
         | business could buy an insurance policy to cover that risk. If
         | the software is maintained by an unpaid guy in a basement, the
         | insurance is likely to be relatively expensive. The insurance
         | company then has a strong incentive to pay basement guy to do
         | maintenance of the software so as to reduce the risk of an
         | insurance claim.
         | 
         | This is grossly over-simplified, but if we accept the notion
         | that businesses can have real liabilities if they get hacked,
         | then they're going to want insurance and the insurance
         | companies are going to want to drive rapid improvements in
         | quality in order to reduce the number of claims. This effect
         | has been a significant factor in improvements in safety in a
         | wide range of other industries.
        
           | yowlingcat wrote:
           | I like this approach more because it incentivizes fulfillment
           | of the risk interface without precluding flexibility in how
           | that might be accomplished. I agree with your conclusion that
           | this approach has been a significant factor in safety
           | improvements in other industries -- align the incentives so
           | the risk isn't externalized, and I think you'll see a lot of
           | the misbehavior disappear because it's no longer profitable.
        
             | varispeed wrote:
             | This is not a bad idea, but I am thinking whether this will
             | only shift who is making money off the software from one
             | big corporation to another. Maybe a combination of both -
             | royalties to ensure developers are paid and insurance to
             | ensure big corporations become conscious of their
             | responsibilities.
        
               | rjknight wrote:
               | The ultimate point of the insurance market is to provide
               | an incentive for quality. Use of components which the
               | insurer regards as low-risk will be cheaper to insure.
               | 
               | What would the insurer regard as low-risk? Typically,
               | they would use two things:
               | 
               | 1) Past experience - have they seen this piece of
               | software regularly exploited?
               | 
               | 2) Formal assessment in an underwriters' lab - do
               | relevant experts consider the software to be well-
               | constructed? Here open-source software has a real
               | advantage, because the underwriters can review the source
               | code directly, or pay others to do so. They have access
               | to any automated tests and code coverage assessments, and
               | so might assign a lower risk score to a project with more
               | tests. They might even assign lower risk to projects
               | written in languages that are known to produce safer
               | code, so with all other things being equal a Rust
               | codebase would be cheaper to insure than a C++ one.
               | 
               | This is entirely compatible with a system of bounties or
               | rewards for anyone who moves the project in the direction
               | of safety and high-quality maintenance. By concentrating
               | the risk associated with poor software quality on the
               | insurer, we get around the problem where many different
               | companies use the software but are unwilling to pay for
               | its maintenance. The insurer has a greater incentive to
               | care than they do, and so would be more willing to pay
               | developers.
               | 
               | I admit that this is a back-of-a-napkin sketch, but the
               | incentives do seem to line up correctly.
        
         | JohnJamesRambo wrote:
         | > developed by one guy in his rented basement and big
         | corporations make billions on the back of it without sharing a
         | penny. We need a royalty system
         | 
         | I mean isn't this what a software patent is for? And you guys
         | hate those. It's how you are properly compensated for your
         | inventions.
        
           | pessimizer wrote:
           | It has absolutely nothing to do with software patents, which
           | are garbage, it's simple copyright. It's literally the way
           | all proprietary software is distributed, sprinkled randomly
           | with "open source spirit", with a vaguely specified
           | micropayments system bolted on.
           | 
           | Giving away your software to small business and individuals
           | has nothing to do with Open Source, Microsoft (among others)
           | does it with some of its biggest products that it later
           | charges your firstborn for after you get past a certain size.
           | If you want to do this, just do it.
        
           | the_optimist wrote:
           | No, this is what copyrights and licensing are for. Patents
           | were/are merely a silly exercise in language, exploitable
           | primarily within the legal community.
        
         | pessimizer wrote:
         | That's not Open Source software, that's proprietary software
         | whose source you let people read.
         | 
         | If you want to force people to contribute back if they
         | distribute, make your software Free, if you want to force
         | people to pay you, make your software proprietary.
        
           | Shared404 wrote:
           | Free as in Speech, not Free as in Beer.
           | 
           | It is perfectly possible to charge for open source software.
        
         | O5vYtytb wrote:
         | This would be the end of open source software as we know it.
        
           | varispeed wrote:
           | In a way yes, but currently it is not sustainable. I think we
           | can do much better.
        
         | vls-xy wrote:
         | I agree, but only to an extent. From my perspective OSS
         | contributors are not "privileged developers" who can afford to
         | commit their spare time. Yes, software development is a
         | privileged career, but the privilege is not really pay, but
         | education. Anybody in the world with access to the internet and
         | a decent education can become a software developer. It is a
         | highly competitive global market. My suggestion to any
         | developer who is working something crazy like a 996 schedule is
         | to look for opportunities elsewhere.
        
           | varispeed wrote:
           | My point is that there are many developers who would love to
           | contribute to open source projects, but they don't have
           | wealthy parents who pay the bills, didn't inherit a flat,
           | have families to feed and so on, so for them the only viable
           | option is to seek employment or work on their own business.
           | People who can commit their time to work on open source are
           | privileged, and by giving their work away for free they
           | create a situation where there is less work for people who
           | cannot afford that. For example, a company, instead of
           | hiring developers and paying salaries and taxes to create a
           | tool it needs, will use an open source project for free, and
           | that means other developers are missing out. Ensuring that
           | everyone gets paid levels the playing field. This is the
           | same situation as with free internships - there are people
           | whose families have money, so they can afford to get
           | experience working for a company for free, and that puts
           | people from poor backgrounds at a disadvantage - that's why
           | in many places unpaid internships are illegal.
        
       | dathinab wrote:
       | Any supply chain attack can be done in ways where it's externally
       | impossible to (automatically) differentiate a malicious change
       | from an intentional change.
       | 
       | Which means BOMs are 100% guaranteed _not to prevent supply
       | chain attacks_.
       | 
       | At best it makes them a _small_ bit harder.
       | 
       | Software BOMs can have some small benefits, but preventing
       | supply chain attacks is not one of them. And open access to
       | source is always much more useful than just a BOM.
       | 
       | And just as a BOM without a signature isn't very trustworthy
       | (with closed source), accessible source without signing is not
       | trustworthy without reproducible builds.
       | 
       | Anyway, signing source code isn't hard, so there's no reason not
       | to do so; signing build artifacts can be harder.
        
       | dzink wrote:
       | Server logs are already full of calls to post to different
       | pages, or to PHP scripts of vanilla WordPress installations, as
       | attackers try to find vulnerable sites. Wouldn't an SBOM make
       | the bad guys' job easier? If you are a bad actor or a malicious
       | state actor who has just gotten hands on a new exploit, the SBOM
       | would give you an instant menu of available hackable sites. A
       | B2B vendor or SaaS vendor can definitely make their stack
       | available to clients upon deal negotiations, but putting it on
       | the open web is asking for trouble.
        
         | jacques_chester wrote:
         | With an SBOM, attackers gain an advantage, but it is highly
         | concentrated _in that attacker_. Defenders also gain; the gain
         | is highly dispersed amongst defenders, but each gains roughly
         | the negative value of the attacker's gain.
         | 
         | Put another way: there are far more defenders than attackers.
         | When something helps both attackers and defenders, the gains
         | of defenders outweigh the gains of attackers.
        
       | lambda_obrien wrote:
       | From experience in medical equipment, a BOM was about as useful
       | as a piece of toilet paper. Trying to keep one up to date when
       | every resistor and nut and bolt is included is a pain, and about
       | 20 percent was probably wrong. An SBOM is just more bureaucracy;
       | what you need is for companies to actually want to pay
       | developers to use the right tools for the job. If you have
       | security requirements that don't allow for using as many
       | dependencies, or that require more updates, then pay developers
       | to write something in-house or to keep things up to date with
       | more sprints dedicated to maintenance.
        
         | icegreentea2 wrote:
         | From my experience in medical equipment, I wouldn't have
         | guessed 20% wrong... maybe closer to sub 5% wrong. Though I
         | guess we weren't tracking individual components on boards...
         | but we were definitely counting every nut, bolt and screw that
         | we were using in assembly.
         | 
         | I definitely hear your second part though. Having cobbled
         | together an SBOM, it's definitely a pain. We got some value
         | out of it, since it really did give us a sense of the scale
         | and shape of our dependencies.
        
       | mikewarot wrote:
       | Wrong root cause analysis. You'd have to solve the halting
       | problem to get anywhere this way, which is _proven_ impossible,
       | thus it is the wrong path.
       | 
       | Real cause: Widespread adoption of Operating Systems that don't
       | default to capability based least privilege.
        
         | dathinab wrote:
         | My dream: an OS with a single, properly well-designed
         | capability system with reasonable defaults.
         | 
         | Not the mess Linux has, which supports capabilities for _some_
         | privileges but not so much for others, which has bad defaults
         | and which is super fragmented. I mean, for full coverage you
         | need to correctly combine root/sys capabilities with seccomp
         | with BPF with cgroups with polkit with some prctl settings with
         | Linux kernel parameters, and even then you probably still need
         | to throw in SELinux - and I still missed at least file
         | permissions when writing this...
         | 
         | Which, let's be honest, is just ridiculous.
        
       | Const-me wrote:
       | When I develop software, the source code repo contains a text
       | file with all the third-party stuff I have used, both linked and
       | copy-pasted, along with the URLs where I got the code and their
       | licenses.
       | 
       | Not precisely a BOM, and I maintain them for a different reason,
       | but it's pretty close to what's proposed. A couple of examples
       | from my open-source projects: https://github.com/Const-
       | me/vis_avs_dx/blob/master/legal.txt https://github.com/Const-
       | me/Vrmac/blob/master/Pre-existing%2...
        
       | tkinom wrote:
       | I'd like to see an OS capable of fully auditable logs of:
       | 
       |   - every app execution in the system (phone, Mac, Linux,
       |     Windows)
       |   - every .so/.dll used by each app, with their hash and
       |     creation datetime
       |   - every file/dir creation, write, and read, and by which app
       |   - every socket bind and connect request
       |   - other privileged operations
       | 
       | There should be a VirusTotal-type check on every app/.so/.dll,
       | and allow/forbid lists for exec, file/dir access, sockets, and
       | privileged ops, similar to typical firewall software - not just
       | for the network, but also for app execution and file access.
       | "Default allow" and "default forbid - with log/notification"
       | should be fully under user control. Like SELinux, but with much
       | better UI/UX (web-based, built on top of eBPF?).
        
         | 616c wrote:
         | On Linux - not completely an answer to what you want - auditd
         | does a lot of it, but I rarely see it mentioned outside
         | government and military settings, where it's used because of
         | the STIG requirements.
         | 
         | And to your point: the UI sucks, as it's just text-based
         | config in its own format, and in my experience no one likes it
         | or reads the output logs, even the SOC people who should know
         | it.
        
       | ris wrote:
       | The solution to this problem is not bureaucracy. The solution is
       | in the reproducible builds project, Guix and Nix.
        
         | Ericson2314 wrote:
         | The technical side of the solution is those, yes, but it's
         | equally important that procuring administrators start requiring
         | that level of auditability. That's the social solution.
         | 
         | And just making the good technology is no guarantee that
         | society will raise its standards accordingly. Look no further
         | than the sorry state of programming languages historically if
         | you want proof of that...
        
       | dlor wrote:
       | My big problem with all the SBOM efforts is that any kind of
       | compliance/accuracy will be best effort and most likely wrong,
       | leading to more problems and blame.
       | 
       | This is not as simple as writing down your dependencies. Most
       | people don't even know what their full set of transitive
       | dependencies is, or how to even go about finding it.
       | 
       | How do you know the SBOM you get is even accurate? You can't just
       | crack open a binary and look at what's inside. If you could, we
       | wouldn't need these giant complicated file formats.
        
         | fulafel wrote:
         | You can, in fact, crack open the binaries and look at what's
         | inside. The field of tooling for it is called SCA (software
         | composition analysis).
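
The SCA idea fulafel mentions can be illustrated with a toy sketch: scanning a binary blob's printable strings for embedded "name-X.Y.Z" version markers. This is only a naive illustration of the core heuristic, not how any real SCA product works; real tools also use symbol tables, build IDs, and function-level fingerprints.

```python
import re

# Toy software-composition-analysis sketch: look for version-string
# markers like "zlib-1.2.11" or "OpenSSL 1.1.1" inside a binary blob.
VERSION_RE = re.compile(
    rb"([A-Za-z][A-Za-z0-9_+-]{2,30})[-/ ]v?(\d+\.\d+(?:\.\d+)?)"
)

def guess_components(blob: bytes):
    """Return a sorted list of (name, version) pairs found in the blob."""
    hits = set()
    for m in VERSION_RE.finditer(blob):
        hits.add((m.group(1).decode(), m.group(2).decode()))
    return sorted(hits)
```

A regex over strings will miss statically inlined or stripped code entirely, which is why the accuracy of SCA output varies so much between build systems.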
        
           | ozim wrote:
           | Technically you are right.
           | 
           | Question is who is going to pay for that?
           | 
           | In my job we dealt with enterprise customers that required
           | list of all libraries we use and what license those have. But
           | they had buckets of money to spend on compliance.
        
             | fulafel wrote:
             | Are you asking if customers are willing to pay more for
             | products that feature an SBOM? I think this is more a
             | regulation idea.
        
           | dlor wrote:
           | Sort of. The quality of the data this tooling generates
           | varies GREATLY among languages, build systems and
           | environments. For packaged software like Solarwinds, sure you
           | can try to run an SCA tool. But is anyone claiming an SBOM or
           | SCA tool could have prevented that attack?
           | 
           | The bigger issue is services and hosted software. You can't
           | crack open an API or website that stores your data to see
           | what database they're using. You could ask that they publish
           | an SBOM, but who knows if it's accurate.
        
             | fulafel wrote:
             | I feel you're moving the goalposts a bit. Perfect is the
             | enemy of good, etc. Also surely the tooling would get a lot
             | of investment and improvement poured into it if the
             | proposal went through.
             | 
             | Anyway, if this kind of thing really took off, I could well
             | imagine there being regulation for SaaS products having to
             | do audits involving this, for example.
        
               | dlor wrote:
               | I sort of see this as a situation where an imperfect SBOM
               | is worse than nothing. It would do nothing but add false
               | confidence. I still haven't seen an example of a single
               | supply-chain attack that an SBOM would have prevented.
        
               | jacques_chester wrote:
               | We already have false confidence problems. Security
               | scanning is a billion-dollar industry based on looking up
               | digests in a table. But because the table is maintained
               | by third parties, their incentives are to always be over-
               | cautious. If they give false positives, the burden falls
               | on their customers or the upstream dependency. But false
               | negatives fall on the vendor. So they create noise.
               | 
                | SBOMs _from the upstream_ push the cost back to the
                | upstream and (sorry, investors and founders) obviate the
                | need for those third-party scanning vendors. The
               | incentives change and so too, I expect, would the
               | behaviour.
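
The "looking up digests in a table" model described in this comment is mechanically simple; a minimal sketch follows. The table contents and advisory ID are invented for illustration, and real scanners consume feeds such as the NVD and vendor advisories.

```python
import hashlib

def scan(artifacts, vulnerable_digests):
    """The core of digest-based security scanning: hash each artifact
    and look the hash up in a table of known-bad digests. The hard
    problems (and the over-caution incentives) live in how the table
    gets populated, not in this loop."""
    findings = []
    for name, data in artifacts.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest in vulnerable_digests:
            findings.append((name, vulnerable_digests[digest]))
    return findings
```

Because the lookup itself is trivial, the economic question of who maintains the table (a third party vs. the upstream) dominates the quality of the results.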
        
               | fulafel wrote:
               | Well, we were discussing tooling that could be used to
               | check if the declared SBOM is correct, not producing the
               | original SBOM.
               | 
                | This kind of checking with today's practices is
                | necessarily going to be imperfect, just like the BOMs in
                | the physical manufacturing realm where the idea
                | originates. But if today's 99% solution turns out to
                | be sufficiently useful, we could start making things in
                | a way that is 100% verifiable (stuff like reproducible
                | builds, etc.)
               | 
                | In security we long ago let go of the idea of risk and
                | trust as binary issues; the same thing applies here.
                | Just about every other tool we have to improve security
                | has bigger holes in it than this one.
        
         | jmull wrote:
         | > Most people don't even know what their full set of transitive
         | dependencies is, or how to even go about finding it.
         | 
         | I think that's the point.
         | 
         | Also: you really do know your direct dependencies, since you
         | need them to build your software. If the efforts to promote or
         | require SBOMs are successful, your dependencies will all have
         | SBOMs and your tooling will be updated to help you generate
         | yours.
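
For one ecosystem, the generate-from-your-dependencies idea can be sketched with the standard library. This walk over `importlib.metadata` only sees what pip recorded in the current environment; vendored code, C libraries, and anything outside the environment are invisible to it, which is precisely the gap dlor points at below.

```python
import re
from importlib import metadata

def transitive_deps(package, seen=None):
    """Approximate the transitive dependency closure of an installed
    Python package by walking pip's recorded metadata. Packages not
    installed in this environment are silently skipped."""
    seen = set() if seen is None else seen
    try:
        requires = metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return seen
    for req in requires:
        # Keep the bare distribution name: strip version specifiers,
        # extras ("pkg[extra]") and environment markers ("; python_version ...").
        name = re.split(r"[\s;<>=!~\[\(]", req, maxsplit=1)[0]
        if name and name.lower() not in seen:
            seen.add(name.lower())
            transitive_deps(name, seen)
    return seen
```

Even this best case shows the limits: the result depends on which environment you run it in, and conditional dependencies (extras, platform markers) are only approximated.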
        
           | dlor wrote:
           | I don't think that's true in practice. Try it. I did here:
           | https://dlorenc.medium.com/whos-at-the-helm-1101c37bf0f1
           | 
           | It's basically impossible with today's tooling and practices
           | to come up with a list of dependencies for a moderately
           | complex application.
        
             | Ericson2314 wrote:
             | Not true! We do this with Nix and Guix all the time.
             | 
             | Any regulation that tries to allow for Docker or
             | traditional distros will, yes, fail. But if it raises the
             | bar so only things with _sandboxed build steps_ will
             | qualify, it's perfectly possible.
             | 
             | This is why it's really important to steer this
             | conversation so the upset procurers stick with their gut
             | instincts rather than making some shoddy thing influenced
             | by the whinging of existing contractors.
        
       | kodah wrote:
       | We already produce a "SBOM", which is just an export of our
       | dependencies. Our dependencies are also scanned for open CVEs,
       | which is non-optimal because it's retroactive. The strongest
       | forms of security operations we implement proactively are fuzzing
       | and linting (in combination with strongly typed languages and
       | language variants).
       | 
       | The article references the SolarWinds attack but then doesn't go
       | on to explain how it occurred nor how their SBOM would've
       | defeated it. Instead it quotes Microsoft in saying that it was
       | highly sophisticated. Just a reminder that the origins of the
       | SolarWinds Orion hack are still up in the air [0], which makes
       | the definitive tone of this article all the more confusing. It is
       | speculated that hackers compromised TeamCity and that TeamCity
       | injected code during compile time into Orion. This wouldn't be
       | caught by any kind of dependency inspection and doubly so if the
       | attackers were smart enough to use all standard libraries.
       | 
       | Like all hacks, this one had signatures too [1]. Some that stand
       | out to me are the network calls:
       | 
       |   avsvmcloud[.]com
       |   deftsecurity[.]com
       |   freescanonline[.]com
       |   thedoccloud[.]com
       |   websitetheme[.]com
       |   highdatabase[.]com
       |   incomeupdate[.]com
       |   databasegalore[.]com
       |   panhardware[.]com
       |   zupertech[.]com
       | 
       |   13.59.205[.]66
       |   54.193.127[.]66
       |   54.215.192[.]52
       |   34.203.203[.]23
       |   139.99.115[.]204
       |   5.252.177[.]25
       |   5.252.177[.]21
       |   204.188.205[.]176
       |   51.89.125[.]18
       |   167.114.213[.]199
       | 
       | It's impossible to hold definitive allow lists for IP addresses
       | or domains, but knowing characteristics about these calls might
       | at least make _finding_ hacks faster, though this logic is also
       | easily defeated.
       | 
       | In what I would call a very sterile software environment you
       | might install fully-configured SE Linux on a system, limit your
       | outbound network call destinations to approved locations, run
       | linting with static compilation, fuzzing on your functions and
       | binary inputs. I can still think of a myriad of ways of
       | compromising a distributed system like that. That's not to say
       | "do nothing"; it's more to say that this is a very complex
       | problem, and something like an "SBOM" isn't new and isn't going
       | to solve this problem if replicated.
       | 
       | [0] https://www.nytimes.com/2021/01/06/us/politics/russia-
       | cyber-...
       | 
       | [1] https://blog.malwarebytes.com/threat-
       | analysis/2020/12/advanc...
        
       | Jupe wrote:
       | Ramblings on these topics...
       | 
       | Exposing an SBOM on every piece of delivered software will just
       | make a hacker's job easier and quicker... Since by design they
       | are machine-readable, SBOMs will make querying for specific
       | vulnerabilities trivial.
       | 
       | This is not a top-down problem! Any upper layer can be
       | compromised by a lower layer (OS, build tool, library, reporting
       | tool, etc.). This problem can only be solved bottom-up: from a
       | verified OS, to verified (bootstrap) build tools of that OS, to
       | every library installed on that OS, etc. We currently have
       | decades of software resting atop of unverified libraries resting
       | atop of unverified operating systems, all built with unverified
       | tooling.
       | 
       | We can't even build verification tools that are, themselves,
       | verified! And if we could, can we even say they verify every
       | potential vulnerability? (mitm, boundaries, race conditions, cpu
       | cache, etc.)
       | 
       | I know there is research at some universities into formally
       | verified OSes, but it's a long way off IMO.
       | 
       | This is _the_ problem of our time. But, unfortunately, the
       | industry seems consumed with velocity and cleverness over
       | stability and security.
        
       | playcache wrote:
       | The rekor project under sigstore is interesting in this regard:
       | https://github.com/sigstore/rekor
       | 
       | It's listed as a signature transparency log, but they support
       | some sort of custom manifest system, so you can set your own
       | schema in your preferred format (XML, JSON, YAML) - the only
       | thing is that they require the manifest/material file to be
       | signed (I guess as it then brings a level of non-repudiation). I
       | am hoping someone works on an SBOM type.
       | 
       | I heard some of the in-toto folks are working on the project as
       | well. This is a good step towards a SBOM recorded supply chain.
        
         | dlor wrote:
         | Maintainer here! That's exactly the idea. We're working with
         | in-toto and others to get metadata that we can actually verify,
         | directly from build systems.
         | 
         | Rekor is a place to put and find that metadata that's globally
         | visible and can't be tampered with.
         | 
         | We're hoping to add support for the ITE-6 in-toto link format
         | soon, which I see as kind of like an SBOM that can be produced
         | directly from your build system.
        
       | hyko wrote:
       | A software BOM will not address these issues.
       | 
       | Stopping the delusion that any one nation can come out ahead in
       | this game by hoarding vulnerabilities, and working towards
       | establishing and enforcing strict rules of cyber warfare, are
       | the first steps.
        
       | mrweasel wrote:
       | Doesn't something like FDA approval of medical systems already
       | require this? I believe you're required to maintain a list, and
       | risk analysis of third party software you incorporate into
       | medical products:
       | https://en.wikipedia.org/wiki/Software_of_unknown_pedigree
        
       | jacques_chester wrote:
       | We need SBOMs, but these are not enough. We need supply chain
       | attestations, but these are not enough. What we need is the
       | combination of _asset_ data, _process_ data and to acknowledge
       | that our knowledge of both is always incomplete and subject to
       | change. I call this need a  "universal asset graph" and I've been
       | nagging folks for years to get us to it.
       | 
       | The sigstore project is the biggest foundation stone of what I'd
       | wish for, at least in terms of creating a robust shared log of
       | observations (a leader of that effort, dlor, is in this
       | discussion). But we still have a very, very long way to go as an
       | industry.
        
         | Ericson2314 wrote:
         | Just look at Nix.
         | 
         | Here's the thing: having the sellers of unfree software compile
         | the code for you is a terrible skeuomorphism from the way
         | traditional products are made. The final integrator should be
         | the one building the code _even for proprietary and unfree
         | software_, whose secretiveness should be enforced with
         | contracts, not with obfuscation and baking in specific
         | dependencies.
         | 
         | The fact that the final compilation graph and the IP
         | procurement graph have some similarities should be _just_ a
         | coincidence.
        
           | jacques_chester wrote:
           | I didn't follow your argument. Could you elaborate?
        
             | Ericson2314 wrote:
             | Right now, if you buy proprietary software, you get a
             | _binary_ of some sort. That's neither composable nor
             | auditable.
             | 
             | You should get the source code and a reproducible build,
             | that you can modify and integrate with other things. Kinda
             | like licensing closed source game engine parts, or getting
             | hardware design libraries (IP as they say) for making your
             | own system on a chip.
        
               | jacques_chester wrote:
               | I think I follow now.
               | 
                | Source availability is preferable, but it will take a
                | while to become normal. It will be even longer before
                | bit-for-bit reproducibility is a commercial norm.
               | 
                | SBOMs still give us value in the meantime. If I buy
                | product X, which asserts that it uses dependency Y, then
                | when a vulnerability is asserted for Y, I can pester the
                | vendor to show that they have updated.
               | 
               | At this point if they claim to upgrade but haven't, that
               | becomes fraud. The economic incentives vs our current
               | anything-goes world are differently weighted.
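
The "pester the vendor" workflow described above reduces to a version comparison between SBOM entries and an advisory. A minimal illustration; the component names, advisory shape, and `fixed_in` semantics are all invented for the example, and real version schemes (epochs, pre-releases, "1.1.1k") are messier than dotted integers.

```python
def version_tuple(v):
    """Parse a simple dotted version string into a comparable tuple.
    Real-world version schemes need a proper parsing library."""
    return tuple(int(x) for x in v.split("."))

def affected_components(sbom, advisory):
    """Return SBOM entries naming the advisory's component at a
    version older than the fixed one -- i.e. grounds to pester."""
    return [
        c for c in sbom
        if c["name"] == advisory["name"]
        and version_tuple(c["version"]) < version_tuple(advisory["fixed_in"])
    ]
```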
        
               | Ericson2314 wrote:
               | I kind of buy that, but I think it's important that SBOMs
               | be designed such that it is clear what the gold standard
                | is. The worst outcome is a standard that enforces a
                | bunch of annoying metadata even in the case where all
                | source and build steps are public.
               | 
               | The other problem is I don't think people without
               | reproducible builds can deliver a correct SBOM. There
               | must be some severe penalties for missing dependencies or
               | something to steer people towards reproducible builds
               | whether or not they are sharing source and packaging.
        
               | jacques_chester wrote:
               | I think it helps to think of SBOMs as extracts or
               | projections from an underlying knowledgebase which is
               | updateable.
               | 
               | There's certainly no sense in saying that any SBOM is
               | truly final. Merely "this is our best knowledge at time
               | X".
        
       ___________________________________________________________________
       (page generated 2021-03-21 23:01 UTC)