[HN Gopher] Polyfill supply chain attack hits 100K+ sites
       ___________________________________________________________________
        
       Polyfill supply chain attack hits 100K+ sites
        
       Author : gnabgib
       Score  : 483 points
       Date   : 2024-06-25 18:27 UTC (4 hours ago)
        
 (HTM) web link (sansec.io)
 (TXT) w3m dump (sansec.io)
        
       | doctorpangloss wrote:
       | But Microsoft Azure for GitHub ScanningPoint 2024 is SOC2
       | compliant. How could this happen?
        
         | ssahoo wrote:
          | Probably those auditors are following a playbook from 2008:
          | VPN, AD, umm... pass.
        
       | jstanley wrote:
        | Always host your dependencies yourself. It's easy to do, and
        | even in the absence of a supply chain attack it helps to protect
        | your users' privacy.
        
         | can16358p wrote:
          | But if the dependency from a CDN is already cached, it will
          | skip an extra request and the site will load faster.
         | 
         | I agree with the points though.
        
           | pityJuke wrote:
           | That's not been true since Site Isolation IIRC
           | 
            | e: Not sure it's Site Isolation specifically, but it's
            | definitely not true anymore:
            | https://news.ycombinator.com/item?id=24745748
            |
            | e2: listen to the commenter below, it's Cache Partitioning:
           | https://developer.chrome.com/blog/http-cache-partitioning
        
             | andrewmcwatters wrote:
             | If that's true, this is a wild microcosm example of how the
             | web breaks in ways we don't expect.
        
             | simonw wrote:
             | Right - and site isolation is about five years old at this
             | point. The idea that CDNs can share caches across different
             | sites is quite out of date.
        
           | minitech wrote:
           | Because of modern cache partitioning, HTTP/2+ multiplexing,
           | and sites themselves being served off CDNs, external CDNs are
           | now also worse for performance.
           | 
           | If you use them, though, use subresource integrity.
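            |
            | For example, a minimal sketch (the hash is a placeholder;
            | you'd compute the real digest of the exact file you expect):
            |
            |     <script
            |       src="https://cdn.example.com/lib/1.2.3/lib.min.js"
            |       integrity="sha384-BASE64_DIGEST_OF_EXPECTED_FILE"
            |       crossorigin="anonymous"></script>
            |
            | With that attribute, the browser refuses to execute the file
            | if its bytes don't match the hash.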
        
             | fkyoureadthedoc wrote:
             | > and sites themselves being served off CDNs
             | 
          | Funnily enough, I can't set up a CDN on Azure at work because
          | it's not approved, but I could link whatever random-ass CDN
          | I want for external dependencies if I were so inclined.
        
           | toast0 wrote:
           | In addition to Cache Partitioning, it was never really likely
           | that a user had visited another site that used the same
            | specific versions from the same CDN as your site uses.
           | 
           | Making sure all of your pages were synchronized with the same
           | versions and bundling into appropriate bits for sharing makes
           | sense, and then you may as well serve it from your own
           | domain. I think serving from your www server is fine now, but
           | back in the day there were benefits to having a different
           | hostname for static resources and maybe it still applies (I'm
           | not as deep into web stuff anymore, thank goodness).
        
       | api wrote:
       | Software supply chains feel like one of the Internet's last
       | remaining high-trust spaces, and I don't think that's going to
       | last long. A tidal wave of this is coming. I'm kind of surprised
       | it's taken this long given how unbelievably soft this underbelly
       | is.
        
         | acedTrex wrote:
          | We're going back to rolling it yourself, or relying on a few
          | high-quality stdlib providers that you likely have to pay for.
        
       | stusmall wrote:
       | I'm surprised there is no mention of subresource integrity in the
        | article. It's a low-effort, high-quality mitigation for almost
        | any JS package hosted on a CDN.
       | 
       | EDIT: Oh, it's because they are selling something. I don't know
       | anything about their offerings, but SRI is made for this and is
       | extremely effective.
        
         | davidfischer wrote:
         | SRI generally won't work here because the served polyfill JS
         | (and therefore the SRI hash) depends on the user agent/headers
         | sent by the user's browser. If the browser says it's ancient,
          | the resulting polyfill will fill in a bunch of missing JS
          | features and be a lot of JS. If the browser identifies as
          | modern, the service should return almost nothing at all.
         | 
         | Edit: In summary, SRI won't work with a dynamic polyfill which
         | is part of the point of polyfill.io. You could serve a static
         | polyfill but that defeats some of the advantages of this
         | service. With that said, this whole thread is about what can
         | happen with untrusted third parties so...
        
           | koolba wrote:
           | It absolutely would work if the browser validates the SRI
           | hash. The whole point is to know in advance what you expect
           | to receive from the remote site and verify the actual bytes
           | against the known hash.
           | 
           | It wouldn't work for some ancient browser that doesn't do SRI
           | checks. But it's no worse for that user than without it.
        
             | stusmall wrote:
             | Their point is that the result changes depending on the
             | request. It isn't a concern about the SRI hash not getting
              | checked, it is that you can't realistically know what
              | you expect in advance.
        
             | reubenmorais wrote:
             | The CDN in this case is performing an additional function
             | which is incompatible with SRI: it is dynamically rendering
             | a custom JS script based on the requesting User Agent, so
             | the website authors aren't able to compute and store a hash
             | ahead of time.
        
             | davidfischer wrote:
              | I edited to make my comment clearer, but polyfill.io
              | sends dynamic polyfills based on what features the
              | identified browser needs. Since the content changes, the
              | SRI hash would need to change too, so that part won't work.
        
               | koolba wrote:
               | Ah! I didn't realize that. My new hot take is that sounds
               | like a terrible idea and is effectively giving full
               | control of the user's browser to the polyfill site.
        
               | svieira wrote:
               | And _this_ hot take happens to be completely correct (and
                | is why many people _didn't_ use it, in spite of others
               | yelling that they were needlessly re-inventing the
               | wheel).
        
               | tracker1 wrote:
                | Yeah... I've generated composite fills with the pieces I
                | would need on the oldest browser I had to support;
                | unfortunately, all downstream browsers would get it too.
                |
                | Fortunately, around 2019 or so, I no longer had to support
                | any legacy (IE) browsers and pretty much everything
                | supported at least ES2016. It was a lovely day, and it cut
                | a lot of my dependencies.
        
             | jermaustin1 wrote:
              | They are saying that because the content of the script file
              | is dynamic, based on the user agent and what that user
              | agent currently supports in-browser, the integrity hash
              | would also need to be dynamic, which isn't possible to know
              | ahead of time.
        
           | stusmall wrote:
            | Oooft. I didn't realize it's one that dynamically changes
            | its content.
        
             | hluska wrote:
             | So maybe it's less that the article is selling something
             | and more that you just don't understand the attack surface?
        
         | svieira wrote:
         | Wouldn't work in this case because the whole selling point of
         | polyfill.io was that as new features came out and as the
         | browser grew support for new features the polyfill that was
         | loaded would dynamically grow or shrink.
         | 
         | Something like
         | `polyfill.io.example.org/v1?features=Set,Map,Other.Stuff` would
         | _shrink_ over time, while something like
          | `polyfill.io.example.org/v1?features=ES-Next` would grow and
         | shrink as new features came and went.
        
         | hannob wrote:
         | In all cases where you can use SRI, there's a better
         | mitigation: Just host a copy of the file yourself.
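          |
          | E.g. (a sketch with hypothetical paths): commit a pinned copy
          | of the library to your own static assets and reference that
          | instead of the public CDN:
          |
          |     <!-- served from your own origin, pinned at a known version -->
          |     <script src="/assets/vendor/lib-1.2.3.min.js"></script>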
        
       | baxtr wrote:
       | Important context given by the author of polyfill:
       | 
       |  _> If your website uses http://polyfill.io, remove it
       | IMMEDIATELY.
       | 
       | I created the polyfill service project but I have never owned the
       | domain name and I have had no influence over its sale._ (1)
       | 
       | Although I wonder how the GitHub account ownership was
       | transferred.
       | 
       | (1) https://x.com/triblondon/status/1761852117579427975
        
         | yesimahuman wrote:
          | Does this person telling us not to use polyfill.io, and the guy
          | who sold polyfill.io to the Chinese company, both work at
          | Fastly? If so, that's kind of awkward...
        
           | rafram wrote:
           | Seems like it. I hope the money was worth it.
        
           | teruakohatu wrote:
           | It appears both currently do work for Fastly. I am pleased
           | the Fastly developer advocate warned us, and announced a fork
           | and alternative hosting service:
           | 
           | [1] https://community.fastly.com/t/new-options-for-polyfill-
           | io-u...
           | 
           | But it leaves me with an uneasy feeling about Fastly.
        
           | linclark wrote:
           | Neither of them had ownership of the project, so neither of
           | them were responsible for the sale or benefited from it.
           | 
           | They both simply dedicated a lot of time, care and skill to
           | the project. It's really a shame to see what they spent so
           | much time building and maintaining now being used as a
            | platform to exploit people. I'm sure it's extremely
           | disappointing to both of them.
        
             | yesimahuman wrote:
             | https://web.archive.org/web/20240229113710/https://github.c
             | o...
             | 
              | Is JakeChampion not the one who sold the project? His bio
              | says he currently works at Fastly.
        
       | mihaic wrote:
       | I had this conversation countless times with developers: are you
       | really ok if someone hijacks the CDN for the code you're
       | including? They almost always seem to be fine with it, simply
       | because everyone else is doing it like this. At the same time
       | they put up with countless 2FAs in the most mundane places.
       | 
       | The follow up of "you know that the random packages you're
       | including could have malware" is even more hopeless.
        
         | fkyoureadthedoc wrote:
         | yes you just put integrity="sha384-whatever" and you're good to
         | go
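          |
          | The "whatever" being the base64-encoded SHA-384 digest of the
          | exact file you expect. A sketch of computing it with Node,
          | from a local copy of the file:
          |
          |     // print an SRI value for a downloaded copy of the script
          |     const crypto = require('crypto');
          |     const fs = require('fs');
          |     const body = fs.readFileSync('lib.min.js');
          |     const digest = crypto.createHash('sha384')
          |       .update(body).digest('base64');
          |     console.log(`sha384-${digest}`);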
        
           | ok123456 wrote:
           | Can't do that with this one because it generates the polyfill
           | based on the user agent.
        
             | fkyoureadthedoc wrote:
             | yeah that's nuts, I would never use a random site for that,
             | but in general people's opinion on CDN use is dated. Tons
             | of people still think that cached resources are shared
             | between domains for example.
        
           | mihaic wrote:
            | Sure, but why risk a developer making a typo, writing
            | integrty="sha384-whatever", and that attribute simply being
            | ignored in the HTML?
        
             | cqqxo4zV46cp wrote:
             | "A developer could typo something" is kind of weak because
             | you could use this argument for basically anything.
        
         | johnmaguire wrote:
         | In general, SRI (Subresource Integrity) should protect against
         | this. It sounds like it wasn't possible in the Polyfill case as
         | the returned JS was dynamic based on the browser requesting it.
        
       | im3w1l wrote:
       | So what did the malware actually _do_?
        
         | rolph wrote:
          | The first time a user on a phone opens a website through an ad
          | (Google Ads or Facebook) with this link, it will redirect the
          | user to a malicious website.
          |
          | The request sent to https://cdn.polyfill.io/v2/polyfill.min.js
          | needs to match the following format:
          |
          |   - Request for the first time from a unique IP, with a unique
          |     User-Agent.
          |   - User-Agent matches that of a phone; we used an iPhone user
          |     agent ( Mozilla/5.0 (iPhone14,2; U; CPU iPhone OS 14_0 like
          |     Mac OS X) AppleWebKit/602.1.50 (KHTML, like Gecko)
          |     Version/10.0 Mobile/15E148 Safari/602.1 ).
          |   - Referer from a reputable website that installed polyfill.
          |   - Accept */*
          |   - Accept-Encoding gzip, deflate, br, zstd
          |   - Delete all cookies
         | 
          | The request will return the original polyfill code, appended
          | with a piece of malicious code. That code will run JavaScript
          | from https://www.googie-anaiytics.com/ga.js if the device is
          | not a laptop. You can reproduce this multiple times on the same
          | machine by changing the User-Agent slightly (e.g. change
          | Mozilla/5.0 to Mozilla/6.0). Sometimes the server will just
          | time out or return code without the injection, but it should
          | work most of the time.
         | 
          | The JavaScript on https://www.googie-anaiytics.com/ga.js will
          | redirect users to a malicious website. It checks a number of
          | conditions before running (user agent, screen width, ...) to
          | ensure the device is a phone; the entry point is at the end:
         | 
         | bdtjfg||cnzfg||wolafg||mattoo||aanaly||ggmana||aplausix||statcc
         | t?setTimeout(check_tiaozhuan,-0x4*0x922+0x1ebd+0xd9b):check_tia
         | ozhuan();
         | 
          | The code has some protection built in, so if it is run in an
          | unsuitable environment, it will attempt to allocate a lot of
          | memory to freeze the current device. It also routes all
          | attribute-name accesses through _0x42bcd7.
         | 
         | https://github.com/polyfillpolyfill/polyfill-service/issues/...
        
       | program_whiz wrote:
        | Game theory at work? Someone needs to maintain legacy code for
        | free that hosts thousands of sites, and gets nothing but trouble
        | (pride?) in return. Meanwhile, the forces of the world offer
        | riches and power in exchange for turning to the dark side (or
        | maybe just letting your domain lapse and doing something else).
       | 
       | If security means every maintainer of every OSS package you use
       | has to be scrupulous, tireless, and not screw up for life, not
       | sure what to say when this kind of thing happens other than
       | "isn't that the only possible outcome given the system and
       | incentives on a long enough timeline?"
       | 
        | Kind of like the "why is my favorite company monetizing now and
        | using dark patterns?" Well, on an infinite timeline did you think
        | the service would remain high quality, free, well supported, and
        | run by tireless, unselfish, unambitious benevolent dictators for
        | the rest of your life? Or was it a foregone conclusion, only a
        | matter of "when" not "if"?
        
         | program_whiz wrote:
         | in a strange way, this almost makes the behavior of hopping
         | onto every new framework rational. The older and less relevant
         | the framework, the more the owner's starry-eyed enthusiasm
         | wears off. The hope that bigcorp will pay $X million for the
         | work starts to fade. The tedium of bug fixes and maintenance
          | wears on, the game theory takes its toll. The only rational
         | choice for library users is to jump ship once the number of
         | commits and hype starts to fall -- that's when the owner is
         | most vulnerable to the vicissitudes of Moloch.
        
           | chrisweekly wrote:
           | Good point. What's often (and sometimes fairly) derided as
           | "chasing the new shiny" has a lot of other benefits too:
           | increased exposure to new (and at least sometimes
           | demonstrably better) ways of doing things; ~inevitable
           | refactoring along the way (otherwise much more likely
           | neglected); use of generally faster, leaner, less dependency-
           | bloated packages; and an increased real-world userbase for
           | innovators. FWIW, my perspective is based on building and
           | maintaining web-related software since 1998.
        
         | szundi wrote:
         | This is an insightful comment, sadly
        
         | m0llusk wrote:
          | Alternatively, if you rely on some code, then download a
          | specific version and check it before using it. Report any
          | problems found. This makes usage robust and supports open
          | source maintenance and development.
        
           | SV_BubbleTime wrote:
           | I guess that would work if new exploits weren't created or
           | discovered.
           | 
            | Otherwise your whole plan to "run old software" is
            | questionable.
        
           | j1elo wrote:
           | Vendoring should be the norm, not the special case.
           | 
           | Something like this ought to be an essential part of all
           | package managers, and I'm thinking here that the first ones
           | should be the thousands of devs cluelessly using NPM around
           | the world:
           | 
           | https://go.dev/ref/mod#vendoring
        
             | bryanlarsen wrote:
             | We've seen a lot more attacks succeed because somebody has
             | vendored an old vulnerable library than supply chain
             | attacks. Doing vendoring badly is worse than relying on
             | upstream. Vendoring is part of the solution, but it isn't
             | the solution by itself.
        
               | j1elo wrote:
               | Not alone, no. That's how CI bots help a lot, such as
               | Dependabot.
               | 
                | Although it's also worrying how we seemingly need more
               | technologies on top of technologies just to keep a
               | project alive. It used to be just including the system's
               | patched header & libs, now we need extra bots surveying
               | everything...
               | 
               | Maybe a linux-distro-style of community dependency
               | management would make sense. Keep a small group of
               | maintainers busy with security patches for basically
               | everything, and as a downstream developer just install
               | the versions they produce.
               | 
               | I can visualize the artwork..."Debian but for JS"
        
           | program_whiz wrote:
            | I'm afraid this is hitting on the other end of inviolable
            | game theory laws. A dev who is paid for features and business
            | value wants to read, line by line, a random package that is
            | upgrading from version 0.3.12 to 0.3.13 in a cryptography or
            | date lib that they likely don't understand? And this should
            | be done for every change of every library for all software,
            | by all devs, who will always be responsible, not lazy, and
            | very attentive and careful?
           | 
           | On the flip side there is "doing as little as possible and
           | getting paid" for the remainder of a 40 year career where you
           | are likely to be shuffled off when the company has a bad
           | quarter anyway.
           | 
            | In my opinion, if that were incentivized by our system, we'd
            | already be seeing more of it; we have the system we have due
            | to the incentives we have.
        
             | ravenstine wrote:
              | Correct. I don't think I have ever seen sound engineering
             | decisions being rewarded at any business I have worked for.
             | The only reason any sound decisions are made is that some
             | programmers take the initiative, but said initiative rarely
             | comes with a payoff and always means fighting with other
             | programmers who have a fetish for complexity.
             | 
             | If only programmers had to take an ethics oath so they have
             | an excuse not to just go along with idiotic practices.
        
               | PaulHoule wrote:
               | Then there are the programmers who read on proggit that
               | "OO drools, functional programming rules" or the C++
               | programmers who think having a 40 minute build proves how
               | smart and tough they are, etc.
        
         | SV_BubbleTime wrote:
         | Real solution?
         | 
         | We're in a complexity crisis and almost no one sees it.
         | 
         | It's not just software dependencies of course. It's everything
         | almost everywhere.
         | 
         | No joke, the Amish have a point. They were just a few hundred
         | years too early.
        
           | willcipriano wrote:
            | It's a competence crisis, not a complexity one.
           | 
           | https://www.palladiummag.com/2023/06/01/complex-systems-
           | wont...
        
             | chrisweekly wrote:
             | Probably both, IMHO.
        
             | ykonstant wrote:
             | I see both, incentivized by the cowboy developer attitude.
        
             | irskep wrote:
             | I'm predisposed to agree with the diagnosis that
             | incompetence is ruining a lot of things, but the article
             | boils down to "diversity hiring is destroying society" and
             | seems to attribute a lot of the decline to the Civil Rights
             | Act of 1964. Just in case anybody's wondering what they
             | would get from this article.
             | 
             | > By the 1960s, the systematic selection for competence
             | came into direct conflict with the political imperatives of
             | the civil rights movement. During the period from 1961 to
             | 1972, a series of Supreme Court rulings, executive orders,
             | and laws--most critically, the Civil Rights Act of 1964--
             | put meritocracy and the new political imperative of
             | protected-group diversity on a collision course.
             | Administrative law judges have accepted statistically
             | observable disparities in outcomes between groups as prima
             | facie evidence of illegal discrimination. The result has
             | been clear: any time meritocracy and diversity come into
             | direct conflict, diversity must take priority.
             | 
             | TL;DR "the California PG&E wildfires and today's JavaScript
             | vulnerability are all the fault of Woke Politics." Saved
             | you a click.
        
               | UweSchmidt wrote:
               | A more fundamental reason is that society is no longer
                | interested in pushing forward at all costs. It's the
                | arrival at an economic and technological equilibrium
               | where people are _comfortable_ enough, along with the end
               | of the belief in progress as an ideology, or way to
               | salvation somewhere during the 20th century. If you look
               | closely, a certain kind of relaxation has replaced a
               | quest for efficiency everywhere. Is that disappointing?
               | Is that actually bad? Do you think there might be a rude
               | awakening?
               | 
                | Consider: It was this scifi-fueled dream of an amazing
                | high-tech, high-competency future that also implied
                | machines doing the labour, and an enlightened future
                | relieving people of all kinds of unpleasantries like
                | boring work, thereby preventing them from attaining high
                | competency. The fictional starship captain, navigating
                | the galaxy and studying alien artifacts, was always saving
                | planets full of humans in a desolate mental state...
        
               | PaulHoule wrote:
                | My own interpretation of the business cycle is that
                | growth causes externalities that stop growth. Sometimes
               | you get time periods like the 1970s where efforts to
               | control externalities themselves would cause more
               | problems than they solved, at least some of the time.
               | (e.g. see the trash 1974 model year of automobiles where
               | they hadn't figured out how to make emission controls
               | work.)
               | 
               | I'd credit the success of Reagan in the 1980s at managing
               | inflation to a quiet policy of degrowth the Republicans
               | could get away with because everybody thinks they are
               | "pro business". As hostile as Reagan's rhetoric was
               | towards environmentalism note we got new clean air and
               | clean water acts in the 1980s but that all got put in
               | pause under Clinton where irresponsible monetary
               | expansion restarted.
        
               | rytor718 wrote:
               | Thank you for summarizing (I actually read the whole
               | article before seeing your reply and might have posted
               | similar thoughts). I get the appeal of romanticizing our
               | past as a country, looking back at the post-war era,
               | especially the space race with a nostalgia that makes us
               | imagine it was a world where the most competent were at
               | the helm. But it just wasn't so, and still isn't.
               | 
               | Many don't understand that the Civil Rights Act describes
               | the systematic LACK of a meritocracy. It defines the ways
               | in which merit has been ignored (gender, race, class,
               | etc) and demands that merit be the criteria for success
               | -- and absent the ability for an institution to decide on
               | the merits it provides a (surely imperfect) framework to
               | force them to do so. The necessity of the CRA then and
               | now, is the evidence of absence of a system driven on
               | merit.
               | 
               | I want my country to keep striving for a system of merit
               | but we've got nearly as much distance to close on it now
               | as we did then.
        
             | SV_BubbleTime wrote:
             | We haven't gotten smarter or dumber.
             | 
             | But we have exceeded our ability to communicate the ideas
             | and concepts, let alone the instructions of how to build
             | and manage things.
             | 
              | Example: a junior Jiffy Lube high school dropout in 1960
              | could work hard and eventually own that store. Everything
              | he would ever need to know about ICE engines was simple
              | enough to understand over time... but now? There are 400
              | oil types, there are closed-source computers on top of
              | computers, there are specialty tools for every vehicle
              | brand, and you can't do anything at all without knowing 10
              | different do-work-just-to-do-more-work systems. The high
              | school dropout in 2024 will never own the store. Same kid.
              | He hasn't gotten dumber. The world just passed him by in
              | complexity.
             | 
             | Likewise... I suspect that Boeing hasn't forgotten how to
             | build planes, but the complexity has exceeded their
             | ability. No human being on earth could be put in a room and
             | make a 747 even over infinite time. It's a product of far
             | too many abstract concepts in a million different places
             | that have come together to make a thing.
             | 
              | We make super complex things with zero effort put into
              | communicating how or why they work the way they do.
             | 
             | We increase the complexity just to do it. And I feel we are
             | hitting our limits.
        
               | PaulHoule wrote:
               | The problem w/ Boeing is not the inability of people to
               | manage complexity but of management's refusal to manage
               | complexity in a responsible way.
               | 
               | For instance, MCAS on the 737 is a half-baked
               | implementation of the flight envelope protection facility
               | on modern fly-by-wire airliners (all of them, except for
                | the 737). The A320 had some growing pains with this:
                | in particular, it had at least two accidents where pilots
                | tried to fly the plane into the ground thinking the
                | attempt would fail because of the flight envelope
                | protection system, but they succeeded and crashed anyway.
                | Barring that bit
               | of perversity right out of the _Normal Accidents_ book,
               | people understand perfectly well how to build a safe fly-
               | by-wire system. Boeing chose not to do that, and they
               | refused to properly document what they did.
               | 
               | Boeing _chose_ to not develop a 737 replacement, so all
               | of us are suffering: in terms of noise, for instance,
               | pilots are going deaf, passengers have their head
               | spinning after a few hours in the plane, and people on
               | the ground have no idea that the 737 is much louder than
               | competitors.
        
               | hypeatei wrote:
               | Okay but your entire comment is riddled with mentions of
               | complex systems (flight envelope system?) which proves
               | the point of the parent comment. "Management" here is a
               | group of humans who need to deal with all the complexity
               | of corporate structures, government regulations, etc..
               | while also dealing with the complexities of the products
               | themselves. We're all fallible beings.
        
               | _DeadFred_ wrote:
               | Boeing management is in the business of selling
               | contracts. They are not in the business of making
               | airplanes. That is the problem. They relocated
               | headquarters from Seattle to Chicago and now DC so that
               | they can focus on their priority, contracts. They dumped
               | Boeing's original management style and grafted on the
               | management style of a company that was forced to merge
               | with Boeing. They diversified supply chain as a form of
               | kickbacks to local governments/companies that bought
               | their 'contracts'.
               | 
                | They enshittified every area of the company, all with the
               | priority/goal of selling their core product, 'contracts',
               | and filling their 'book'.
               | 
               | We are plenty capable of designing Engineering systems,
               | PLMs to manage EBOMs, MRP/ERP systems to manage MBOMs,
               | etc to handle the complexities of building aircraft. What
               | we can't help is the human desire to prioritize
                | enshittification if it means a bigger paycheck. Companies
               | no longer exist to create a product, and the product is
               | becoming secondary and tertiary in management's
               | priorities, with management expecting someone else to
               | take care of the 'small details' of why the company
               | exists in the first place.
        
               | _DeadFred_ wrote:
               | Boeing is a kickbacks company in a really strange way.
               | They get contracts based on including agreements to
                | source partly from the contractee's local area. Adding
                | complexity for contracts' and management bonuses' sake, not
                | efficiency, not redundancy, not expertise. Add onto that
                | a non-existent safety culture and a non-
                | manufacturing/non-aerospace focused management philosophy
                | grafted on from a company that failed and had to be
                | merged into Boeing, replacing the previous Boeing
                | management philosophy. Enshittification in every area of
               | the company. Heck they moved headquarters from Seattle to
               | Chicago, and now from Chicago to DC. Prioritizing being
               | where the grift is over, you know, being where the
               | functions of the company are so that management has a
               | daily understanding of what the company does. Because to
               | management what the company does is win contracts, not
               | build aerospace products. 'Someone else' takes care of
               | that detail, according to Boeing management. Building
                | those products is now secondary/tertiary to management.
               | 
                | I did ERP/MRP/EBOM/MBOM/BOM systems for aerospace. We
               | have that stuff down. We have systems for this kind of
               | communication down really well. We can build within a
               | small window an airplane with thousands of parts with
               | lead times from 1 day to 3 months to over a year for
               | certain custom config options, with each parts design/FAA
               | approval/manufacturing/installation tracked and audited.
               | Boeing's issue is culture, not humanity's ability to make
               | complex systems.
               | 
               | But I do agree that there is a complexity issue in
               | society in general, and a lot of systems are coasting on
               | the efforts of those that originally put them in
               | place/designed them. A lot of government seems to be this
                | way too. There's also a lot of overhead for overhead's
                | sake, but little process auditing/iterative improvement
               | style management.
        
               | lawlessone wrote:
                | > Example: a junior Jiffy Lube high school dropout in 1960
               | 
               | Nowadays the company wouldn't hire a junior to train.
               | They'll only poach already experienced people from their
               | competitors.
               | 
                | Paying for training isn't considered worthwhile to the
                | company because people won't stay.
               | 
                | People won't stay because the company doesn't invest in
                | employees, it only poaches.
        
         | causal wrote:
          | It seems when proprietary resources get infected, hackers are
          | the problem, but when open source resources get infected, it's
          | a problem with open source.
         | 
         | But there isn't any particular reason why a paid/proprietary
         | host couldn't just as easily end up being taken over / sold to
         | a party intending to inject malware. It happens all the time
         | really.
        
           | program_whiz wrote:
            | agreed, but if a company is making millions from the security
            | of software, the incentive is to keep it secure so customers
            | stick with it. Remember the LastPass debacle: a big leak, and
            | they lost many customers...
        
             | krageon wrote:
             | Oh yeah, corporations are _so_ accountable. We have tons of
             | examples of this.
        
             | advael wrote:
             | Directly security-focused products like lastpass are the
             | only things that have any market pressure whatsoever on
             | this, and that's because they're niche products for which
             | the security is the only value-add, marketed to explicitly
             | security-conscious people and not insulated by a whole
             | constellation of lock-in services. The relevant security
             | threats for the overwhelming majority of people and
             | organizations are breaches caused by the practices of
             | organizations that face no such market pressure, including
             | constant breaches of nonconsensually-harvested data, which
             | aren't even subject to market pressures from their victims
                | in the first place.
        
               | PaulHoule wrote:
               | I wouldn't point to LastPass as an exemplar...
               | 
               | https://www.theverge.com/2024/5/1/24146205/lastpass-
               | independ...
        
               | advael wrote:
               | I didn't, and my point was exactly that it's not a great
               | one, so I think we largely agree here
        
               | the8472 wrote:
               | Even for security-related products the incentives are
               | murky. If they're not actually selling you security but a
               | box on the compliance bingo then it's more likely that
               | they actually increase your attack surface because they
               | want to get their fingers into everything so they can
               | show nice charts about all the things they're monitoring.
        
               | advael wrote:
               | Aye. My internal mythological idiolect's trickster deity
               | mostly serves to personify the game-theoretic arms race
               | of deception and is in a near-constant state of cackling
               | derisively at the efficient market hypothesis
        
             | f1refly wrote:
              | I think for some reason some people _still_ buy Cisco
              | products, so this reasoning doesn't seem to be applicable
              | to the real world.
        
           | TZubiri wrote:
            | Yes, the economic problem of reward absence is exclusive to
            | open source; private software does not have it. It may have
            | others, like an excess of rewards to hackers in the form of
            | crypto ransoms, to the point that the defense department had
            | to step in and ban payouts.
        
         | akira2501 wrote:
         | Perhaps. I view it as the squalor of an entirely
         | unsophisticated market. Large organizations build and deploy
         | sites on technologies with ramifications they hardly understand
         | or care about because there is no financial benefit for them to
         | do so, because the end user lacks the same sophistication, and
         | is in no position to change the economic outcomes.
         | 
          | So an entire industry of bad middleware, created from glued-
          | together, mostly open source code and then abandoned, is
          | allowed to even credibly exist in the first place. That these
          | people are hijacking your browser sessions rather than selling
          | your data is a small distinction against the scope of the
          | larger problem.
        
       | pimterry wrote:
       | > this domain was caught injecting malware on mobile devices via
       | any site that embeds cdn.polyfill.io
       | 
       | I've said it before, and I'll say it again:
       | https://httptoolkit.com/blog/public-cdn-risks/
       | 
        | You can reduce issues like this using subresource integrity
        | (SRI), but there are still tradeoffs (around privacy &
        | reliability - see article above) and there is a better solution:
        | self-host your dependencies behind a CDN service you control
        | (just bunny/cloudflare/akamai/whatever is fine and cheap).
        | 
        | In a tiny prototyping project, a public CDN is convenient to get
        | started fast, sure, but if you're deploying major websites I
        | would really strongly recommend not using public CDNs, never
        | ever ever ever (the World Economic Forum website is affected
        | here, for example! Absolutely ridiculous).
        
         | irrational wrote:
         | > self-host your dependencies
         | 
         | I can kind of understand why people went away from this, but
         | this is how we did it for years/decades and it just worked.
         | Yes, doing this does require more work for you, but that's just
         | part of the job.
        
           | jweir wrote:
           | Own your process - at best that CDN is spying on your users.
        
           | marcosdumay wrote:
           | > and it just worked
           | 
            | Just to add... that is _unlike_ the CDN thing, which will
            | send developers to Stack Overflow looking up how to set up
            | CORS.
        
           | toddmorey wrote:
           | For performance reasons alone, you definitely want to host as
           | much as possible on the same domain.
           | 
           | In my experience from inside companies, we went from self-
           | hosting with largely ssh access to complex deployment
           | automation and CI/CD that made it hard to include any new
           | resource in the build process. I get the temptation:
            | resources linked from external domains / CDNs gave the
           | frontend teams quick access to the libraries, fonts, tools,
           | etc. they needed.
           | 
           | Thankfully things have changed for the better and it's much
           | easier to include these things directly inside your project.
        
             | jameshart wrote:
             | There was a brief period when the frontend dev world
             | believed the most performant way to have everyone load,
             | say, jquery, would be for every site to load it from the
             | same CDN URL. From a trustworthy provider like Google, of
             | course.
             | 
             | It turned out the browser domain sandboxing wasn't as good
             | as we thought, so this opened up side channel attacks,
             | which led to browsers getting rid of cross-domain cache
             | sharing; and of course it turns out that there's really no
             | such thing as a 'trustworthy provider' so the web dev
             | community memory-holed that little side adventure and
             | pivoted to npm.
             | 
             | Which is going GREAT by the way.
             | 
             | The advice is still out there, of course. W3schools says:
             | 
             | > One big advantage of using the hosted jQuery from Google:
             | 
             | > Many users already have downloaded jQuery from Google
             | when visiting another site. As a result, it will be loaded
             | from cache when they visit your site
             | 
             | https://www.w3schools.com/jquery/jquery_get_started.asp
             | 
             | Which hasn't been true for years, but hey.
        
               | josephg wrote:
               | The only thing I'd trust w3schools to teach me is SEO.
               | How do they stay on top of Google search results with
               | such bad, out of date articles?
        
             | matsemann wrote:
             | > _For performance reasons alone, you definitely want to
             | host as much as possible on the same domain._
             | 
             | It used to be the opposite. Browsers limit the amount of
             | concurrent requests to a domain. A way to circumvent that
             | was to load your resources from a.example.com,
                | b.example.com, c.example.com etc. You paid some time for
                | extra DNS resolves, I guess, but could then load many more
             | resources at the same time.
             | 
                | Not as relevant anymore with HTTP/2, which allows sharing
                | connections, and with file bundling being more common.
        
               | PaulHoule wrote:
               | Years ago I had terrible DNS service from my ISP, enough
               | to make my DSL sometimes underperform dialup. About 1 in
               | 20 DNS lookups would hang for many seconds so it was
               | inevitable that any web site that pulled content from
               | multiple domains would hang up for a long time when
               | loading. Minimizing DNS lookups was necessary to get
               | decent performance for me back then.
        
             | the8472 wrote:
             | Maybe people have been serving those megabytes of JS
             | frameworks from some single-threaded python webserver (in
             | dev/debug mode to boot) and wondered why they could only
             | hit 30req/s or something like that.
        
             | hinkley wrote:
             | Using external tools can make it quite a lot harder to do
             | differential analysis to triage the source of a bug.
             | 
             | The psychology of debugging is more important than most
             | allow. Known unknowns introduce the possibility that an
             | Other is responsible for our current predicament instead of
             | one of the three people who touched the code since the
             | problem happened (though I've also seen this when the
             | number of people is exactly 1)
             | 
             | The judge and jury in your head will refuse to look at
             | painful truths as long as there is reasonable doubt, and so
             | being able to scapegoat a third party is a depressingly
             | common gambit. People will attempt to put off paying the
             | piper even if doing so means pissing off the piper in the
             | process. _That_ bill can come due multiple times.
        
         | ziml77 wrote:
         | I've seen people reference CDNs for internal sites. I hate that
         | because it is not only a security risk but it also means we
         | depend on the CDN being reachable for the internal site to
         | work.
         | 
         | It's especially annoying because the projects I've seen it on
         | were using NPM anyway so they could have easily pulled the
         | dependency in through there. Hell, even without NPM it's not
         | hard to serve these JS libraries internally since they tend to
         | get packed into one file (+ maybe a CSS file).
        
         | simonw wrote:
        | I always prefer to self-host my dependencies, but as a
        | developer who prefers to avoid an npm-based webpack/whatever
        | build pipeline it's often WAY harder to do that than I'd like.
         | 
         | If you are the developer of an open source JavaScript library,
         | please take the time to offer a downloadable version of it that
         | works without needing to run an "npm install" and then fish the
         | right pieces out of the node_modules folder.
         | 
         | jQuery still offer a single minified file that I can download
         | and use. I wish other interesting libraries would do the same!
         | 
         | (I actually want to use ES Modules these days which makes
         | things harder due to the way they load dependencies. I'm still
         | trying to figure out the best way to use import maps to solve
         | this.)
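        | 
        | Something like this is the shape I'm aiming for (a sketch, with
        | hypothetical self-hosted paths):
        | 
        |     <script type="importmap">
        |     {
        |       "imports": {
        |         "lit": "/static/vendor/lit/index.js",
        |         "lit/": "/static/vendor/lit/"
        |       }
        |     }
        |     </script>
        |     <script type="module">
        |       import { html, css } from "lit";
        |     </script>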
        
           | spankalee wrote:
           | As you might know, Lit offers a single bundled file for the
           | core library.
        
             | simonw wrote:
             | Yes! Love that about Lit. The problem is when I want to add
             | other things that have their own dependency graph.
        
               | spankalee wrote:
               | This is why I don't think it's very workable to avoid
               | npm. It's the package manager of the ecosystem, and
               | performs the job of downloading dependencies well.
               | 
               | I personally never want to go back to the pre-package-
               | manager days for any language.
        
               | PaulHoule wrote:
               | One argument is that Javascript-in-the-browser has
                | advanced a lot and there's less need for a build system
                | (e.g. ESM modules in the browser).
               | 
               | I have some side projects that are mainly HTMX-based with
               | some usage of libraries like D3.js and a small amount of
               | hand-written Javascript. I don't feel that bad about
               | using unpkg because I include signatures for my
               | dependencies.
        
               | spankalee wrote:
               | npm is a package manager though, not a build system. If
               | you use a library that has a dependency on another
               | library, npm downloads the right version for you.
        
               | josephg wrote:
               | Yep. And so does unpkg. If you're using JavaScript code
               | through unpkg, you're still using npm and your code is
               | still bundled. You're just getting someone else to do it,
               | at a cost of introducing a 3rd party dependency.
               | 
               | I guess if your problem with npm and bundlers is you
               | don't want to run those programs, fine? I just don't
               | really understand what you gain from avoiding running
               | bundlers on your local computer.
        
               | simonw wrote:
               | Before ESM I wasn't nearly as sold on skipping the build
               | step, but now it feels like there's a much nicer browser
               | native way of handling dependencies, if only I can get
               | the files in the right shape!
               | 
               | The Rails community are leaning into this heavily now:
               | https://github.com/rails/importmap-rails
        
           | alephnerd wrote:
           | > I always prefer to self-host my dependencies
           | 
            | IME this has always been standard practice for production
            | code at all the companies I've worked at and with as a SWE or
            | PM - store dependencies within your own internal Artifactory,
            | have them checked by a vuln scanner, and then call and deploy
            | them.
            | 
            | That said, I came out of the Enterprise SaaS and Infra space,
            | so maybe workflows are different in B2C, but I didn't see a
            | difference in the customer calls I've been on.
           | 
           | I guess my question is why your employer or any other org
           | would not follow the model above?
        
             | ttyprintk wrote:
             | Would an unwisely-configured site template or generator
             | explain the scale here?
             | 
             | Or, a malicious site template or generator purposefully
             | sprinkling potential backdoors for later?
        
               | alephnerd wrote:
               | But wouldn't some sort of SCA/SAST/DAST catch that?
               | 
                | Like if I'm importing a site template, ideally I'd be
                | verifying either its source or its source code as well.
               | 
               | (Not being facetious btw - genuinely curious)
        
             | baq wrote:
             | > I guess my question is why your employer or any other org
             | would not follow the model above?
             | 
             | When you look at Artifactory pricing you ask yourself 'why
             | should I pay them a metric truckload of money again?'
             | 
             | And then dockerhub goes down. Or npm. Or pypi. Or github...
             | or, worst case, this thread happens.
        
               | alephnerd wrote:
               | I just gave Artifactory as an example. What about GHE,
               | self-hosted GitLab, or your own in-house Git?
               | 
               | Edit: was thinking - would be a pain in the butt to
                | manage. That tracks, but every org I know has some corporate
               | versioning system that also has an upsell for source
               | scanning.
               | 
               | (Not being facetious btw - genuinely curious)
        
               | baq wrote:
               | I've been a part of a team which had to manage a set of
               | geodistributed Artifactory clusters and it was a pain in
               | the butt to manage, too - but these were self-hosted. At
                | a certain scale you have to pick the least worst solution,
                | though; Artifactory seems to be that.
        
               | mrighele wrote:
               | There are cheaper or free alternatives to Artifactory.
               | Yes they may not have all of the features but we are
               | talking about a company that is fine with using a random
               | CDN instead.
               | 
               | Or, in the case of javascript, you could just vendor your
               | dependencies or do a nice "git add node_modules".
        
             | smaudet wrote:
             | > have it checked by a vuln scanner
             | 
             | This is kinda sad. For introducing new dependencies, a vuln
             | scanner makes sense (don't download viruses just because
             | they came from a source checkout!), but we could have kept
             | CDNs if we'd used signatures.
             | 
             | EDIT: Never mind, been out of the game for a bit! I see
             | there is SRI now...
             | 
             | https://developer.mozilla.org/en-
             | US/docs/Web/Security/Subres...
        
             | swatcoder wrote:
             | > I guess my question is why your employer or any other org
             | would not follow the model above?
             | 
             | Frankly, it's because many real-world products are pieced
             | together by some ragtag group of bright people who have
             | been made responsible for things they don't really know all
             | that much about.
             | 
             | The same thing that makes software engineering inviting to
             | autodidacts and outsiders (no guild or license, pragmatic
             | 'can you deliver' hiring) means that quite a lot of it
             | isn't "engineered" at all. There are embarrassing gaps in
             | practice everywhere you might look.
        
               | josephg wrote:
               | Yep. The philosophy most software seems to be written
               | with is "poke it until it works locally, then ship it!".
               | Bugs are things you react to when your users complain.
               | Not things you engineer out of your software, or
               | proactively solve.
               | 
               | This works surprisingly well. It certainly makes it
               | easier to get started in software. Well, so long as you
               | don't mind that most modern software performs terribly
               | compared to what the computer is capable of. And suffers
               | from reliability and security issues.
        
           | pcthrowaway wrote:
           | This supply chain attack had nothing to do with npm afaict.
           | 
           | The dependency in question seems to be (or claim to be) a
           | lazy loader that determines browser support for various
           | capabilities and selectively pulls in just the necessary
           | polyfills; in theory this should make the frontend assets
           | leaner.
           | 
           | But the CDN used for the polyfills was injecting malicious
           | code.
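            | 
            | For context, the typical embed looked something like this
            | (feature list illustrative), with the response body
            | generated per user agent:
            | 
            |     <script src="https://polyfill.io/v3/polyfill.min.js?features=fetch,Promise"></script>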
        
             | bdcravens wrote:
             | yes, but the NPM packaging ecosystem leads to a reliance on
             | externally-hosted dependencies for those who don't want to
             | bundle
        
             | josephg wrote:
             | Sounds like a bad idea to me.
             | 
             | I would expect latency (network round trip time) to make
             | this entire exercise worthless. Most polyfills are 1kb or
             | less. Splitting polyfill code amongst a bunch of small
             | subresources that are loaded from a 3rd party domain sounds
             | like it would be a net loss to performance. Especially
             | since your page won't be interactive until those resources
             | have downloaded.
             | 
             | Your page will almost certainly load faster if you just put
             | those polyfills in your main js bundle. It'll be simpler
             | and more reliable too.
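              | 
              | For a sense of scale, a typical polyfill is just a
              | feature-detect-then-define block - a sketch of the usual
              | Array.prototype.at shim:
              | 
              |     // only define it if the browser lacks it
              |     if (!Array.prototype.at) {
              |       Array.prototype.at = function (n) {
              |         n = Math.trunc(n) || 0;       // coerce to integer
              |         if (n < 0) n += this.length;  // negative = from end
              |         return (n < 0 || n >= this.length) ? undefined : this[n];
              |       };
              |     }
              | 
              | A few hundred bytes inside your existing bundle, versus a
              | whole extra connection to fetch roughly the same thing.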
        
           | silverwind wrote:
            | The assumption of many npm packages is that you have a
            | bundler, and I think rightly so, because that leaves all
            | options open regarding polyfilling, minification, and actual
            | bundling.
        
             | jacobsenscott wrote:
             | polyfilling and minification both belong on the ash heap of
             | js development technologies.
        
               | benregenspan wrote:
               | I would agree with you if minification delivered marginal
               | gains, but it will generally roughly halve the size of a
               | large bundle or major JS library (compared to just
               | gzip'ing it alone), and this is leaving aside further
               | benefits you can get from advanced minification with dead
               | code removal and tree-shaking. That means less network
               | transfer time and less parse time. At least for my use-
               | cases, this will always justify the extra build step.
        
               | out-of-ideas wrote:
               | I really miss the days of minimal/no use of JS in
               | websites (not that I want java-applets and Flash LOL).
                | Kind of depressing that so much of the current web
                | design is walled behind JavaScript.
        
             | spankalee wrote:
             | The assumption shouldn't be that you have a bundler, but
             | that your tools and runtimes support standard semantics, so
             | you can bundle if you want to, or not bundle if you don't
             | want to.
        
           | nephanth wrote:
           | > I always prefer to self-host my dependencies
           | 
            | JS dependencies should be pretty small compared to images or
            | other resources. HTTP pipelining should make it fast to load
            | them from your server along with the rest.
            | 
            | The only advantage to using one of those CDN-hosted versions
            | is that it might help with browser caching
        
             | asddubs wrote:
             | nope, browsers silo cache to prevent tracking via cached
             | resources
        
             | dfabulich wrote:
             | > Http pipelining should make it fast to load them from
             | your server with the rest
             | 
             | That's true, but it should be emphasized that it's only
             | fast if you bundle your dependencies, too.
             | 
             | Browsers and web developers haven't been able to find a way
             | to eliminate a ~1ms/request penalty for each JS file, even
             | if the files are coming out of the local cache.
             | 
             | If you're making five requests, that's fine, but if you're
             | making even 100 requests for 10 dependencies and their
             | dependencies, there's a 100ms incentive to do at least a
             | bundle that concatenates your JS.
             | 
             | And once you've added a bundle step, you're a few minutes
             | away from adding a bundler that minifies, which often saves
             | 30% or more, which is usually way more than you probably
             | saved from just concatenating.
             | 
             | > The only advantage to using one of those cdn-hosted
             | versions is that it might help with browser caching
             | 
             | And that is not true. Browsers have separate caches for
             | separate sites for privacy reasons. (Before that, sites
             | could track you from site to site by seeing how long it
             | took to load certain files from your cache, even if you'd
             | disabled cookies and other tracking.)
        
           | bonestamp2 wrote:
           | > prefer to avoid an npm-based webpack/whatever build
           | pipeline
           | 
           | What kind of build pipeline do you prefer, or are you saying
           | that you don't want any build pipeline at all?
        
             | simonw wrote:
             | I don't want a build pipeline. I want to write some HTML
             | with a script type=module tag in it with some JavaScript,
             | and I want that JavaScript to load the ES modules it
             | depends on using import statements (or dynamic import
             | function calls for lazy loading).
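              | 
              | In other words, something like this (file names
              | illustrative) - no build step, the browser resolves the
              | imports itself:
              | 
              |     <script type="module">
              |       import { render } from './js/render.js';
              |       // lazy-load a heavier module only when it's needed
              |       const charts = await import('./js/charts.js');
              |       render(document.body, charts);
              |     </script>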
        
           | skybrian wrote:
           | I suspect this is more relevant for people who aren't
           | normally JavaScript developers. (Let's say you use Go or
           | Python normally.) It's a way of getting the benefits of
           | multi-language development while still being mostly in your
           | favorite language's ecosystem.
           | 
           | On the Node.js side, it's not uncommon to have npm modules
           | that are really written in another language. For example, the
            | esbuild npm package downloads executables written in Go.
            | (And then there's WebAssembly.)
           | 
           | In this way, popular single-language ecosystems evolve
           | towards becoming more like multi-language ecosystems. Another
           | example was Python getting 'wheels' straightened out.
           | 
           | So the equivalent for bringing JavaScript into the Python
           | ecosystem might be having Python modules that adapt
           | particular npm packages. Such a module would automatically
            | generate JavaScript based on a particular npm package,
            | handling the toolchain issue for you.
           | 
           | A place to start might be a Python API for the npm command
           | itself, which takes care of downloading the appropriate
           | executable and running it. (Or maybe the equivalent for Bun
           | or Deno?)
           | 
           | This is adding still more dependencies to your supply chain,
           | although unlike a CDN, at least it's not a live dependency.
           | 
           | Sooner or later, we'll all depend on left-pad. :-)
        
           | galdosdi wrote:
            | Oh lol yeah, I recently gave up and just made npm build part
            | of my build for a hobby project I was really trying to keep
            | super simple, because of this. It was too much of a hassle
            | to link in stuff otherwise, even for very minor things.
            | 
            | You shouldn't need to fish stuff out of node_modules though;
            | just get it linked and bundled into one JS file so that it
            | automatically grabs exactly what you need and its deps.
            | 
            | If this process sketches you out as it does me, one way to
            | address that, as I do, is to have the bundle emitted with
            | minification disabled so it's easy to review.
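            | 
            | For example with esbuild (just one option; paths are
            | placeholders):
            | 
            |     // build.js - bundle without minifying so the output
            |     // stays reviewable
            |     require('esbuild').build({
            |       entryPoints: ['src/main.js'],
            |       bundle: true,   // pulls in exactly what's imported
            |       minify: false,  // keep the output readable for review
            |       outfile: 'dist/app.js',
            |     }).catch(() => process.exit(1));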
        
         | spankalee wrote:
         | I don't think SRI would have ever worked in this case because
         | not only do they dynamically generate the polyfill based on URL
         | parameters and user agent, but they were updating the polyfill
         | implementations over time.
        
         | jsheard wrote:
          | That was my thought too, but polyfill.io does do a bit more
          | than a traditional library CDN: their server dispatches a
          | different file depending on the requesting user agent, so only
          | the polyfills needed by that browser are delivered and newer
          | browsers don't have to download and parse a bunch of useless
          | code. If you check the source code they deliver to a
          | sufficiently modern browser, it doesn't contain any code at
          | all (well, unless they decide to serve you the backdoored
          | version...)
         | 
         | https://polyfill.io/v3/polyfill.min.js
         | 
          | OTOH doing it that way means you _can't_ use subresource
          | integrity, so you really have to trust whoever is running the
          | CDN even more than usual. As mentioned in the OP, Cloudflare
          | and Fastly both host their own mirrors of this service if you
          | still need to care about old browsers.
        
         | ryan29 wrote:
         | The same concept should be applied to container based build
         | pipelines too. Instead of pulling dependencies from a CDN or a
         | pull through cache, build them into a container and use that
         | until you're ready to upgrade dependencies.
         | 
         | It's harder, but creates a clear boundary for updating
         | dependencies. It also makes builds faster and makes old builds
         | more reproducible since building an old version of your code
         | becomes as simple as using the builder image from that point in
         | time.
         | 
         | Here's a nice example [1] using Java.
         | 
         | 1. https://phauer.com/2019/no-fat-jar-in-docker-image/
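          | 
          | A rough sketch of the idea (base image and tooling
          | illustrative) - dependencies are fetched once, baked into a
          | tagged builder image, and old builds reuse the old tag:
          | 
          |     # Dockerfile.builder - rebuilt only when deps change
          |     FROM node:20 AS builder
          |     WORKDIR /app
          |     COPY package.json package-lock.json ./
          |     RUN npm ci    # deps are now frozen into this image
          | 
          | Tag it with a date or lockfile hash, and building an old
          | release means pulling the matching builder tag.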
        
           | hinkley wrote:
           | I get the impression this is a goal of Nix, but I haven't
           | quite digested how their stuff works yet.
        
           | gopher_space wrote:
           | > The same concept should be applied to container based build
           | pipelines too. Instead of pulling dependencies from a CDN or
           | a pull through cache, build them into a container and use
           | that until you're ready to upgrade dependencies.
           | 
           | Everything around your container wants to automatically
           | update itself as well, and some of the changelogs are half
           | emoji.
        
         | adolph wrote:
         | > the World Economic Forum website is affected here, for
         | example! Absolutely ridiculous
         | 
         | Dammit Jim, we're economists, not dream weavers!
        
         | mmsc wrote:
         | >self-host your dependencies behind a CDN service you control
         | (just bunny/cloudflare/akamai/whatever is fine and cheap).
         | 
         | This is not always possible, and some dependencies will even
         | disallow it (think: third-party suppliers). Anyways, then that
         | CDN service's BGP routes are hijacked. Then what? See "BGP
         | Routes" on https://joshua.hu/how-I-backdoored-your-supply-chain
         | 
         | But in general, I agree: websites pointing to random js files
         | on the internet with questionable domain independence and
         | security is a minefield that is already exploding in some
         | places.
        
         | bawolff wrote:
          | The shared CDN model might have made sense back when browsers
          | used a shared cache, but they don't even do that anymore.
          | 
          | Static files are cheap to serve. Unless your site is getting
          | hundreds of millions of page views, just plop the js file on
          | your webserver. With HTTP/2 it will probably be almost the same
          | speed as a CDN in practice, if not faster.
        
           | Cthulhu_ wrote:
           | If you have hundreds of millions of pageviews, go with a
           | trusted party - someone you actually pay money to - like
            | Cloudflare, Akamai, or any major hosting / cloud party - not
            | to increase cache hit rate (what CDNs were originally
            | intended for), but to reduce latency by moving resources to
            | the edge.
        
             | bawolff wrote:
             | Does it even reduce latency that much (unless you have
             | already squeezed latency out of everything else that you
             | can)?
             | 
              | Presumably your backend at this point is not ultra
              | optimized. If you send a Link header and are using HTTP/2,
              | the browser will download the JS file while your backend
              | is doing its thing. I'm doubtful that moving JS to the
              | edge would help that much in such a situation, unless the
              | client is on the literal other side of the world.
              | 
              | There of course comes a point where it does matter; I just
              | think the crossover point is way later than people expect.
        
               | smaudet wrote:
               | > Does it even reduce latency that much
               | 
               | Absolutely:
               | 
               | https://wondernetwork.com/pings/
               | 
               | Stockholm <-> Tokyo is at least 400ms here, anytime you
               | have multi-national sites having a CDN is important. For
               | your local city, not so much (and of course you won't
               | even see it locally).
        
               | bawolff wrote:
                | I understand that ping times are different when
                | geolocated. My point was that in fairly typical
                | scenarios (worst cases are going to be worse) it would
                | be hidden by backend latency, since the fetch could be
                | made concurrent via Link headers or HTTP 103. Devil in
                | the details, of course.
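                | 
                | (The header I mean is the preload form - path
                | illustrative:
                | 
                |     Link: </static/app.js>; rel=preload; as=script
                | 
                | sent on the HTML response itself, or earlier on a 103
                | Early Hints response.)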
        
               | a2800276 wrote:
                | I'm so glad to find some sane voices here! I mean, sure,
                | if you're really serving a lot of traffic to Mombasa,
                | Akamai will reduce latency. You could also try to avoid
                | multi-megabyte downloads for a simple page.
        
               | jsheard wrote:
               | Content: 50KB
               | 
               | Images: 1MB
               | 
               | Javascript: 35MB
               | 
               | Fonts: 200KB
               | 
               | Someone who is good at the internet please help me budget
               | this. My bounce rate is dying.
        
               | jenadine wrote:
               | What's all that JavaScript for?
        
               | ozzcer wrote:
               | Cookie banner
        
               | bawolff wrote:
                | While there are lots of bad examples out there, keep in
                | mind it's not quite that straightforward: it can make a
                | big difference whether or not those resources are on the
                | critical path that blocks first paint.
        
               | josephg wrote:
                | It's not an either-or thing. Do both. Good sites are
                | small and load quickly. The CDN will work better (and
                | be cheaper to use!) if you slim down your assets as
                | well.
        
             | akira2501 wrote:
             | > But not to increase cache hit rate (what CDNs were
             | originally intended for)
             | 
             | Was it really cache hit rate of the client or cache hit
             | rate against the backend?
        
           | hinkley wrote:
            | Aren't we also moving toward giving cross-origin scripts
            | very little access to information about the page? I read
            | some stuff a couple years ago that gave me a very strong
            | impression that running 3rd-party scripts was quickly
            | becoming an evolutionary dead end.
        
             | progmetaldev wrote:
             | Definitely for browser extensions. It's become more
             | difficult with needing to set up CORS, but like with most
             | things that are difficult, you end up with developers that
             | "open the floodgates" and allow as much as possible to get
             | the job done without understanding the implications.
        
           | zdragnar wrote:
           | Even when it "made sense" from a page load performance
           | perspective, plenty of us knew it was a security and privacy
           | vulnerability just waiting to be exploited.
           | 
           | There was never really a compelling reason to use shared CDNs
           | for most of the people I worked with, even among those
           | obsessed with page load speeds.
        
             | progmetaldev wrote:
             | In my experience, it was more about beating metrics in
             | PageSpeed Insights and Pingdom, rather than actually
             | thinking about the cost/risk ratio for end users. Often the
             | people that were pushing for CDN usage were SEO/marketing
             | people believing their website would rank higher for taking
             | steps like these (rather than working with devs and having
             | an open conversation about trade-offs, but maybe that's
             | just my perspective from working in digital marketing
             | agencies, rather than companies that took time to
             | investigate all options).
        
               | josephg wrote:
               | I don't think it ever even improved page load speeds,
               | because it introduces another dns request, another tls
               | handshake, and several network round trips just to what?
               | Save a few kb on your js bundle size? That's not a good
               | deal! Just bundle small polyfills directly. At these
               | sizes, network latency dominates download time for almost
               | all users.
        
               | progmetaldev wrote:
               | I believe you could download from multiple domains at the
               | same time, before HTTP/2 became more common, so even with
               | the latency you'd still be ahead while your other
               | resources were downloading. Then it became more difficult
               | when you had things like plugins that depended on order
               | of download.
        
               | josephg wrote:
               | You can download from multiple domains at once. But think
               | about the order here:
               | 
               | 1. The initial page load happens, which requires a DNS
               | request, TLS handshake and finally HTML is downloaded.
               | The TCP connection is kept alive for subsequent requests.
               | 
               | 2. The HTML references javascript files - some of these
               | are local URLs (locally hosted / bundled JS) and some are
               | from 3rd party domains, like polyfill.
               | 
               | 3a. Local JS is requested by having the browser send
               | subsequent HTTP requests over the existing HTTP
               | connection
               | 
               | 3b. Content loaded from 3rd party domains (like this
               | polyfill code) needs a new TCP connection handshake, a
               | TLS handshake, and then finally the polyfills can be
               | loaded. This requires several new round-trips to a
               | different IP address.
               | 
               | 4. The page is finally interactive - but only after all
               | JS has been downloaded.
               | 
               | Your browser can do steps 3a and 3b in parallel. But I
               | think it'll almost always be faster to just bundle the
               | polyfill code in your existing JS bundle. Internet
               | connections have very high bandwidth these days, but
               | latency hasn't gotten better. The additional time to
                | download (let's say) 10kb of JS is trivial. The extra
                | time
               | to do a DNS lookup, a TCP then TLS handshake and then
               | send an HTTP request and get the response can be
               | significant.
               | 
               | And you won't even notice when developing locally,
               | because so much of this stuff will be cached on your
               | local machine while you're working. You have to look at
               | the performance profile to understand where the page load
               | time is spent. Most web devs seem much more interested in
               | chasing some new, shiny tech than learning how
               | performance profiling works and how to make good websites
               | with "old" (well loved, battle tested) techniques.
        
               | bawolff wrote:
               | > I don't think it ever even improved page load speeds,
               | because it introduces another dns request, another tls
               | handshake, and several network round trips just to what?
               | 
                | I think the original use case was when every site on the
                | internet was using jQuery, and on a JS-based site this
                | blocked display (this was also pre fancy things like
                | HTTP/2 and TLS 0-RTT). Before cache partitioning you
                | could reuse the jQuery file requested from a totally
                | different site currently in cache, as long as the js
                | file had the same url - which almost all clients already
                | had cached, since jQuery was so popular.
                | 
                | So it made sense at one point, but that was long ago and
                | the world is different now.
        
         | fhub wrote:
         | I strongly believe that Browser Dev Tools should have an extra
         | column in the network tab that highlights JS from third party
         | domains that don't have SRI. Likewise in the Security tab and
         | against the JS in the Application Tab.
        
         | TZubiri wrote:
         | Another alternative is not to use dependencies that you or your
         | company are not paying for.
        
         | larodi wrote:
          | I can see CDNs like CF / Akamai soon becoming like an internet
          | 1.2 - with the legitimate stuff in, and everything else
          | considered the gray/dark/1.0 web.
        
         | evantbyrne wrote:
         | I agree with this take, but it sounds like Funnull acquired the
         | entirety of the project, so they could have published the
         | malware through NPM as well.
        
         | modeless wrote:
         | Another downside of SRI is that it defeats streaming. The
         | browser can't verify the checksum until the whole resource is
         | downloaded so you don't get progressive decoding of images or
         | streaming parsing of JS or HTML.
        
       | karaterobot wrote:
       | It's amazing to me that anyone who tried to go to a website, then
       | was redirected to an online sports betting site instead of the
       | site they wanted to go to, would be like "hmm, better do some
       | sports gambling instead, and hey this looks like just the website
       | for me". This sort of thing must work on some percentage of
       | people, but it's disappointing how much of a rube you'd have to
       | be to fall for it.
        
         | causal wrote:
         | It plants a seed. Could be a significant trigger for gambling
         | addicts.
        
         | baobabKoodaa wrote:
         | I'm genuinely puzzled that a group with the ability to hijack
         | 100k+ websites can think of nothing more lucrative to do than
         | this.
        
           | tracker1 wrote:
            | Even with low rates, my first thought would probably be
            | crypto mining via wasm. I'd never do it, but it would have
            | been less noticeable.
        
           | EasyMark wrote:
           | You'd think they'd contract with some hacker group or
           | government and use it as a vector to inject something more
           | nefarious if money was their goal
        
             | baobabKoodaa wrote:
             | Yeah, I guess they just genuinely love sports betting
        
           | akira2501 wrote:
           | Yes, but if you reduce your overall risk, your chances of
           | actually receiving a payout increase immensely.
        
         | bornfreddy wrote:
         | This assumes that advertisers know how the traffic came to
         | their site. The malware operators could be scamming the
         | advertisers into paying for traffic with very low conversion
         | rates.
        
         | laurent123456 wrote:
         | I can't find the reference now, but I think I read somewhere it
         | only redirects when the user got there by clicking on an ad. In
         | that case it would make a bit more sense - the script
         | essentially swaps the intended ad target to that sport gambling
         | website. Could work if the original target was a gaming or
         | sport link.
        
       | jscheel wrote:
       | So glad I removed polyfill as soon as all that nonsense went down
       | a few months ago.
        
       | no_wizard wrote:
        | I can't believe the Financial Times didn't secure the domain
        | for the project. They backed it for a long time, then dropped
        | support for it.
        | 
        | I wonder if the polyfills themselves are compromised, because
        | you can build your polyfill bundles via npm packages that are
        | published by JakeChampion
        
       | apitman wrote:
       | Sigstore is doing a lot of interesting work in the code supply
       | chain space. I have my fingers crossed that they find a way to
       | replace the current application code signing racket along the
       | way.
        
         | lifeisstillgood wrote:
         | Sorry - I must have missed this - what's the application code
         | signing racket?
        
           | apitman wrote:
            | Shipping an app that runs on Windows without scary warnings
            | requires a ~$400/year code signing certificate, unless you
            | release through the Microsoft Store.
        
       | victor9000 wrote:
       | I added a ublock rule on mobile to exclude this domain
       | 
       | ||polyfill.io^
       | 
       | Any other practical steps that mobile users can take?
        
         | flanbiscuit wrote:
         | Thanks for this, I added it to everywhere I use uBlock Origin.
         | 
         | In case anyone was wondering, here's more info:
         | https://github.com/gorhill/uBlock/wiki/Strict-blocking
         | 
          | Blocking full domains like this only works starting from uBO
          | v0.9.3.0. The latest version is v1.58.1, so it's safe to
          | assume most people are up to date. But noting it just in case.
         | 
         | https://github.com/gorhill/uBlock/releases
        
         | tracker1 wrote:
         | Just added to my pihole.
        
         | WarOnPrivacy wrote:
         | edit:DNS servers for Cloudflare, Quad9, Google and Level3 all
         | cname cdn.polyfill.io to Cloudflare domains.
         | 
          | I assume those are the alt endpoints that Cloudflare set up in
          | Feb. Lots of folks seem to be protected now.
         | 
         | Cloudflare notice: https://blog.cloudflare.com/polyfill-io-now-
         | available-on-cdn...
         | 
         | Feb discussion:
         | https://github.com/formatjs/formatjs/issues/4363
         | 
         | (edit:withdrawn) For my use cases, I made the local DNS servers
         | authoritative for polyfill.io. Every subdomain gets a Server
         | Failed error.
         | 
         | Might work for pihole too.
        
         | jagged-chisel wrote:
         | Might as well block googie-anaiytics.com while you're at it
        
       | bangaladore wrote:
       | It seems like Cloudflare predicted this back in Feb.
       | 
       | https://blog.cloudflare.com/polyfill-io-now-available-on-cdn...
        
         | jjulius wrote:
         | CF links to the same discussion on GitHub that the OP does.
         | Seems less like they predicted it, and more like they just
          | thought that other folks' concerns were valid and amplified the
         | message.
        
       | TonyTrapp wrote:
       | Gotta love the code excerpt verifying the device type that checks
       | for "Mac68K" and "MacPPC" strings. Your retro Macs are safe!
        
       | lumb63 wrote:
       | I'll go ahead and make an assumption that the Chinese government
       | was involved. Countries badly need to figure out a way to punish
        | bad actors in cybersecurity realms. It seems that this type of
        | attack, along with many others, is quickly ramping up. If there
       | isn't competent policy in this area, it could become very
       | dangerous.
        
       | szundi wrote:
        | Isn't there some hash in the script tag for this kind of thing?
        | Maybe that should be mandatory or something? This broke half the
        | internet anyway.
        
       | the8472 wrote:
        | start projects with
        | 
        |     Content-Security-Policy: default-src 'self';
        | 
        | then add narrow, individually justified exceptions.
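        | 
        | e.g., if one vetted static host later becomes necessary
        | (hostname illustrative):
        | 
        |     Content-Security-Policy: default-src 'self'; script-src 'self' https://static.example.com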
        
       | baobabKoodaa wrote:
       | Is anyone taking the steps to inform affected site owners?
        
       | jl6 wrote:
       | So who was the previous owner that sold us all out?
        
       | Animats wrote:
        | Washington Post home page external content:
        | 
        |     app.launchdarkly.com
        |     cdn.brandmetrics.com
        |     chrt.fm
        |     clientstream.launchdarkly.com
        |     events.launchdarkly.com
        |     fastlane.rubiconproject.com
        |     fonts.gstatic.com
        |     g.3gl.net
        |     grid.bidswitch.net
        |     hbopenbid.pubmatic.com
        |     htlb.casalemedia.com
        |     ib.adnxs.com
        |     metrics.zeustechnology.com
        |     pixel.adsafeprotected.com
        |     podcast.washpostpodcasts.com
        |     podtrac.com
        |     redirect.washpostpodcasts.com
        |     rtb.openx.net
        |     scripts.webcontentassessor.com
        |     s.go-mpulse.net
        |     tlx.3lift.com
        |     wapo.zeustechnology.com
        |     www.google.com
        |     www.gstatic.com
        | 
        | Fox News home page external content:
        | 
        |     3p-geo.yahoo.com
        |     acdn.adnxs.com
        |     ads.pubmatic.com
        |     amp.akamaized.net
        |     api.foxweather.com
        |     bat.bing.com
        |     bidder.criteo.com
        |     c2shb.pubgw.yahoo.com
        |     cdn.segment.com
        |     cdn.taboola.com
        |     configs.knotch.com
        |     contributor.google.com
        |     dpm.demdex.net
        |     eb2.3lift.com
        |     eus.rubiconproject.com
        |     fastlane.rubiconproject.com
        |     foxnewsplayer-a.akamaihd.net
        |     frontdoor.knotch.it
        |     fundingchoicesmessages.google.com
        |     global.ketchcdn.com
        |     grid.bidswitch.net
        |     hbopenbid.pubmatic.com
        |     htlb.casalemedia.com
        |     ib.adnxs.com
        |     js.appboycdn.com
        |     js-sec.indexww.com
        |     link.h-cdn.com
        |     pagead2.googlesyndication.com
        |     perr.h-cdn.com
        |     pix.pub
        |     player.h-cdn.com
        |     prod.fennec.atp.fox
        |     prod.idgraph.dt.fox
        |     prod.pyxis.atp.fox
        |     rtb.openx.net
        |     secure-us.imrworldwide.com
        |     static.chartbeat.com
        |     static.criteo.net
        |     s.yimg.com
        |     sync.springserve.com
        |     tlx.3lift.com
        |     u.openx.net
        |     webcontentassessor.global.ssl.fastly.net
        |     www.foxbusiness.com
        |     www.googletagmanager.com
        |     www.knotch-cdn.com
        |     zagent20.h-cdn.com
        | 
        | So there's your target list for attacking voters.
        
         | 38 wrote:
         | Exactly why I have a whitelist
         | 
         | https://github.com/3052/blog/blob/main/2024-06/ublock-origin...
        
         | akira2501 wrote:
         | "No sir, we have absolutely no idea why anyone would ever use
         | an ad blocker."
        
       | markus_zhang wrote:
        | Maybe this will lead to everyone building their own stuff. A
        | pretty good outcome for SWEs, isn't it?
        
       | ChrisMarshallNY wrote:
       | I tend to avoid [other people's] dependencies like the plague.
       | Not just for security, but also for performance and Quality. I
       | think I have a grand total of two (2), in all my repos, and they
       | are ones that I can reengineer, if I absolutely need to (and I
       | have looked them over, and basically am OK with them, in their
       | current contexts).
       | 
        | But I use a _lot_ of dependencies; it's just that I've written
        | most of them.
       | 
       | What has been annoying AF, is the inevitable sneer, when I
       | mention that I like to avoid dependencies.
       | 
        | They usually mumble something like _"Yeah, but DRY...", or
        | "That's a SOLVED problem!"_ etc. I don't usually hang around to
        | hear it.
        
         | cqqxo4zV46cp wrote:
         | In most professional development contexts your puritan approach
         | is simply unjustified. You're obviously feeling very smug now,
         | but that feeling is not justified. I note that you say "in all
         | my repos". What is the context in which these repositories
         | exist? Are they your hobby projects and not of any real
         | importance? Do you have literally anyone that can call you out
         | for the wasted effort of reimplementing a web server in
         | assembly because you don't trust dependencies? I hope that
         | you're making your own artisanal silicon from sand you dug
         | yourself from your family farm. Haven't you heard about all
         | those Intel / AMD backdoors? Sheesh.
         | 
         | EDIT: you're an iOS developer. Apples and oranges. Please don't
         | stand on top of the mountain of iOS's fat standard library and
         | act like it's a design choice that you made.
        
           | ChrisMarshallNY wrote:
           | _> In most professional development contexts your puritan
           | approach is simply unjustified. You're obviously feeling very
           | smug now, but that feeling is not justified. I note that you
           | say "in all my repos". What is the context in which these
           | repositories exist? Are they your hobby projects and not of
           | any real importance? Do you have literally anyone that can
           | call you out for the wasted effort of reimplementing a web
           | server in assembly because you don't trust dependencies? I
           | hope that you're making your own artisanal silicon from sand
           | you dug yourself from your family farm. Haven't you heard
            | about all those Intel / AMD backdoors? Sheesh.
           | 
           | EDIT: you're an iOS developer. Apples and oranges. Please
           | don't stand on top of the mountain of iOS's fat standard
           | library and act like it's a design choice that you made._
           | 
           | --
           | 
            |  _ahem_, yeah...
           | 
            |  _[EDIT] Actually, no. Unlike most Internet trolls, I don't
            | get off on the misfortune of others. I -literally- was not
            | posting it to be smug. I was simply sharing my approach,
            | which is hard work, but also one I do for a reason.
           | 
           | In fact, most of the grief I get from folks, is smugness, and
           | derision. A lot of that "Old man is caveman" stuff; just like
           | what you wrote. I've been in a "professional development
           | context" since 1986 or so, so there's a vanishingly small
           | chance that I may actually be aware of the ins and outs of
           | shipping software.
           | 
            | I was simply mentioning my own personal approach -and I have
            | done a _lot_ of Web stuff, over the years-, along with a
           | personal pet peeve, about how people tend to be quite smug to
           | me, because of my approach.
           | 
           | You have delivered an insult, where one was not needed. It
           | was unkind, unsought, undeserved, and unnecessary._
           | 
           | Always glad to be of service.
           | 
           | BTW. It would take anyone, literally, 1 minute to find all my
           | repos.
        
       | diego_sandoval wrote:
       | The phrase "supply chain attack" makes it sound like it's some
       | big, hard to avoid problem. But almost always, it's just
       | developer negligence:
       | 
       | 1. Developer allows some organization to inject arbitrary code in
       | the developer's system
       | 
       | 2. Organization injects malicious code
       | 
       | 3. Developer acts all surprised and calls it an "attack"
       | 
       | Maybe don't trust 3rd parties so much? There's technical means to
       | avoid it.
       | 
        | Calling this situation a supply chain attack is like saying you
        | were the victim of an "ethanol consumption attack" when you got
        | drunk from drinking too many beers.
        
         | ldoughty wrote:
          | In this case, the developer sold the user account & repository
          | for money (no ownership change to monitor)... so if you
          | weren't privy to that transaction, you really couldn't
          | "easily" avoid this without e.g. forking every repo you depend
          | on and bringing it in-house, or some other likely painful
          | defense mechanism.
        
         | cqqxo4zV46cp wrote:
          | What good does this comment do besides letting you gloat and
         | put others down? Like, Christ. Are you telling me that you'd
         | ever speak this way to someone in person?
         | 
         | I have no doubt that every single person in this thread
         | understands what a supply chain attack is.
         | 
         | You are arguing over semantics in an incredibly naive way.
         | Trust relationships exist both in business and in society
         | generally. It's worth calling out attacks against trust
         | relationships as what they are: attacks.
        
         | akira2501 wrote:
          | It's called a supply chain attack to displace the blame from
          | the profitable organization that negligently uses this code
          | onto the unpaid developers who lost control of it.
          | 
          | As if expecting lone OSS developers that you don't donate any
          | money towards to somehow stand up against the attacks of
          | nation states were a rational position to take.
        
       | leeeeeepw wrote:
       | Name and shame the sports betting site
        
         | orthecreedence wrote:
         | Name and shame people who put unwarranted trust in third
         | parties to save 2ms on their requests.
        
       | EGreg wrote:
       | Sadly this is how the Web works.
       | 
       | We need a much more decentralized alternative, that lets static
       | files be served based on content hashes. For now browser
       | extensions are the only way. It's sad but the Web doesn't protect
       | clients from servers. Only servers from clients.
        
         | cqqxo4zV46cp wrote:
         | Before getting on your soapbox about the decentralised web,
         | please look at what Polyfill actually did. I'm not sure what
         | you're actually suggesting, but the closest remotely viable
         | thing (subresource integrity) already exists. It simply
         | wouldn't work in Polyfill's case because Polyfill dynamically
         | selected the 'right' code to send based on user agent.
         | 
         | As usual this problem has nothing to do with centralisation v
         | decentralisation. Are you suggesting that people vet the third
          | parties used by the sites they visit? How does that sound
         | for anyone other than ideological nerds?
        
       | edm0nd wrote:
       | They are denying everything on their Twitter @Polyfill_Global lol
       | 
       | https://x.com/Polyfill_Global
        
       ___________________________________________________________________
       (page generated 2024-06-25 23:00 UTC)