[HN Gopher] Never Update Anything
___________________________________________________________________
Never Update Anything
Author : generatorman
Score : 105 points
Date : 2024-07-19 19:07 UTC (3 hours ago)
(HTM) web link (blog.kronis.dev)
(TXT) w3m dump (blog.kronis.dev)
| hitpointdrew wrote:
| "Never Update Anything"
|
  | Author proceeds to add two updates to the article, epic troll.
| Joel_Mckay wrote:
| These days it makes sense to life-cycle entire container images
| rather than maintain applications with their dependencies.
|
| The current BSOD epidemic demonstrated the folly of mass
| concurrent versioning.
|
| *nix admins are used to playing upgrade Chicken with their
| uptime scores. lol =)
| nickthegreek wrote:
| Previously (November 4, 2021 -- 319 points, 281 comments):
| https://news.ycombinator.com/item?id=29106159
| schiffern wrote:
| Hug of Death right now, mirror here:
| https://web.archive.org/web/20240509031433/https://blog.kron...
| jspash wrote:
| Maybe they should have updated the server capacity?
| KronisLV wrote:
| Currently it's running on a VPS that has 1 CPU core and 4 GB
| of RAM, resources which are shared with a few other
| processes. I'm thinking that I might move over from multiple
| smaller VPSes (better separation of resources, smaller
| fallout from various issues) to a fewer bigger ones in the
| future (also cheaper), in which case the containers would
| struggle less under load spikes.
| cjalmeida wrote:
| Well, if only he had updated his server stack to something more
| scalable...
| exe34 wrote:
| > Not only that, but put anything and everything you will ever
| need within the standard library or one or two large additional
| libraries.
|
| you can definitely do that with python today: assemble a large
  | group of packages that cover a large fraction of what people
| need to do, and maintain that as the 1 or 2 big packages.
| nobody's stopping you.
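A curated Python meta-package like the one described could be little more than a package that pins a co-tested set of dependencies behind one versioned install. A minimal sketch; the package name and the particular pins are hypothetical, not an existing project:

```toml
# pyproject.toml for a hypothetical "kitchen sink" meta-package:
# installing it pulls in one co-tested, pinned bundle of libraries.
[project]
name = "bigbatteries"          # hypothetical package name
version = "1.0.0"
description = "A curated, co-tested bundle of common dependencies"
requires-python = ">=3.9"
dependencies = [
    # exact pins, so the whole bundle can be tested as one unit
    "requests==2.32.3",
    "numpy==1.26.4",
    "sqlalchemy==2.0.30",
]

[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"
```

Consumers would then depend on `bigbatteries==1.0.0` instead of tracking each library separately, which is essentially the "1 or 2 big packages" model the comment proposes.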
| ungamedplayer wrote:
| You would need to maintain python itself too. Imagine if you
| had done this same plan prior to the python 3 transition.
| exe34 wrote:
| https://github.com/naftaliharris/tauthon
|
| it's already being done!
| apantel wrote:
| No no no, it's "never update anything and don't expose your
| machine to the internet". Winning strategy right there.
| 0cf8612b2e1e wrote:
| If only that were possible with some appliances. I can keep my
    | TV offline, but not the Roku. Internet-connected utilities
    | will continually patch themselves into enshittification.
| KronisLV wrote:
| I know it's supposed to be a statement to take the absurd title
| of my article a bit further, but in some cases, I can see that
| being said unironically.
|
| Nothing good would happen if some machine running Windows XP in
| a hospital that's hooked up to an expensive piece of equipment
| that doesn't run with anything else suddenly got connected to
| the Internet. Nor does the idea of any IoT device reaching past
| the confines of the local network make me feel safe, given how
| you hear about various exploits that those have.
|
| On one hand, you should get security patches whenever possible.
| On the other hand, it's not realistic to get _just_ security
| patches with non-breaking changes only. Other times, pieces of
| hardware and software will just be abandoned (e.g. old Android
    | phones) and then you're on your own, even if you'd want to
| keep them up to date.
| vbezhenar wrote:
| I used to work with state agencies and they run outdated
| unpatched Windows computers all over the place.
|
| Nowadays I work in medical software and hospitals are running
| outdated unpatched Windows computers everywhere.
|
| Nobody cares about updates. Almost nobody. I never saw
| Windows 11. Windows 10 is popular, but there are plenty of
| Vistas. I'm outright declining supporting Windows XP and we
| lost some customers over this issue.
|
| My development tools are somewhat outdated, because compilers
| love to drop old Windows versions and 32-bit architectures,
| so sometimes I just can't update the compiler. For example
| I'm stuck with Java 8 for the foreseeable future, because
| Vista users are too numerous and it's not an option to drop
| them.
|
| Hacker News is like another world. Yes, I update my computer,
| but everyone else does not. Even my fellow developers often
| don't care and just use whatever they got.
| SoftTalker wrote:
| "In my eyes it could be pretty nice to have a framework version
| that's supported for 10-20 years and is so stable that it can be
| used with little to no changes for the entire expected lifetime
| of a system."
|
| This is what applications used to be like, before the web and
| internet hit and regular or even push updating became easy.
|
| It was simply so difficult and expensive to provide updates once
| the software was in the customer's hands that it was done
| intentionally and infrequently. For the most part, you bought
| software, installed it, and used it. That was it. It never
  | changed, unless you bought it again in a newer version.
| EvanAnderson wrote:
| Frequent updates, in the old days, meant that a vendor had poor
| QA. I think that's probably still the case most of the time
| today, too.
| tivert wrote:
| > Frequent updates, in the old days, meant that a vendor had
| poor QA. I think that's probably still the case most of the
| time today, too.
|
| The internet has normalized poor QA. The bosses don't give a
| shit about QA anymore because it's so cheap to just push out
| a patch.
|
| I mean just look at old video game magazines that talked
| about the software development process: the developers would
      | test the hell out of a game, _then test the hell out of it
      | again_, because once it was burned onto a $100 cart (in 2024
      | dollars) _it wasn't ever going to change_.
|
| Now games can remain buggy and unstable for months or even
| years after "release."
| dasil003 wrote:
| I never worked on games, but I did do a streaming video app
| for PS3 in 2010, during the time period when it was
| arguably the best media center box available. Working with
        | Sony (SCEE) on this was eye-opening in how their QA process
        | was set up. Essentially you formally submitted the compiled
| app (C++ and Flash!) to them, and then tickets would come
| back and you'd have to address them. QA was a separate org
| so you never really met them in person, all issue
| discussion would happen in-band to the ticketing system. QA
| had all the power to decide when and if this thing would
        | ship, much more so than Product or Engineering. I can't say
| the process made a ton of sense for a downloadable app
| powered by a multi-platform online service, but it was
| illuminating as to how the high quality bar of 90s/00s
| console games was achieved.
| chrisjj wrote:
| > then tickets would come back
|
| Luxury! With Nintendo you'd often get one ticket. Any
| further bugs would cost you further submissions, and
| months of slippage.
| numpad0 wrote:
| Windows XP. High profile zero day cases and Windows Update
| during 2000s created a "security updates are like dietary
| supplements" mindset.
| chrisjj wrote:
      | The difference is it no longer means the vendor's QA is poorer
| /than average/.
| hypercube33 wrote:
    | I remember even games, or especially games, were like this.
    | Interplay would rarely have a post-launch patch or make it past
    | 1.01 versions of a whole game. Then in the late 90s or 2000-ish
    | Tribes 2 came out and basically didn't even work for over a
    | year until patches finished the game. I think once the Internet
    | hit critical mass things shifted forever and haven't gone back.
| spyspy wrote:
| Kinda weird to see Java over Go, when the former is basically an
| entirely new language from what it was 10 years ago and the
| latter has made it an explicit goal to never break older versions
| and (almost) never change the core language.
| karolist wrote:
| Writing backends in Go I do get that warm fuzzy feeling knowing
| that it will compile and work in ten years. The syntax is easy
    | to read, and if I'm not too lazy to add extensive tests I
    | can simply read these as documentation to re-familiarise
    | myself later. It's now my go-to tool for everything server
    | side.
| ungamedplayer wrote:
      | Now do it for Lisp, where your libraries were last updated
      | 5 years ago.
| KronisLV wrote:
| Oh hey, I was wondering why the VPS suddenly had over 100 load
| average, restarted Docker since the containers were struggling,
| now I know why (should be back now for a bit). Won't necessarily
| fix it, might need to migrate over to something else for the
| blog, with a proper cache, alongside actually writing better
| articles in the future.
|
| I don't think the article itself holds up that well, it's just
| that updates are often a massive pain, one that you have to deal
| with _somehow_ regardless. Realistically, LTS versions of OS
  | distros and technologies that don't change often will lessen the
| pain, but not eliminate it entirely.
|
| And even then, you'll still need to deal with breaking changes
| when you will be forced to upgrade across major releases (e.g.
| JDK 8 to something newer after EOL) or migrate once a technology
| dies altogether (e.g. AngularJS).
|
| It's not like people will backport fixes for anything
| indefinitely either.
| cjalmeida wrote:
| >might need to migrate over to something else for the blog,
| with a proper cache
|
| Never Update _Anything_ :)
| KronisLV wrote:
| I am very much tempted not to because it works under lower
| loads, could just put it on a faster server, but how could I
| pass up the chance to write my own CMS (well, a better one
| than the previous ones I've done)? That's like a rite of
| passage. But yes, the irony isn't lost on me, I just had to
| go for _that_ title.
| stavros wrote:
| If you have to write your own CMS, make it compile to
| static files. I did that with Django, used Django-distill,
| and it's hands down the best static site generator I've
| ever used. My site never needs updates and never goes down
| under any amount of load.
| hipadev23 wrote:
            | "static files" are nothing more than a no-TTL caching
            | strategy with manual eviction.
| Joel_Mckay wrote:
    | Alpine Linux was designed for web services, as it includes the
| bare minimum resources necessary for deployment.
|
| https://wiki.alpinelinux.org/wiki/Nginx
|
| Also, may want to consider a flat html site if you don't have
| time to maintain a framework/ecosystem. =3
| KronisLV wrote:
| Alpine is pretty nice!
|
| I did end up opting for Ubuntu LTS (and maybe the odd Debian
| based image here or there) for most of my containers because
| it essentially has no surprises and is what I run locally, so
| I can reuse a few snippets to install certain tools and it
| also has a pretty long EOL, at the expense of larger images.
|
| Oddly enough, I also ended up settling on Apache over Nginx
| and even something like Caddy (both of which are also really
| nice) because it's similarly a proven technology that's good
| enough, especially with something like mod_md
| https://httpd.apache.org/docs/2.4/mod/mod_md.html and because
| Nginx in particular had some unpleasant behavior when DNS
| records weren't available because some containers in the
| cluster weren't up
| https://stackoverflow.com/questions/50248522/nginx-will-
| not-...
|
| I might go for a static site generator sometime!
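For reference, the mod_md setup described in the comment above comes down to only a few directives: mod_md provisions and renews the certificate over ACME, so the vhost needs no certificate paths at all. A minimal sketch, with placeholder domain and contact email:

```apache
# Minimal mod_md sketch: the module obtains and renews a certificate
# for the listed domain automatically via ACME (e.g. Let's Encrypt).
LoadModule md_module modules/mod_md.so
LoadModule ssl_module modules/mod_ssl.so
LoadModule watchdog_module modules/mod_watchdog.so

ServerAdmin admin@example.com       # placeholder ACME contact address
MDomain example.com                 # placeholder managed domain
MDCertificateAgreement accepted     # accept the CA's terms of service

<VirtualHost *:443>
    ServerName example.com
    # no SSLCertificateFile/KeyFile needed; mod_md supplies them
    SSLEngine on
</VirtualHost>
```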
| Joel_Mckay wrote:
| Apache is stable for wrapping mixed services, but needs a
| few firewall rules to keep it functional (slow loris +
| mod_qos etc.) =)
|
| Ubuntu LTS kernels are actually pretty stable, but
| containers are still recommended. ;)
| KronisLV wrote:
| That's fair! Honestly, it's kind of cool to see how many
| different kinds of packages are available for Apache.
|
| A bit off topic, but I rather enjoyed the idea behind
| mod_auth_openidc, which ships an OpenID Connect Relying
| Party implementation, so some of the auth can be
| offloaded to Apache in combination with something like
| Keycloak and things in the protected services can be kept
| a bit simpler (e.g. just reading the headers provided by
| the module): https://github.com/OpenIDC/mod_auth_openidc
| Now, whether that's a good idea, that's debatable, but
| there are also plenty of other implementations of Relying
| Party out there as well:
| https://openid.net/developers/certified-openid-connect-
| imple...
|
| I am also on the fence about using mod_security with
| Apache, because I know for a fact that Cloudflare would
| be a better option for that, but at the same time self-
| hosting is nice and I don't have anything too precious on
| those servers that a sub-optimal WAF would cause me that
| many headaches. I guess it's cool that I can, even down
| to decent rulesets: https://owasp.org/www-project-
| modsecurity-core-rule-set/ though the OWASP Coraza
| project also seems nice: https://coraza.io/
| Joel_Mckay wrote:
| I prefer x509 client GUID certs, and AMQP+SSL with null
| delineated bson messaging.
|
| Gets rid of 99.999% of problem traffic on APIs.
|
| It is the most boring thing I ever integrated, and
| RabbitMQ has required about 3 hours of my time in 5
| years. I like that kind of boring... ;)
| graemep wrote:
          | What exactly do you do to protect Apache from slow loris?
          | It's my main reason for not using Apache.
| louwrentius wrote:
    | For contrast, I recently had a no. 1 HN hit and my Pi4 never
    | had a core go beyond 20%.
    |
    | Yes, it's a static website. It's amazing how little performance
    | you actually need to survive an HN avalanche.
| cogman10 wrote:
| I find it pretty funny that immediately on the first click of
| this article I was greeted with an internal server error.
| KronisLV wrote:
| That was me scrambling to allocate more resources to the
    | container and redeploy it, after my alerting tipped me off
    | about the issue and I figured out what was going on. While the
| container itself was down, the reverse proxy returned an error.
| neontomo wrote:
| the react module bloat example is not a fair one, the recommended
| way to start a react project isn't to use create-react-app. other
| methods are more streamlined. but then again, the deprecation of
| create-react-app perhaps proves the point that updates create
| problems.
| eXpl0it3r wrote:
    | It's no longer the recommended way, and last I checked it's
    | not maintained as actively as the alternatives, but for quite
    | a while it was the recommended way.
| neontomo wrote:
| that is what i'm saying ;-)
| AaronFriel wrote:
| A feature I've wanted for ages, for every OS package manager
| (Windows, apt, yum, apk, etc.), every language's package manager
| (npm, pypi, etc.), and so on is to update but filter out anything
| less than one day, one week, or one month old. And it applies
| here, too.
|
  | Now, some software effectively does this risk mitigation for
  | you. Windows, macOS, browsers all do this very effectively. Maybe
| only the most cautious enterprises delay these updates by a day.
|
  | But even billion-dollar corporations don't do a great job of
  | rolling out updates incrementally. This especially applies as
  | tools exist to automatically scan for dependency updates (the
  | list of these is too long to name). Don't tell me about an
  | update only a day old - that's too risky for my taste.
|
| So for OS and libraries for my production software? I'm OK
| sitting a week or a month behind, let the hobbyists and the rest
| of the world test that for me. Just give me that option, please.
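The filtering the comment asks for is trivial once you have publish timestamps, which registries do expose (e.g. the npm registry's "time" field, or PyPI's JSON API). A sketch of the core check, with the registry lookup left out:

```python
from datetime import datetime, timedelta, timezone

def filter_mature_versions(releases, min_age_days=7, now=None):
    """Drop versions published less than `min_age_days` ago.

    `releases` maps version string -> publish datetime (UTC), e.g. as
    parsed from the npm registry's "time" field for a package.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=min_age_days)
    return {v: t for v, t in releases.items() if t <= cutoff}

# Example: a day-old release is held back, an older one passes.
releases = {
    "1.2.3": datetime(2024, 6, 1, tzinfo=timezone.utc),
    "1.3.0": datetime(2024, 7, 18, tzinfo=timezone.utc),
}
ref = datetime(2024, 7, 19, tzinfo=timezone.utc)
print(sorted(filter_mature_versions(releases, 7, ref)))  # ['1.2.3']
```

A resolver or update bot could apply this as a final filter over candidate versions, which is exactly the "sit a week behind" option requested above.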
| n_plus_1_acc wrote:
    | Isn't that basically non-rolling-release distros?
| c0balt wrote:
      | Yesn't, many still release updates frequently (usually for
      | servers etc.) for as long as they are compatible. Mostly
      | though only minor updates for features.
|
| This is required for some components, like, e.g., glibc or
| openssh, to stay secure-ish.
| gnramires wrote:
| Debian has 2/3 stages of software deployment that I know of:
| Unstable, Testing and Stable. By the time it comes to stable it
| has been quite extensively tested. The exceptions are only
| security updates which you may want to get very quickly anyway.
| I really recommend Debian (in particular with unattended
    | security upgrades) for servers.
|
    | Other distros have this as well (Tumbleweed, Void, etc.), and
| I really think most people should not be using recently-
| deployed software. A small community using them however helps
| testing so the rest of us can have more stability. Which is why
| I don't recommend using Arch (or Debian unstable) for general
| users, unless you specifically want to help testing and accept
| the risk.
|
    | Also, randomizing update schedules by at least a few hours
    | does seem very wise (I don't think even the most urgent updates
    | would make or break anything in, say, 6 hours of randomization?)
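On Debian, the unattended security upgrades recommended above are a small apt configuration; shown here roughly as the unattended-upgrades package ships it (paths and suite names per Debian's defaults):

```conf
// /etc/apt/apt.conf.d/20auto-upgrades: refresh package lists and run
// the unattended-upgrade job once a day.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades: restrict automatic
// installs to the security suite only, leaving everything else to
// manual, scheduled upgrades.
Unattended-Upgrade::Origins-Pattern {
    "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
```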
| attentive wrote:
    | Amazon Linux 2023 (al2023) uses "releasever", which is basically
    | a dated snapshot of the packages. You can choose to install
    | last-1 instead of the latest.
| knallfrosch wrote:
| We're being paid to migrate our hardware boxes programmatically
| to Windows 10 IoT LTSC so that new boxes ship with 10+ years of
| security. We're still supporting some XP devices (not connected
| to the internet.) So to anyone depending on us: You're welcome.
|
| But let me tell you something: Long-Term Support software mostly
| doesn't pay well, and it's not fun either. Meanwhile some Google
| clown is being paid 200k to fuck up Fitbit or rewrite Wallet for
| the 5th time in the newest language.
|
| So yeah. I'd love to have stable, reliable dependencies while I'm
  | mucking around with the newest language du jour. But you see how
| that doesn't work, right?
| kccqzy wrote:
| The fucking up of Fitbit and the rewriting of Wallet are not
    | the engineers' fault. These kinds of projects are mostly decided
    | and planned by PMs: clueless and incompetent PMs. For payments
| in particular it was not even just an incompetent PM, but an
| incompetent director that saw the success of the NBU Paisa
| payment in India and thought the U.S. would be the same.
|
    | The engineers are at most just complicit. Those who aren't are
    | laid off or they quit of their own accord.
| ozim wrote:
      | Engineers are mostly complicit because they get that $200k
      | salary when they chase the next shiny thing.
|
| No one is paying such salaries for mundane clerical job.
| chrisjj wrote:
| You're saying complicity is not a fault?? :)
| msoad wrote:
| Kinda ironic that the article itself was updated
| CooCooCaCha wrote:
| I disagree, keep things constantly updated (within reason).
|
| Most companies I've worked for have the attitude of the author,
| they treat updates as an evil that they're forced to do
| occasionally (for whatever reason) and, as a result, their
| updates are more painful than they need to be. It's a self-
| fulfilling prophecy.
| eXpl0it3r wrote:
| Even as a developer not focused on web dev this sounds pretty
| bad, unless everyone in your dependency tree (from OS to language
| to libraries) decides to make a switch and even then, you'll be
| stuck with outdated ways to do things.
|
| Who wants to continue maintaining C++03 code bases without all
| the C++11/14/17/20 features? Who wants to continue using .NET
| Framework, when all the advances are made in .NET? Who wants to
| be stuck with libraries full of vulnerabilities and who accepts
| the risk?
|
| Not really addressed is the issue of developers switching
| jobs/projects every few years. Nobody is sticking around long
| enough to amass the knowledge needed to ensure maintenance of any
| larger code base.
|
  | Which either is caused by, or causes, companies to not commit
  | themselves for any longer period of time. If the company expects
| people to leave within two years and doesn't put in the monetary
| and non-monetary effort to retain people, why should devs
| consider anything longer than the current sprint?
| KronisLV wrote:
| > Who wants to continue maintaining C++03 code bases without
| all the C++11/14/17/20 features? Who wants to continue using
| .NET Framework, when all the advances are made in .NET? Who
| wants to be stuck with libraries full of vulnerabilities and
| who accepts the risk?
|
    | With the exception that in this hypothetical world we'd get
    | backported security updates (addressing that particular point),
    | who'd want something like this would be the teams working on
    | large codebases that:
    |
    |   - need to keep working in the future and still need to be
    |     maintained
    |   - are too big or too time-consuming to migrate to a newer
    |     tech stack (with breaking changes in the middle) with the
    |     available resources
    |   - are complex in and of themselves, where adding new features
    |     could be a detriment (e.g. different code styles, more
    |     things to think about etc.)
|
| Realistically, that world probably doesn't exist and you'll be
| dragged kicking and screaming into the future, once your Spring
| version hits EOL (or worse yet, will work with unsupported old
| versions and watch the count of CVEs increase, hopefully very
| few will find themselves in this set of circumstances).
| Alternatively, you'll just go work somewhere else and it'll be
| someone else's problem, since there are plenty of places where
| you'll always try to keep things up to date as much as
| possible, so that the delta between any two versions of your
| dependencies will be manageable, as opposed to needing to do
| "the big rewrite" at some point.
|
| That said, enterprises already often opt for long EOL Linux
| distros like RHEL and there is _a lot_ of software out there
| that is stuck on JDK 8 (just a very visible example) with no
    | clear path of what to do once it reaches EOL, so it's not like
| issues around updates don't exist. Then again, not a lot of
| people out there need to think about these things, because the
| total lifetime of any given product, project, their tenure in
| the org or even the company itself might not be long enough for
| those issues to become that apparent.
| pron wrote:
| Because of this, in the JDK we've adopted a model we call "tip &
| tail". The idea is that there are multiple release trains, but
| they're done in such a way that 1/ different release trains
| target different audiences and 2/ the model is cheap to maintain
| -- cheaper, in fact, than many others, especially that of a
| single release train.
|
| The idea is to realise that there are two different classes of
| consumers who want different things, and rather than try to find
| a compromise that would not fully satisfy either group (and turns
| out to be more expensive to boot), we offer multiple release
| trains for different people.
|
| One release train, called the tip, contains new features and
| performance enhancements in addition to bug fixes and security
| patches. Applications that are still evolving can benefit from
| new features and enhancements and have the resources to adopt
| them (by definition, or else they wouldn't be able to use the new
| features).
|
| Then there are multiple "tail" release trains aimed at
| applications that are not interested in new features because they
| don't evolve much anymore (they're "legacy"). These applications
| value stability over everything else, which is why only security
| patches and fixes to the most severe bugs are backported to them.
| This also makes maintaining them cheap, because security patches
| and major bugs are not common. We fork off a new tail release
| train from the tip every once in a while (currently, every 2
| years).
|
| Some tail users may want to benefit from performance improvements
| and are willing to take the stability risk involved in having
| them backported, but they can obviously live without them because
| they have so far. If their absence were painful enough to justify
| increasing their resources, they could invest in migrating to a
| newer tail once. Nevertheless, we do offer a "tail with
| performance enhancements" release train in special circumstances
| (if there's sufficient demand) -- for pay.
|
| The challenge is getting people to understand this. Many want a
| particular enhancement they personally need backported, because
| they think that a "patch" with a significant enhancement is safer
| than a new feature release. They've yet to internalise that what
  | matters isn't how a version is _called_ (we don't use semantic
| versioning because we think it is unhelpful and necessarily
| misleading), but that there's an inherent tension between
| enhancements and stability. You can get more of one or the other,
| but not both.
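The routing rule of the tip & tail model described above fits in a few lines. A toy sketch, not actual JDK release tooling; the train and change-category labels are my own:

```python
def lands_in(change, train):
    """Decide whether a change is delivered to a given release train.

    `train` is "tip" or any tail (e.g. "17-tail"). Tails only ever
    receive security patches and fixes for the most severe bugs,
    which is what keeps them both stable and cheap to maintain.
    """
    if train == "tip":
        return True  # tip takes features, perf work, and all fixes
    return change in ("security-patch", "severe-bug-fix")

print(lands_in("feature", "tip"))             # True
print(lands_in("feature", "17-tail"))         # False
print(lands_in("security-patch", "17-tail"))  # True
```

The asymmetry is the whole model: everything flows to the tip, while tails accept only the two low-volume categories, so maintaining many tails stays affordable.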
| jldugger wrote:
| Is this any different than the LTS approach Canonical and
| others take?
| pron wrote:
| I think Canonical do something similar, but as to other LTSs
| -- some do tip & tail and some don't. The key is that tails
| get only security patches and fixes to major bugs and rarely
| anything else. This is what offers stability and keeps
| maintaining multiple tails cheap (which means you can have
| more of them).
|
| Even some JDK vendors can't resist offering those who want
| the comforting illusion of stability (while actually taking
| on more real risk) "tail patches" that include enhancements.
| chad1n wrote:
| Most LTS strategies are like this, enterprises run the LTS
| version on the server while the consumers run the latest
| version. In a way, it is beta testing, but the consumer isn't
| really mad about it since he gets new features or performance
| boosts. LTS users usually update once every 3-6 months or if a
| serious CVE comes out, while normal users update daily or
| weekly. To be honest, I know servers running whatever the
| latest version of nodejs is, instead of LTS, mostly because
    | they don't know that node has an LTS policy.
| ricksunny wrote:
| The 'Skip this Update [pro]' button example (Docker Desktop) just
| made me facepalm and helped me internalize that I'm not a luddite
  | about technology, I'm a luddite about the _collectives_ of people
  | (not the individual people...(!)) who feel compelled to craft
  | these dark business patterns.
| dzonga wrote:
  | Java over Golang, lol, when Golang has literally been
  | version-stable for over a decade now.
| cupantae wrote:
| I've supported enterprise software for various big companies and
| I can tell you that most decision makers for DCs agree with this
| sentiment.
|
| EMC had a system called Target Code which was typically the last
| patch in the second-last family. But only after it had been in
| use for some months and/or percentage of customer install base.
| It was common sense and customers loved it. You don't want your
| storage to go down for unexpected changes.
|
| Dell tried to change that to "latest is target" and customers
| weren't convinced. Account managers sheepishly carried on an
| imitation of the old better system. Somehow from a PR point of
| view, it's easier to cause new problems than let the known ones
| occur.
| UniverseHacker wrote:
| I pretty much agree- most systems don't need updating. I've seen
  | and set up OpenBSD servers that ran for a decade without issues,
| never getting updates. I currently run some production web
| services on Debian where I do updates every 3 years or so, and no
| issues.
|
  | Leaving something alone that works well is a good strategy. Most
| of the cars on the road are controlled by ECUs that have never
| had, and never will have any type of updates, and that is a good
  | thing. Vehicles that can get remote updates, like Teslas, are
  | going to be much less reliable than ones not connected to
  | anything, which have a single extensively tested final version.
|
| An OS that is fundamentally secure by design, and then locked
| down to not do anything non-essential, doesn't really need
| updates unless, e.g. it is a public facing web server, and the
| open public facing service/port has a known remote vulnerability,
| which is pretty rare.
| albertP wrote:
  | Heh. I've always been running everything one or two versions
  | behind the latest (for my personal laptop, not servers). That means
| mainly OS (e.g., macOS), but as long as I can avoid automatic
| updates, I do so.
|
| I believe the chances of having a bricked laptop because of a bad
  | update are higher than the chances of getting malware from
  | running one or two versions behind the latest one.
| Plasmoid wrote:
| I was working at a place that delivered onprem software. One
| customer asked us "We like features of version N but we're
  | running N-1. Can you backport them so we don't have to
  | upgrade?" I replied we'd already done that; it was called
| version N.
| mike741 wrote:
| Urgent updates can be necessary every once in a while but should
| be recognized as technical failures on the part of the
| developers. Failure can be forgiven, but only so many times. The
| comments saying "what about X update that had this feature I
| need?" are missing the point entirely. Instead ask yourself about
| all of the updates you've made without even looking at the patch
| notes, because there are just too many updates and not enough
| time. Instead of blaming the producers for creating a blackbox
| relationship with the consumers, we blame the consumer and
| blindly tell them to "just update." That's what needs to change.
| It's a bit similar to opaque ToS issues.
___________________________________________________________________
(page generated 2024-07-19 23:07 UTC)