[HN Gopher] Kubernetes Reinvented Virtual Machines (in a good sense)
___________________________________________________________________
Kubernetes Reinvented Virtual Machines (in a good sense)
Author : paulgb
Score : 68 points
Date : 2022-07-31 19:30 UTC (3 hours ago)
(HTM) web link (iximiuz.com)
(TXT) w3m dump (iximiuz.com)
| aetherspawn wrote:
| Good for some use cases, but there are still plenty of servers
| that are hostile to K8s because, e.g., you have to log in and
| enter a license key, or something like that.
| sureglymop wrote:
| The containerization features used by projects like Docker and
| Kubernetes are provided by the Linux kernel. Containers are not
| VMs, they're just regular processes.
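|
| For instance, here is a minimal sketch in Go (Linux only, needs
| root; a toy illustration, not how Docker itself works) that
| starts a shell in fresh UTS/PID/mount namespaces - the same
| kernel primitives container runtimes build on:
|
|     package main
|
|     import (
|         "os"
|         "os/exec"
|         "syscall"
|     )
|
|     func main() {
|         cmd := exec.Command("/bin/sh")
|         cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
|         // New namespaces make the child look "contained": its own
|         // hostname, its own PID 1, its own mount table - but to
|         // the host kernel it is still just a process.
|         cmd.SysProcAttr = &syscall.SysProcAttr{
|             Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID |
|                 syscall.CLONE_NEWNS,
|         }
|         if err := cmd.Run(); err != nil {
|             panic(err)
|         }
|     }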
| rwmj wrote:
| Wouldn't you want to use KubeVirt for this?
| hinkley wrote:
| Does anyone know of a good, sober treatise on the suitability of
| microkernels for multitenancy and/or split priority systems?
|
| Hypervisors just seem like particularly impoverished microkernels
| to me. It reminds me of social programs working hard to avoid the
| term Socialism because of emotional baggage. Oh this is
| definitely not a microkernel? Is it a full privilege program that
| handles a small number of system-wide concerns and delegates the
| maximum number of other concerns to processes running at a lower
| privilege level? Yes? Then can we call it a... no? Okay buddy.
| avereveard wrote:
| *chroots
| z3t4 wrote:
| The bus factor - your engineering skills that took you 20+ years
| to acquire are worth nothing if _you_ can't be replaced...
| secondcoming wrote:
| > On the darker side, inefficient scaling (due to uneven
| daily/yearly traffic distribution), overly complicated
| deployments (delivering code to many boxes quickly is hard), and
| fragile service discovery (have you tried running consul or
| zookeeper at scale?) would lead to higher operational costs.
|
| Obviously, everyone's use case is different, but none of this
| has been my experience with VMs:
|
| - Both AWS and GCP allow you to define custom scaling policies
| for an instance group.
|
| - Deploying new code just means updating a template with your new
| image and pushing that out. Both GCP and AWS would then cycle the
| existing machines automatically to use this new image (see the
| sketch after this list).
|
| - At least in our case, there is no need for Service Discovery;
| you just need to know where the Load Balancer is.
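|
| As a rough sketch of the second point, here is what "cycle the
| existing machines" can look like with the AWS SDK for Go v2 (the
| group name "web-asg" is made up; error handling is trimmed):
|
|     package main
|
|     import (
|         "context"
|         "log"
|
|         "github.com/aws/aws-sdk-go-v2/aws"
|         "github.com/aws/aws-sdk-go-v2/config"
|         "github.com/aws/aws-sdk-go-v2/service/autoscaling"
|     )
|
|     func main() {
|         cfg, err := config.LoadDefaultConfig(context.TODO())
|         if err != nil {
|             log.Fatal(err)
|         }
|         client := autoscaling.NewFromConfig(cfg)
|         // Once the launch template points at the new image, ask
|         // the auto-scaling group to roll instances onto it.
|         _, err = client.StartInstanceRefresh(context.TODO(),
|             &autoscaling.StartInstanceRefreshInput{
|                 AutoScalingGroupName: aws.String("web-asg"),
|             })
|         if err != nil {
|             log.Fatal(err)
|         }
|     }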
|
| We have moved some things to Kubernetes and now have to deal with
| inexplicable occasional failures. That, and a massive everyday
| YAML headache.
|
| YMMV
| awoimbee wrote:
| IMO it takes time to get up to speed, but then it's pure bliss
| in terms of reliability. And don't descend into YAML hell; use
| Pulumi or an equivalent.
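|
| A minimal sketch of the Pulumi route, in Go (the names and image
| are made up; the shape follows the public pulumi-kubernetes v3
| Go examples, so treat the details as approximate):
|
|     package main
|
|     import (
|         appsv1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/apps/v1"
|         corev1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/core/v1"
|         metav1 "github.com/pulumi/pulumi-kubernetes/sdk/v3/go/kubernetes/meta/v1"
|         "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
|     )
|
|     func main() {
|         pulumi.Run(func(ctx *pulumi.Context) error {
|             labels := pulumi.StringMap{"app": pulumi.String("web")}
|             // A typed Deployment instead of a YAML document: the
|             // compiler catches typos that YAML would swallow.
|             _, err := appsv1.NewDeployment(ctx, "web", &appsv1.DeploymentArgs{
|                 Spec: appsv1.DeploymentSpecArgs{
|                     Replicas: pulumi.Int(2),
|                     Selector: &metav1.LabelSelectorArgs{MatchLabels: labels},
|                     Template: &corev1.PodTemplateSpecArgs{
|                         Metadata: &metav1.ObjectMetaArgs{Labels: labels},
|                         Spec: &corev1.PodSpecArgs{
|                             Containers: corev1.ContainerArray{
|                                 corev1.ContainerArgs{
|                                     Name:  pulumi.String("web"),
|                                     Image: pulumi.String("nginx:1.23"),
|                                 },
|                             },
|                         },
|                     },
|                 },
|             })
|             return err
|         })
|     }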
| tannhaeuser wrote:
| That doesn't make sense? K8s runs Docker images, and a Docker
| container explicitly is _not_ a VM but, in fact, a lightweight
| replacement for a full-blown VM. Running Docker on Windows or
| macOS requires a Linux VM, though.
| jiggawatts wrote:
| You can run Windows images on Windows clients using Docker.
| x86x87 wrote:
| Did K8s reinvent VMs, though? People keep forgetting that
| vm != container, and the security posture is completely
| different. Also, for the amount of complexity K8s adds, it's not
| worth the overhead in 95% of cases. (I get it that it's cool and
| everyone and their dog wants to claim they have experience with
| it - I am afraid that once the "movement" fizzles out we're going
| to be stuck with a lot of things that are over-engineered.)
| jstx1 wrote:
| I don't think it's fizzling out; it's kind of becoming a basic
| standard. And if your use case is simple enough not to need
| Kubernetes, then doing it with Kubernetes isn't that much more
| complicated either.
| jiggawatts wrote:
| A standard in the Linux world. Windows doesn't have
| "distributions", so there's less need for containers in that
| space. Microsoft uses Service Fabric internally for the Azure
| back-end, which covers most of the other use-cases of
| Kubernetes.
| Jenk wrote:
| Service Fabric needs to die. It is an abomination of
| software.
|
| Why, why, _WHY_ does each SF service need to have some
| terrible wrapper code that forces it to run in, and only
| in, SF? Oh yeah... vendor lock-in.
|
| No thanks.
|
| Yes, I'm aware that MS added support for Docker containers
| in SF. That integration is dreadful. They did about as good
| a job of it as the WCF team did of shoe-horning "RESTful
| services" in.
|
| > Windows doesn't have "distributions", so there's less
| need for containers in that space.
|
| I don't follow. What do distributions have to do with the
| need for containers?
| k8sToGo wrote:
| Do you know kubernetes or are you just jumping on the hate
| train?
|
| Can't you argue with that logic against anything remotely
| complex? If you have people who know how to use it, then why
| not use it? Would you say the same about complex, highly
| engineered car engines?
| jmspring wrote:
| He has a point, containers and VMs are not the same.
| k8sToGo wrote:
| Yes that part is true. I was not referring to that part.
| de6u99er wrote:
| > Do you know kubernetes or are you just jumping on the hate
| train?
|
| It seems rather that you don't know what kubernetes is. His
| comment is justified. You can put your pitchfork back.
|
| > Can't you argue with that logic against everything remotely
| complex? If you have people who know how to use it, then why
| not use it? Would you say the same about complicated
| engineered car engines?
|
| It seems that you don't understand what he's trying to say.
| k8sToGo wrote:
| >It seems rather that you don't know what kubernetes is.
|
| The company that I helped migrate to k8s, which pays me six
| figures to keep maintaining it, thinks otherwise (also, look
| at my username ;) ). All of our developers love it because
| it replaced some weird hacked-together deployment stuff.
|
| Which part do you think he is right or wrong about?
|
| It just seems that in every thread people moan about k8s.
| It's usually people who are overwhelmed by it, which is a
| legitimate reason not to use it. But just saying that it's
| generally bad is what's starting to annoy me a bit.
| packetized wrote:
| I dare say that it's annoying you as a result of
| cognitive dissonance about your employer paying six
| figures to migrate to it.
|
| If you're being paid well, obviously you'll be annoyed by
| concepts contrary to your work.
| actually_a_dog wrote:
| > It is difficult to get a man to understand something
| when his salary depends upon his not understanding it.
|
| - Upton Sinclair.
| lovehashbrowns wrote:
| That was pretty much my experience. From a distance, it
| looks over-engineered. Once I started using it in my
| personal projects and at work in production, it made a
| lot of sense and was far easier than people make it out
| to be.
|
| I'm currently helping my employer switch everything to
| kubernetes and we're replacing a lot of really, truly
| over-engineered trash.
|
| I will say I do hate yaml but there aren't a ton of
| choices, tbh.
| benreesman wrote:
| I actually know k8s, as well as another Borg-alike
| (Tupperware) that runs at Borg-like scale.
|
| GP is absolutely right that it is insane overkill with no
| conceivable way to pay back the complexity investment in all
| but a few niche cases. Working at a FAANG isn't the only
| niche where full-Kube makes sense - there are others - but it's
| a niche thing, with the niche being roughly: I rack and run
| physical gear on a limited number of SKUs at "cattle not
| pets" scale.
|
| If you're on a cloud provider? You've already got secure,
| restartable, restorable, on-demand-growable, discoverable,
| fail-over-able, imaginary machines a zillion times more
| robust than any half-assed Borg-alike setup most people would
| come up with.
|
| They're happy to take your money if you like imaginary
| computers ("yo dawg, I put a Docker in your Xen VM so you can
| Kubernetes while you pay me big bucks"), but that's because
| they're not running charities.
| MBCook wrote:
| I thought the article went through the whole evolution from
| boxes to VMs to containers to K8s very well.
|
| It's so much more than its headline.
| codsane wrote:
| Yeah, I'm not sure if it's the Reddit mentality at play here
| or actual outrage, but it always seems like these types of
| comments are reacting to a headline. It is unfortunate.
| orwin wrote:
| I am currently overengineering my previous work to run it
| inside Kubernetes. It wasn't my decision; I just asked for a VM
| and access to Ansible, but the chiefs wanted to use Kubernetes
| for everything.
|
| I don't think this is that bad, tbh. I run my unit tests at
| container build time, and while it was tedious to develop,
| mostly because I cannot run Docker on my work PC, I understand
| that it makes things easier to inventory and track. Management
| will always love it, so I think Kubernetes is here to stay. Not
| because it solves a lot of engineering issues (not for me
| anyway; I'd rather run a cron in my VM than declare a kube
| CronJob - see the sketch below), but because it solves a lot of
| other fundamental issues for management (inventory, visibility
| into existing/running projects, and everybody using the same
| system without using the _same_ system).
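|
| For what it's worth, here is roughly what that CronJob
| declaration looks like through client-go instead of YAML (a
| sketch; the name, image, and schedule are made up):
|
|     package main
|
|     import (
|         "context"
|         "log"
|
|         batchv1 "k8s.io/api/batch/v1"
|         corev1 "k8s.io/api/core/v1"
|         metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
|         "k8s.io/client-go/kubernetes"
|         "k8s.io/client-go/tools/clientcmd"
|     )
|
|     func main() {
|         cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
|         if err != nil {
|             log.Fatal(err)
|         }
|         client, err := kubernetes.NewForConfig(cfg)
|         if err != nil {
|             log.Fatal(err)
|         }
|         cj := &batchv1.CronJob{
|             ObjectMeta: metav1.ObjectMeta{Name: "nightly-report"},
|             Spec: batchv1.CronJobSpec{
|                 Schedule: "0 3 * * *", // same syntax as crontab
|                 JobTemplate: batchv1.JobTemplateSpec{
|                     Spec: batchv1.JobSpec{
|                         Template: corev1.PodTemplateSpec{
|                             Spec: corev1.PodSpec{
|                                 RestartPolicy: corev1.RestartPolicyOnFailure,
|                                 Containers: []corev1.Container{{
|                                     Name:  "report",
|                                     Image: "example/report:latest",
|                                 }},
|                             },
|                         },
|                     },
|                 },
|             },
|         }
|         _, err = client.BatchV1().CronJobs("default").Create(
|             context.TODO(), cj, metav1.CreateOptions{})
|         if err != nil {
|             log.Fatal(err)
|         }
|     }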
| k8sToGo wrote:
| I don't know about your team, but in ours I try to move as
| much as possible to Kubernetes so that we have a common way of
| deploying across all members. Of course, if you are just
| thrown into the cold water without help from the Kubernetes
| guys, it will suck.
|
| Also I noticed that once developers "get it", they actually
| like it a lot.
| sporkland wrote:
| Why can't you run Docker on your work PC? If it's licensing
| issues, I'd recommend using lima-vm
| (https://github.com/lima-vm/lima) as a substitute.
| Already__Taken wrote:
| > while it was tedious to develop, mostly because I cannot
| run docker on my work PC
|
| Have a look at tilt.dev. You can get it to build in-cluster
| too, so you only need the CLI. I've really enjoyed its
| workflow.
|
| Or try podman-remote. It should be easier to get those CLI
| tools onto your work PC.
| CameronNemo wrote:
| Our k8s based deployments go so much smoother than our VM
| based deployments (Ansible). We have centralized logging set
| up for k8s, but not VMs. We have post-update health checks
| for k8s pods, but not systemd services. Et cetera.
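|
| To illustrate the health-check point: a sketch of a readiness
| probe on a container, using the client-go types (the path and
| port are made up; field names per recent k8s.io/api versions):
|
|     package main
|
|     import (
|         "fmt"
|
|         corev1 "k8s.io/api/core/v1"
|         "k8s.io/apimachinery/pkg/util/intstr"
|     )
|
|     func main() {
|         // During a rolling update, new pods must pass this probe
|         // before they receive traffic and before old pods are
|         // retired - the check you'd otherwise script around
|         // systemd.
|         c := corev1.Container{
|             Name:  "web",
|             Image: "example/web:latest",
|             ReadinessProbe: &corev1.Probe{
|                 ProbeHandler: corev1.ProbeHandler{
|                     HTTPGet: &corev1.HTTPGetAction{
|                         Path: "/healthz",
|                         Port: intstr.FromInt(8080),
|                     },
|                 },
|                 InitialDelaySeconds: 5,
|                 PeriodSeconds:       10,
|             },
|         }
|         fmt.Printf("%+v\n", c)
|     }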
| [deleted]
| pravus wrote:
| I've always said that it reinvented the mainframe by turning it
| inside-out. I did some work on an IBM 360/370 running VM/CMS
| and I can see a lot of parallels in how both systems operate.
| Most of the architecture is a set of interfaces implemented by
| pluggable pieces of software, and resource scheduling and
| management is a first-class component. This makes the system
| very flexible and reliable.
|
| And I think k8s is here to stay. If what you have is a bunch of
| containers you can easily push that into k8s. It's not any more
| complex than any other solution you would use to deploy them.
| jbverschoor wrote:
| I thought we kind of had this covered 20 years ago with J2EE
| jayd16 wrote:
| It doesn't really handle multi-process isolation and IPC as
| well.
| cassianoleal wrote:
| It also doesn't really handle anything that doesn't run on
| the JVM.
| jayd16 wrote:
| Well, sure, but that's true of any VM, right? You still need
| to make sure your images run on the k8s instance, even if
| that's a lower bar.
| [deleted]
___________________________________________________________________
(page generated 2022-07-31 23:00 UTC)