[HN Gopher] Kubernetes Exposed: One YAML Away from Disaster
___________________________________________________________________
Kubernetes Exposed: One YAML Away from Disaster
Author : consumer451
Score : 105 points
Date : 2023-08-08 18:15 UTC (4 hours ago)
(HTM) web link (blog.aquasec.com)
(TXT) w3m dump (blog.aquasec.com)
| koito17 wrote:
| I am actually a bit surprised there are any clusters out there
| exposing their service ports. Do people not run firewalls in
| their nodes? Sure, it may cause some headaches due to a range of
| random IPs and random ports that Kubernetes expects to be able to
| bind, but I assume any sysadmin will know how to deal with this.
| I know k3s has the awesome advice of _disabling_ firewalld[1],
| but presumably the clusters mentioned in this article are largely
| using kubeadm or some other tool rather than k3s.
|
| With that said, I agree that secrets management on k8s is pretty
| bad (unless you use an external service like Hashicorp Vault, a
| frequent pattern that I run into with k8s). Even with RBAC and
| secrets in a namespace being inaccessible from other namespaces,
| the moment someone has permissions to read secrets from a
| namespace, it's essentially game over for you, as noted in this
| paragraph.
|
| > Another potential attack vector within a Kubernetes cluster is
| accessing internal services. ... In cases where authentication is
| required, the credentials can typically be found within the
| Kubernetes secrets space.
|
| [1] https://docs.k3s.io/advanced#red-hat-enterprise-linux--cento...
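|
| To make that "game over" point concrete, here is a rough sketch
| of the kind of namespace-scoped Role that creates the exposure
| (the names are made up):
|
|     apiVersion: rbac.authorization.k8s.io/v1
|     kind: Role
|     metadata:
|       name: secret-reader        # hypothetical Role name
|       namespace: my-app          # hypothetical namespace
|     rules:
|     - apiGroups: [""]            # "" is the core API group
|       resources: ["secrets"]
|       verbs: ["get", "list"]     # enough to dump every secret
|
| Anything bound to a Role like this can run the equivalent of
| "kubectl get secrets -o yaml" for the whole namespace, which is
| exactly the scenario above.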
| crabbone wrote:
| Why would you be surprised? In most cases the only thing users
| need from a Kubernetes cluster is to access it from the
| outside... like... why wouldn't they expose it to the outside
| world? -- this is the most useful thing you can do with it.
|
| And the range of things you might want to expose... oh, god
| only knows. Think about such trivial stuff as FTP or iSCSI --
| kinda useful, right? -- but Kubernetes rules for port exposure
| are idiotic, because you cannot expose ranges. And quite a few
| popular Internet protocols assume they can allocate whatever
| port they want and start talking on it. And then you start
| fighting Kubernetes... and you'll win, but it will be some
| custom security solution... with the obvious problem of it
| being a custom security solution.
|
| Now, add to this that Kubernetes will typically mess with your
| iptables and maybe also arptables, and if your firewall is
| based on e.g. iptables, then you might be out of luck using it
| with Kubernetes, and again, will end up rolling something
| "special".
|
| Obviously, you might also want, say, DNS to work inside your
| cluster, but that's a separate can of worms.
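|
| For illustration, a NodePort Service has to enumerate every
| single port -- there is no range syntax -- which is what makes
| protocols that allocate ports dynamically so painful (names and
| ports below are made up):
|
|     apiVersion: v1
|     kind: Service
|     metadata:
|       name: ftp                  # hypothetical Service
|     spec:
|       type: NodePort
|       selector:
|         app: ftp
|       ports:
|       - name: ftp-control
|         port: 21
|         nodePort: 30021          # must be in 30000-32767 by default
|       - name: ftp-passive-0      # one entry per passive data port...
|         port: 30100
|         nodePort: 30100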
| merb wrote:
| The firewalld advice is for the trusted zone... and as for the
| apiserver's 6443 HTTPS port, k3s comes with RBAC enabled by
| default, so if you want to access it via kubelet you need to
| open it?!
|
| The trusted zone is used by overlay networks or node-internal
| stuff (or if you manually put your interface into it).
| erulabs wrote:
| I mean, it's not the best argument to say a system is "one action
| away from disaster". If you pull the escape lever in an F16 for
| no reason you're also pretty screwed. If you intentionally fly
| into the ground no one blames General Dynamics for making the
| plane too complex.
|
| It turns out complex tools for complex tasks require some skilled
| operators. Yes kubernetes is overkill for your home lab, no it's
| not overkill for enterprise-grade AWS orchestration with hundreds
| of apps and millions of users. News at 11.
| ramraj07 wrote:
| I want to host a website not bomb the Middle East. Most
| enterprise apps are still websites that need no more than
| elastic beanstalk. If your analogy is to compare k8s to f16
| then maybe you are not doing your argument any service.
| alephnerd wrote:
| If you're dealing with dynamic scaling needs, K8s is quite
| nice. There's a reason why most telcos have deployed 5G
| functionality using K8s.
|
| If you are in a situation where you want to follow a
| decoupled/microservices development pattern (ie. split
| component ownership across dev teams) as well as manage
| scalability during high load periods, K8s is hard to beat.
|
| K8s is a useful tool, but can be overkill in a number of
| situations. Just like a hammer, you can't use it to solve
| every use case.
|
| P.S. F-35s use K8s clusters (or might be K3s) to deploy their
| EWS.
| erulabs wrote:
| Well, both hosting websites _and_ bombing the Middle East make
| use of Kubernetes. [1]
|
| 1. https://www.theregister.com/2020/06/03/kubernetes_b_21_bombe...
|
| I've spent my entire career being on-call for web servers and
| Elastic Beanstalk is _not_ enough. Deployment failures,
| deployment speed, deployment safety, upgrades, local
| testing... There is a reason AWS themselves are pushing EKS
| and not Elastic Beanstalk, and it's not because Kubernetes
| is trendy, it's because it's incredibly useful.
| ramraj07 wrote:
| I've had production SaaS apps running on elastic beanstalk
| just fine for years now. Better SLA than any of the kube
| apps I've seen. You're right that if you have a highly
| capable support team and a real need for it, go with the
| pods; but if you just have mediocre engineers and a
| straightforward deployment with no real scaling issues, then
| just stick to simple systems.
| crabbone wrote:
| It's nothing like that.
|
| Kubernetes was built with no program for how to do important
| things. Every important thing in Kubernetes is a post-hoc,
| poorly bolted-on patch instead of a real solution.
|
| Take for instance upgrades. Don't like upgrades? -- Try
| downgrades? -- Oh, what do I hear, you seem to not like that
| either... So, how about user management? -- Also no? -- Well,
| how about backups and recovery? -- What, again?
|
| Kubernetes solved _some_ easy problems quickly with no program
| for how to solve hard problems. Security is one of those hard
| problems Kubernetes has no idea what to do about. The reason?
| -- Kubernetes, on its own, provides very little. It doesn't
| know how to do networking -- you need a CNI for it. But there's
| no good interface in Kubernetes to allow the admin to somehow
| control _in general_ how the cluster is supposed to behave.
| Maybe some container somewhere will do something tricky, and
| will punch a hole in whatever firewall you thought you've built
| around it. And you cannot make sure upfront that's not going to
| happen. There isn't a single way to punch that kind of hole...
| and the problem is, oftentimes, simply to make things work,
| Kubernetes users will punch those holes. Maybe because they
| don't know how to do things better, or maybe because there
| isn't really a better way, or the better way is so tedious and
| convoluted that nobody will really follow it...
|
| It's unreasonable to expect _anyone_ to be able to deal with
| this kind of complexity. We need to be able to manage
| complexity, but Kubernetes gives us no tools to do it. It's
| the same type of problem as Dijkstra described with "goto" --
| it's the tool's fault. It creates so much complexity that
| humans cannot cope. Not only humans: computer programs cannot
| cope -- too many things to check, too many possible problems
| to protect yourself against.
| erulabs wrote:
| I disagree with everything you've said but there isn't enough
| content to any of your complaints to respond meaningfully.
|
| Upgrades and downgrades work fantastically. Putting "a
| firewall in front" is trivial - just... put a firewall in
| front. I don't know what "punch a hole" means, but if you
| mean "bind a port", yes, there _are indeed_ a lot of ways to
| do that (on Linux, Windows, and macOS!), and kubernetes
| provides excellent, organized, standardized and controllable
| ways to do that.
|
| Kubernetes does not create complexity in my experience. It's
| the other way around - developers who have never seen the
| complexity in the world complain about the complexity
| embodied in the tools. They think "you're not gonna need it",
| despite an entire industry of operators saying "actually yes,
| we do need this, and badly".
|
| I've seen developers who've never built a website in their
| life complain about React, and it sounds exactly the same as
| this. You might not need these tools, but they're not
| _unneeded_. If you can make a simpler and more powerful tool,
| please, go ahead, you'll be rich and famous, and everyone
| from the US Navy to telecoms to where I work will celebrate
| you.
| superq wrote:
| "Punch a hole" is a common idiom in the security and
| networking space which means "open a port in the firewall".
| barakm wrote:
| This is also why managed Kubernetes is a useful thing (EKS, GKE,
| et al)... but if you still want to do it yourself, maybe look
| into some Kubernetes distros (like Typhoon
| (https://typhoon.psdn.io) which I run on my clusters)
| lakomen wrote:
| But managed is expensive. You have to pay at least double vs
| unmanaged (in the dedicated world, ofc there is no "unmanaged"
| k8s offer from those cloud providers, just an analogy). You are
| then bound to that cloud and its traffic prices and all those
| you mentioned are filthy expensive when it comes to traffic.
| alephnerd wrote:
| If you are dealing with the level of scale that allows you to
| fully take advantage of K8s, the cost of managed offerings is
| less than that of deploying and managing Openshift on-prem.
|
| Also, the beancounters don't like on-prem as much anymore due
| to the upfront cost outlay.
|
| I've been working with a number of major enterprise accounts
| and they've all been adopting managed K8s offerings for this
| very reason.
| superq wrote:
| The irony to me is that servers are so much faster than
| they were ten years ago that a typical, fully-loaded
| enterprise app can almost always run on a single mid-sized
| 10-core box with 256 GB of RAM... from ten years ago. It's
| absolutely ludicrous how much power is available now on a
| single box if you really want to scale up instead of out.
|
| (Databases, of course, always like to eat everything
| because of poorly-optimized statements, but they've never
| really been good fits for K8s or containers anyway.)
| 0xbadcafebee wrote:
| For those unaware: Kubernetes has long had _horrible_ default
| security. RBAC isn't even enabled by default. Nearly every
| cluster I've seen allows regular users and applications to
| break out into the host nodes by default. Network ACLs, if they
| are ever used, almost never block either intra-cluster or extra-
| cluster attacks. Unencrypted API ports are standard. The
| distinctions between "cluster-level" resources and "everything
| else" unnecessarily complicates creating fine-grained rules.
| Default access even with RBAC is too wide-open. I won't even get
| into ABAC...
|
| Basically, the geniuses at Google decided to make clusters
| insecure by default, because being an admin is, like, hard, man.
| So guess what? People deployed clusters that were insecure by
| default.
|
| I don't think it has ever been easier to take over cloud
| computing environments just by exploiting a dinky web app.
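|
| For anyone wondering what "break out into the host nodes" looks
| like in practice: in a cluster with no admission controls, a
| regular user who can create pods can usually just schedule
| something like this (a sketch, names are made up) and get the
| node's root filesystem:
|
|     apiVersion: v1
|     kind: Pod
|     metadata:
|       name: node-breakout-demo   # hypothetical name
|     spec:
|       containers:
|       - name: shell
|         image: alpine
|         command: ["sleep", "infinity"]
|         securityContext:
|           privileged: true       # full access to host devices
|         volumeMounts:
|         - name: host-root
|           mountPath: /host       # host filesystem visible in the pod
|       volumes:
|       - name: host-root
|         hostPath:
|           path: /
|
| Pod Security admission (or an equivalent policy engine) is what
| is supposed to reject specs like this, but it has to be turned
| on and configured.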
| znpy wrote:
| > Basically, the geniuses at Google decided to make clusters
| insecure by default, because being an admin is, like, hard,
| man. So guess what? People deployed clusters that were insecure
| by default.
|
| There is a lot to unpack here, but while I agree with the main
| point (kubernetes defaults suck), the main problem I've seen all
| across the industry is that people expect to be able to install
| stuff and have it magically be secure, functional and
| performant.
|
| It just doesn't work like that. Systems administration is its
| own discipline and you can't expect to follow some random
| tutorial off the internet and be ready for production.
|
| Also, it doesn't help that the kubernetes documentation is
| awful.
|
| I remember my time working with kubernetes, it's a patchwork of
| stuff... it does require time and a fairly high pain threshold
| to properly master it.
| orthoxerox wrote:
| Well, there are two opposing approaches: "up and running"
| defaults or production defaults.
|
| You choose "up-and-running" defaults if you want people to be
| able to immediately run your software. So this means no
| encryption, no auth (or "the first user is the admin" kind of
| auth), storing your state in a bundled SQLite DB, etc. This
| helps with growth, because random devs can download the
| container image, run it and check out your features in five
| minutes. And then run it in production.
|
| With production defaults everything is locked down. Go issue
| some certificates and make sure your OS (or JRE) trusts them,
| because your nodes won't talk to each other without TLS. No
| one has any rights to anything by default, can't even log in
| until you go and edit authN/Z configuration. The container
| will kill itself if the env var for the database password is
| still "change_this!". It's a pain to deploy this, but a
| pleasure to run.
|
| Ideally, all products should come with two sets of defaults.
| Your AIO quickstart image should be easy to run. Your
| production-grade image or the configs that come with the
| distribution should be locked down tighter than Fort Knox.
|
| K8s is a piece of infrastructure that doesn't really benefit
| from an "up and running" approach, but it has this kind of
| defaults for some reason instead of having every default
| policy resolve to "not allowed".
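|
| The "kill itself if the password is still the default" idea is
| easy enough to sketch in plain Kubernetes terms (image name and
| env var are made up):
|
|     apiVersion: v1
|     kind: Pod
|     metadata:
|       name: strict-defaults-demo
|     spec:
|       containers:
|       - name: app
|         image: example/app:1.0   # hypothetical image
|         env:
|         - name: DB_PASSWORD
|           value: "change_this!"  # the shipped default
|         command: ["sh", "-c"]
|         args:
|         - |
|           if [ "$DB_PASSWORD" = "change_this!" ]; then
|             echo "refusing to start with default credentials" >&2
|             exit 1
|           fi
|           exec /app/server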
| mulmen wrote:
| > or "the first user is the admin" kind of auth
|
| My mind immediately went to Eddie Izzard's "do you have a
| flag" bit.
| hedora wrote:
| OpenBSD is an existence proof of a third route:
|
| Carefully manage complexity, and write a good manual.
|
| OpenBSD is about 22M lines of code; kubernetes is 58M, so
| the code bases are of comparable complexity (OpenBSD uses
| many more programming languages than kubernetes does).
|
| I'd argue that the base OpenBSD install provides far more
| utility than Kubernetes, but market share numbers disagree.
|
| Sources:
|
| https://openhub.net/p/kubernetes
| https://www.reddit.com/r/openbsd/comments/d9a8c9/how_many_li...
|
| Edit: Using open hub for both says OpenBSD is 75M lines:
|
| https://openhub.net/p/openbsd
| lakomen wrote:
| I don't follow. OpenBSD is an OS, k8s is a set of
| services running on an OS.
| code_biologist wrote:
| Both are things deployed to host network visible
| services.
| [deleted]
| ryukafalz wrote:
| > There is a lot to unpack here, but while I agree with the
| main point (kubernetes defaults suck), the main problem I've
| seen all across the industry is that people expect to be able
| to install stuff and have it magically be secure,
| functional and performant.
|
| > It just doesn't work like that. Systems administration is
| its own discipline and you can't expect to follow some random
| tutorial off the internet and be ready for production.
|
| Disagree. I can install an app to my Sandstorm instance and
| reasonably expect it to be secure, functional, and performant
| (inasmuch as the app is performant anyway). The tools we use
| make things difficult, but I don't think there's as much
| irreducible complexity here as you're implying. We can and
| should build better tools.
| mplewis wrote:
| Sandstorm is a very different category of system from k8s.
| ransom1538 wrote:
| "Default access even with RBAC is too wide-open."
|
| Not following this. Isn't a k8s cluster behind a network
| firewall? Only exposing 443? To me RBAC would be useful in a
| specific case where you want k8s service accounts (no one uses
| these) to have different levels of security. That sounds awful.
| alephnerd wrote:
| Jeff in Engineering might have deployed an app spanning
| vCenter as well as EKS, and to continue to allow the app to
| function wrote subpar ingress/egress rules.
| mulmen wrote:
| 443 is just a port. If we assume that implies HTTPS then
| that's still only TLS. If I exploit the service ("securely"!)
| I am inside your network. I can then pivot to your management
| layer. Game over.
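|
| And the usual mitigation for that pivot, a default-deny
| NetworkPolicy per namespace, is exactly the thing the parent
| says almost nobody deploys. A minimal sketch (namespace name is
| made up):
|
|     apiVersion: networking.k8s.io/v1
|     kind: NetworkPolicy
|     metadata:
|       name: default-deny-all
|       namespace: my-app          # hypothetical namespace
|     spec:
|       podSelector: {}            # selects every pod in the namespace
|       policyTypes:
|       - Ingress
|       - Egress
|
| Note it only takes effect if the CNI plugin actually enforces
| NetworkPolicy.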
| benatkin wrote:
| Horrible scaling to tiny clusters as well. I'm glad they kept
| the design pure though. Unencrypted means you get to choose the
| encryption. Redis didn't have built-in encrypted TCP servers
| for a long time either. When they added it there was a good
| discussion about the overhead:
| https://github.com/redis/redis/issues/7595
| k8sfellow wrote:
| RBAC is now enabled by default. But I wonder how many of these
| exploitable clusters are OpenShift, for example. OpenShift has
| had secure-by-default settings for as long as I can remember:
| non-root pods by default, and the api-server and even the
| metric collection ports are protected by authentication.
|
| OpenShift also had much stricter RBAC settings from the very
| beginning (even back in OpenShift 3.x days, when k8s had no
| RBAC by default and you had to opt in).
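|
| For comparison, the secure-by-default posture OpenShift pushes
| is roughly what you only get on vanilla k8s if you write it out
| yourself, e.g. (image name is made up):
|
|     apiVersion: v1
|     kind: Pod
|     metadata:
|       name: restricted-app
|     spec:
|       securityContext:
|         runAsNonRoot: true       # kubelet refuses root containers
|         runAsUser: 1000
|       containers:
|       - name: app
|         image: example/app:1.0   # hypothetical image
|         securityContext:
|           allowPrivilegeEscalation: false
|           capabilities:
|             drop: ["ALL"]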
| crabbone wrote:
| Here's my Kubernetes security horror story.
|
| You probably know about resolv.conf, but you might not know about
| "search domains". A feature that should've never existed, but was
| added because "why not?". If you don't know what it does: to a
| first approximation, it creates a domain alias. This is sometimes
| used to save some typing when using DNS for service discovery in
| larger clusters in a very hierarchical way (when names start to
| look like a.b.c.d.e.f.g...com, it's tempting to alias them to
| something shorter).
|
| So, long story short: Ubuntu LTS had a bug (actually a whole
| bunch of them) in the script responsible for generating this
| file (it's a bad idea to have this file to begin with... but
| hey, who cares, right?). One aspect of the bug was that the
| variable name "DOMAINS" was used instead of its value. And
| that became a search entry in resolv.conf.
|
| Now, this search works by appending the search path to the domain
| name if the DNS server tells you it cannot find it. I had no idea
| it was even possible to register DOMAINS as a valid domain name.
| I mean, it would have to be a TLD to be discoverable globally,
| right? -- Well, it's still a mystery to me, but, for some reason,
| domains of the form a.b.c.DOMAINS resolve to some AWS EC2
| instance...
|
| And, the punch line: if you screw up CoreDNS configuration
| somehow, or something else goes wrong with service discovery
| inside your Kubernetes cluster on unpatched Ubuntu -- there's a
| good chance your traffic will go to that EC2 instance. Who's on
| the other side -- I don't know. I didn't send anything important
| that way and discovered it because something like Spark operator
| didn't work... but it could've been much worse.
|
| Oh, btw, I believe resolv.conf is used by different CNIs. Maybe
| Calico and Flannel. I cannot remember atm why it was used... or
| maybe it's some kubelet setting.
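|
| To make the mechanics concrete, the resolv.conf the pods ended
| up with looked roughly like this (the cluster-internal names and
| nameserver IP here are just illustrative):
|
|     search my-ns.svc.cluster.local svc.cluster.local cluster.local DOMAINS
|     nameserver 10.96.0.10
|     options ndots:5
|
| So when an in-cluster lookup fails, the resolver retries with
| the search suffixes appended -- e.g. "my-svc.my-ns" eventually
| gets tried as "my-svc.my-ns.DOMAINS" -- and that query leaves
| the cluster and resolves to whatever that mystery EC2 instance
| is.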
|
| Anyways. Larger point is... large Kubernetes clusters are
| virtually impossible to systematically audit. Too many things can
| do too many things. Too much trickery goes on in all sorts of
| system components that deal with networking. It's also impossible
| to track who does what and who owns what because there's no user
| identity.
|
| ---
|
| I believe that, again, industry adopted an absolute horror show
| of a program / environment not because it was a great tool, but
| because "organically" developers started to add "features",
| creating something that's hard to resist, because, superficially,
| it creates a lot of value with minimal effort -- but it also lays
| a trap that is not immediately obvious. And those who find
| themselves trapped, instead of running away, start adding more
| "features" to fight the trap. But, in this foolhardy attempt,
| they only make the trap stronger.
| alephnerd wrote:
| Let the Blackhat blogspam begin!
| geek_at wrote:
| I had a similar experience when I tried to use kubernetes in my
| home lab. Crashed it twice and since the technology and tools
| evolve so fast almost all tutorials I found were already so out
| of date that the commands didn't work and some tools were already
| deprecated.
|
| I moved to docker swarm and love it. It's so much easier and
| more straightforward; automatic ingress networking and failover
| all worked out of the box.
|
| I'll stay with swarm for now
| jrochkind1 wrote:
| I'm not following what similarity you see between your
| experience and anything discussed in the article. Did you get
| hacked?
| fatfingerd wrote:
| I would hope that people who ended up with unsecured
| endpoints also didn't feel like they understood it or that it
| was the product for them.
| KronisLV wrote:
| > I moved to docker swarm and love it. It's so much easier,
| straight forward, automatic ingress network and failover were
| all working out of the box. I'll stay with swarm for now.
|
| I've had decent luck in the past with the K3s distribution,
| which is a somewhat cut-down (but certified) Kubernetes:
| https://k3s.io/ Others might also mention K0s, MicroK8s or
| others - there are lots of options there.
|
| It also integrates nicely with Portainer (aside from occasional
| Traefik ingress weirdness sometimes), which I already use for
| Swarm and would suggest to anyone that wants a nice web based
| UI for either of the technologies: https://www.portainer.io/
|
| But even so, I still run Docker Swarm for most of my private
| stuff as well and it's a breeze. For my needs, it has just the
| right amount of abstractions: stacks with services and replicas
| that use networks and can have some storage in the form of
| volumes or bind mounts. Configuration in the form of
| environment variables and/or mounted files (or secrets), some
| deployment constraints and dependencies sometimes, some health
| checks and restart policies, as well as resource limits.
|
| If I need a mail server, then I just have a container that
| binds to the ports (even low port numbers) that I need and
| configure it. If I need a web server, then I can just run
| Apache/Nginx/Caddy and use more or less 1:1 configuration files
| that I'd use when setting up either outside of containers, but
| with the added benefit of being able to refer to other apps by
| their service names (or aliases, if they have underscores in
| the names, which sometimes isn't liked).
|
| At a certain scale, it's dead simple to use - no need for PVs
| and PVCs, no need for Ingress and Service abstractions, or lots
| and lots of templating that Helm charts would have (although
| those are nice in other ways). Unfortunately, Swarm isn't
| awfully good for job prospects, which I have to admit. Then
| again, one can sort of say the same about Hashicorp Nomad, yet
| it's also alive and kicking.
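|
| For a sense of scale, a whole Swarm service with the constraints,
| limits and healthcheck mentioned above fits in one small stack
| file (all names made up), deployed with "docker stack deploy -c
| stack.yml demo":
|
|     version: "3.8"
|     services:
|       web:
|         image: nginx:alpine
|         ports:
|           - "8080:80"
|         healthcheck:
|           test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
|           interval: 30s
|         deploy:
|           replicas: 2
|           placement:
|             constraints:
|               - node.role == worker
|           resources:
|             limits:
|               memory: 256M
|           restart_policy:
|             condition: on-failure
|         volumes:
|           - webdata:/usr/share/nginx/html
|     volumes:
|       webdata: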
| ilyt wrote:
| I put stuff in systemd units and route them on HAProxy. For
| home network it's more than enough. The occasional container if
| an app is designed so badly that the authors can't figure out
| how to make it easy to install or package.
| mulmen wrote:
| Kubernetes should be called Kafka but the name was already
| taken. It _feels_ deliberately complicated.
|
| I'm honestly curious about Kubernetes and I want to learn
| more but wow, it really is hard to grok.
|
| What does it mean that k3s is "certified"? Should I care?
| What's the tradeoff?
|
| When you mentioned Traefik my eyes glazed over.
|
| Is there a picture of how all this fits together? One not
| created by a consultant selling their services?
| hosh wrote:
| It isn't deliberately complicated, and once you know why it
| is designed the way it is, it becomes very capable.
|
| The biggest thing to understand about Kubernetes is that
| you cannot understand it as a normal piece of software. You
| don't directly control it, but rather, define the
| conditions under which a collection of automations respond
| to things going on. This is how it self heals. The whole
| system resembles more of a garden, where you are providing
| the conditions in which the plant grows, but you are not
| directly causing the plant to grow.
|
| For example, if you understand a pod, then the next step is
| understanding how to keep a pod running, how to detect
| failures, what to do if it does fail, and how to change out
| one set of pods for the next. This would be a deployment.
|
| You build your understanding by understanding more
| independent automations. For example, after understanding a
| deployment, you can look at how horizontal pod autoscalers
| work. An HPA knows nothing about a deployment and its
| internal state, yet will inform the deployment how many
| pods it should be running based on the metrics it sets. The
| HPA and the deployment scheduler otherwise work
| independently, and you can indeed write something that
| functions as an alternate HPA, and Kubernetes will do it.
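|
| A minimal sketch of that HPA/deployment pairing, assuming a
| Deployment named "web" already exists:
|
|     apiVersion: autoscaling/v2
|     kind: HorizontalPodAutoscaler
|     metadata:
|       name: web
|     spec:
|       scaleTargetRef:
|         apiVersion: apps/v1
|         kind: Deployment
|         name: web                # the deployment it scales
|       minReplicas: 2
|       maxReplicas: 10
|       metrics:
|       - type: Resource
|         resource:
|           name: cpu
|           target:
|             type: Utilization
|             averageUtilization: 70   # target average CPU
|
| The HPA only ever adjusts the replica count on the target; the
| deployment controller does the rest, which is the "independent
| automations" idea above.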
| mulmen wrote:
| > It isn't deliberately complicated, and once you know
| why it is designed the way it is, it becomes very
| capable.
|
| I'm sure it isn't and I changed the emphasis to make that
| line more about the slope of the learning curve.
|
| Thanks for this explanation. The inability to grok the
| internals is really hanging me up. I'll embrace the
| simpler "distributions" as a way to understand the higher
| level constructs.
|
| Does it also make sense to ignore the fact that this is
| built on Linux's containers and namespaces?
|
| Based on my very limited understanding Kubernetes is
| basically a "cloud operating system". This is why the
| underlying components like networking and container
| managers can be swapped out. But I still don't grok how
| the subsystems fit together.
| hosh wrote:
| The pod is the basic abstraction. The underlying
| container technology can be swapped out. People have made
| Kubernetes schedulers for Microsoft technologies and WASM
| runtimes as well. If you wanted to use microvms, you can.
| Most people stick with Linux containers.
|
| The Linux container/namespace is useful in the sense that
| operational characteristics matter, and a lot of
| troubleshooting can end up going deeper and deeper in the
| stack. But as far as understanding how to _design_
| things, knowing how the pure abstractions are composed
| together lets you be able to apply it to many systems,
| whether it is Kubernetes or something else.
|
| I suggest reading through the first few chapters of Roy
| Fielding's PhD dissertation. He is known better for REST,
| but his dissertation was actually about architectural
| elements, properties, and constraints, and then applying
| to hypermedia as an engine of state.
|
| Once you know how to reason through computing
| architecture, you can pick apart any architecture,
| framework, or paradigm, and know how to work with it, and
| when you fight against it. You don't grok a system by
| finding out how specific parts fit with each other, but
| rather, how things are consequences of the architectural
| elements, properties, and constraints. Then, the specific
| way things work (and don't work!) together pops out.
|
| And don't be afraid to dive into the messiness behind an
| abstraction. Knowing and being willing to do that can set
| you apart in your career.
| KronisLV wrote:
| > What does it mean that k3s is "certified"? Should I care?
| What's the tradeoff?
|
| I think that maybe I should have linked directly to the K3s
| GitHub, which goes into detail about what it is, what the
| components are, as well as links to more information about
| testing: https://github.com/k3s-io/k3s/
|
| Kubernetes is a large and complex system, but end-to-end
| testing aims to ensure that various implementations work
| correctly; they have pretty in-depth docs on all of this,
| including conformance tests in particular:
| https://github.com/kubernetes/community/blob/master/contribu...
|
| So, if a distribution passes all of these, then it's likely
| that it will give you fewer unpleasant surprises. Here's a
| list: https://www.cncf.io/certification/software-conformance/
|
| > When you mentioned Traefik my eyes glazed over.
|
| Well, you typically do need an ingress controller that will
| allow exposing services to the outside and direct network
| traffic.
|
| Traefik is liked by some, less so by others, but generally
| works okay - except that its documentation (for K3s in
| particular) isn't as abundant as it would be when using
| Nginx. For example, if you want to use a custom SSL/TLS
| certificate (without ACME) for all domains by default,
| figuring that out might take a while.
|
| That said, if you don't like it, technically you could swap
| it for something else, however I personally had a cluster
| hang once while doing that due to some cleanup action
| getting stuck.
|
| There are plenty of options:
| https://kubernetes.io/docs/concepts/services-networking/ingr...
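|
| (For the "custom default certificate without ACME" case above:
| if I remember right, with Traefik's CRDs it ends up being a
| TLSStore named "default" pointing at a TLS secret, something
| like the sketch below -- but double-check the apiVersion against
| your Traefik version, this is from memory.)
|
|     apiVersion: traefik.containo.us/v1alpha1
|     kind: TLSStore
|     metadata:
|       name: default
|       namespace: default
|     spec:
|       defaultCertificate:
|         secretName: my-default-cert   # a kubernetes.io/tls secret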
|
| > Is there a picture of how all this fits together? One not
| created by a consultant selling their services?
|
| I actually found their diagrams to be surprisingly sane:
| https://docs.k3s.io/architecture
|
| However, do be warned that there are many different ways
| to build clusters, and they can differ significantly.
|
| K3s is just one of the Kubernetes distros that attempt to
| provide an opinionated view on how to mix and match the
| components out there, so you end up with something that lets
| you use Kubernetes and have things actually work as
| expected at the end of the day.
|
| A bit like how we have OS distros like Debian, Ubuntu, RHEL
| and so on.
| GauntletWizard wrote:
| Do you go to the hot new restaurant in town on their grand
| opening, and then leave a review that says "the food was great,
| but it was too crowded and the service was slow", and never go
| there again? That sounds like what you did there.
| globuous wrote:
| Is that not what you'd do? That sounds like an unusually
| useful review tbh.
| starttoaster wrote:
| The point of their comment, I think, is that a restaurant's
| grand opening is not always indicative of future
| experiences. The cooks/chefs, and front of house staff are
| all fairly new to the environment, menus might change as
| they find out that certain menu items are throwing off
| their timing or they don't have a great way to procure
| certain ingredients, and it will take time to all settle
| into a routine that works.
|
| I think they were saying that is akin to the state of
| kubernetes, where everything is too new to really judge it
| today. Let me make clear my bias before I get to my
| opinion: I'm a big fan of what kubernetes has accomplished
| with managing the deployment of applications. But I think
| comparing current day kubernetes to a restaurant's grand
| opening is naive. Kubernetes is amazing at what it does,
| and for how highly flexible it is. But it is highly
| flexible because of how advanced it has become, which in
| turn requires more knowledgeable administrators. You can
| take off with Docker Swarm because... it's very simple. It
| doesn't aim to be flexible, it just aims to be the docker-
| compose of distributed systems. Which is fine if you don't
| need the flexibility that kubernetes offers anyway.
|
| And while I do appreciate kubernetes for all of its
| strengths, going back towards the point of this OP, I do
| believe it's a bit of a security nightmare. For example,
| the kubernetes operator pattern often just requires this
| application to run in your cluster with a ClusterRole
| granting it read and potentially write levels of access to
| ConfigMaps and Secrets around your cluster, often without
| much in the way of a method for telling the kubernetes API
| server which namespaces it is allowed to view those objects
| in. Worth mentioning though that secrets management is
| something that Docker Swarm also handles fairly poorly, and
| honestly isn't much of a security regression from the
| normal "deploy a VM that has all your decrypted secrets
| sitting on the system" anyway.
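|
| In RBAC terms, the cluster-wide grant such an operator typically
| ships looks something like this (operator name is made up) --
| bound this way, it applies to every namespace at once:
|
|     apiVersion: rbac.authorization.k8s.io/v1
|     kind: ClusterRole
|     metadata:
|       name: some-operator        # hypothetical operator
|     rules:
|     - apiGroups: [""]
|       resources: ["configmaps", "secrets"]
|       verbs: ["get", "list", "watch", "update"]
|     ---
|     apiVersion: rbac.authorization.k8s.io/v1
|     kind: ClusterRoleBinding
|     metadata:
|       name: some-operator
|     roleRef:
|       apiGroup: rbac.authorization.k8s.io
|       kind: ClusterRole
|       name: some-operator
|     subjects:
|     - kind: ServiceAccount
|       name: some-operator
|       namespace: operators       # hypothetical namespace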
| fatfingerd wrote:
| The description sounds a lot more like a trendy restaurant
| where no one can figure out what is actually on the current
| menu. That's more of a permanent complaint than an opening
| night one.
| radicalbyte wrote:
| Menus replaced by QR codes. No wifi and poor mobile
| internet.
| rhuru wrote:
| Personally I think it is more like:
|
| I wanted to eat some food so I went to the meat packing
| factory; then, disappointed, I went to a drive-through and
| bought a burger.
| Uehreka wrote:
| Kubernetes is not the hot new restaurant in town, they've
| been around for years. If they're still in a period of rapid
| churn that's concerning.
| candiddevmike wrote:
| What did you learn by taking the easy route with Docker Swarm
| in your home lab? What did you set out to do with Kubernetes to
| begin with?
| ramraj07 wrote:
| > What did you set out to do with Kubernetes to begin with?
| Get some actual servers up and running to do real work vs
| getting certified on how to get k8s running.
| LargeDiggerNick wrote:
| I read swarm doesn't have autoscaling but that was a while ago.
| Did that change?
| dijksterhuis wrote:
| Nope. no change.
| debarshri wrote:
| This is super interesting. If you actually look at load balancer
| logs without a WAF in place, you will see a huge amount of bot
| activity.
|
| What is not clear in the blog is how they actually accessed
| the API server without a valid service token. Secondly, they
| index a lot on IPs, which implies these are k8s clusters that are
| not managed (EKS, GKE etc).
|
| Is my assumption correct?
| abatilo wrote:
| The GKE "default" is to bind a public IP to your control plane.
| So these could still be managed
| ozarker wrote:
| A couple years ago I came into a company that had recently
| switched almost everything to Kubernetes. The company expected
| the devs to learn how to own and operate the cluster. Most of the
| devs there couldn't care less about anything except how to deploy
| their code to it (probably rightfully so). But I was shocked to
| find the k8s api was exposed to the internet and unauthenticated.
|
| I brought the problem up with the higher-ups and they were pretty
| lethargic about it. Moral of the story I guess is if you're gonna
| choose to operate infrastructure like Kubernetes make sure you
| have experienced people trained to do so and don't expect your
| devs to wear another hat.
| andhapp wrote:
| DevOps is the new way, I think. Developers create the code and
| also know how to run and maintain infra :-)
| ramraj07 wrote:
| Or do what actually productive people do and use managed
| services like Elastic Beanstalk and RDS.
| starttoaster wrote:
| It usually doesn't work that way, in my experience. My title
| is "Senior DevOps Engineer" and I do my fair share of writing
| code as well as maintaining infra. But honestly, hiring for
| people who do what I do seems to be very hit and miss. Most
| people excel at one or the other, and so they only pick up
| tickets that fit their skillset. They might dive into a
| ticket outside their comfort zone here or there, but not
| commonly. It would seem to me that many DevOps engineering
| teams are just one team who has a manager that hired and
| manages some Devs and some Ops folk, and just wrangles each
| of them into doing tasks that fit the goal of the team and
| their individual talents. And from an outsider's perspective
| it would seem like one team of jacks of all trades, but
| internally it's very much known who is going to take on
| certain kinds of tasks and the siloes are all very much in
| place.
| diarrhea wrote:
| That situation sounds reasonable though. I don't believe in
| tearing down all silos. Expertise and preferences exist,
| it's human nature. I always thought the DevOps mantra of
| having one person be good at both was a fruitless
| endeavour. Have Dev and Ops collaborate closely, yes, but
| let them be separate people (not teams!). It's fine. You
| know how to dev and ops, and I don't doubt it. I do too.
| But if we focused that same energy and expertise, we'd be
| double the devs/ops. Nevertheless, people like us are
| excellent glue for organisations. At the same time, teams
| and organisations of only us would be quite pointless.
|
| I feel similarly about fullstack. You're a front end dev
| that has written a controller and used an ORM before. Often
| not too inspiring. Backend devs are still needed, whereas
| full stack devs form excellent glue.
| _jal wrote:
| I came in to my current gig as the first devops hire. We
| were a lot smaller then. 8 years on, I'm managing an
| infrastructure team alongside ops and devops teams. We also
| draw lines a little differently; infrastructure manages
| core services like auth, logging, instrumentation, etc.
|
| In my experience, devops folks usually come from the dev
| side, not the ops side, and their experience and curiosity
| reflects that. So we let them focus on shipping line-of-
| business code while making sure what they ship runs in a
| competently managed environment, as well as handling data
| centers and all the more traditional infra stuff.
| mac-chaffee wrote:
| DevOps still must draw the line somewhere. DevOps people
| typically don't physically plug cables into switches, for
| instance. With the ever-growing complexity of infra, I see
| that line having to shift more and more, with either PaaS
| products or platform engineering teams filling the space by
| providing something like Kubernetes as a service to the
| DevOps folks.
| sharts wrote:
| I don't think that's true anymore. Most companies that
| seriously deploy k8s tend to have dedicated teams for it.
|
| There's no way an app developer is going to be able to do
| their work while also knowing the ins and outs of cluster
| mgmt.
| bonestamp2 wrote:
| > Most companies that seriously deploy k8s tend to have
| dedicated teams for it.
|
| Ya, and everywhere I've worked, that team is called
| "devops". The application developers are a different team.
| Maybe that's not the norm, but I thought it was.
| hosh wrote:
| There are app developers like that. There is a class of
| problems where you need to be able to do both application
| development and system work to be able to do well.
|
| What a lot of people miss about larger k8s deployment is
| that you can create an internal platform team, like a
| micro-heroku within the company. The company would have to
| hire people who can create _products_ and also know how
| to work with infra in order to create a platform. It would
| be this rare combination of product, dev, ops, and focusing
| on supporting the rest of the engineering team and giving
| up on the glory of launching something for end users.
| jml78 wrote:
| They don't need to know k8s administration but they need to
| understand how their services are going to function and
| scale in k8s.
| politelemon wrote:
| That's the problem, many companies don't realize they need
| to be serious, they just have a few people who want k8s in
| their stack.
| ChicagoDave wrote:
| The bigger issue is that container deployment and management is
| a big reseller business and they actually don't care about
| whether it's appropriate or not or if the eventual maintenance
| and DevOps are in place.
| gz5 wrote:
| and initiatives to make devs/ops/systems use vpn, bastion,
| static permitted IPs, etc. are often DOA due to their impacts
| on automation and speed.
|
| a foss alternative is kubeztl - it takes kubectl off the
| networks - no public IP at all - but w/o users needing agents:
| https://github.com/openziti-test-kitchen/kubeztl/tree/main
|
| disclosure: i am a maintainer and the software overlay in the
| middle (helps enforce outbound-only, pre-authorized connects
| only) needs to be managed (self-hosted foss or hosted saas), so
| there are still trade-offs.
| lakomen wrote:
| That's what I've been saying all along. For me, when I use
| something I need to understand it. I'm usually a quick learner
| (3rd iteration is where I start making sense of a complex
| topic). I beat my head against k8s for 2 weeks, every day; I
| wanted to set up a 3-node control plane on my own, and it was
| just too hard. I gave up.
|
| Also, I don't have many resources at my disposal, and k8s
| requires a separate storage cluster. Altogether I would've
| required 9 servers: 3 control plane, 3 app/pod, 3 storage. Way
| too expensive. And then think further: all this has to be updated
| sometime. I don't have the brain and time resources to do that.
|
| So, if I want k8s, I need to pay for expensive cloud. Or, I can
| remain with my 1 server with backups and build systems on
| demand. Yes, I hear the booksmarties scream "but but
| failsafety". 1 incubator server, specialized systems when
| something takes off.
|
| Some companies can afford to pay for expensive cloud, more
| power to them. I can't. I'm busy solving problems at hand, not
| dealing with additional problems of a system I don't fully
| understand.
|
| Yes, k8s is a nice comfortable way, if you can afford it. But
| it's expensive and wasteful of resources. And we do have a
| climate crisis.
| hosh wrote:
| Someone is building a systemd-based kubelet replacement.
|
| Why? Because Kubernetes has some great ideas on self-healing
| and resiliency.
|
| But if you are small, and can work with a single instance for
| availability, then you should probably stick with systemd. You
| could also use podman to get a small bit of the kubernetes
| ideas for working with containers.
|
| The next level of scale wouldn't be the full k8s but more
| like k0s, or k3s.
| ncrmro wrote:
| K3s/k9s can be run on a single server and gives you the
| declarative benefits and the entire helm ecosystem.
|
| Everyone assumes you have to run multi node.
| tarruda wrote:
| With "kind" (kubernetes in docker) setting up a one node
| cluster is as easy as typing "kind create cluster"
| thinkmassive wrote:
| > Altogether I would've required 9 servers: 3 control plane,
| > 3 app/pod, 3 storage. ... Yes, k8s is a nice comfortable
| way, if you can afford it. But it's expensive and wasteful of
| resources. And we do have a climate crisis.
|
| You can run each of those roles on the same machine,
| achieving high availability from 3 total. For learning
| purposes you can run this all in VMs on a single machine.
|
| There are plenty of good reasons not to try kubernetes when
| it doesn't fit your use case, but the overhead it imposes is
| mostly in time and expertise. Claiming it's more costly in
| compute resources, and especially that it contributes to the
| climate crisis, is reaching.
| mjburgess wrote:
| It's better to pay the cost of abstraction when you've good
| evidence of its necessity
| aprdm wrote:
| Yes, same with cloud and giving devs full access to
| AWS/Azure/You-name-it. Disaster waiting to happen
___________________________________________________________________
(page generated 2023-08-08 23:00 UTC)