[HN Gopher] Why Fix Kubernetes and Systemd?
       ___________________________________________________________________
        
       Why Fix Kubernetes and Systemd?
        
       Author : MalteJ
       Score  : 133 points
       Date   : 2022-09-18 15:49 UTC (7 hours ago)
        
 (HTM) web link (medium.com)
 (TXT) w3m dump (medium.com)
        
       | tacitusarc wrote:
       | I think the virtual machine comments are the one area that
       | doesn't make sense to me. Everything else seems reasonable.
        
         | kris-nova wrote:
          | Good call out. I glossed over a lot of the semantics. To
          | be fair, I could write an equally long and drawn-out
          | article about why virtualization and a lightweight
          | hypervisor are in scope for a process management
          | mechanism.
        
           | sp332 wrote:
           | I get that VMs have certain advantages, but I was surprised
           | that they are the default (only?) choice for namespace
           | management. Is it that unusual for containers to mix and
           | match, e.g. have different filesystem namespaces, but share a
           | network namespace?
        
             | kris-nova wrote:
             | This is a good call out. One of the philosophies of the
             | project I am trying to maintain is instilling my opinion
             | into things while still having the project play nice with
             | the rest of the ecosystem.
             | 
              | On one hand I over-engineer a systemd hypervisor that
              | is only meaningful to me. On the other hand I create
              | another ambiguous junk drawer that is meaningless
              | without a team of experts to tell you how to configure
              | everything.
             | 
             | I think having what kubernetes calls "namespaces" as an
             | isolation boundary on each node running as a VM is the move
             | here. It SHOULD run like this as a default. Pods are
             | another story. Namespaces however -- should always have a
             | VM boundary.
             | 
              | Getting the network device integration right is going
              | to be a big thing here. I suspect this means each
              | namespace now has 1 or more NICs it will be able to
              | leverage.
             | 
              | Firecracker went with the bridge mentality, which I
              | kind of disagree with:
              | https://github.com/firecracker-microvm/firecracker/blob/main...
             | 
             | I want to see tools like Tailscale that leverage network
             | devices as the "true network interface" find value in the
             | guest namespace paradigm.
             | 
             | Hope this helps!
        
           | zozbot234 wrote:
           | Systemd can manage virtual machines already, see systemd-
           | machined and machinectl. It has no support for
           | clustered/distributed state as of yet, however.
        
             | cmeacham98 wrote:
             | Those are containers, not VMs.
        
               | zozbot234 wrote:
               | They can be either. This is expressly supported, e.g. by
               | the libvirt tooling ecosystem.
        
               | cmeacham98 wrote:
               | Ah, my bad, I wasn't aware machined supported VMs as
               | well. I wonder how that works with the other systemd
               | integrations (like viewing logs from the hypervisor OS
               | with journalctl), I suppose it just doesn't?
        
               | SahAssar wrote:
                | Many systemd commands support remote invocation: set
                | the --host param to the address of the machine you
                | want to invoke the command on.
                | 
                | It's similar to how --machine works for containers
                | under systemd, but it uses ssh to communicate.
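                | 
                | For example (host, container, and unit names made
                | up):
                | 
                |     systemctl --host admin@node1 status nginx
                |     systemctl --machine mycontainer status nginx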
        
         | topspin wrote:
         | The virtual machine comments make a great deal of sense to me.
         | Virtual machines bring an enormous amount of power and
         | foregoing that power because some container orchestration
         | system is oblivious to their existence is deeply foolish.
         | 
         | My dream is VMs as easy to manage (build, deploy, etc.) as
         | containers, hosting containers, and managed across a cluster of
         | hardware by an orchestration system. Trying to make container
         | orchestration do the things VMs do with ease produces nothing
         | but pain and waste.
         | 
         | kris-nova: yes, fully integrate VM management and fill this
         | yawning chasm. There is enormous demand for this, naysayers
         | notwithstanding.
        
       | traverseda wrote:
        | >Now where you lose me is where you start duplicating
        | scheduling, logging, security controls, and process
        | management at a cluster level and fragment my systems yet-
        | again.
       | 
       | That is sort of what we're talking about when we talk about unix
       | philosophy. If your daemons for handling all those aren't tightly
       | coupled then it's easier to use the same tools for different
       | tasks.
        
       | viraptor wrote:
       | It's an interesting project. Definitely like that it defines its
       | reasons to exist and goals / non-goals first. More big projects
       | should do that clearly.
       | 
       | I'm a bit confused by the early complaints though.
       | 
       | > It assumes there is a user in userspace. (EG: Exposing D-Bus
       | SSH/TCP)
       | 
       | What does this mean?
       | 
       | > Binary logs can be difficult to manage in the event the system
       | is broken and systemd can no longer start.
       | 
       | What kind of managing? To read them you can use --directory to
       | read a specific set of logs from a recovery system and systemd
       | doesn't need to run. What else am I missing?
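        | 
        | E.g. from a rescue system with the broken root mounted at
        | /mnt (the journal path here is the usual default, but
        | illustrative):
        | 
        |     journalctl --directory=/mnt/var/log/journal -e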
        
       | denton-scratch wrote:
       | Doesn't sound like this fixes systemd, unless your problem is
       | that you depend on systemd + kubernetes. That is, I read this as
       | a Redhat/Gnome/Freedesktop problem; I'd love a fix for systemd,
       | but I don't think this one solves my problems.
        
         | kris-nova wrote:
         | Care to elaborate on your problems? Or maybe a more fundamental
         | question would be -- would you be interested in a project like
         | this solving whatever your particular gripes with systemd might
         | be?
        
           | denton-scratch wrote:
           | Thanks for a constructive response! I don't code any more,
           | and I was never an OS-level programmer. And the systemd
           | debate has been argued, and my champions lost. So I concede;
           | I don't want to argue about systemd. The problems are still
           | there though, and I'm interested in ideas that propose to
           | solve them.
           | 
           | This article was scratching different itches than the ones
           | that bother me. I'm not interested in Kubernetes, or any
           | other enterprise cloud stuff for that matter. I don't need
           | orchestration, my hardware never changes (so I don't need
           | anything like UDEV), and if there ever is an ad-hoc change I
           | can ad-hoc configure it.
        
             | majewsky wrote:
             | > my hardware never changes (so I don't need anything like
             | UDEV)
             | 
             | You don't own any USB peripherals?
        
       | miiiiiike wrote:
       | A gRPC dependency is a non-starter. The rest looks so kludged and
       | unnecessarily complicated that it's DOA.
        
         | mbrumlow wrote:
         | Why?
        
       | WhatIsDukkha wrote:
        | I hate STOP energy, but this project seems like it's coming
        | from an ill-formed set of ideas.
       | 
       | "Problems with Systemd"
       | 
        | Seems pretty unserious. Vague reference to Eric Raymond and
        | the Unix Philosophy (actually very debatable and not a
        | useful way to think about Linux or Unix).
       | 
       | Links to pretty useless and unserious anti systemd posts
       | (skimming them was a waste of time).
       | 
       | """I personally don't have a deep seeded loathing for systemd
       | like many of my peers."""
       | 
        | Author should definitely get out of their local slack
        | channel more; it's not the result of a plot by Redhat and
        | Sekrit powers that systemd is everywhere.
       | 
       | If the author is going to replace systemd they'd be wise to
       | understand why it works the way it does and the epic amount of
       | problems it solved first.
       | 
       | The cognitive dissonance of systemd concerns expressed and then
       | somehow merging with some odd mutant kubernetes speaks for
       | itself....
        
       | betaby wrote:
        | I find the kubernetes - systemd overlap angle a very valid
        | point. When I first started to learn kubernetes, more often
        | than not I was thinking 'hey, we can do that with systemd as
        | well!'
        
         | pram wrote:
          | This is how fleet on CoreOS was designed. It just managed
          | systemd containers across your nodes (IIRC, it's been a
          | while)
        
           | pengaru wrote:
           | More like it managed systemd services/units across the
           | cluster, like a distributed systemd.
           | 
            | Those were still the early days of systemd-nspawn;
            | folks would be using docker containers under systemd
            | services. And that caused heaps of other problems,
            | because docker was aspiring to do PID 1's jobs as well.
        
         | e1g wrote:
         | Same - we now use systemd to sandbox our apps, set resource
         | limits, hide the rest of the system, mount read-only FS, etc.
         | Add a custom SELinux policy for your service, and you get a
         | compelling alternative to containerization (at least at
         | runtime).
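          | 
          | As a rough sketch, the relevant unit directives look
          | something like this (all standard systemd options, but
          | the values here are illustrative, not our real config):
          | 
          |     [Service]
          |     DynamicUser=yes
          |     NoNewPrivileges=yes
          |     # read-only view of the OS, private /tmp
          |     ProtectSystem=strict
          |     ProtectHome=yes
          |     PrivateTmp=yes
          |     # hide other processes and kernel interfaces
          |     ProtectProc=invisible
          |     ProtectKernelTunables=yes
          |     # resource limits enforced via cgroups
          |     MemoryMax=512M
          |     CPUQuota=50%
          |     TasksMax=256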
        
           | GordonS wrote:
           | Do you have a blog perchance? I'd love to read more about
           | this, maybe with fully-featured examples.
        
             | e1g wrote:
              | No, but here is one of our service files for systemd
              | to manage a Node.js backend instance. Could be a
              | useful starting point to create your own:
              | https://gist.github.com/eugene1g/22aa2b2c8897a29a54fba876b0c...
             | 
             | Many sandboxing features require a relatively modern
             | systemd and will do nothing on older distros (we run
             | RHEL9).
        
               | GordonS wrote:
                | This is great, thanks, and I appreciate that it's
                | commented too!
               | 
               | Also like the "negative" commented out config, that's
               | something I often do myself with footguns that feel
               | "obvious".
        
           | mike_hearn wrote:
            | Yes, I reached the same conclusion when setting up the
            | (fairly simple) servers needed for my company. systemd +
            | standard scripting can take you a long way, perhaps with
            | Ansible if you need more. I've only used Kubernetes a
            | bit, but I used Borg for quite some years. A fantastic
            | system if you need an incredibly regularized prod
            | environment with lots of replicas for everything, i.e.
            | if you're trying to replicate Google's internal setup.
            | Extremely painful the moment you need to step outside
            | that world. I had to do that once or twice at Google and
            | quickly started wishing I could just grab some normal
            | Linux machines and install some packages that depended
            | on Apache and a DB.
           | 
            | Incidentally, said company makes packaging tools, and
            | Linux packages are fairly simple, so we started with the
            | ability to make server packages that use systemd.
           | 
           | That support is still there and makes it fairly easy to build
           | a signed apt repository from build system outputs (e.g. jars,
           | native binaries), or even by re-packaging an existing server
           | download [1]. What you get out is debs that start up the
           | service on install, ensure it starts automatically at boot,
           | restarts it across upgrades and shuts it down cleanly if the
           | machine shuts down. Such packages can be built from any
           | machine including Windows and macOS. The nice thing about
           | doing packaging this way is you can specify dependencies
           | (both install and service startup) on things like postgresql,
           | and you get a template nginx/apache2 reverse proxy config as
           | well. Unfortunately there doesn't seem to be a way to make
           | those templates 100% usable out of the box due to limits in
           | their config languages, but it's still nice to have.
           | 
           | There's also pre-canned config snippets for sandboxing,
           | setting up cron jobs and other useful systemd features [2].
           | We package and run all our servers this way. One big rented
           | colo machine is sufficient right now. If there was a need for
           | clustering we'd just point the machines at the same apt repo
           | and use some shell scripting or Ansible, I think.
           | 
           | There's lots of tools for making Docker containers out there
           | already, but I never really clicked with Docker for various
            | reasons. Conveyor is mostly focused on desktop apps, but
            | perhaps there's more demand for easily making packages
            | that use systemd than I thought. The server-side support
            | is there and we dogfood it, but it isn't really
            | advertised and lacks a tutorial.
           | 
           | [1] https://conveyor.hydraulic.dev/2.1/tasks/server/
           | 
           | [2] https://conveyor.hydraulic.dev/2.1/stdlib/systemd/
        
           | xani_ wrote:
            | More than that, you can automatically mount a whole
            | block device (or a file as a device), and even run
            | integrity checks on it.
            | 
            | If someone is building, say, an embedded system that has
            | to run different services (say a NAS, or a router), it's
            | a pretty compelling alternative to somewhat heavier-
            | weight containers like Docker.
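            | 
            | A minimal sketch of what that looks like as a native
            | mount unit (device and mount point are made up):
            | 
            |     # srv-data.mount -- the name must match Where=
            |     [Unit]
            |     # fsck the device before mounting it
            |     Requires=systemd-fsck@dev-sdb1.service
            |     After=systemd-fsck@dev-sdb1.service
            | 
            |     [Mount]
            |     What=/dev/sdb1
            |     Where=/srv/data
            |     Type=ext4
            | 
            |     [Install]
            |     WantedBy=multi-user.target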
        
         | JanMa wrote:
         | I often noticed this as well. After reading the article I
         | wonder: Would it be feasible to write a kubernetes-shim for
         | systemd to reduce the overlap instead of reinventing
         | everything? This would not fix any of the valid criticism the
         | author has of systemd but instead make it do even more things.
          | Nevertheless, I somehow find the idea very intriguing.
        
           | rckclmbr wrote:
           | CoreOS did pretty much this (minus k8s, but same
           | abstraction), and I loved it. Much easier for simple
           | deployments, harder for anything else
        
         | politelemon wrote:
         | I noticed the same but came away with a different conclusion,
         | "hey, k8s is reimplementing the OS.". In other words, concepts
         | which have existed for years, battle tested and well known, are
         | new and untested in the k8s world.
        
           | patrickthebold wrote:
           | But k8s is distributed across several computers. If you could
           | have Linux manage resources and run processes across several
           | computers I think people would do that.
        
           | andrewstuart2 wrote:
           | They're not even really new or untested, just being done at a
           | different plane of abstraction. Running a system efficiently
           | across multiple machines requires different implementations
           | of familiar abstractions than running on a single machine.
           | Even the single machine approach has changed dramatically in
           | the last 20+ years. We started single program, went to single
           | user, to multi-user, to multi-processor, to multi-core, to
           | NUMA, etc (not necessarily in that order), and a significant
           | subset of systems are now multi-machine, even if you ignore
           | the internet. All of these new paradigms tend to have similar
           | base requirements/optimizations that would kind of be
           | pointless to reinvent versus simply adapting to the new
           | infrastructure style.
        
             | andrewstuart wrote:
             | Hey! We're both here in the same thread.....
        
           | goodpoint wrote:
           | > concepts which have existed for years, battle tested and
           | well known, are new and untested in the k8s world
           | 
           | ...and they also create lock-in wrt k8s.
        
       | jordemort wrote:
       | I'm skeptical, and I think systemd gets way more hate than it
       | deserves, but it's interesting to see an alternative from someone
       | outside of the "Make Init Shell Again" crowd.
        
         | kris-nova wrote:
          | systemd is like chicken tendies -- it's always been there
          | for me.
        
           | zzzeek wrote:
           | that line is in your article as well, can you explain what it
           | means? I am missing the reference
        
             | kris-nova wrote:
              | It's just a joke. Most folks from the U.S. who enjoy
              | eating chicken tenders (breaded chicken breast with
              | sauce) view them as a safe/comfort food that has never
              | been a poor choice to eat.
              | 
              | I feel the same way about systemd: it's safe,
              | reliable, and always a good choice for "dinner".
              | 
              | Basically I am saying that systemd has withstood the
              | test of time and has never disappointed me.
        
               | nordsieck wrote:
                | > I feel the same way about systemd: it's safe,
                | reliable, and always a good choice for "dinner".
               | 
               | > Basically I am saying that systemd has withstood the
               | test of time and has never disappointed me.
               | 
               | IMO, when you make a reference specifically to chicken
               | tenders, there is an implication of childishness -
               | implying that systemd is a bit of a toy implementation.
               | It doesn't sound like you mean that particular
               | interpretation.
               | 
                | It would be the same if you mentioned red Jell-O,
                | milk boxes, Capri Suns, fruit snacks, or any other
                | stereotypically children's food.
        
               | kris-nova wrote:
               | Good feedback. I went ahead and removed the reference and
               | just called out my sentiment directly.
        
               | bee_rider wrote:
            | Ah, that's funny -- I took it to mean like... OK in a
            | pinch and generally unobjectionable, but not something
            | to be excited about. Which is basically how I feel about
            | systemd (as someone who doesn't do systems engineering
            | or run a distro, so probably I don't see where it
            | becomes exciting).
        
           | panick21_ wrote:
            | My first experience was with Arch Linux right when they
            | switched to systemd. I hated it, but I didn't really get
            | what it was, not knowing what an init system or even a
            | process is.
            | 
            | So I was quite surprised a few years later, when it had
            | become just the normal thing for me, to see people in
            | Debian basically fighting a war over it.
        
             | dijit wrote:
              | My first experience with systemd was similar: it was
              | in Fedora 15, and it was an incredible source of
              | frustration.
              | 
              | Seems to be quite serviceable on the happy path, and I
              | quite like it for desktop linux honestly.
              | 
              | But I find it's usually a bit too opaque and "magic"
              | for servers, even if it's causing fewer and fewer
              | issues.
              | 
              | I liken the issue to the same one pulseaudio had: a
              | software ecosystem that is poorly designed but cobbled
              | together into a perfectly serviceable product by
              | incredible amounts of effort from all involved in its
              | development and release to the public.
        
             | xani_ wrote:
              | It allowed us to remove literally thousands of lines
              | of code, including a bunch of rewritten init scripts,
              | because apparently "simple" SysV scripts still have
              | plenty of traps for developers, which start to rear
              | their ugly heads when used with, say, Pacemaker.
             | 
             | I dislike its "let's just reinvent features that worked
             | entirely fine" (like journalctl binary logs WITH NO FUCKING
             | INDEXING, so they are slower than text files for usual
             | operation somehow), but at its core competence it's
             | extremely useful.
        
       | kris-nova wrote:
       | I think one of the things to call out is that this is going to
       | take a lot of time. I don't realistically see something like this
       | being just another "get rich quick" CNCF project that pops up and
       | dies out in 24 months.
       | 
       | If we do this "right" it kind of needs to go at the pace of the
       | Kernel, and hold very tightly onto the API scope to protect the
       | project from scope creep.
        
       | andrewstuart wrote:
       | systemd is better than "fine" or "great" - systemd is awesome.
       | 
       | systemd is a real power tool and the more I learn about it the
       | better I like it.
       | 
        | When you need to get something done, systemd as often as
        | not has your back.
       | 
        | Simple example: recently I needed to ensure a given service
        | could not send more data than a given limit. Easy - systemd
        | includes traffic accounting on a per-service basis - all I
        | needed to do was switch it on with "IPAccounting=yes", read
        | the values, and kill the service if its data egress went
        | beyond my maximum.
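        | 
        | Roughly (unit name made up; how you act on the numbers is
        | up to you):
        | 
        |     # in the service's unit file
        |     [Service]
        |     IPAccounting=yes
        | 
        |     # then poll the counters from a script
        |     systemctl show myapp.service -p IPEgressBytes
        |     # -> IPEgressBytes=123456789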
       | 
       | This is just one simple example of how systemd makes life easy -
       | there's many, many more examples.
       | 
        | For example, did you know that systemd-nspawn lets you do
        | virtualisation with all the benefits of containerisation,
        | such as resource/memory optimisation? The more you dig, the
        | more you'll find to like about systemd.
       | 
       | systemd haters gonna hate but show me something better.....
       | 
       | If anything, systemd influences what modern Linux is so heavily
       | that modern Linux should be referred to as linux/systemd instead
       | of gnu/linux.
        
         | andrewstuart3 wrote:
          | Those are common arguments for systemd, as for other
          | batteries-included software. But the opponents of such
          | software (and of systemd in particular) tend to expect the
          | whole system to be like that: all the useful components
          | playing nicely together, and easily swappable on top of
          | that. Which is harder to achieve, but supposed to be
          | nicer.
        
           | andrewstuart wrote:
            | https://images.news18.com/ibnlive/uploads/2021/12/spiderman-...
           | 
           | It's like I'm in a hall of mirrors......
           | 
            | It's the integration of systemd, the common approach,
            | that yields the giant payoff. I understand the "lots of
            | independent utilities" Unix philosophy, but systemd's
            | consistent broad scope yields increasing benefits.
        
       | golf_mike wrote:
        | Great stuff! So would you say that, in a way, this is
        | implementing a node-level control plane?
        
         | kris-nova wrote:
         | I think the term "control plane" is overloaded but -- yes.
         | 
          | I just think of it as a node API more than anything.
          | Having a comprehensive set of features/library/API for the
          | node seems like it would unlock a lot of the features that
          | large service mesh and platform shops are currently
          | turning to sidecars to solve.
        
       | 727564797069706 wrote:
        | I think I'm stupid, because to me this all seems overly
        | complex. Why do things get more and more complex instead of
        | the opposite? Is it really the only way? Like I said, I'm
        | clearly stupid for not understanding it.
        
         | dijit wrote:
         | Simple systems are easy to replace, thus are replaced often
         | until eventually a complex replacement comes forth.
         | 
         | Complex systems are hard to replace, thus stay in place.
         | 
         | This is the way of all things unless there is a design
         | constraint placed on simplicity. In mechanical engineering they
         | design things to be cheap to manufacture: this is the design
         | constraint.
         | 
         | Computers do not have constraints on them, you can burn
         | resources or be as complex as you want: more powerful hardware
         | is around the corner, and after all: why shouldn't abstractions
         | be abstracted ad infinitum; as long as it's easier for people?
        
           | throwawaymaths wrote:
           | "Simple systems are easy to replace, thus are replaced often
           | until eventually a complex replacement comes forth. Complex
           | systems are hard to replace, thus stay in place."
           | 
           | Damn, this is very good and clarifying. Like Gresham's law,
           | but for software.
           | 
           | > why shouldn't abstractions be abstracted ad infinitum; as
           | long as it's easier for people?
           | 
            | Not pessimistic enough. Should be: _Even if it's no
            | easier on anyone_
        
         | xani_ wrote:
          | A few reasons. The main one is that containers are a
          | complex problem to implement. You need to mount a bunch of
          | filesystems, set a bunch of limits on what the app running
          | in the container can and cannot do, and then do all the
          | plumbing to network it (firewalls, redirects, etcetera).
          | 
          | A VM is essentially just that last part, and usually
          | simpler too ("just connect it to a switch like it is a
          | separate machine" is a common solution). Sure, you still
          | have some complexity around storage, but that's just "here
          | is a bunch of blocks"; no need to make an overlay
          | filesystem or mount image layers.
          | 
          | Then we are at the level of a generic OS. The features
          | docker/systemd/k8s use were not "designed to make
          | containers"; they were designed to be used for anything
          | you wanted, from something as complex as containers to
          | something as simple as "let's put these processes in a
          | cgroup so we can limit their memory usage together". And
          | with flexibility comes complexity.
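          | 
          | The simple end of that spectrum is a one-liner, no
          | containers involved (command and limit made up;
          | MemoryMax= assumes cgroup v2):
          | 
          |     # run a command in its own transient cgroup
          |     # with a memory ceiling
          |     systemd-run --scope -p MemoryMax=256M -- ./batch-job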
         | 
          | Then we have the problem of both systemd and k8s/docker
          | managing the same interfaces, so both have to play nice
          | with each other. And we get to _even more_ complexity when
          | kubelet-controlled containers also need to manage system
          | stuff, like say networking via kube-router.
          | 
          | It would be simpler if, say, k8s' kubelet directly managed
          | everything all at once, but that means it would _also_
          | have to do what systemd does (boot the system, set up
          | devices and partitions etc.), so while total complexity
          | would be lower, the kubelet itself would be more complex.
        
         | shrubble wrote:
          | You're not the one who is insane.
          | 
          | Assume you have a $25k budget for a Dell server and $20k
          | for attached storage. Go see what you can build on
          | Dell.com's online configuration tool.
          | 
          | You could colocate everything you can buy for $45k in a
          | half rack at a reputable datacenter for $750 per month,
          | including a burstable-to-1gbit internet connection.
          | 
          | I estimate that at least 75% of the people commenting on
          | these threads will not have a product or service that
          | could cause that server to choke on the load, nor saturate
          | the 1gbit feed with legitimate traffic from their
          | applications.
          | 
          | But they will instead recommend that, for $600k/year in
          | spend, you over-engineer all aspects... I call it CVops
          | instead of DevOps. CV being another word for resume...
        
           | xani_ wrote:
            | Yes, but you need semi-competent people to manage it and
            | an actual ops dept.
            | 
            | Then again, we've got a bunch of racks and our ops dept
            | is 3 people.
            | 
            | We have a few dozen different apps running on it,
            | anything from "just a wordpress" to a k8s cluster
            | (mostly because our customers want us to deploy the app
            | they are paying us to develop on k8s).
            | 
            | So far the only actual value k8s provides is the self-
            | service aspect of it. Nothing that is running on it
            | needs anything specific to k8s; all of it is just "a few
            | app servers + one or a few DB services".
            | 
            | Sure, there is value in not bothering ops to install yet
            | another VM and plumb the network for it, but you need a
            | pretty big team for that kind of savings to be worth it.
            | 
            | There is also value in having a complex app deployment
            | centralized in one manifest that can be run on prod or
            | on a dev machine, but you can also, you know... not
            | overcomplicate your apps massively by making a bunch of
            | microservices for a thing that really should just be a
            | well-structured single app. Again, not really a benefit
            | for small (let's say below 50) teams; for bigger orgs,
            | sure.
            | 
            | > But they will instead recommend that, for $600k/year
            | in spend, you over-engineer all aspects... I call it
            | CVops instead of DevOps. CV being another word for
            | resume...
            | 
            | It's a mix of bad goals and devs wanting to work on cool
            | stuff. I did mildly overengineer a lot of things, but
            | about 8 out of 10 of them eventually became useful;
            | that's because I'm in ops, so _everything_ we do is long
            | term by default.
        
           | Frost1x wrote:
           | >I call it CVops instead of DevOps. CV being another word for
           | resume...
           | 
           | I might borrow this, although I'm leaning towards Res[ume]Ops
        
         | 411111111111111 wrote:
          | Uh, what are you talking about? The article is talking
          | about significantly reducing the complexity of running a
          | distributed container platform...
        
       | [deleted]
        
       | kris-nova wrote:
       | For reference (from the author of systemd)
       | https://0pointer.de/blog/projects/the-biggest-myths.html
        
       | [deleted]
        
       | xani_ wrote:
        | At that point, why not just run kubelet as init (PID 1)?
        | Add whatever is required (mainly initial network and storage
        | config); why add the management layer on top of it at all?
        | 
        | Like, it appears to target doing the same thing systemd does
        | but with different APIs, which is like... okay? Sure, some
        | consistency on one side; on the other, it is now entirely
        | dissimilar from anything that looks like normal Linux
        | userspace.
        
       | devmunchies wrote:
        | Does anyone know if it's possible to run multiple instances
        | of an app via systemd that all listen on the same port, with
        | some kind of round-robin load balancing across them?
       | 
       | I could introduce my own load balancer proxy like Envoy but I
       | want to reduce complexity as much as possible for systemd to be a
       | viable alternative to k8s.
        
         | tyingq wrote:
         | Looks like there's some support for that already in systemd,
         | but it's not complete...see:
         | https://github.com/systemd/systemd/issues/8096
         | 
         | Also iptables would be a simple option for a round-robin or
         | random load balancer. No health checks or anything though. See:
         | https://scalingo.com/blog/iptables for an example.
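          | 
          | The gist of the iptables trick (ports are made up): with
          | three instances on ports 8081-8083, cascade the
          | probabilities so each gets a third of new connections:
          | 
          |     iptables -t nat -A PREROUTING -p tcp --dport 80 \
          |       -m statistic --mode nth --every 3 --packet 0 \
          |       -j REDIRECT --to-ports 8081
          |     iptables -t nat -A PREROUTING -p tcp --dport 80 \
          |       -m statistic --mode nth --every 2 --packet 0 \
          |       -j REDIRECT --to-ports 8082
          |     iptables -t nat -A PREROUTING -p tcp --dport 80 \
          |       -j REDIRECT --to-ports 8083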
        
         | andrewstuart wrote:
          | A starting point would be to search for socket-based
          | activation.
         | 
         | http://0pointer.de/blog/projects/socket-activation.html
         | 
          | I've done it with Python; it's very easy.
         | 
         | systemd listens on a port and when a connection request comes
         | in it starts your service. You could spawn a new process and
         | let systemd keep waiting for new inbound connections.
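          | 
          | The minimal shape of it (unit and binary names made up)
          | is a .socket unit paired with a .service:
          | 
          |     # myapp.socket
          |     [Socket]
          |     ListenStream=8080
          | 
          |     [Install]
          |     WantedBy=sockets.target
          | 
          |     # myapp.service
          |     [Service]
          |     ExecStart=/usr/local/bin/myapp
          | 
          | systemd opens port 8080 itself and hands the listening fd
          | to the service on the first connection; the process picks
          | it up via sd_listen_fds() / the $LISTEN_FDS convention.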
        
       | kirbyfan64sos wrote:
       | I'm interested to hear if there are any plans for sending file
       | descriptors over local gRPC, which afaik isn't really possible
       | right now but is something D-Bus does easily.
        
       | 0x457 wrote:
       | Honestly, surprised no one built kubelet-shim for systemd yet.
       | 
       | Neat project.
        
       | tempie_deleteme wrote:
        | Asking "why?" is a double-whammy question; the answer they
        | are looking for is a mix of what (to do) and how to do it,
        | whatever that 'solution' is called.
        | 
        | How to fix this "most-open unit of computing" will depend on
        | what you think this 'unit of compute' even means, which of
        | course depends on who you are and what you do with 'units
        | of compute' (buy? sell? resell? use up? build? oversee??)
        
       | PuercoPop wrote:
       | The author did a great job explaining the motivation for Aurae. I
       | fully agree that there should be tighter integration between k8s
       | and the init system on each node. But thinking of the desktop,
        | having a gRPC server on PID 1 seems unnecessary. D-Bus is
        | the RPC mechanism used by systemd. Is there a reason to use
        | gRPC instead of dbus[0]? Or is the goal of Aurae to replace
        | systemd in the cloud/server Linux space?
       | 
       | [0]: dbus normally is used over a unix socket but afaik you can
       | use dbus over a regular tcp socket
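        | 
        | E.g. (the address here is made up, and the bus has to be
        | configured to listen on TCP) something like:
        | 
        |     busctl --address='tcp:host=192.168.1.10,port=55556' list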
        
         | kris-nova wrote:
         | To be clear I see the "gRPC server" that listens over a unix
         | domain socket being something more like pid 10-20.
         | 
         | If there is a network "gRPC server" as well, I suspect it would
         | be somewhere in the 20+ department.
         | 
          | I don't anticipate exposing the actual pid 1 over a
          | network. I'm not a monster. I suspect there will be an
          | init/jailer mechanism that manages bringing some of the
          | basics online, such as a system logger and any kernel
          | services (EG: ZFS), right away. One of the first
          | "services" would be a D-Bus alternative that is written
          | in Rust and leverages gRPC.
         | 
          | The main motivation behind gRPC is cloud, mTLS, and the
          | support in Rust. It comes with the ability to implement
          | load balancing and connection error management/retry
          | capabilities. I have weak opinions on the technical
          | details, as I don't suspect the network traffic will be
          | very large. gRPC is more familiar for folks in cloud, and
          | supports a large number of languages for generating
          | clients.
        
           | kris-nova wrote:
            | Also, thank you for the compliment. That was nice to
            | read. Some of the other comments just get straight to
            | the nitpicks.
        
       | tln wrote:
       | Cool!
       | 
       | It looks like the Aurae language leverages Rhai? Or is that
       | temporary?
       | 
       | In "Aurae Scope", would logging be in scope as well?
       | 
       | Do you think Aurae would be a nice successor to docker swarm, or
       | is that not a product goal?
        
       | rwmj wrote:
        | Lack of Unix philosophy
        | [...]
        | Missing rest/gRPC/json API
       | 
       | OK ...
        
         | yjftsjthsd-h wrote:
         | Those certainly do look a little funny next to each other in
         | the same list, but I think it's fair to argue that it's a
         | situation where one tool is being used in cases that don't
         | really overlap properly and it creates tension on both sides.
         | 
         | Edit after further reflection: Alternatively, perhaps both of
         | those complaints could be summarized as poor ability to
         | interoperate with other tooling, in which case they really do
         | fit together.
        
           | kris-nova wrote:
            | Yeah -- I think I was just reciting what I hear folks
            | say a lot. Basically it's IPC and a stable interface for
            | clients that is the core of the concern. That's really
            | what the takeaway is.
        
             | mariusor wrote:
             | But Systemd has IPC and a stable interface for clients,
             | it's "the D-Bus interface of systemd"[1]. I would think
             | that exposing that over http would be a little reckless,
             | but I'm sure one can create a proxy that can do it in a
             | "secure"-ish way.
             | 
              | [1] https://www.freedesktop.org/software/systemd/man/org.freedes...
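              | 
              | For example (unit name made up), that interface is
              | directly scriptable with busctl:
              | 
              |     busctl call org.freedesktop.systemd1 \
              |       /org/freedesktop/systemd1 \
              |       org.freedesktop.systemd1.Manager \
              |       StartUnit ss nginx.service replace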
        
               | mike_hearn wrote:
                | There is this:
                | 
                |     systemctl --host=whatever.abc status apache2
                | 
                | It uses ssh and UNIX domain socket forwarding behind
                | the scenes. There's a writeup of how to do this with
                | minimal privs here:
                | 
                | https://sleeplessbeastie.eu/2021/03/03/how-to-manage-systemd...
               | 
               | If you want to speak the protocol without the systemctl
               | program then you can do the same trick. Use an SSH
               | library, connect to the dbus socket and connect to
               | systemd. You do need a dbus library though. They are less
               | common than HTTP stacks.
        
               | mariusor wrote:
               | TIL. I had no idea the systemd team implemented something
               | like this already, it's great info, thank you. :)
        
         | pjmlp wrote:
         | Plus systemd isn't UNIX, but going with Plan 9 concepts is now
         | UNIX?
         | 
         | Yeah, right.
        
           | jmclnx wrote:
           | Plan 9 was designed as an advancement on UNIX, where 100% of
           | everything is a file.
           | 
            | So, I say yes.
        
             | pjmlp wrote:
              | Except for the small detail that it isn't POSIX, and
              | the C compiler is somehow special in its ways.
        
         | drewcoo wrote:
         | I think those are meant differently.
         | 
         | I don't think "Unix philosophy" means "communicate via streams"
         | in this context. I think it means things like the single
         | responsibility principle. Once we introduce APIs, we can't just
         | chain services together with pipes.
         | 
         | But I agree that this use of "Unix philosophy" is confusing.
        
           | kris-nova wrote:
           | That section of the post seemed to be generating a lot of
           | friction. I revised the language a little bit.
           | 
            | I think the original sound bite I was trying to capture
            | was "do one thing", which in my opinion neither
            | Kubernetes nor systemd does. To be fair -- neither would
            | Aurae. So I just scrapped the entire comment.
            | 
            | I wasn't trying to nitpick systemd as much as I was
            | trying to draw attention to the fact that it does in
            | fact -- get nitpicked -- and often unnecessarily.
        
       ___________________________________________________________________
       (page generated 2022-09-18 23:01 UTC)