[HN Gopher] Uncloud - Tool for deploying containerised apps acro...
       ___________________________________________________________________
        
       Uncloud - Tool for deploying containerised apps across servers
       without k8s
        
       Author : rgun
       Score  : 339 points
       Date   : 2025-12-04 06:02 UTC (16 hours ago)
        
 (HTM) web link (uncloud.run)
 (TXT) w3m dump (uncloud.run)
        
       | psviderski wrote:
       | Hey, creator here. Thanks for sharing this!
       | 
       | Uncloud[0] is a container orchestrator without a control plane.
       | Think multi-machine Docker Compose with automatic WireGuard mesh,
       | service discovery, and HTTPS via Caddy. Each machine just keeps a
       | p2p-synced copy of cluster state (using Fly.io's Corrosion), so
       | there's no quorum to maintain.
       | 
       | I'm building Uncloud after years of managing Kubernetes in small
       | envs and at a unicorn. I keep seeing teams reach for K8s when
       | they really just need to run a bunch of containers across a few
       | machines with decent networking, rollouts, and HTTPS. The
       | operational overhead of k8s is brutal for what they actually
       | need.
       | 
       | A few things that make it unique:
       | 
       | - uses the familiar Docker Compose spec, no new DSL to learn
       | 
       | - builds and pushes your Docker images directly to your machines
       | without an external registry (via my other project unregistry
       | [1])
       | 
       | - imperative CLI (like Docker) rather than declarative
       | reconciliation. Easier mental model and debugging
       | 
       | - works across cloud VMs, bare metal, even a Raspberry Pi at home
       | behind NAT (all connected together)
       | 
       | - minimal resource footprint (<150MB ram)
       | 
       | [0]: https://github.com/psviderski/uncloud
       | 
       | [1]: https://github.com/psviderski/unregistry
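        | 
        | To give a feel for the workflow, here's a rough sketch (the
        | compose file is standard Compose spec; image names and the
        | machine arguments below are made up, not taken from the docs):
        | 
        |   # compose.yaml
        |   services:
        |     web:
        |       image: ghcr.io/example/web:latest
        |     api:
        |       build: ./api
        | 
        |   # connect the machines once, then deploy/update in one command
        |   uc machine init user@vm1.example.com   # arguments illustrative
        |   uc machine add user@vm2.example.com    # arguments illustrative
        |   uc deploy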
        
         | olegp wrote:
         | How's this similar to and different from Kamal? https://kamal-
         | deploy.org/
        
           | psviderski wrote:
            | I took some inspiration from Kamal, e.g. the imperative
            | model, but Kamal is more of a deployment tool.
           | 
           | In addition to deployments, uncloud handles clustering -
           | connects machines and containers together. Service containers
           | can discover other services via internal DNS and communicate
           | directly over the secure overlay network without opening any
           | ports on the hosts.
           | 
            | As far as I know, Kamal doesn't provide an easy way for
            | services to communicate across machines.
           | 
           | Services can also be scaled to multiple replicas across
           | machines.
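            | 
            | So from inside any container you can reach another service
            | by name over the overlay, something like (service name and
            | port are made up):
            | 
            |   curl http://api.internal:8080/health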
        
             | olegp wrote:
             | Thanks! I noticed afterwards that you mention Kamal in your
             | readme, but you may want to add a comparison section that
             | you link to where you compare your solution to others.
             | 
             | Are you working on this full time and if so, how are you
             | funding it? Are you looking to monetize this somehow?
        
               | psviderski wrote:
               | Thank you for the suggestion!
               | 
               | I'm working full time on this, yes. Funding from my
               | savings at the moment and don't have plans for any
               | external funding or VC.
               | 
                | For monetisation, I'm considering building a self-hosted
                | and managed (SaaS) web UI for managing remote clusters
                | and the apps on them, with value-added PaaS-like
                | features.
        
               | olegp wrote:
               | That sounds interesting, maybe I could help on the
               | business side of things somehow. I'll email you my
               | calendar link.
        
               | psviderski wrote:
               | Awesome, will reach out!
        
             | cpursley wrote:
              | This is neat. Regarding clustering - can this work with
              | distributed Erlang/Elixir?
        
               | psviderski wrote:
                | I don't know the specific requirements for distributed
                | Erlang/Elixir, but I believe the networking should
                | support it. Containers get unique IPs on a
               | WireGuard mesh with direct connectivity and DNS-based
               | service discovery.
        
               | jabr wrote:
               | I haven't tried it, but EPMD with DNS discovery should
               | work just fine, and should be similar to this NATS
               | example: https://github.com/psviderski/uncloud-
               | recipes/blob/main/nats...
               | 
               | Basically just configure it with `{service-
               | name}.internal` to find other instances of the service.
        
         | topspin wrote:
         | "I keep seeing teams reach for K8s when they really just need
         | to run a bunch of containers across a few machines"
         | 
         | Since k8s is very effective at running a bunch of containers
         | across a few machines, it would appear to be exactly the
         | correct thing to reach for. At this point, running a small k8s
         | operation, with k3s or similar, has become so easy that I can't
         | find a rational reason to look elsewhere for container
         | "orchestration".
        
           | nullpoint420 wrote:
           | 100%. I'm really not sure why K8S has become the complexity
           | boogeyman. I've seen CDK apps or docker compose files that
           | are way more difficult to understand than the equivalent K8S
           | manifests.
        
             | esseph wrote:
             | Managing hundreds or thousands of containers across
             | hundreds or thousands of k8s nodes has a lot of operational
             | challenges.
             | 
             | Especially in-house on bare metal.
        
               | lnenad wrote:
               | But that's not what anyone is arguing here, nor what (to
                | me it seems at least) uncloud is about. It's about a
                | simpler HA multinode setup with a single- to low-double-
                | digit number of containers.
        
               | sceptic123 wrote:
                | I don't think that argument matches the "just need to
                | run a bunch of containers across a few machines" claim.
        
               | nullpoint420 wrote:
               | Talos has made this super easy in my experience.
        
               | Glemkloksdjf wrote:
                | Which is fine because it absolutely matches the result.
                | 
                | You would not be able to operate hundreds or thousands of
                | nodes of any kind without operational complexity, and
                | k8s helps you here a lot.
        
             | this_user wrote:
             | Docker Compose is simple: You have a Compose file that just
             | needs Docker (or Podman).
             | 
             | With k8s you write a bunch of manifests that are 70%
             | repetitive boilerplate. But actually, there is something
              | you need that cannot be achieved with pure manifests, so you
             | reach for Kustomize. But Kustomize actually doesn't do what
             | you want, so you need to convert the entire thing to Helm.
             | 
             | You also still need to spin up your k8s cluster, which
             | itself consists of half a dozen pods just so you have
             | something where you can run your service. Oh, you wanted
             | your service to be accessible from outside the cluster?
             | Well, you need to install an ingress controller in your
             | cluster. Oh BTW, the nginx ingress controller is now
             | deprecated, so you have to choose from a handful of
             | alternatives, all of which have certain advantages and
             | disadvantages, and none of which are ideal for all
             | situations. Have fun choosing.
        
               | quectophoton wrote:
               | > Docker Compose is simple: You have a Compose file that
               | just needs Docker (or Podman).
               | 
               | And if you want to use more than one machine then you run
               | `docker swarm init`, and you can keep using the Compose
               | file you _already_ have, almost unchanged.
               | 
               | It's not a K8s replacement, but I'm guessing for some
               | people it would be enough and less effort than a full
               | migration to Kubernetes (e.g. hobby projects).
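                | 
                | Roughly (stack name arbitrary):
                | 
                |   docker swarm init
                |   # run the printed 'docker swarm join --token ...' on
                |   # the other machines, then:
                |   docker stack deploy -c compose.yaml myapp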
        
               | horsawlarway wrote:
               | This is some serious rose colored glasses happening here.
               | 
               | If you have a service with a simple compose file, you can
               | have a simple k8s manifest to do the same thing. Plenty
               | of tools convert right between the two (incl kompose,
               | which k8s literally hands you:
               | https://kubernetes.io/docs/tasks/configure-pod-
               | container/tra...)
               | 
               | Frankly, you're messing up by including kustomize or helm
               | at all in 80% of cases. Just write the (agreed on tedious
               | boilerplate - the manifest format is not my cup of tea)
               | yaml and be done with the problem.
               | 
               | And no - you don't need an ingress. Just spin up a
               | nodeport service, and you have the literal identical
               | experience to exposing ports with compose - it's just a
               | port on the machines running the cluster (any of them -
               | magic!).
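                | 
                | A minimal NodePort Service (names and ports made up) is
                | about as short as the compose equivalent:
                | 
                |   apiVersion: v1
                |   kind: Service
                |   metadata:
                |     name: web
                |   spec:
                |     type: NodePort
                |     selector:
                |       app: web
                |     ports:
                |       - port: 80          # cluster-internal port
                |         targetPort: 8080  # container port
                |         nodePort: 30080   # exposed on every node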
               | 
               | You don't need to touch an ingress until you actually
               | want external traffic using a specific hostname (and
               | optionally tls), which is... the same as compose. And
               | frankly - at that point you probably _SHOULD_ be thinking
                | about the actual tooling you're using to expose that, in
               | the same way you would if you ran it manually in compose.
               | And sure - arguably you could move to gateways now, but
               | in no way is the ingress api deprecated. They very
               | clearly state...
               | 
               | > "The Ingress API is generally available, and is subject
               | to the stability guarantees for generally available APIs.
               | The Kubernetes project has no plans to remove Ingress
               | from Kubernetes."
               | 
               | https://kubernetes.io/docs/concepts/services-
               | networking/ingr...
               | 
               | ---
               | 
               | Plenty of valid complaints for K8s (yaml config
               | boilerplate being a solid pick) but most of the rest of
               | your comment is basically just FUD. The complexity scale
               | for K8s _CAN_ get a lot higher than docker. Some
               | organizations convince themselves it should and make it
               | very complex (debatably for sane reasons). For personal
                | needs... Just run k3s (or minikube, or microk8s, or k3d,
                | or etc...) and write some yaml. It's at exactly the same
               | complexity as docker compose, with a slightly more
               | verbose syntax.
               | 
               | Honestly, it's not even as complex as configuring VMs in
               | vsphere or citrix.
        
               | KronisLV wrote:
               | > And no - you don't need an ingress. Just spin up a
               | nodeport service, and you have the literal identical
               | experience to exposing ports with compose - it's just a
               | port on the machines running the cluster (any of them -
               | magic!).
               | 
               | https://kubernetes.io/docs/concepts/services-
               | networking/serv...
               | 
               | Might need to redefine the port range from 30000-32767.
               | Actually, if you want to avoid the ingress abstraction
               | and maybe want to run a regular web server container of
               | your choice to act as it (maybe you just prefer a config
               | file, maybe that's what your legacy software is built
               | around, maybe you need/prefer Apache2, go figure), you'd
               | probably want to be able to run it on 80 and 443. Or 3000
               | or 8080 for some other software, out of convenience and
               | simplicity.
               | 
               | Depending on what kind of K8s distro you use, thankfully
               | not insanely hard to change though:
               | https://docs.k3s.io/cli/server#networking But again,
               | that's kind of going against the grain.
        
               | stego-tech wrote:
               | Literally got it in one, here. I'm not knocking
               | Kubernetes, mind, and I don't think anyone here is, not
               | even the project author. Rather, we're saying that the
               | _excess_ of K8s can sometimes get in the way of simpler
               | deployments. Even streamlined Kubernetes (microk8s, k3s,
               | etc) still ultimately bring _all of Kubernetes_ to the
               | table, and that invites complexity when the goal is
               | simplicity.
               | 
                | That's not _bad_, but I want to spend more time trying
               | new things or enjoying the results of my efforts than
               | maintaining the underlying substrates. For that purpose,
               | K8s is consistently too complicated for my own ends - and
               | Uncloud looks to do exactly what I want.
        
             | everforward wrote:
             | It's not the manifests so much as the mountain of infra
             | underlying it. k8s is an amazing abstraction over dynamic
             | infra resources, but if your infra is fairly static then
             | you're introducing a lot of infra complexity for not a ton
             | of gain.
             | 
             | The network is complicated by the overlay network, so
             | "normal" troubleshooting tools aren't super helpful.
             | Storage is complicated by k8s wanting to fling pods around
             | so you need networked storage (or to pin the pods, which
             | removes almost all of k8s' value). Databases are annoying
             | on k8s without networked storage, so you usually run them
             | outside the cluster and now you have to manage bare metal
             | and k8s resources.
             | 
             | The manifests are largely fine, outside of some of the more
             | abnormal resources like setting up the nginx ingress with
             | certs.
        
           | psviderski wrote:
           | That's awesome if k3s works for you, nothing wrong with this.
           | You're simply not the target user then.
        
           | jabr wrote:
           | I can only speak for myself, but I considered a few options,
           | including "simple k8s" like
           | [Skate](https://skateco.github.io/), and ultimately decided
           | to build on uncloud.
           | 
            | It was as much personal "taste" as anything, and I would
           | describe the choice as similar to preferring JSON over XML.
           | 
           | For whatever reason, kubernetes just irritates me. I find it
           | unpleasant to use. And I don't think I'm unique in that
           | regard.
        
             | 1dom wrote:
             | > For whatever reason, kubernetes just irritates me. I find
             | it unpleasant to use. And I don't think I'm unique in that
             | regard.
             | 
             | I feel the same. I feel like it's a me problem. I was able
             | to build and run massive systems at scale and never used
             | kubernetes. Then, all of a sudden, around 2020, any time I
             | wanted to build or run or do anything at scale, everywhere
             | said I should just use kubernetes. And then when I wanted
             | to do anything with docker in production, not even at
             | scale, everywhere said I should just use kubernetes.
             | 
             | Then there was a brief period around 2021 where everyone -
             | even kubernetes fans - realised it was being used
             | everywhere, even when it didn't need to be. "You don't need
             | k8s" became a meme.
             | 
             | And now, here we are, again, lots of people saying "just
             | use k8s for everything".
             | 
             | I've learned it enough to know how to use it and what I can
             | do with it. I still prefer to use literally anything else
             | apart from k8s when building, and the only time I've ever
             | felt k8s has been really needed to solve a problem is when
             | the business has said "we're using k8s, deal with it".
             | 
             | It's like the Javascript or WordPress of the infrastructure
             | engineering world - it became the lazy answer, IMO. Or the
             | me problem angle: I'm just an aged engineer moaning at
             | having to learn new solutions to old problems.
        
               | hhh wrote:
               | It's a nice portable target, with very well defined
               | interfaces. It's easy to start with and pretty easy to
               | manage if you don't try to abuse it.
        
               | tick_tock_tick wrote:
                | I mean the real answer is it got easy to deploy k8s, so
                | the justification for not using it kinda vanished.
        
             | tw04 wrote:
             | How many flawless, painless major version upgrades have you
             | had with literally any flavor of k8s? Because in my
             | experience, that's always a science experiment that results
             | in such pain people end up just sticking at their original
             | deployed version while praying they don't hit any critical
             | bugs or security vulnerabilities.
        
               | mxey wrote:
               | I've run Kubernetes since 2018 and I can count on one
               | hand the times there were major issues with an upgrade.
               | Have sensible change management and read the release
               | notes for breaking changes. The amount of breaking
               | changes has also gone way down in recent years.
        
               | jauntywundrkind wrote:
                | I applaud you for having a specific complaint. 'You might
                | not need it', 'it's complex', and 'for some reason it
                | bothers me' are all these vibes-based whinges that are so
                | abundant. But with nothing specific, nothing contestable.
        
           | matijsvzuijlen wrote:
            | If you already know k8s, this is probably true. If you
            | don't, it's hard to know which bits you need, and need to
            | learn about, to get something simple set up.
        
             | epgui wrote:
             | you could say that about anything...
        
               | morcus wrote:
               | I don't understand the point? You can say that about
               | anything, and that's the whole reason why it's good that
               | alternatives exist.
               | 
               | The clear target of this project is a k8s-like experience
               | for people who are already familiar with Docker and
               | docker compose but don't want to spend the energy to
               | learn a whole new thing for low stakes deployments.
        
               | Glemkloksdjf wrote:
                | Uncloud is so far away from k8s, it's not k8s-like.
                | 
                | A normal person wouldn't think 'hey, let's use k8s for
                | the low-stakes deployment over here'.
        
           | _joel wrote:
           | Indeed, it seems a knee jerk response without justification.
           | k3s is pretty damn minimal.
        
           | tevon wrote:
           | Perhaps it feels so easy given your familiarity with it.
           | 
           | I have struggled to get things like this stood up and hit
           | many footguns along the way
        
           | PunchyHamster wrote:
           | k3s makes it easy to deploy, not to debug any problems with
            | it. It's still essentially adding a few hundred thousand
            | lines of code to your infrastructure, and if it is a small
            | app you need to deploy, also wasting a bit of RAM.
        
             | topspin wrote:
             | "not to debug any problems with it"
             | 
             | K3s is just a repackaged, simplified k8s distro. You get
             | the same behavior and the same tools as you have any time
             | you operate an on-premises k8s cluster, and these, in my
             | experience, are somewhere between good and excellent. So I
             | can't imagine what you have in mind here.
             | 
             | "It's still essentially adding few hundred thousand lines
             | of code into your infrastructure"
             | 
             | Sure. And they're all there for a reason: it's what one
             | needs to orchestrate containers via an API, as revealed by
             | a vast horde of users and years of refinement.
        
           | sureglymop wrote:
           | Except it isn't just "a way to run a bunch of containers
           | across a few machines".
           | 
           | It seems that way but in reality "resource" is a generic
           | concept in k8s. K8s is a management/collaboration platform
           | for "resources" and everything is a resource. You can define
           | your own resource types too. And who knows, maybe in the
           | future these won't be containers or even linux processes?
           | Well it would still work given this model.
           | 
           | But now, what if you _really_ just want to run a bunch of
           | containers across a few machines?
           | 
           | My point is, it's overcomplicated and abstracts too heavily.
            | Too smart even... I don't want my coworkers defining our own
            | resource types, we're not a Google-scale company.
        
         | mosselman wrote:
          | You have a graph that shows a multi-provider setup for a
          | domain. Where would routing to either machine happen? As in,
          | which IP would you use on the DNS side?
        
           | calgoo wrote:
           | Not OP, but you could do "simple" dns load balancing between
           | both endpoints.
        
           | psviderski wrote:
           | For the public cluster with multiple ingress (caddy) nodes
           | you'd need a load balancer in front of them to properly
           | handle routing and outage of any of them. You'd use the IP of
           | the load balancer on the DNS side.
           | 
           | Note that a DNS A record with multiple IPs doesn't provide
           | failover, only round robin. But you can use the Cloudflare
           | DNS proxy feature as a poor man's LB. Just add 2+ proxied A
           | records (orange cloud) pointing to different machines. If one
           | goes down with a 52x error, Cloudflare automatically fails
           | over to the healthy one.
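            | 
            | In zone-file terms, plain round robin is just this, with no
            | health checking behind it (example IPs):
            | 
            |   app.example.com.  300  IN  A  203.0.113.10
            |   app.example.com.  300  IN  A  203.0.113.20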
        
         | woile wrote:
         | does it support ipv6?
        
           | psviderski wrote:
           | There is an open issue that confirms enabling ipv6 for
           | containers works:
           | https://github.com/psviderski/uncloud/issues/126 But this
           | hasn't been enabled by default.
           | 
           | What specifically do you mean by ipv6 support?
        
             | miyuru wrote:
             | > What specifically do you mean by ipv6 support?
             | 
             | This question does not make sense. This is equivalent to
             | asking "What specifically do you mean by ipv4 support"
             | 
             | These days both protocols must be supported, and if there
             | is a blocker it should be clearly mentioned.
        
               | justincormack wrote:
               | How do you want to allocate ipv6 addresses to containers?
               | Turns out there are lots of answers. Some people even
               | want to do ipv6 NAT.
        
               | lifty wrote:
                | A really cool way to do it is how the Yggdrasil project does
               | it (https://yggdrasil-
               | network.github.io/implementation.html#how-...). They
               | basically use public keys as identities and they
               | deterministically create an IPv6 address from the public
               | key. This is beautiful and works for private networks, as
               | well as for their global overlay IPv6 network.
               | 
               | What do you think about the general approach in Uncloud?
               | It almost feels like a cousin of Swarm. Would love to get
               | your take on it.
        
               | GoblinSlayer wrote:
               | Like docker? --fixed-cidr-v6=2001:db8:1::/64
        
             | woile wrote:
             | I'm no expert, so I'm not sure if I'll explain it
              | correctly. But I've been using docker swarm on a server, I
              | use traefik as a reverse proxy, and it just doesn't seem to
             | work (I've tried a lot) with ipv6 (issue that might be
             | related https://github.com/moby/moby/issues/24379)
        
         | zbuttram wrote:
         | Very cool! I think I'll have some opportunity soon to give it a
         | shot, I have just the set of projects that have been needing a
         | tool like this. One thing I think I'm missing after perusing
         | the docs however is, how does one onboard other engineers to
         | the cluster after it has been set up? And similarly, how does
         | deployment from a CI/CD runner work? I don't see anything about
         | how to connect to an existing cluster from a new machine, or at
         | least not that I'm recognizing.
        
           | jabr wrote:
           | There isn't a cli function for adding a connection
           | (independently of adding a new machine/node) yet, but they
           | are in a simple config file (`~/.config/uncloud/config.yaml`)
           | that you can copy or easily create manually for now. It looks
            | like this:
            | 
            |   current_context: default
            |   contexts:
            |     default:
            |       connections:
            |         - ssh: admin@192.168.0.10
            |           ssh_key_file: ~/.ssh/uncloud
            |         - ssh: admin@192.168.0.11
            |           ssh_key_file: ~/.ssh/uncloud
            |         - ssh: administrator@93.x.x.x
            |           ssh_key_file: ~/.ssh/uncloud
            |         - ssh: sysadmin@65.x.x.x
            |           ssh_key_file: ~/.ssh/uncloud
           | 
           | And you really just need one entry for typical use. The
           | subsequent entries are only used if the previous node(s) are
           | down.
        
             | psviderski wrote:
             | For CI/CD, check out this GitHub Action:
             | https://github.com/thatskyapplication/uncloud-action.
             | 
              | You can either specify one of the machine SSH targets in
              | the config.yaml or pass it directly to the 'uc' CLI
              | command, e.g.
              | 
              |   uc --connect user@host deploy
        
         | utopiah wrote:
          | Neat. As you include quite a few tools for services to be
          | reachable together (not necessarily to the outside), do you
          | also have tooling to make those services more interoperable?
        
           | jabr wrote:
           | Do you have an example of what you mean? I'm not entirely
           | clear on your question.
        
         | unixfox wrote:
          | Awesome tool! Does it provide some of the basic features that
          | you would get from running a control plane?
          | 
          | Like automatically rescheduling a container on another server
          | if a server is down? Deploying on the least filled server
          | first if you have set limits on your containers?
        
           | psviderski wrote:
           | Thank you! That's actually the trade off.
           | 
           | There is no automatic rescheduling in uncloud by design. At
           | least for now. We will see how far we can get without it.
           | 
           | If you want your service to tolerate a host going down, you
           | should deploy multiple replicas for that service on multiple
           | machines in advance. 'uc scale' command can be used to run
           | more replicas for an already deployed service.
           | 
           | Longer term, I'm thinking we can have a concept of
           | primary/standby replicas for services that can only have one
           | running replica, e.g. databases. Something similar to how
           | Fly.io does this: https://fly.io/docs/apps/app-
           | availability/#standby-machines-...
           | 
            | Deploying on the least filled machine first is doable but
            | not supported right now. By default, it picks the first
            | machine randomly and tries to distribute replicas evenly
            | among all available machines. You can also manually
           | specify what target machine(s) each service should run on in
           | your Compose file.
           | 
           | I want to avoid recreating the complexity with placement
           | constraints, (anti-)affinity, etc. that makes K8s hard to
           | reason about. There is a huge class of apps that need more or
           | less static infra, manual placement, and a certain level of
           | redundancy. That's what I'm targeting with Uncloud.
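            | 
            | In Compose terms that looks roughly like this (replicas is
            | standard Compose spec; the placement key shown here is
            | illustrative, check the docs for the exact field):
            | 
            |   services:
            |     web:
            |       image: ghcr.io/example/web:latest
            |       deploy:
            |         replicas: 3            # run 3 replicas in advance
            |       x-machines: [vm1, vm2]   # hypothetical placement field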
        
         | avan1 wrote:
          | Thanks for both great tools. I just didn't understand one
          | thing: the request flow. Imagine we have 10 servers, where one
          | request goes to server 1 and another goes to server 7, for
          | example. Since it's zero downtime, how does it know that
          | server 5 is updating, so that no requests go there until it's
          | back up?
        
           | psviderski wrote:
           | I think there are two different cases here. Not sure which
           | one you're talking about.
           | 
           | 1. External requests, e.g. from the internet via the reverse
           | proxy (Caddy) running in the cluster.
           | 
           | The rollout works on the container, not the server level.
            | Each container registers itself in Caddy so Caddy knows
            | which containers to forward and distribute requests to.
           | 
           | When doing a rollout, a new version of container is started
           | first, registers in caddy, then the old one is removed. This
           | is repeated for each service container. This way, at any time
           | there are running containers that serve requests.
           | 
            | It doesn't tell any server that requests shouldn't go
            | there. It just updates the upstreams in the Caddy config to
            | send requests to the containers that are up and healthy.
           | 
           | 2. Service to service requests within the cluster. In this
           | case, a service DNS name is resolved to a list of IP
           | addresses (running containers). And the client decides which
           | one to send a request to or whether to distribute requests
           | among them.
           | 
           | When the service is updated, the client needs to resolve the
           | name again to get the up-to-date list of IPs. Many http
           | clients handle this automatically so using http://service-
           | name as an endpoint typically just works. But zero downtime
           | should still be handled by the client in this case.
        
         | doctorpangloss wrote:
         | haha, uncloud does have a control plane: the mind of the person
         | running "uc" CLI commands
         | 
         | > I'm building Uncloud after years of managing Kubernetes
         | 
         | did you manage Kubernetes, or did you make the fateful mistake
         | of managing _microk8s_?
        
         | oulipo2 wrote:
         | So it's a kind of better Docker Swarm? It's interesting, but
         | honestly I'd rather have something declarative, so I can use it
         | with Pulumi, would it be complicated to add a declarative
          | engine on top of the tool, which discovers what services are
          | already up, does a diff with the new declaration, and handles
          | the changes?
        
           | psviderski wrote:
            | This is exactly how it works now. The Compose file is the
            | declarative specification of the services you want to run.
           | 
           | When you run 'uc deploy' command:
           | 
           | - it reads the spec from your compose.yaml
           | 
           | - inspects the current state of the services in the cluster
           | 
           | - computes the diff and deployment plan to reconcile it
           | 
           | - executes the plan after the confirmation
           | 
           | Please see the docs and demo:
           | https://uncloud.run/docs/guides/deployments/deploy-app
           | 
           | The main difference with Docker Swarm is that the
           | reconciliation process is run on your local/CI machine as
           | part of the 'uc deploy' CLI command execution, not on the
           | control plane nodes in the cluster.
           | 
            | And it's not running in a loop automatically. If the
            | command fails, you get instant feedback with errors you can
            | address, and then rerun the command.
           | 
            | It should be pretty straightforward to wrap the CLI logic
            | in a Terraform or Pulumi provider. The design principles are
            | very similar and it's written in Go.
        
         | tex0 wrote:
         | This is a cool tool, I like the idea. But the way `uc machine
          | init` works under the hood is really scary. Lots of `curl |
         | bash` run as root.
         | 
         | While I would love to test this tool, this is not something I
         | would run on any machine :/
        
           | redrove wrote:
           | +1 on this
           | 
           | I wanted to try it out but was put off by this[0]. It's just
           | straight up curl | bash as root from
           | raw.githubusercontent.com.
           | 
           | If this is the install process for a server (and not just for
           | the CLI) I don't want to think about security in general for
           | the product.
           | 
           | Sorry, I really wanted to like this, but pass.
           | 
           | [0] https://github.com/psviderski/uncloud/blob/ebd4622592bcec
           | edb...
        
           | psviderski wrote:
           | Totally valid concern. That was a shortcut to iterate quickly
           | in early development. It's time to do it properly now.
           | Appreciate the feedback. This is exactly the kind of thing I
           | need to hear before more people try it.
        
           | tontony wrote:
           | Curious, what would be an ideal (secure) approach for you to
           | install this (or similar) tool?
        
             | rovr138 wrote:
             | It's deploying a script, which then downloads uncloud using
             | curl.
             | 
              | The alternative is deploying the script along with the
              | uncloud files it needs.
        
             | yabones wrote:
             | The correct way would be to publish packages on a proper
             | registry/repository and install them with a package
             | manager. For example, create a 3rd party Debian repository,
             | and import the config & signing key on install. It's more
             | work, sure, but it's been the best practice for decades and
             | I don't see that changing any time soon.
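              | 
              | The usual pattern, with placeholder URLs and package name:
              | 
              |   curl -fsSL https://example.com/uncloud/key.gpg \
              |     | sudo gpg --dearmor -o /usr/share/keyrings/uncloud.gpg
              |   echo "deb [signed-by=/usr/share/keyrings/uncloud.gpg]" \
              |     "https://example.com/uncloud/apt stable main" \
              |     | sudo tee /etc/apt/sources.list.d/uncloud.list
              |   sudo apt update && sudo apt install uncloud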
        
               | tontony wrote:
               | Sure, but it all boils down to trust at the end of the
               | day. Why would you trust a third-party Debian repository
               | (that e.g. has a different user namespace and no identity
               | linking to GitHub) more than running something from
               | evidently the same user from GitHub, in this specific
               | case?
               | 
                | I'm not arguing against a repository being nice because
                | of versioning, signing, version yanking, etc., and I do
                | agree that the process should be more transparent and
                | verifiable for people who care about it.
        
           | jabr wrote:
           | There is a `--no-install` flag on both `uc machine init` and
           | `uc machine add` that skips that `curl | bash` install step.
           | 
           | You need to prepare the machine some other way first then,
           | but it's just installing docker and the uncloud service.
           | 
           | I use the `--no-install` option with my own cluster, as I
           | have my own pre-provisioning process that includes some
           | additional setup beyond the docker/uncloud elements.
        
         | Glemkloksdjf wrote:
         | So you build an insecure version of nomad/kubernetes and co?
         | 
         | If you do anything professional, you better choose proven
         | software like kubernetes or managed kubernetes or whatever else
         | all the hyperscalers provide.
         | 
          | And the complexity you are solving now or will have to solve,
          | k8s already solved: IaC for example, cloud provider support
          | for provisioning an LB out of the box, cert-manager, all the
          | helm charts for observability, logging, an ecosystem to fall
          | back to (operators), ArgoCD <3, storage provisioning, proper
          | high availability, kind for e2e testing in CI/CD, etc.
         | 
          | I'm also always lost as to why people think k8s is so hard to
          | operate. Just take a managed k8s. There are so many options out
          | there and they are all compatible with the whole k8s
          | ecosystem.
         | 
          | Look, if you don't get kubernetes, its use cases, advantages,
          | etc., fine, absolutely fine, but your solution is not an
          | alternative to k8s. It's another container orchestrator like
          | nomad and k8s and co., with its own advantages and
          | disadvantages.
        
           | mgaunard wrote:
           | Those are all sub-par cloud technologies which perform very
           | badly and do not scale at all.
           | 
           | Some people would rather build their own solutions to do
           | these things with fine-grain control and the ability to
            | handle workloads more complex than a shopping cart website.
        
           | bluepuma77 wrote:
           | It's not a k8s replacement. It's for the small dev team with
           | no k8s experience. For people that might not use Docker Swarm
           | because they see it's a pretty dead project. For people who
           | think "everyone uses k8s", so we should, too.
           | 
           | I need to run on-prem, so managed k8s is not an option.
            | Experts tell me I should have 2 FTEs to run k8s, which I
           | don't have. k8s has so many components, how should I debug
           | that in case of issues without k8s experience? k8s APIs
           | change continuously, how should I manage that without k8s
           | experience?
           | 
           | It's not a k8s replacement. But I do see a sweet spot for
           | such a solution. We still run Docker Swarm on 5 servers, no
           | hyperscalers, no API changes expected ;-)
        
         | 11mariom wrote:
         | > - uses the familiar Docker Compose spec, no new DSL to learn
         | 
          | But this goes with the assumption that one already knows the
          | Docker Compose spec. For the exact same reason I'm in love
          | with `podman kube play` - just using k8s manifests to quickly
          | test-run on a local machine, and not bothering with some
          | "legacy" compose.
         | 
         | (I never liked Docker Inc. so I never learned THEIR tooling,
         | it's not needed to build/run containers)
        
           | TingPing wrote:
           | podman-compose works fine. It's a very simple format.
        
         | sam-cop-vimes wrote:
         | I really like what is on offer here - thank you for building
         | it. Re the private network it builds with Wireguard, how are
         | services running within this private network supposed to access
         | AWS services such as RDS securely? Tailscale has this:
         | https://tailscale.com/kb/1141/aws-rds
        
         | knowitnone3 wrote:
         | but if they already know how to use k8s, then they should use
         | it. Now they have to know k8s AND know this tool?
        
         | INTPenis wrote:
         | We have similar backgrounds, and I totally agree with your k8s
         | sentiment.
         | 
         | But I wonder what this solves?
         | 
         | Because I stopped abusing k8s and started using more container
         | hosts with quadlets instead, using Ansible or Terraform
         | depending on what the situation calls for.
         | 
         | It works just fine imho. The CI/CD pipeline triggers a podman
         | auto-update command, and just like that all containers are
         | running the latest version.
         | 
         | So what does uncloud add to this setup?
        
           | lifty wrote:
           | Uncloud seems much easier to manage than writing your own
           | ansible or terraform.
        
       | nake89 wrote:
       | How does this compare to k3s?
        
         | tontony wrote:
         | Uncloud is not a Kubernetes distribution and doesn't use K8s
         | primitives (although there are of course some similarities).
         | It's closer to Compose/Swarm in how you declare and manage your
         | services. Which has pros and cons depending on what you need
         | and what your (or your team's) experience with Kubernetes is.
        
       | JohnMakin wrote:
       | Having spent most of my career in kubernetes (usually managed by
       | cloud), I always wonder when I see things like this, what is the
       | use case or benefit of _not_ having a control plane?
       | 
       | To me, the control plane is the primary feature of kubernetes and
       | one I would not want to go without.
       | 
       | I know this describes operational overhead as a reason, but how
        | it relates to the control plane is not clear to me. Even managing
       | a few hundred nodes and maybe 10,000 containers, relatively small
       | - I update once a year and the managed cluster updates machine
       | images and versions automatically. Are people trying to self host
       | kubernetes for production cases, and that's where this pain comes
       | from?
       | 
       | Sorry if it is a rude question.
        
         | baq wrote:
         | > Are people trying to self host kubernetes
         | 
         | Of course they are...? That's half the point of k8s - if you
         | want to self host, you can, but it's just like backups: if you
         | never try it, you should assume you can't do it when you need
         | to
        
         | psviderski wrote:
         | Not rude at all. The benefit is a much simpler model where you
         | simply connect machines in a network where every machine is
         | equal. You can add more, remove some. No need to worry about an
         | HA 3-node centralised "cluster brain". There isn't one.
         | 
         | It's a similar experience when a cloud provider manages the
         | control plane for you. But you have to worry about the
         | availability when you host everything yourself. Losing etcd
         | quorum results in an unusable cluster.
         | 
         | Many people want to avoid this, especially when running at a
         | smaller scale like a handful of machines.
         | 
         | The cluster network can even partition and each partition
        | continues to operate, allowing you to deploy/update apps
         | individually.
         | 
         | That's essentially what we all did in a pre-k8s era with chef
         | and ansible but without the boilerplate and reinventing the
         | wheel, and using the learnings from k8s and friends.
        
           | _joel wrote:
           | k3s uses sqlite, so not etcd.
        
             | davidgl wrote:
             | It can use sqlite (single master), or for cluster it can
             | use pg, or mysql, but etcd by default
        
               | _joel wrote:
               | No, it's not. Read the docs[1] - sqlite is the default.
               | 
               | "Lightweight datastore based on sqlite3 as the default
               | storage backend. etcd3, MySQL, and Postgres are also
               | available."
               | 
               | [1]https://docs.k3s.io/
        
               | cobolcomesback wrote:
               | This thread is about using multi-machine clusters, and
               | sqlite cannot be used for multi-machine clusters in k3s.
               | etcd is the default when starting k3s in cluster mode
               | [1].
               | 
               | [1] https://docs.k3s.io/datastore
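                | 
                | i.e. roughly:
                | 
                |   k3s server                 # single server, sqlite
                |   k3s server --cluster-init  # first server of an HA
                |                              # cluster, embedded etcd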
        
               | _joel wrote:
               | No, this thread is about multiple containers across
               | machines. What you describe is multi-master for the
                | _server_. You can run multiple agents across several
                | nodes, therefore clustering the container workload across
               | multiple container hosting servers. Multi-master is
               | something different.
        
               | cobolcomesback wrote:
               | The very first paragraph of the first comment you replied
               | to is about multi-master HA. The second sentence in that
               | comment is about "every machine is equal". k3s with
               | sqlite is awesome, but it cannot do that.
        
               | _joel wrote:
               | apologies, I misread this and gave a terse reply.
        
           | JohnMakin wrote:
            | If you are a small operation trying to self-host k3s or k8s
            | or any number of out-of-the-box installations that are
            | probably at least as complex as docker compose swarms, any
            | non-trivial production case presents similar problems in
            | monitoring and availability as the ones you'd get with off-
            | the-shelf cloud-provider managed services, except the
            | managed solutions come without the pain in the ass. Except
            | you don't have a control plane.
           | 
           | I have managed custom server clusters in a self hosted
           | situation. the problems are hard, but if you're small, why
           | would you reach for such a solution in the first place? you'd
           | be better off paying for a managed service. What situation
           | forces so many people to reach to self hosted kubernetes?
        
         | kelnos wrote:
         | > _a few hundred nodes and maybe 10,000 containers, relatively
         | small_
         | 
         | That feels not small to me. For something I'm working on I'll
         | probably have two nodes and around 10 containers. If it works
         | out and I get some growth, maybe that will go up to, say, 5-7
         | nodes and 30 or so containers? I dunno. I'd like some
         | orchestration there, but k8s feels _way_ too heavy even for my
         | "grown" case.
         | 
         | I feel like there are potentially a lot of small businesses at
         | this sort of scale?
        
         | esseph wrote:
         | Try it on bare metal where you're managing the distributed
         | storage and the hardware and the network and the upgrades too
         | :)
        
           | JohnMakin wrote:
           | Why would you want to do that though?
           | 
           | On cloud, in my experience, you are mostly paying for compute
           | with managed kubernetes instances. The overhead and price is
           | almost never kubernetes itself, but the compute and storage
           | you are provisioning, which, thanks to the control plane, you
            | have complete control over. What am I missing?
           | 
            | I wouldn't dare, with a small shop, try to self-host a
            | production kubernetes solution unless I was under duress. But
           | I just dont see what the control plane has to do with it.
           | It's the feature that makes kubernetes worth it.
        
             | esseph wrote:
             | May need low latency, may have several large data
             | warehouses different workloads need to connect to, etc.
             | 
             | Regulated industries, transportation and logistics
             | companies, critical industries, etc.
        
             | LelouBil wrote:
             | I am in the process of redoing all of my self-hosting
             | (cloud storage, sso, media server, and a lot more), which
             | previously was a bunch of docker compose files deployed by
             | Ansible. This quickly became unmanageable.
             | 
             | Now I almost finished the setting up part using a single-
             | node (for now) Kubernetes cluster running with Talos Linux,
             | and all of the manifest files managed with Cue lang
             | (seriously, I would have abandoned it if I had not
             | discovered Cue to generate and type check all of the yaml).
             | 
             | I think Kubernetes is the right solution for the complexity
             | of what I'm running, but even though it was a hassle to
             | manage the storage, the backups, the auth, the networking
             | and so on, I much prefer having all of this hosted at my
             | house.
             | 
             | But I agree with the control plane part, just pointing out
             | my use case for self-hosting k8s
        
           | lillecarl wrote:
           | Tinkerbell / MetalKube, ClusterAPI, Rook, Cilium?
           | 
           | A control plane makes controlling machines easier, that's the
           | point of a control plane.
        
             | esseph wrote:
             | I'm not debating the control plane, I'm responding to the
             | question about "are people running it themselves in
             | production?" (non-managed on-prem infra)
             | 
             | It can be challenging. Lots and lots of knobs.
        
         | motoboi wrote:
         | Kubernetes is not only an orchestrator but a scheduler.
         | 
          | It's a way to run arbitrary processes on a bunch of servers.
         | 
          | But what if your processes are known beforehand? Then you
          | don't need a scheduler, nor an orchestrator.
         | 
         | If it's just your web app with two containers and nothing more?
        
         | weitendorf wrote:
         | I'm working on a similar project (here's the v0 of its state
         | management and the libraries its "local control plane" will use
         | to implement a mesh https://github.com/accretional/collector)
         | and worked on the data plane for Google Cloud Run/Functions:
         | 
         | IMO kubernetes is great if your job is to fiddle with
         | Kubernetes. But damn, the overhead is insane. There is this
         | broad swathe of middle-sized tech companies and non-tech
         | Internet application providers (eg ecommerce, governments,
         | logistics, etc.) that spend a lot of their employees' time
         | operating Kubernetes clusters, and a lot of money on the
         | compute for those clusters, which they probably overprovision
         | and also overpay for through some kind of managed
         | Kubernetes/hyperscaler platform + a bunch of SaaS for things
         | like metrics and logging, container security products,
         | alerting. A lot of these guys are spending 10-40% of their
         | budget on compute, payroll, and SaaS to host CRUD applications
         | that could probably run on a small number of servers without a
         | "platform" team behind it, just a couple of developers who know
         | what they're doing.
         | 
         | Unless they're paying $$$ each of these deployments is running
         | their own control plane and dealing with all the operational
         | and cognitive overhead that entails. Most of those are running
         | in a small number of datacenters alongside a bunch of other
         | people running/managing/operating kubernetes clusters of their
         | own. It's insanely wasteful because if there were a proper
         | multitenant service mesh implementation (what I'm working on)
         | that was easy to use, everybody could share the same control
         | plane ~per datacenter and literally just consume the Kubernetes
         | APIs they actually need, the ones that let them run and
         | orchestrate/provision their application, and forget about all
         | the fucking configuration of their cluster. BTW, _that_ is how
         | Borg works, which Kubernetes was hastily cobbled-together to
         | mimic in order to capitalize on Containers Being So Hot Right
         | Now.
         | 
         | The vast majority of these Kubernetes users just want to run
         | their applications, their customers don't know or care that
         | Kubernetes is in the picture at all, and the people writing the
          | checks would LOVE to not be spending so much time and money on the
         | same platform engineering problems as every other midsize
         | company on the Internet.
         | 
         | > what is the use case or benefit of not having a control
         | plane?
         | 
         | All that is to say, it's not having to pay for a bunch of
         | control plane nodes and SaaS and a Kubernetes guy/platform
         | team. At small and medium scales, it's running a bunch of
         | container instances as long as possible without embarking on a
         | 6-24mo, $100k-$10m+ expedition to Do Kubernetes. It's not
         | having to secure some fricking VPC with a million internal
         | components and plugins/SaaS, it's not letting some cloud
         | provider own your soul, and not locking you in to something so
         | expensive you have to hire an entire internal team of
         | Kubernetes-guys to set it up.
         | 
         | All the value in the software industry comes from the actual
         | applications people are paying for. So the better you can let
         | people do that without infrastructure getting in the way, the
         | better. Making developers deal with this bullshit (or deciding
         | to have 10-30% of your developers deal with it fulltime) is
         | what gets in the way:
         | https://kubernetes.io/docs/concepts/overview/components/
        
           | JohnMakin wrote:
           | The experience you are describing has overwhelmingly not been
           | my own, nor anyone in my space I know.
           | 
           | I can only speak most recently for EKS, but the cost is
           | almost entirely compute. I'm a one-man shop managing
           | 10,000 containers. I basically only spend on the compute
           | itself, which is not all that much, and certainly far, far
           | less than hiring a sysadmin. Self-hosted _anything_ would
           | be a huge PITA for me and likely end up costing more.
           | 
           | Yes, you can avoid kubernetes and being a "slave" to cloud
           | providers, but I personally believe you're making
           | infrastructure tradeoffs in a bad way, and likely spending as
           | much in the long run anyway.
           | 
           | Maybe my disconnect here is that I mostly deal with full
           | production-scale applications, not hobby projects I am
           | hosting on my own network (nothing wrong with that, and I
           | would agree k8s is overkill for something like that).
           | 
           | Eventually though, at scale, I strongly believe you will need
           | or want a control plane of some type for your container
           | fleets, and that typically ends up looking or acting like
           | k8s.
        
         | davedx wrote:
         | > a few hundred nodes and maybe 10,000 containers, relatively
         | small
         | 
         | And that's just your CI jobs, right? ;)
        
       | fuckinpuppers wrote:
       | Does it support a way to bundle things close to each other, for
       | example, not having a database container hosted in a different
       | datacenter than the web app?
        
         | jabr wrote:
         | The `compose.yaml` spec for services lets you specify which
         | machines to deploy it on, so you could target the database
         | and web app to the same machine (or a subset of machines).
         | 
         | There is also an internal DNS for service discovery and it
         | supports a `nearest.` prefix, which will preferentially use
         | instances of a service running on the same machine. For
         | example, I run a globally replicated NATS service and then
         | connect to it from other services using the
         | `nearest.nats.internal` address to connect to the machine-local
         | NATS node.
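         | 
         | For illustration, here's a minimal compose-style sketch. The
         | `x-machines` placement key, machine name, and service names
         | are my own assumptions for the example (check the Uncloud
         | docs for the actual placement syntax); the `nearest.` /
         | `.internal` addresses work as described above:
         | 
         |     services:
         |       web:
         |         image: ghcr.io/example/web:latest
         |         # assumed placement key: pin web to the same machine
         |         # that runs the database
         |         x-machines:
         |           - machine-1
         |         environment:
         |           # `nearest.` prefers an instance on the same machine
         |           DB_HOST: nearest.db.internal
         |       db:
         |         image: postgres:17
         |         x-machines:
         |           - machine-1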
        
       | stevefan1999 wrote:
       | If not K8S, why not Nomad (https://github.com/hashicorp/nomad)?
        
         | m1keil wrote:
         | Nomad is great, but you will still end up with a control plane.
        
           | sgt wrote:
           | Isn't Nomad pretty much dead now?
        
             | m1keil wrote:
             | They had quite a few releases in the last year, so it's
             | not dead, that's for sure, but it's unclear how many new
             | customers they are able to sign up. And with IBM in
             | charge, it's also unclear at what point they will lose
             | interest.
        
         | tontony wrote:
         | Nomad still has a tangible learning curve, which (in my very
         | biased opinion) is almost non-existent with Uncloud assuming
         | the user has already heard about Docker and Compose.
        
         | weitendorf wrote:
         | Their license does not allow you to modify it and then offer it
         | as a service to others:
         | https://github.com/hashicorp/nomad/blob/main/LICENSE
         | 
         | You can't really do anything with it except work for Hashicorp
         | for free, or create a fork that nobody is allowed to use unless
         | they self-host it.
        
       | m1keil wrote:
       | Looks lovely. I'll definitely give it a try when the time
       | comes.
        
       | sergioisidoro wrote:
       | This is extremely interesting to me. I've been using Docker
       | Swarm, but there is this growing feeling of staleness. Dokku
       | feels a bit too light, K8s absolutely too heavy. This
       | proposition hits my sweet spot, especially the part where I
       | keep my existing Docker Compose declarations.
        
         | psviderski wrote:
         | Come join our cozy Discord server https://uncloud.run/discord.
         | One guy who joined recently is migrating his homelab setup
         | from Swarm right now.
         | 
         | Also another community member shared his homelab with a couple
         | dozen services migrated from Docker Compose to Uncloud:
         | https://github.com/dasunsrule32/docker-compose-configs/tree/...
        
         | josegonzalez wrote:
         | Dokku Maintainer here.
         | 
         | Would love to hear about what you think is "light" about Dokku
         | if you have some time for feedback.
         | 
         | Regardless, hope you find a tool you're happy with :)
        
       | scottydelta wrote:
       | As a happy user of coolify, what's the difference between these
       | two?
       | 
       | Even coolify lets you add as many machines as you want and
       | then manage Docker containers on all of them from one coolify
       | installation.
        
         | lillecarl wrote:
         | Vendor lock-in, Kubernetes is future proof.
        
       | throwaway77385 wrote:
       | How does this compare to something like Dokku?
        
         | Cwizard wrote:
         | Is dokku multi node?
        
           | throwaway77385 wrote:
           | It supports docker swarm, but I've never used it like that.
           | As I may need multi node in the future, I was asking the
           | question to see if it would make it 'easy' to orchestrate
           | multiple containers. The simplicity of Dokku is hard to beat,
           | however.
           | 
           | edit: Well, it would appear that the very maintainer of Dokku
           | himself replied to the parent comment. My information is
           | clearly outdated and I'd only look at this comment[0] to get
           | the proper info.
           | 
           | [0] https://news.ycombinator.com/item?id=46144275#46145919
        
           | josegonzalez wrote:
           | Dokku is multi node. It supports docker-local (single node)
           | and k3s (multi-node) as schedulers, with most features
           | implemented as expected when deploying to k3s.
        
       | apexalpha wrote:
       | Wow this looks really cool. Congratulations!
        
       | oulipo2 wrote:
       | I'm using Dokploy, would that be very similar, or quite
       | different?
        
         | psviderski wrote:
         | Uncloud is lower-level. Dokploy seems to be positioning
         | itself as a PaaS with a web UI, using Docker Swarm under the
         | hood for multi-node container management. Uncloud operates
         | at the same layer as Swarm, but with a simpler operating
         | model that's friendlier for troubleshooting, WireGuard mesh
         | networking built in, and the ability to connect nodes from
         | different clouds or locations.
         | 
         | No UI yet (it's planned), so if that's critical, Dokploy is
         | likely a better choice for now.
         | 
         | However, some unique features like building and pushing images
         | directly to your nodes without an external registry give
         | Uncloud a PaaS-like feel, just CLI-first. Really depends on
         | what you're hosting and what you're optimising for.
         | 
         | See short deploy demo:
         | https://uncloud.run/docs/guides/deployments/deploy-app
        
       | raw_anon_1111 wrote:
       | I know just enough about Kubernetes to not sound like an idiot
       | when I'm in the room and mostly I deploy Docker containers to
       | various managed services on AWS - Lambda, ECS, etc.
       | 
       | But, as a lead for implementations, I just couldn't in good
       | conscience permit something that is not an industry standard
       | _and_ not supported by my cloud provider. First, from a
       | self-interested standpoint, it looks a lot better on their
       | resumes to say "I did $x using K8s".
       | 
       | From an onboarding standpoint, just telling a new employee "we
       | use K8s - here you go" means there's nothing new to learn.
       | 
       | If you are part of the industry, just suck it up and learn
       | Kubernetes. Your future self won't regret it - coming from
       | someone who in fact has not learned K8s.
       | 
       | This is a challenge any new framework is going to have.
        
         | psviderski wrote:
         | Fair points on the career and onboarding angle. It's hard to
         | argue against "everyone knows it". But with that mentality,
         | we'd never challenge anything. COBOL was the industry standard
         | once. So were bare metal servers or fat VMs without containers.
         | Someone had to say "this is more painful than it needs to be
         | and I want to try something different because I can".
         | 
         | I know how to use k8s but I really don't enjoy it. It feels so
         | distasteful to me that it triggered me to make an attempt at
         | designing a nicer experience, because why not. I remember how
         | much fun I had trying Docker when it first came out. That
         | inspires me to at least try. It doesn't seem like the k8s
         | community is even trying unfortunately.
        
           | raw_anon_1111 wrote:
           | The enterprise equivalent of COBOL today is Java and to a
           | much lesser extent C#. Those were both championed by large
           | corporations - Sun and Microsoft.
           | 
           | The move to VMs at first, and then to the cloud, was also
           | marketed by existing companies with huge budgets, where
           | the people who made decisions had the "No one ever got
           | fired for choosing $LargeWellknownCompany that sits in the
           | upper right corner of Gartner's Magic Quadrant" mindset.
           | 
           | I love Docker. I think everyone going to EKS before they need
           | to is dumb. There are dozens of services out there that let
           | you give it a Docker container and just run it.
           | 
           | And I think that spending energy avoiding "cloud lock-in"
           | is dumb. Choose your infrastructure and build. Migrations
           | are going to be a pain at any decent scale anyway, and you
           | are optimizing for the wrong thing if you are worried
           | about lock-in.
           | 
           | As an individual, especially in today's market, it's
           | foolish (not referring to you - I mean any developer or
           | adjacent role) not to always be thinking about what keeps
           | you the most employable if the rug gets pulled out from
           | under you.
           | 
           | As a decision maker, I am held accountable for the
           | architecture when things go wrong and when the next person
           | comes along to maintain it. They are going to look at me
           | like I am crazy if I choose a non-industry-standard
           | solution just because I was too lazy to choose the
           | industry standard.
           | 
           | Again I don't mean that you are being "lazy". That's how
           | people think.
           | 
           | But if I were hiring someone - and I'm often interviewing
           | people for cloudy/DevOps type roles - why would I hire
           | someone with experience in a Docker orchestration
           | framework I've never heard of over someone who knows K8s?
           | 
           | And the final question you should ask yourself is why are you
           | really doing this?
           | 
           | Is it to scratch an itch out of passion, something that
           | you feel the world should have? If so, in all sincerity, I
           | wish you luck on your endeavor. You might get lucky like
           | Bun just did. I had effusive praise for them doing something
           | out of passion instead of as VC bait.
           | 
           | Are you doing it for financial gain? If so, you have to
           | come up with a strategy to overcome the kind of resistance
           | I've outlined.
        
         | codegeek wrote:
         | With your logic, we would never have any new innovations.
         | This is a Show HN for a product someone built. No one is
         | asking every corporation/company to replace K8s with this.
         | It is a different and simpler take on the complexity of
         | deploying containers without a control plane. I personally
         | welcome these types of posts. A lot of great products
         | started out with someone just trying to solve their own
         | problem.
        
           | raw_anon_1111 wrote:
           | In another reply, I commended him on doing it if the
           | reasoning was "because he felt like the world needed it" or
           | if it was to scratch an itch. But if he wants to make this a
           | business - that's a different beast with different challenges
           | that he needs to address.
           | 
           | I can't find it now, but I could swear I saw somewhere he
           | said he's working on this full time and living off of
           | savings. If I'm wrong he will correct me I am sure. If that's
           | the case, I assume he wants to make this a business.
           | 
           | He has to think about those objections. Would he be better
           | off first creating a known Kubernetes environment and then
           | building an easier wrapper around it, so that someone
           | could manage the K8s cluster themselves if they don't want
           | a dependency only on his product? I don't have those
           | answers.
        
       | indigodaddy wrote:
       | Very cool, so sort of like Dokku but simpler/easier to use?
       | 
       | Looks like the docs assume the management of a single cluster.
       | What if you want to manage multiple/distinct clusters from the
       | same uc client/management env?
        
         | tontony wrote:
         | > What if you want to manage multiple/distinct clusters
         | 
         | Uncloud supports multiple contexts (think: clusters) in the
         | same configuration file, or you can use separate config
         | files (via the --uncloud-config option).
         | 
         | https://uncloud.run/docs/cli-reference/uc_ctx
        
       | HisNameIsTor wrote:
       | Very nice!
       | 
       | This would have been my choice had it existed three months ago.
       | Now it feels like I learned kubernetes in vain xD
        
       | sigmonsays wrote:
       | You really need to disclose the status of this application.
       | It's ridiculous to pipe curl | bash in the actual code.
       | 
       | You're gonna burn bridges with this lack of transparency. I
       | love the intent, but the implementation is so bad that I
       | probably won't look back.
        
       | chuckadams wrote:
       | I'm pretty happy with k3s, but I'm also happy to see some
       | development happening in the space between docker compose and
       | full-blown kubernetes. The wireguard integration in particular
       | intrigues me.
        
       | raphinou wrote:
       | I'm a docker swarm user, and this is the first alternative that
       | looks interesting to me!
       | 
       | Some questions I have based on my swarm usage:
       | 
       | - do you plan to support secrets?
       | 
       | - with swarm and traefik, I can define url rewrite rules as
       | container labels. Is something equivalent available?
       | 
       | - if I deploy 2 compose 'stacks', do all containers have access
       | to all other containers, even in the other stack?
        
         | tontony wrote:
         | Secrets -- yes, it's being tracked here
         | (https://github.com/psviderski/uncloud/issues/75). Compose
         | configs are already supported and can be used to inject
         | secrets as well, but there's no encryption at rest in that
         | case, so it might not be ideal for everyone.
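         | 
         | For illustration, a minimal Compose-style sketch of
         | injecting a config file into a service (the file path,
         | names, and image are made up; how Uncloud handles each
         | Compose `configs` field is best checked against its docs):
         | 
         |     # compose.yaml (hypothetical example)
         |     configs:
         |       app_env:
         |         file: ./secrets/app.env  # plain file on disk
         | 
         |     services:
         |       web:
         |         image: ghcr.io/example/web:latest
         |         configs:
         |           - source: app_env
         |             # readable inside the container, but not
         |             # encrypted at rest
         |             target: /app/.env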
         | 
         | Regarding questions 2 and 3, the short answers are "not at the
         | moment" and "yes, for now", here's a relevant discussion that
         | touches on both points:
         | https://github.com/psviderski/uncloud/discussions/94
         | 
         | Speaking of Swarm and your experience with it: in your opinion,
         | is there anything that Swarm lacks or makes difficult, that
         | tools like Uncloud could conceptually "fix"?
        
       | wg0 wrote:
       | This looks really neat! Amazing job!
       | 
       | BTW just looking at other variations on the theme:
       | 
       | - https://dokploy.com/
       | 
       | - https://coolify.io/
       | 
       | - https://demo.kubero.dev/
       | 
       | Feel free to add more.
        
         | dewey wrote:
         | There's a good list here: https://dbohdan.com/self-hosted-paas
         | 
         | I'm always looking for new alternatives there. I recently
         | tried Coolify, but it didn't feel very polished and was
         | mostly clunky. I'm still happy with Dokku at this point, but
         | would love to have a better UI for managing databases etc.
        
           | wg0 wrote:
           | Okay, could you please share your thoughts as a user:
           | 
           | - What databases do you want to work with?
           | 
           | - What functionality do you want from such a UI?
           | 
           | - What database size are we talking about here?
           | 
           | Asking because I am tinkering with a similar idea.
        
       | DeathArrow wrote:
       | Does Uncloud have auto scaling?
        
       | vanillax wrote:
       | Kubernetes is not hard. Taking the time to read the manual seems
       | to be hard for most people.
        
       ___________________________________________________________________
       (page generated 2025-12-04 23:01 UTC)