[HN Gopher] Show HN: Unregistry - "docker push" directly to serv...
       ___________________________________________________________________
        
       Show HN: Unregistry - "docker push" directly to servers without a
       registry
        
       I got tired of the push-to-registry/pull-from-registry dance every
       time I needed to deploy a Docker image.  In certain cases, using a
       full-fledged external (or even local) registry is annoying
        overhead. And if you think about it, there's already a form of
        registry present on any of your Docker-enabled hosts -- Docker's
        own image storage.  So I built Unregistry [1], which exposes
        Docker's (containerd) image storage through a standard registry
        API. It adds a `docker pussh` command that pushes images directly
        to remote Docker daemons over SSH. It transfers only the missing
        layers, making it fast and efficient.
         
            docker pussh myapp:latest user@server
         
        Under the hood, it starts a temporary unregistry container on the
        remote host, pushes to it through an SSH tunnel, and cleans up
        when done.  I've built it as a
       byproduct while working on Uncloud [2], a tool for deploying
       containers across a network of Docker hosts, and figured it'd be
       useful as a standalone project.  Would love to hear your thoughts
       and use cases!  [1]: https://github.com/psviderski/unregistry  [2]:
       https://github.com/psviderski/uncloud
        
       Author : psviderski
       Score  : 611 points
       Date   : 2025-06-18 23:17 UTC (23 hours ago)
        
       | koakuma-chan wrote:
       | This is really cool. Do you support or plan to support docker
       | compose?
        
         | psviderski wrote:
         | Thank you! Can you please clarify what kind of support you mean
         | for docker compose?
        
           | fardo wrote:
           | I assume that he means "rather than pushing up each
           | individual container for a project, it could take something
           | like a compose file over a list of underlying containers, and
           | push them all up to the endpoint."
        
             | koakuma-chan wrote:
             | Yes, pushing all containers one by one would not be very
             | convenient.
        
               | baobun wrote:
               | The right yq|xargs invocation on your compose file should
               | get you to a oneshot.
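                | 
                | Something like this (untested sketch; it assumes every
                | service in the compose file sets an image: key, and that
                | yq is the Go/mikefarah version):
                | 
                |     yq '.services[].image' docker-compose.yml \
                |       | xargs -I{} docker pussh {} user@server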
        
               | koakuma-chan wrote:
               | I would prefer docker compose pussh or whatever
        
               | psviderski wrote:
               | That's an interesting idea. I don't think you can create
               | a subcommand/plugin for compose but creating a 'docker
               | composepussh' command that parses the compose file and
               | runs 'docker pussh' should be possible.
               | 
               | My plan is to integrate Unregistry in Uncloud as the next
               | step to make the build/deploy flow super simple and
               | smooth. Check out Uncloud (link in the original post), it
               | uses Compose as well.
        
               | djfivyvusn wrote:
               | You can wrap docker in a bash function that passes
               | through to `command docker` when it's not a compose pussh
               | command.
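                | 
                | A minimal sketch of that wrapper (compose-pussh is a
                | hypothetical helper script that would push each image
                | from the compose file):
                | 
                |     docker() {
                |       if [ "$1" = "compose" ] && [ "$2" = "pussh" ]; then
                |         shift 2
                |         compose-pussh "$@"
                |       else
                |         command docker "$@"
                |       fi
                |     }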
        
           | jillesvangurp wrote:
           | Right now, I use ssh to trigger a docker compose restart that
           | pulls all the latest images on some of my servers (we have a
           | few dedicated hosting/on premise setups). That then needs to
           | reach out to our registry to pull images. So, it's this weird
           | mix of push pull that ends up needing a central registry.
           | 
           | What would be nicer instead is some variation of docker
           | compose pussh that pushes the latest versions of local images
           | to the remote host based on the remote docker-compose.yml
            | file. The alternative would be docker pusshing the affected
            | containers one by one and then triggering a docker compose
            | restart. Automating that would be useful and probably not
            | that hard.
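            | 
            | Roughly something like this (untested; the host and remote
            | compose path are placeholders):
            | 
            |     for image in $(docker compose config --images); do
            |       docker pussh "$image" user@server
            |     done
            |     ssh user@server 'cd /srv/app && docker compose up -d'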
        
             | felbane wrote:
             | I've built a setup that orchestrates updates for any number
             | of remotes without needing a permanently hosted registry. I
             | have a container build VM at HQ that also runs a registry
             | container pointed at the local image store. Updates involve
             | connecting to remote hosts over SSH, establishing a reverse
             | tunnel, and triggering the remote hosts to pull from the
             | "localhost" registry (over the tunnel to my buildserver
             | registry).
             | 
             | The connection back to HQ only lasts as long as necessary
             | to pull the layers, tagging works as expected, etc etc.
             | It's like having an on-demand hosted registry and requires
             | no additional cruft on the remotes. I've been migrating to
             | Podman and this process works flawlessly there too, fwiw.
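              | 
              | In ssh terms it's roughly this (ports are placeholders;
              | it relies on the daemon treating 127.0.0.0/8 registries
              | as insecure by default, so no TLS is needed over the
              | tunnel):
              | 
              |     ssh -R 5000:localhost:5000 user@remote-host \
              |       'docker pull localhost:5000/myapp:latest'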
        
       | jlhawn wrote:
        | A quick and dirty version:
        | 
        |     docker -H host1 image save IMAGE | docker -H host2 image load
       | 
       | note: this isn't efficient at all (no compression or layer
       | caching)!
        
         | rgrau wrote:
          | I use a variant with ssh and some compression:
          | 
          |     docker save $image | bzip2 | ssh "$host" 'bunzip2 | docker load'
        
           | selcuka wrote:
           | If you are happy with bzip2-level compression, you could also
           | use `ssh -C` to enable automatic gzip compression.
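            | 
            | For example:
            | 
            |     docker save myapp:latest | ssh -C user@host 'docker load'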
        
         | selcuka wrote:
         | That method is actually mentioned in their README:
         | 
         | > Save/Load - `docker save | ssh | docker load` transfers the
         | entire image, even if 90% already exists on the server
        
         | alisonatwork wrote:
          | On podman this is built in as the native command podman-
          | image-scp[0], which perhaps could be made more efficient with
          | SSH compression.
         | 
         | [0] https://docs.podman.io/en/stable/markdown/podman-image-
         | scp.1...
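          | 
          | Usage is roughly (from memory, so double-check the man page;
          | the destination is a podman connection name or user@host
          | followed by ::):
          | 
          |     podman image scp myapp:latest user@server::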
        
           | travisgriggs wrote:
           | So with Podman, this exists already, but for docker, this has
           | to be created by the community.
           | 
            | I am a bystander to these technologies. I've built and
            | debugged the rare image, and I use Docker Desktop on my Mac
            | to isolate db images.
            | 
            | When I see things like these, I'm always curious why docker,
            | which seems so much more bureaucratic/convoluted, prevails
            | over podman. I totally admit this is a naive impression.
        
             | password4321 wrote:
              | > why docker, which seems so much more
              | bureaucratic/convoluted, prevails over podman
             | 
             | First mover advantage and ongoing VC-funded
             | marketing/DevRel
        
             | djfivyvusn wrote:
             | Something that took me 20 years to learn: Never
             | underestimate the value of a slick gui.
        
           | psviderski wrote:
           | Ah neat I didn't know that podman has 'image scp'. Thank you
           | for sharing. Do you think it was more straightforward to
           | implement this in podman because you can easily access its
           | images and metadata as files on the file system without
           | having to coordinate with any daemon?
           | 
           | Docker and containerd also store their images using a
           | specific file system layout and a boltdb for metadata but I
           | was afraid to access them directly. The owners and
           | coordinators are still Docker/containerd so proper locks
           | should be handled through them. As a result we become limited
           | by the API that docker/containerd daemons provide.
           | 
           | For example, Docker daemon API doesn't provide a way to get
           | or upload a particular image layer. That's why unregistry
           | uses the containerd image store, not the classic Docker image
           | store.
        
       | nothrabannosir wrote:
       | What's the difference between this and skopeo? Is it the ssh
       | support ? I'm not super familiar with skopeo forgive my ignorance
       | 
       | https://github.com/containers/skopeo
        
         | yibers wrote:
         | "skopeo" seems to related to managing registeries, very
         | different from this.
        
           | NewJazz wrote:
           | Skopeo manages images, copies them and stuff.
        
         | jlcummings wrote:
         | Skopeo lets you work with remote registries and local images
         | without a docker/podman/etc daemon.
         | 
          | We use it to 'clone' images across deployment environments and
          | across providers, outside of the build pipeline, as an ad-hoc
          | job.
        
       | s1mplicissimus wrote:
        | Very cool. Now let's integrate this such that we can do
        | `docker/podman push localimage:localtag
        | ssh://hostname:port/remoteimage:remotetag` without extra software
        | installed :)
        
         | brirec wrote:
         | I was informed that Podman at least has a `podman image scp`
         | function for doing just this...
        
           | someothherguyy wrote:
           | https://www.redhat.com/en/blog/podman-transfer-container-
           | ima...
        
       | dzonga wrote:
       | this is nice, hopefully DHH and the folks working on Kamal adopt
       | this.
       | 
        | the whole reason I didn't end up using kamal was the "need a
        | docker registry" thing, when I can easily push a dockerfile /
        | compose to my vps, build an image there, and restart to deploy
        | via a make command
        
         | rudasn wrote:
         | Build the image on the deployment server? Why not build
         | somewhere else once and save time during deployments?
         | 
         | I'm most familiar with on-prem deployments and quickly realised
         | that it's much faster to build once, push to registry (eg
         | github) and docker compose pull during deployments.
        
           | christiangenco wrote:
            | I think the idea with unregistry is that you're still
            | building somewhere else once, but then instead of pushing
            | everything to a registry you push only the unique layers
            | directly to each server you're deploying to.
        
         | psviderski wrote:
         | I don't see a reason to not adopt this in Kamal. I'm also
         | building Uncloud that took a lot of inspiration from Kamal,
         | please check it out. I will integrate unregistry into uncloud
         | soon to make the build/deploy process a breeze.
        
       | bradly wrote:
       | As a long ago fan of chef-solo, this is really cool.
       | 
        | Currently, I need to use a docker registry for my Kamal
        | deployments. Are you familiar with it, and would this remove the
        | 3rd-party dependency?
        
         | psviderski wrote:
         | Yep, I'm familiar with Kamal and it actually inspired me to
         | build Uncloud using similar principles but with more cluster-
         | like capabilities.
         | 
          | I built Unregistry for Uncloud but I believe Kamal could also
          | benefit from using it.
        
           | christiangenco wrote:
           | I think it'd be a perfect fit. We'll see what happens:
           | https://github.com/basecamp/kamal/issues/1588
        
       | nine_k wrote:
       | Nice. And the `pussh` command definitely deserves the distinction
       | of one of the most elegant puns: easy to remember, self-
       | explanatory, and just one letter away from its sister standard
       | command.
        
         | EricRiese wrote:
         | > The extra 's' is for 'sssh'
         | 
         | > What's that extra 's' for?
         | 
         | > That's a typo
        
           | causasui wrote:
           | https://www.youtube.com/watch?v=3m6Blqs0IgY
        
         | gchamonlive wrote:
         | It's fine, but it wouldn't hurt to have a more formal alias
         | like `docker push-over-ssh`.
         | 
            | EDIT: why I think it's important: in automations that are
            | developed collaboratively, "pussh" could be seen as a typo
            | by someone unfamiliar with the feature and cause unnecessary
            | confusion, whereas "push-over-ssh" is clearly deliberate.
            | Think of them maybe as short-hand/full flags.
        
           | psviderski wrote:
            | That's a valid concern. You can very easily give it whatever
            | name you like. Docker looks for `docker-COMMAND` executables
            | in the ~/.docker/cli-plugins directory, making COMMAND a
            | `docker` subcommand.
            | 
            | Rename the file to whatever you like, e.g. to get `docker
            | pushoverssh`:
            | 
            |     mv ~/.docker/cli-plugins/docker-pussh \
            |        ~/.docker/cli-plugins/docker-pushoverssh
            | 
            | Note that Docker doesn't allow dashes in plugin commands.
        
           | whalesalad wrote:
           | can easily see an engineer spotting pussh in a ci/cd workflow
           | or something and thinking "this is a mistake" and changing
           | it.
        
         | someothherguyy wrote:
         | and prone to collision!
        
           | nine_k wrote:
           | Indeed so! Because it's art, not engineering. The engineering
           | approach would require a recognizably distinct command,
           | eliminating the possibility of such a pun.
        
             | rollcat wrote:
             | I used to have an alias em=mg, because mg(1) is a small
             | Emacs, so "em" seemed like a fun name for a command.
             | 
             | Until one day I made _that_ typo.
        
               | bobbiechen wrote:
               | I'm a fan of installing sl(1), the terminal steam
               | locomotive. I mistype it every couple months and it
               | always gives me a laugh.
               | 
               | https://github.com/mtoyoda/sl
        
               | danillonunes wrote:
               | In the same spirit there's gti
               | https://r-wos.org/hacks/gti
        
       | armx40 wrote:
        | How about using docker context? I use that a lot and it works
        | nicely.
        
         | Snawoot wrote:
          | How do docker contexts help with the transfer of images
          | between hosts?
        
           | dobremeno wrote:
            | I assume OP meant something like this, building the image on
            | the remote host directly using a docker context (which is
            | different from a build context):
            | 
            |     docker context create my-awesome-remote-context \
            |       --docker "host=ssh://user@remote-host"
            |     docker --context my-awesome-remote-context build . -t my-image:latest
            | 
            | This way you end up with `my-image:latest` on the remote host
            | too. It has the advantage of not transferring the entire
            | image but only the build context. It builds the actual image
            | on the remote host.
        
             | revicon wrote:
             | This is exactly what I do, make a context pointing to the
             | remote host, use docker compose build / up to launch it on
             | the remote system.
        
       | lxe wrote:
       | Ooh this made me discover uncloud. Sounds like exactly what I was
       | looking for. I wanted something like dokku but beefier for a
       | sideproject server setup.
        
         | nodesocket wrote:
         | A recommendation for Portainer if you haven't used or
         | considered it. I'm running two EC2 instances on AWS using
          | portainer community edition and portainer agent, and it works
          | really well. The stack feature (which is just docker compose)
          | is also super nice. One EC2 instance, running the Portainer
          | agent, runs Caddy in a container which acts as the load
          | balancer and reverse proxy.
        
           | lxe wrote:
           | I'm actually running portainer for my homelab setup hosting
           | things like octoprint and omada controller etc.
        
         | vhodges wrote:
         | There is also https://skateco.github.io/ which (at quick
         | glance) seems similar
        
           | byrnedo wrote:
           | Skate author here: please try it out! I haven't gotten round
           | to diving deep into uncloud yet, but I think maybe the two
           | projects differ in that skate has no control plane; the cli
           | is the control plane.
           | 
           | I built skate out of that exact desire to have a dokku like
           | experience that was multi host and used a standard deployment
           | configuration syntax ( k8s manifests ).
           | 
           | https://skateco.github.io/docs/getting-started/
        
             | benwaffle wrote:
             | Looks like uncloud has no control plane, just a CLI:
             | https://github.com/psviderski/uncloud#-features
        
         | psviderski wrote:
         | I'm glad the idea of uncloud resonated with you. Feel free to
         | join our Discord if you have questions or need help
        
       | actinium226 wrote:
       | This is excellent. I've been doing the save/load and it works
       | fine for me, but I like the idea that this only transfers missing
       | layers.
       | 
       | FWIW I've been saving then using mscp to transfer the file. It
       | basically does multiple scp connections to speed it up and it
       | works great.
        
       | remram wrote:
        | Does it start an unregistry container on the remote/receiving
        | end or the local/sending end? I think it runs remotely. I wonder
        | if you could go the other way instead?
        
         | selcuka wrote:
         | You mean ssh'ing into the remote server, then pulling image
         | from local? That would require your local host to be accessible
         | from the remote host, or setting up some kind of ssh tunneling.
        
           | mdaniel wrote:
           | `ssh -R` and `ssh -L` are amazing, and I just learned that -L
           | and -R both support unix sockets on either end and also unix
           | socket to tcp socket https://manpages.ubuntu.com/manpages/nob
           | le/man1/ssh.1.html#:...
           | 
           | I would presume it's something akin to $(ssh -L
           | /var/run/docker.sock:/tmp/d.sock sh -c 'docker -H
           | unix:///tmp/d.sock save | docker load') type deal
        
           | matt_kantor wrote:
           | This is what docker-pushmi-pullyu[1] does, using `ssh -R` as
           | suggested by a sibling comment.
           | 
           | [1]: https://github.com/mkantor/docker-pushmi-pullyu
        
         | psviderski wrote:
          | It starts an unregistry container on the remote side. I
          | wonder, what use case did you have in mind for doing it the
          | other way around?
        
       | esafak wrote:
       | You can do these image acrobatics with the dagger shell too, but
       | I don't have enough experience with it to give you the
       | incantation: https://docs.dagger.io/features/shell/
        
         | throwaway314155 wrote:
         | I assume you can do these "image acrobatics" in any shell.
        
           | esafak wrote:
            | The dagger shell is built for devops, and can pipe first
            | class dagger objects like services and containers to enable
            | things like:
            | 
            |     github.com/dagger/dagger/modules/wolfi@v0.16.2 |
            |       container | with-exec ls /etc/ | stdout
            | 
            | What's interesting here is that the first line demonstrates
            | invocation of a remote module (building a Wolfi Linux
            | container), of which there is an ecosystem:
            | https://daggerverse.dev/
        
       | yjftsjthsd-h wrote:
       | What is the container for / what does this do that `docker save
       | some:img | ssh wherever docker load` doesn't? More efficient
       | handling of layers or something?
        
         | psviderski wrote:
          | Yeah exactly, which is crucial for large images where you only
          | change the last few layers.
         | 
         | The unregistry container provides a standard registry API you
         | can pull images from as well. This could be useful in a cluster
         | environment where you upload an image over ssh to one node and
         | then pull it from there to other nodes.
         | 
          | This is what I'm planning to implement for Uncloud. Unregistry
          | is so lightweight that we can embed it in every machine
          | daemon. This will allow machines in the cluster to pull images
          | from each other.
        
         | pragmatick wrote:
         | Relatively early on the page it says:
         | 
         | "docker save | ssh | docker load transfers the entire image,
         | even if 90% already exists on the server"
        
       | metadat wrote:
       | This should have always been a thing! Brilliant.
       | 
       | Docker registries have their place but are overall over-
       | engineered and an antithesis to the hacker mentality.
        
         | password4321 wrote:
         | As a VC-funded company Docker had to make money somehow.
        
         | dreis_sw wrote:
         | I recommend using GitHub's registry, ghcr.io, with GitHub
         | Actions.
         | 
          | I invested just 20 minutes to set up a YAML workflow that
          | builds and pushes an image to my private registry on ghcr.io,
          | and 5 minutes to allow my server to pull images from it.
         | 
         | It's a very practical setup.
        
         | ezekg wrote:
         | I think the complexity lies in the dance required to push blobs
         | to the registry. I've built an OCI-compliant pull-only registry
         | before and it wasn't that complicated.
        
       | scott113341 wrote:
       | Neat project and approach! I got fed up with expensive registries
       | and ended up self-hosting Zot [1], but this seems way easier for
       | some use cases. Does anyone else wish there was an easy-to-
       | configure, cheap & usage-based, private registry service?
       | 
       | [1]: https://zotregistry.dev
        
         | stroebs wrote:
         | Your SSL certificate for zothub.io has expired in case you
         | weren't aware.
        
       | isaacvando wrote:
       | Love it!
        
       | layoric wrote:
        | I'm so glad there are tools like this that swing back to self-
        | hosted solutions, especially ones leveraging SSH tooling. Well
        | done and thanks for sharing, will definitely be giving it a
        | spin.
        
       | alisonatwork wrote:
       | This is a cool idea that seems like it would integrate well with
       | systems already using push deploy tooling like Ansible. It also
       | seems like it would work as a good hotfix deployment mechanism at
       | companies where the Docker registry doesn't have 24/7 support.
       | 
        | Does it integrate cleanly with OCI tooling like buildah etc., or
        | do you need a full-blown Docker install on both ends? I
       | haven't dug deeply into this yet because it's related to some
       | upcoming work, but it seems like bootstrapping a mini registry on
       | the remote server is the missing piece for skopeo to be able to
       | work for this kind of setup.
        
         | 0x457 wrote:
          | It needs a docker daemon on both ends. This is just a clever
          | way to share layers between two daemons via ssh.
        
         | psviderski wrote:
          | You need containerd on the remote end (Docker and Kubernetes
         | use containerd) and anything that speaks registry API (OCI
         | Distribution spec:
         | https://github.com/opencontainers/distribution-spec) on the
         | client. Unregistry reuses the official Docker registry code for
         | the API layer so it looks and feels like
         | https://hub.docker.com/_/registry
         | 
         | You can use skopeo, crane, regclient, BuildKit, anything that
         | speaks OCI-registry on the client. Although you will need to
         | manually run unregistry on the remote host to use them. 'docker
         | pussh' command just automates the workflow using the local
         | Docker.
         | 
         | Just check it out, it's a bash script:
         | https://github.com/psviderski/unregistry/blob/main/docker-pu...
         | 
         | You can hack your own way pretty easily.
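          | 
          | For example, something along these lines should work (a
          | sketch only: the port and the assumption that unregistry is
          | already running on the server are mine, not from the README):
          | 
          |     ssh -N -L 5000:localhost:5000 user@server &
          |     skopeo copy --dest-tls-verify=false \
          |       docker-daemon:myapp:latest \
          |       docker://localhost:5000/myapp:latest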
        
         | dirkc wrote:
         | I agree! For a bunch of services I manage I build the image
         | locally, save it and then use ansible to upload the archive and
         | restore the image. This usually takes a lot longer than I want
         | it to!
        
       | modeless wrote:
       | It's very silly that Docker didn't work this way to start with.
       | Thank you, it looks cool!
        
         | TheRoque wrote:
          | You can already achieve the same thing by making your image
          | into an archive, pushing it to your server, and then loading
          | and running it there.
          | 
          | Saving as an archive looks like this:
          | `docker save -o my-app.tar my-app:latest`
          | 
          | And loading it looks like this:
          | `docker load -i /path/to/my-app.tar`
          | 
          | Using a tool like ansible, you can easily achieve what
          | "Unregistry" is doing automatically. According to the github
          | repo, save/load has the drawback of transferring the whole
          | image over the network, which could be an issue, that's true.
          | And managing images instead of archive files seems more
          | convenient.
        
           | nine_k wrote:
           | If you have an image with 100MB worth of bottom layers, and
           | only change the tiny top layer, the unregistry will only send
           | the top layer, while save / load would send the whole 100MB+.
           | 
           | Hence the value.
        
             | isoprophlex wrote:
              | yeah i deal with horrible, bloated python machine learning
              | shit; >1 GB images are nothing. this is excellent, and i
              | never knew how much i needed this tool until now.
        
             | throwaway290 wrote:
              | Docker also has export/import commands, which only export
              | the container's current filesystem, flattened into a
              | single layer.
        
           | authorfly wrote:
           | Good advice and beware the difference between docker export
           | (which will fail if you lack enough storage, since it saves
           | volumes) and docker save. Running the wrong command might
           | knock out your only running docker server into an
           | unrecoverable state...
        
       | fellatio wrote:
        | Neat idea. This probably has the disadvantage of coupling
        | deployment to a service. For example, how do you scale up or do
        | red/green deployments? (You'd need the thing that does this to
        | be aware of the push.)
       | 
       | Edit: that thing exists it is uncloud. Just found out!
       | 
       | That said it's a tradeoff. If you are small, have one Hetzner VM
       | and are happy with simplicity (and don't mind building images
       | locally) it is great.
        
         | psviderski wrote:
         | For sure, it's always a tradeoff and it's great to have options
         | so you can choose the best tool for every job.
        
       | mountainriver wrote:
       | I've wanted unregistry for a long time, thanks so much for the
       | awesome work!
        
         | psviderski wrote:
          | Me too, you're welcome! Please create an issue on github if
          | you find any bugs.
        
       | jokethrowaway wrote:
       | Very nice! I used to run a private registry on the same server to
       | achieve this - then I moved to building the image on the server
       | itself.
       | 
       | Both approaches are inferior to yours because of the load on the
       | server (one way or another).
       | 
       | Personally, I feel like we need to go one step further and just
       | build locally, merge all layers, ship a tar of the entire (micro)
       | distro + app and run it with lxc. Get rid of docker entirely.
       | 
        | My images are tiny, so the extra complexity is unwarranted.
        | 
        | Then again, of course, I'm not a 1000-person company with 1GB
        | docker images.
        
       | quantadev wrote:
       | I always just use "docker save" to generate a TAR file, then copy
       | the TAR file to the server, and then run "docker load" (on the
       | server) to install the TAR file on the target machine.
        
       | cultureulterior wrote:
       | This is super slick. I really wish there was something that did
       | the same, but using torrent protocol, so all your servers shared
       | it.
        
         | psviderski wrote:
         | Not a torrent protocol but p2p, check out
         | https://github.com/spegel-org/spegel it's super cool.
         | 
         | I took inspiration from spegel but built a more focused
         | solution to make a registry out of a Docker/containerd daemon.
         | A lot of other cool stuff and workflows can be built on top of
         | it.
        
       | czhu12 wrote:
       | Does this work with Kubernetes image pulls?
        
         | psviderski wrote:
         | I guess you're asking about the registry part (not 'pussh'
          | command). It exposes the containerd image store as a standard
          | registry API, so you can use any tools that work with a
          | regular registry to pull/push images to it.
         | 
         | You should be able to run unregistry as a standalone service on
         | one of the nodes. Kubernetes uses containerd for storing images
         | on nodes. So unregistry will expose the node's images as a
         | registry. Then you should be able to run k8s deployments using
         | 'unregistry.NAMESPACE:5000/image-name:tag' image. kubelets on
         | other nodes will be pulling the image from unregistry.
         | 
         | You may want to take a look at https://spegel.dev/ which works
         | similarly but was created specifically for Kubernetes.
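          | 
          | A rough sketch of that last step (the service name, namespace
          | and port are placeholders, and the registry needs to be
          | reachable and trusted by each node's container runtime):
          | 
          |     kubectl create deployment myapp \
          |       --image=unregistry.unregistry-ns:5000/myapp:latest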
        
       | MotiBanana wrote:
       | I've been using ttl.sh for a long time, but only for public,
       | temporary code. This is a really cool idea!
        
         | psviderski wrote:
         | Wow ttl.sh is a really neat idea, thank you for sharing!
        
       | politelemon wrote:
       | Considering the nature of servers, security boundaries and
       | hardening,
       | 
       | > Linux via Homebrew
       | 
       | Please don't encourage this on Linux. It happens to offer a Linux
       | setup as an afterthought but behaves like a pigeon on a
       | chessboard rather than a package manager.
        
         | djfivyvusn wrote:
         | Brew is such a cute little package manager. Updating its repo
         | every time you install something. Randomly self updating like a
         | virus.
        
           | v5v3 wrote:
           | That made me laugh lol
        
         | yrro wrote:
         | Well put, but it's a shame this comment is the first thing I
         | read, rather than comments about the tool itself!
        
         | cyberax wrote:
          | We're using it to distribute internal tools across macOS and
          | Linux developers. It excels at this.
         | 
         | Are there any good alternatives?
        
           | carlhjerpe wrote:
           | 100% Nix, it works on every distro, MacOS, WSL2 and won't
           | pollute your system (it'll create /nix and patch your bashrc
           | on installation and everything from there on goes into /nix).
        
             | cyberax wrote:
             | Downside: it's Nix.
             | 
             | I tried it, but I have not been able to easily replicate
             | our Homebrew env. We have a private repo with pre-compiled
             | binaries, and a simple Homebrew formula that downloads the
             | utilities and installs them. Compiling the binaries
             | requires quite a few tools (C++, sigh).
             | 
             | I got stuck at the point where I needed to use a private
             | repo in Nix.
        
               | lloeki wrote:
               | > We have a private repo with pre-compiled binaries, and
               | a simple Homebrew formula that downloads the utilities
               | and installs them.
               | 
                | Perfectly doable with Nix. Ignore the purists and do the
                | hackiest way that works. It's too bad that tutorials get
                | lost in concepts (which are useful to know but a real
                | turn-off) instead of focusing on some hands-on practical
                | how-to.
                | 
                | This should about do it, and it's really not that
                | different from (nor more difficult than) formulas or
                | brew install:
                | 
                |     git init mychannel
                |     cd mychannel
                | 
                |     cat > default.nix <<'NIX'
                |     {
                |       pkgs ? import <nixpkgs> { },
                |     }:
                | 
                |     {
                |       foo = pkgs.callPackage ./pkgs/foo { };
                |     }
                |     NIX
                | 
                |     mkdir -p pkgs/foo
                |     cat > pkgs/foo/default.nix <<'NIX'
                |     { pkgs, stdenv, lib }:
                | 
                |     stdenv.mkDerivation {
                |       pname = "foo";
                |       version = "1.0";
                | 
                |       # if you have something to fetch
                |       # src = fetchurl {
                |       #   url = http://example.org/foo-1.2.3.tar.bz2;
                |       #   # if you don't know the hash, put some lib.fakeSha256 there
                |       #   sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
                |       # };
                | 
                |       buildInputs = [
                |         # add any deps
                |       ];
                | 
                |       # this example just builds in place, so skip unpack
                |       unpackPhase = "true"; # no src attribute
                | 
                |       # optional if you just want to copy from your source above
                |       # build trivial example script in place
                |       buildPhase = ''
                |         cat > foo <<'SHELL'
                |         #!/bin/bash
                |         echo 'foo!'
                |         SHELL
                |         chmod +x foo
                |       '';
                | 
                |       # just copy whatever
                |       installPhase = ''
                |         mkdir -p $out/bin
                |         cp foo $out/bin
                |       '';
                |     }
                |     NIX
                | 
                |     nix-build -A foo -o out/foo  # you should have your build in ./out/foo
                |     ./out/foo/bin/foo  # => foo!
                | 
                |     git add .
                |     git commit -a -m 'init channel'
                |     git remote add origin git@github.com:OWNER/mychannel
                |     git push origin main
                | 
                |     nix-channel --add \
                |       https://github.com/OWNER/mychannel/archive/main.tar.gz \
                |       mychannel
                |     nix-channel --update
                |     nix-env -iA mychannel.foo
                |     foo  # => foo!
                | 
                | (I just cobbled that together; if it doesn't work as is
                | it's damn close. Flakes are left as an exercise to the
                | reader.)
                | 
                | Note: if it's a private repo, then in /etc/nix/netrc (or
                | ~/.config/nix/netrc for single-user installs):
                | 
                |     machine github.com
                |     password ghp_YOurToKEn
               | 
               | > Compiling the binaries requires quite a few tools (C++,
               | sigh).
               | 
               | Instantly sounds like a whole reason to use nix and
               | capture those tools as part of the dependency set.
        
               | cyberax wrote:
               | Hm. That actually sounds doable (we do have hashes for
               | integrity). I'll try that and see how it goes.
               | 
               | > Instantly sounds like a whole reason to use nix and
               | capture those tools as part of the dependency set.
               | 
               | It's tempting, and I tried that, but ran away crying.
               | We're using Docker images instead for now.
               | 
               | We are also using direnv that transparently execs
               | commands inside Docker containers, this works
               | surprisingly well.
        
       | peyloride wrote:
       | This is awesome, thanks!
        
       | larsnystrom wrote:
       | Nice to only have to push the layers that changed. For me it's
       | been enough to just do "docker save my-image | ssh host 'docker
       | load'" but I don't push images very often so for me it's fine to
       | push all layers every time.
        
       | iw7tdb2kqo9 wrote:
        | I think it will be a good fit for me. Currently our 3GB docker
        | image takes a lot of time to push to the GitHub package registry
        | from GitHub Actions and to pull from EC2.
        
       | bflesch wrote:
       | this is useful. thanks for sharing
        
       | amne wrote:
       | _Takes a look at pipeline that builds image in gitlab, pushes to
       | artifactory, triggers deployment that pulls from artifactory and
       | pushes to AWS ECR, then updates deployment template in EKS which
       | pulls from ECR to node and boots pod container._
       | 
       | I need this in my life.
        
         | maccard wrote:
          | My last project's pipeline spent more time pulling and pushing
          | containers than it did actually building the app. All of that
          | was dwarfed by the health check waiting period, even though we
          | knew within a second of startup whether we were actually
          | healthy or not.
        
       | victorbjorklund wrote:
        | Sweet. I've been wanting this for a long time.
        
       | alibarber wrote:
       | This is timely for me!
       | 
       | I personally run a small instance with Hetzner that has K3s
       | running. I'm quite familiar with K8s from my day job so it is
       | nice when I want to do a personal project to be able to just use
       | similar tools.
       | 
       | I have a Macbook and, for some reason I really dislike the idea
       | of running docker (or podman, etc) on it. Now of course I could
       | have GitHub actions building the project and pushing it to a
       | registry, then pull that to the server, but it's another step
       | between code and server that I wanted to avoid.
       | 
       | Fortunately, it's trivial to sync the code to a pod over kubectl,
       | and have podman build it there - but the registry (the step from
       | pod to cluster) was the missing step, and it infuriated me that
       | even with save/load, so much was going to be duplicated, on the
       | same effective VM. I'll need to give this a try, and it's
       | inspired me to create some dev automation and share it.
       | 
       | Of course, this is all overkill for hobby apps, but it's a hobby
       | and I can do it the way I like, and it's nice to see others also
       | coming up with interesting approaches.
        
       | rcarmo wrote:
       | I think this is great and have long wondered why it wasn't an out
       | of the box feature in Docker itself.
        
       | spwa4 wrote:
       | THANK you. Can you do the same for kubernetes somehow?
        
         | tontony wrote:
         | A few thoughts/ideas on using this in Kubernetes are discussed
         | in this issue:
         | https://github.com/psviderski/unregistry/issues/4; generally,
         | should be possible with the same idea, but with some tweaking.
         | 
         | Also have a look at https://spegel.dev/, it's basically a
         | daemonset running in your k8s cluster that implements a
         | (mirror) registry using locally cached images and peer-to-peer
         | communication.
        
       | jdsleppy wrote:
       | I've been very happy doing this:
       | 
       | DOCKER_HOST="ssh://user@remotehost" docker-compose up -d
       | 
       | It works with plain docker, too. Another user is getting at the
       | same idea when they mention docker contexts, which is just a
       | different way to set the variable.
       | 
        | Did you know about this approach? In the snippet above, the
        | image will be built on the remote machine and then run. The
        | build context (files) is sent over the wire as needed.
        | Subsequent runs will use the remote machine's docker cache. It's
        | slightly different from your approach of building locally, but
        | much simpler.
        
         | kosolam wrote:
          | This approach is akin to the prod server pulling an image from
          | a registry. The OP's method is push-based.
        
           | jdsleppy wrote:
           | No, in my example the docker-compose.yml would exist
           | alongside your application's source code and you can use the
           | `build` directive https://docs.docker.com/reference/compose-
           | file/services/#bui... to instruct the remote host (Hetzner
           | VPS, or whatever else) to build the image. That image does
           | not go to an external registry, but is used internal to that
           | remote host.
           | 
           | For 3rd party images like `postgres`, etc., then yes it will
           | pull those from DockerHub or the registry you configure.
           | 
           | But in this method you push the source code, not a finished
           | docker image, to the server.
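            | 
            | Concretely, something like (host is a placeholder):
            | 
            |     DOCKER_HOST="ssh://user@remotehost" \
            |       docker compose up -d --build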
        
             | quantadev wrote:
             | Seems like it makes more sense to build on the build
             | machine, and then just copy images out to PROD servers.
             | Having source code on PROD servers is generally considered
             | bad practice.
        
               | jdsleppy wrote:
               | The source code does not get to the filesystem on the
               | prod server. It is sent to the Docker daemon when it
               | builds the image. After the build ends, there's only the
               | image on the prod server.
               | 
               | I am now convinced that this is a hidden docker feature
               | that too many people aren't aware of and do not
               | understand.
        
       | hoppp wrote:
        | Oh this is great, it's a problem I also have.
        
       | ajd555 wrote:
        | This is great! I wonder how well it works in case of disaster
        | recovery, though. Perhaps it is not intended for production
        | environments with strict SLAs and uptime requirements, but if
        | you have 20 servers in a cluster that you're migrating to
        | another region or even cloud provider, pulling from a registry
        | seems like the safest and most scalable approach.
        
       | Aaargh20318 wrote:
       | I simply use "docker save <imagename>:<version> | ssh
       | <remoteserver> docker load"
        
       | dboreham wrote:
       | I like the idea, but I'd want this functionality "unbundled".
       | 
       | Being able to run a registry server over the local containerd
       | image store is great.
       | 
        | The details of how some other machine's containerd gets images
        | from that registry are, to me, a separate concern. docker pull
        | will work just fine provided it is given a suitable registry URL
        | and credentials. There are many ways to provide the necessary
        | network connectivity and credential sharing, so I don't want
        | that aspect to be baked in.
       | 
       | Very slick though.
        
       | matt_kantor wrote:
       | Functionality-wise this is a lot like docker-pushmi-pullyu[1]
       | (which I wrote), except docker-pushmi-pullyu is a single
       | relatively-simple shell script, and uses the official registry
       | image[2] rather than a custom server implementation.
       | 
       | @psviderski I'm curious why you implemented your own registry for
       | this, was it just to keep the image as small as possible?
       | 
       | [1]: https://github.com/mkantor/docker-pushmi-pullyu
       | 
       | [2]: https://hub.docker.com/_/registry
        
         | matt_kantor wrote:
         | After taking a closer look it seems the main conceptual
         | difference between unregistry/docker-pussh and docker-pushmi-
         | pullyu is that the former runs the temporary registry on the
         | remote host, while the latter runs it locally. Although in both
         | cases this is not something users should typically have to care
         | about.
        
         | westurner wrote:
         | Do docker-pussh or docker-pushmi-pullyu verify container image
         | signatures and attestations?
         | 
         | From "About Docker Content Trust (DCT)"
         | https://docs.docker.com/engine/security/trust/ :
          | > Image consumers can enable DCT to ensure that images they
          | use were signed. If a consumer enables DCT, they can only
          | pull, run, or build with trusted images.
          | 
          |     export DOCKER_CONTENT_TRUST=1
         | 
         | cosign > verifying containers > verify attestation:
         | https://docs.sigstore.dev/cosign/verifying/verify/#verify-at...
         | 
         | /? difference between docker content trust dct and cosign:
         | https://www.google.com/search?q=difference+between+docker+co...
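          | 
          | For reference, a manual cosign check looks roughly like this
          | (key path and image name are placeholders):
          | 
          |     cosign verify --key cosign.pub \
          |       registry.example.com/myapp:latest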
        
       | revicon wrote:
       | Is this different from using a remote docker context?
       | 
       | My workflow in my homelab is to create a remote docker context
       | like this...
       | 
       | (from my local development machine)
       | 
       | > docker context create mylinuxserver --docker
       | "host=ssh://revicon@192.168.50.70"
       | 
       | Then I can do...
       | 
       | > docker context use mylinuxserver
       | 
       | > docker compose build
       | 
       | > docker compose up -d
       | 
       | And all the images contained in my docker-compose.yml file are
       | built, deployed and running in my remote linux server.
       | 
        | No fuss, no registry, no extra applications needed.
       | 
       | Way simpler than using docker swarm, Kubernetes or whatever.
       | Maybe I'm missing something that @psviderski is doing that I
       | don't get with my method.
        
         | matt_kantor wrote:
         | Assuming I understand your workflow, one difference is that
         | unregistry works with already-built images. They aren't built
         | on the remote host, just pushed there. This means you can be
         | confident that the image on your server is exactly the same as
         | the one you tested locally, and also will typically be much
         | faster (assuming well-structured Dockerfiles with small layers,
         | etc).
        
           | pbh101 wrote:
           | This is probably an anti-feature in most contexts.
        
             | akovaski wrote:
             | The ability to push a verified artifact is an anti-feature
             | in most contexts? How so?
        
         | tontony wrote:
         | Totally valid approach if that works for you, the docker
         | context feature is indeed nice.
         | 
         | But if we're talking about hosts that run production-like
         | workloads, using them to perform potentially cpu-/io-intensive
         | build processes might be undesirable. A dedicated build host
         | and context can help mitigate this, but then you again face the
         | challenge of transferring the built images to the production
         | machine, that's where the unregistry approach should help.
        
       | richardc323 wrote:
       | I naively sent the Docker developers a PR[1] to add this
       | functionality into mainline Docker back in 2015. I was rapidly
       | redirected into helping out in other areas - not having to use a
       | registry undermined their business model too much I guess.
       | 
       | [1]: https://github.com/richardcrichardc/docker2docker
        
       ___________________________________________________________________
       (page generated 2025-06-19 23:00 UTC)