[HN Gopher] Runtipi: Docker-based home server management
___________________________________________________________________
Runtipi: Docker-based home server management
Author : ensocode
Score : 105 points
Date : 2024-04-04 15:02 UTC (7 hours ago)
(HTM) web link (runtipi.io)
(TXT) w3m dump (runtipi.io)
| razerbeans wrote:
| Anyone have any experience using this? I've been managing most of
| my homelab infrastructure with a combination of saltstack and
| docker compose files and I'm curious how this would stack up.
| eightysixfour wrote:
| I used to run it and generally liked it, but eventually felt
| limited in the things I could do. At the time it was a hassle
| to run non-tipi apps behind the traefik instance and eventually
| I wanted SSO.
|
| I ended up in a similar place with proxmox, docker-compose, and
| portainer but I have it on my backlog to try a competitor,
| Cosmos, which says many of the things I want to hear. User
| auth, bring your own apps and docker configs, etc.
|
| https://github.com/azukaar/Cosmos-Server/
| pjerem wrote:
 | I tried it for a few months and it was nice. But I think it
 | lacks a way to configure mount points for the apps' storage.
|
 | By default, each app has its own storage folder, which isn't
 | really a useful default for a home lab: you probably want, say,
 | Syncthing, Nextcloud, and Transmission to be able to access the
 | same folders.
|
 | It's doable, but you have to edit the yaml files yourself, which
 | I thought removed much of the project's appeal.
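 | The manual yaml edit described above might look roughly like
 | this in plain Compose (images and paths are illustrative, not
 | Runtipi's actual layout):

```yaml
# Hypothetical compose fragment: two apps sharing one host folder,
# the kind of edit Runtipi currently makes you do by hand.
services:
  syncthing:
    image: syncthing/syncthing
    volumes:
      - /srv/media:/data          # shared host folder
  transmission:
    image: linuxserver/transmission
    volumes:
      - /srv/media:/downloads     # same folder, different mount point
```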
| sanex wrote:
 | Ran into the same problem with Umbrel. Wanted to use
 | PhotoPrism for a gallery but Nextcloud and Syncthing to
 | back up the photos. It was easier to just manage the
 | containers myself.
| filmgirlcw wrote:
| I've been evaluating it alongside Cosmos and Umbrel, in
| addition to tools I've used before like CapRover. I like it but
| I don't have any strong feelings yet. I will probably do some
| sort of writeup after I do more evaluations and tests and play
| with more things but I haven't had the time to dedicate to it.
|
 | If you're already familiar with setting things up with
 | Salt/Ansible/whatever and Docker compose, you might not need
| something like this -- especially if you're already using a
| dashboard like Dashy or whatever.
|
| The biggest thing is that these types of tools make it a lot
| easier to set things up -- there are inherent security risks
| too if you don't know what you are doing, though I argue this
| is a great way to learn (and it isn't a guarantee that simply
| knowing how to use Salt or Ansible or another infrastructure as
| code tool will mean any of the stuff you deploy is any more
| secure) and a good entryway for people who don't want to do the
| Synology thing.
|
| I like these sorts of projects because even though I can do
| most of the stuff manually (I'd obv. automate using Ansible or
| something), I often don't want to if I just want to play with
| stuff on a box and the app store nature of these things is
| often preferable to finding the right docker image (or
| modifying it) for something. I'm lazy and I like turnkey
| solutions.
| ziofill wrote:
| I've been using it for a few months on a raspberry pi 4. I
| installed (via runtipi) pi hole for dns, Netdata for monitoring
| and Tailscale for remote access. Works great for me and my
| family, I can stream videos from jellyfin for my kids when we
| are on the go and all the family devices use pihole.
|
| I tried to do things myself in the past but this is so much
| easier if you don't have particular needs.
| apitman wrote:
| > At its core, Tipi is designed to be easy to use and accessible
| for everyone
|
| > This guide will help you install Tipi on your server. Make sure
| your server is secured and you have followed basic security
| practices before installing Tipi. (e.g. firewall, ssh keys, root
| access, etc.)
|
| I love to see efforts like this, please keep it up.
|
| But expecting users to learn everything necessary to run a secure
| server is simply not going to achieve the stated goal of being
| accessible to everyone.
|
| We need something like an app that you can install on your laptop
| from the Windows store, with a quick OAuth flow to handle all
| networking through a service like Cloudflare Tunnel, and
| automatic updates and backups of all apps.
| AdrienPoupa wrote:
 | I'd argue the easiest way to achieve this is to refrain from
 | opening any ports and use Tailscale for remote access.
| elashri wrote:
 | I doubt that would be easy at the level of accessibility
 | the GP suggests. It would be easy to have integrated
 | firewall management that just exposes ports 443/80 for the
 | reverse proxy and handles communication with the Docker
 | networks. It could also help set up a VPN server and
 | disallow access to the server except via approved clients.
 |
 | Someone suggested Cosmos in the comments. I think that is
 | the closest to what I am describing. However, I have been
 | into self-hosting for a couple of years now and have
 | development experience, so I am biased. It would probably be
 | different for the average person without deep knowledge.
| AdrienPoupa wrote:
 | But then, your firewall or Cosmos is exposed to the
 | internet waiting for a 0day to be released, and chances
 | are they will not be updated as soon as it comes out.
|
 | A VPN server is already what Tailscale does at this point.
 | I'm not a shill, by the way -- just a regular user impressed
 | by the ease of installation/use of their product.
| Cieric wrote:
 | Does anyone have any recommendations on top of this? I personally
 | run Portainer and would like more features, like grouping
 | containers post-creation and controlling container start order. I
 | also have an issue where, if my VPN container is updated, it
 | breaks all containers that depend on it. Portainer handles a lot,
 | but I need a little bit more so that I have to look at the panel
 | less. I'm not sure if this would work for me, since I build a lot
 | of custom containers and this looks better suited to purpose-built
 | containers.
| exabyte wrote:
 | Umbrel, Citadel, Start9, MASH playbook.
|
| Sorry, on mobile right now, but these are great alternative
| projects
| BirAdam wrote:
| Personal opinion, a home lab is the perfect place to learn how to
| actually configure things and properly set them up: no docker, no
| ansible, no salt... take off the training wheels and learn it.
| Then, learn to write your own playbooks, your own compose files,
| etc.
|
| Additionally, if people think that learning how to configure and
| deploy stuff is too tedious and/or too difficult, write software
| that has better UI, not more layers of configuration and
| administration.
|
| Final thought, git is better than ansible/salt/chef/puppet, and
| containers are silly.
| apitman wrote:
| I think this is good advice for technical people, but I also
| think we need to drastically lower the barrier of entry for
| people who would benefit from owning their compute and data but
| don't have the skills or interest necessary.
| filmgirlcw wrote:
| I agree we need to do that, but I would argue at this point,
| the best option for the people without the skills/interest is
| probably buying a Synology. It would be awesome if there was
| a free OSS solution that people could just follow
| instructions and deploy on an inexpensive miniPC and not have
| to worry about anything and have auto-updates and zero
| problems, but I don't know if that is realistic. If we do
| ever see it, I feel confident it won't be free.
|
| I see tools like this as a really good middle ground between
| piecing all the different parts together and managing
| runbooks and whatnot and buying an off-the-shelf appliance
| like a Synology or other consumer/prosumer/SMB NAS.
| skydhash wrote:
 | And that's basically defining protocols and letting people build
 | interfaces. But that goes against many companies' objectives.
| Everyone is trying to move you to their database and cloud
| computing. And most people prefer convenience over ownership
| or privacy. Installing Gitea on a VPS is basically a walk in
| the park. Not as easy as clicking a button in cPanel, but I
| think some things should be learned and not everything has to
| be "magic". You don't need to be a mechanic to drive a car,
| but some knowledge is required when someone is overselling
| engine oil.
| planb wrote:
 | Don't do this to learn if you actually want to use the servers
 | for important data and expose them to the internet. You can
 | shoot yourself in the foot with Docker containers, but you can
 | shoot yourself in the face by blindly installing dozens of
 | packages and their dependencies on a single box.
| eternauta3k wrote:
| You can keep them separate with Docker. I'm not sure what the
| workflow looks like, however: you make a simple image with
| just (say) alpine + apache, you run a shell there, set
| everything up, and when it works you basically try to
| replicate all the things you did manually in the Dockerfile?
| So in the end, you have to do everything twice?
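 | The do-it-twice workflow described above might end up as a
 | Dockerfile like this (packages and paths are illustrative):

```dockerfile
# Steps first worked out interactively in `docker run -it alpine sh`,
# then replayed here so the image is reproducible.
FROM alpine:3.19
RUN apk add --no-cache apache2
COPY httpd.conf /etc/apache2/httpd.conf
EXPOSE 80
CMD ["httpd", "-D", "FOREGROUND"]
```

 | Tools like `docker commit` can snapshot the interactive
 | container instead, but a Dockerfile keeps the recipe reviewable.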
| eddd-ddde wrote:
 | I always just use a systemd or OpenRC service to run my stuff --
 | pretty much a shell script and voila. As long as you know to
 | set proper firewall rules and run things as non-root, you're
 | pretty much ready.
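 | A minimal unit in that style might look like this (names and
 | paths are hypothetical):

```ini
# /etc/systemd/system/myapp.service -- run one service as an
# unprivileged user, with systemd handling restarts.
[Unit]
Description=My self-hosted app
After=network.target

[Service]
User=myapp
ExecStart=/usr/local/bin/myapp.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```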
| watermelon0 wrote:
| Not only does using Docker avoid dependency hell (which is
| IMO the biggest problem when running different software on a
| single machine), it has good sandboxing capabilities (out-of-
| the-box, no messing with systemd-nspawn) and UX that's miles
| ahead of systemd.
| whalesalad wrote:
| umm, homelab is also the perfect place to play with docker,
| ansible, kube, etc. that's the whole point.
|
| git and ansible are not mutually exclusive. containers are not
| silly.
|
| you'll come around some day
| alex_lav wrote:
| In my experience, people that disparage containerization are
| either forgetting or weren't around to know about how awful
| bespoke and project specific build and deploy practices
| actually were. Not that docker is perfect, but the concept of
| containerization is so much better than the alternative it's
| actually kind of insane.
| mplewis wrote:
| Exactly. If containerization solves one thing, it's
| installing two pieces of software that e.g. want a
| different version of system Python installed. I love Debian
| but it won't help you much here.
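 | As a sketch of that point: two services pinned to different
 | Python base images coexist on one host without ever touching
 | the system Python (tags are illustrative):

```yaml
services:
  legacy-app:
    image: python:3.8-slim        # old interpreter, isolated
    command: python -m http.server 8000
  modern-app:
    image: python:3.12-slim       # new interpreter, same host
    command: python -m http.server 8001
```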
| evantbyrne wrote:
 | Docker has come a long way. There was a time when the
 | documentation struggled to coherently describe what images
 | were, and other tools were good enough to not bother. Today
 | there are images for most things, the documentation is
 | clear, and support is widespread. But being able to set up
 | a server on a Linux box is still a basic skill that any web
 | administrator should know. People make the mistake of
 | thinking that because they use Docker, they should use
 | proprietary cloud hosting. My controversial belief is that
 | the right solution for at least 90% of web services is just
 | slapping docker-compose on a VM, running SQLite in WAL mode
 | backed up by Litestream, and maybe syncing logs with Vector
 | if you want to get real crazy. Run a production web service
 | for $30/mo without cloud lock-in or BS. It's kind of funny
 | that building a CD pipeline for AWS is what made me
 | disillusioned with the whole thing. But yeah, Docker is
 | good.
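 | That "$30/mo stack" might sketch out like this in Compose
 | (the app image and replica URL are placeholders):

```yaml
services:
  app:
    image: ghcr.io/example/webapp   # hypothetical app, SQLite in WAL mode
    volumes:
      - data:/data
  litestream:
    image: litestream/litestream
    command: replicate /data/app.db s3://my-bucket/app.db
    volumes:
      - data:/data
volumes:
  data:
```

 | Litestream tails the SQLite WAL and streams it to the replica,
 | so recovery is a `litestream restore` away.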
| radus wrote:
| docker swarm is also a decent solution if you do need to
| distribute some workloads, while still using a docker
| compose file with a few extra tweaks. I use this to
| distribute compute intensive jobs across a few servers
| and it pretty much just works at this scale. The sharp
| edges I've come across are related to differences between
| the compose file versions supported by compose and swarm.
| Swarm continues to use Compose file version 3 which was
| used by Compose V1 [1].
|
| 1: https://docs.docker.com/engine/swarm/stack-deploy/
| alex_lav wrote:
| I'm curious how UI impacts software configuration and
| deployment? Are you using a UI to set your compiler flags or
| something?
| jrm4 wrote:
| "Containers are silly" might be the literal worst take I've
| seen in this space, especially if one is at all interested in
| "onboarding more moderately techy folk" to the idea of
| homelabs, and self-hosting, which I think is _incredibly
| important._
|
| Once you know Docker, you can abstract away SO MUCH and get
| things up and running incredibly quickly.
|
 | Let's say you have someone who now hates Spotify. Perfect.
 | Before Docker, it was a huge pain to set up an alternative and
 | hope you got it right.
|
| After Docker? Just TRY ALL OF THEM. Don't like
| Navidrome/mstream/whatever? Okay, docker-compose down and try
| the next one.
| watermelon0 wrote:
| TBH, containers are the wrong way of doing things, but they
| are the least bad solution of shipping software that we
| currently have.
| goodpoint wrote:
| > get things up and running incredibly quickly
|
| Optimizing for the wrong metric.
| code_biologist wrote:
| Accessibility is _the_ metric in software adoption.
| Raspberry Pi and Arduino have made those computing form
| factors wildly more approachable for a general audience.
| Are they optimizing for the wrong metric?
| vundercind wrote:
 | Did that for years. Spent so much more time fiddling with
 | computers than getting any actual benefit from them.
|
 | Now all my shit's in docker, launched by one shell script per
 | service. I let docker take care of restarts. There's no mystery
 | about where exactly _all_ of the config and important files for
 | any of the services live.
|
| I barely even have to care which distro I'm running it on. It's
| great.
|
| What's funny is that the benefits of docker, for me, have
| little to do with its notable technical features.
|
| 1. It provides a common cross-distro interface for managing
| services.
|
 | 2. It provides a cross-distro rolling-release package manager --
 | one that actually works pretty well and has well-maintained, and
 | often official, packages.
|
| 3. It strongly encourages people building packages for that
| package manager to make their config more straightforward and
| to make locations of config files and data explicit and
| documented thoroughly in a single place. That is, it forces a
| lot of shitty software to de-shittify itself at the point I'll
| interact with it (via Docker).
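 | A per-service launcher in that style can be as small as this
 | (Jellyfin used as an example; paths are illustrative):

```shell
#!/bin/sh
# One script per service: docker owns restarts, and every config
# and data location is explicit in one place.
exec docker run -d \
  --name jellyfin \
  --restart unless-stopped \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media \
  jellyfin/jellyfin
```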
| raggi wrote:
| This appears to suffer from the same mistake as many of these
| things do in this space: it focuses on making it really easy to
| run lots of software, but has a very poor story when it comes to
| making the data and time you put in safe across upgrades and
| issues. The only documented page on backing up requires taking
| the entire system down, and there appears to be no guidance or
 | provision for safely handling software upgrades. This sets people
 | up for the worst kind of self-hosting failure: they get all
 | excited about setting up a bunch of potentially really useful
 | applications, invest time and data into them, then get burned
 | badly when it all comes crashing down in an upgrade or hardware
 | failure they weren't prepared for. This is how people move back
 | to SaaS and never try again; it's utterly critical to get right,
 | and it's completely missing here.
| yonixw wrote:
 | That's exactly how I feel. Not to mention that I am always
 | looking for a way to monitor the system, and there is no
 | uniform standard, if monitoring is possible at all. It's as if
 | there is a hidden message that, apart from watching how much
 | disk space or CPU is left, everything else is irrelevant. But
 | for machines like these that share many processes, it is
 | exactly the opposite! When there is a problem, I must know
 | exactly which component is responsible!
| everforward wrote:
| I'm working on something similar, and the crux of that issue is
| configurability vs automation. I.e. it's very easy to make
| backups for a system that users can't configure at all. You
| just ship the image with some rsync commands or something and
| done.
|
| Once you start letting people edit the config files, you get
| into a spot where now you basically need to be able to parse
| the config files to read file paths. That often means making
| version-specific parsers for configuration options that are
| introduced or removed in some versions, or have differing
| defaults (i.e. in 1.2 if they don't set "storage_path" it's at
| /x/, but in 1.3 if they don't set it, it defaults to /y/).
|
| That gets to be a lot of work.
|
 | Then it gets even worse when users can edit the Docker config
 | for the images, because all bets are off at that point. The
 | user could do all kinds of weird, fucky shit that infinitely
 | loops naive backup scripts because of host volume mounts, or
 | have one of the mounts actually be part of an NFS mount on
 | the host with 200ms of latency, so backups just hang forever,
 | etc.
|
| It's just begging for an infinite series of bugs from people
| who did something weird with their config and ended up backing
| up their whole drive instead of the Docker folder, or removing
| their whole root drive because they mount their host FS root in
| the backups folder and it got deleted by the "old backup
| cleanup" script, or who knows what.
|
| At some point, it's easier to just make your own setup where
| you define the limitations of that setup than it is to use
| someone else's setup but have to find the limitations on your
| own.
| ocdtrekkie wrote:
| This is one of the reasons I groan every time there's a new
| selfhosting thing that uses Docker. It's easy to start because
| all the wiring is there and everyone already makes their apps
| available with a Dockerfile, but it's not a remotely good
| solution for selfhosting and people should stop trying to use
| it that way.
| wmf wrote:
| A more detailed explanation from ten years ago:
| https://sandstorm.io/news/2014-08-19-why-not-run-docker-apps
| xjlin0 wrote:
 | Is this a dockerized alternative to https://yunohost.org/ ?
| riedel wrote:
| Quite a few out there, I guess. I always liked the idea of
| sandbox.io .
| birdman3131 wrote:
| How does this differ from Caprover?
| TechDebtDevin wrote:
 | These services are cool, but I almost always end up doing it
 | myself anyway. Doing it yourself is more fun, anyway.
|
| Typically I just make my own one click deploys that fit my
| preferences. Not knowing how your container starts and runs is a
| recipe for disaster.
| angra_mainyu wrote:
| I find terraform + acme provider + docker provider (w/ ssh uri)
| to be the best combo.
|
| All my images live on a private GitLab registry, and terraform
| provisions them.
|
| Keeping my infra up-to-date is as simple as "terraform plan -out
| infra.plan && terraform apply infra.plan" (yes, I know I
| shouldn't blindly accept, but it's my home lab and I'll accept if
| I want to).
|
| Note: SSH access is only allowed from my IP address, and I have a
| one-liner that updates the allowed IP address in my infra's L3
| firewall.
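 | The setup described above might sketch out like this with the
 | kreuzwerker/docker provider (host, registry, and app names are
 | placeholders):

```hcl
terraform {
  required_providers {
    docker = { source = "kreuzwerker/docker" }
  }
}

# Talk to the remote Docker daemon over SSH, as described above.
provider "docker" {
  host = "ssh://deploy@homelab.example.com:22"
}

resource "docker_image" "app" {
  name = "registry.gitlab.example.com/me/app:latest"
}

resource "docker_container" "app" {
  name    = "app"
  image   = docker_image.app.image_id
  restart = "unless-stopped"

  ports {
    internal = 8080
    external = 8080
  }
}
```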
| Cyph0n wrote:
| I think this is useful for less tech oriented people to get a
| basic homelab setup.
|
 | But I personally find it much more straightforward and
 | maintainable to just use Compose. Virtually every service you
| would want to run has first-class support for Docker/Podman and
| Compose.
___________________________________________________________________
(page generated 2024-04-04 23:00 UTC)