[HN Gopher] My Self-Hosting Setup
       ___________________________________________________________________
        
       My Self-Hosting Setup
        
       Author : mirdaki
       Score  : 519 points
       Date   : 2025-07-19 02:56 UTC (20 hours ago)
        
 (HTM) web link (codecaptured.com)
 (TXT) w3m dump (codecaptured.com)
        
       | mirdaki wrote:
       | Hey y'all, I know getting a setup that feels "right" can be a
       | process. We all have different goals, tech preferences, etc.
       | 
        | I wanted to share my blog post walking through how I finally
       | built a setup that I can just be happy with and use. It goes over
       | my goals, requirements, tech choices, layout, and some specific
       | problems I've resolved.
       | 
       | Where I've landed of course isn't where everyone else will, but I
       | hope it can serve as a good reference. I've really benefited from
       | the content and software folks have freely shared, and hope I can
       | continue that and help others.
        
         | redrove wrote:
          | How are you finding Nix for the homelab? Every time I try it
          | I just end up confused; maybe next time will be the charm.
         | 
         | The reason I ask is I homelab "hardcore"; i.e. I have a 25U
         | rack and I run a small Kubernetes cluster and ceph via Talos
         | Linux.
         | 
          | Due to various reasons, including me running k8s in the lab
          | for about 7 years now, I've been itching to change,
          | consolidate, and simplify, and every time I think about my
          | requirements I
         | and simplify, and every time i think about my requirements I
         | somehow end up where you did: Nix and ZFS.
         | 
         | All those services and problems are very very familiar to me,
         | feel free to ask me questions back btw.
        
           | MisterKent wrote:
           | I've been trying to switch my home cluster from Debian + K3s
           | to Talos but keep running into issues.
           | 
           | What does your persistent storage layer look like on Talos?
            | How have you found its hardware stability over the long
           | term?
        
             | esseph wrote:
             | Talos is the Linux kernel at heart, so.. just fine.
        
             | redrove wrote:
             | >What does your persistent storage layer look like on
             | Talos?
             | 
              | Well, for its own storage: it's an immutable OS that you
              | can configure via a single YAML file; it automatically
              | provisions appropriate partitions for you, or you can
              | even install the ZFS extension and have it use ZFS (no
              | ZFS on root though).
             | 
             | For application/data storage there's a myriad of options to
             | choose from[0]; after going back and forth a few times
             | years ago with Longhorn and other solutions, I ended up at
             | rook-ceph for PVCs and I've been using it for many years
             | without any issues. If you don't have 10gig networking you
             | can even do iSCSI from another host (or nvmeof via
             | democratic-csi but that's quite esoteric).
             | 
              | >How have you found its hardware stability over the long
             | term?
             | 
             | It's Linux so pretty good! No complaints and everything
             | just works. If something is down it's always me
             | misconfiguring or a hardware failure.
             | 
             | [0] https://www.talos.dev/v1.11/kubernetes-
             | guides/configuration/...
        
               | avtar wrote:
               | > after going back and forth a few times years ago with
               | Longhorn and other solutions, I ended up at rook-ceph for
               | PVCs
               | 
               | Curious to know what issues you ran into with Longhorn.
        
           | udev4096 wrote:
           | Honestly, I personally like the combination of
           | keepalived+docker(swarm if needed)+rsync for syncing config
           | files. keepalived uses VRRP, which creates a floating IP.
            | It's extremely lightweight and works like a charm. You
            | won't even notice the downtime; the switch to another
            | server's IP is instant.
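            | 
            | A minimal keepalived.conf sketch (interface name and
            | addresses are placeholders, adjust per node):
            | 
            |     vrrp_instance VI_1 {
            |         state MASTER          # BACKUP on the standby
            |         interface eth0
            |         virtual_router_id 51
            |         priority 150          # lower on the standby
            |         advert_int 1
            |         virtual_ipaddress {
            |             192.168.1.100/24  # the floating IP
            |         }
            |     }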
        
             | cess11 wrote:
             | Keepalived is great. Learning about it was one of the best
             | things I got from building HA-aiming infra at a job once.
        
           | mirdaki wrote:
           | I certainly didn't take to Nix the first few times I looked
           | at it. The language itself is unusual and the error messages
            | leave much to be desired. And the split around Flakes just
            | complicates things further (though I do recommend using
            | them; once set up, it's simple and the added
            | reproducibility gives nice peace of mind).
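            | 
            | Enabling them is just a config flag; roughly, either in
            | /etc/nix/nix.conf or via the NixOS option
            | nix.settings.experimental-features:
            | 
            |     experimental-features = nix-command flakes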
           | 
            | But once I fully understood how its features really make
            | it easy for you to recover from mistakes and how useful
            | the package options available from nixpkgs are, I decided
            | it was time to sink in and figure it out. Looking at other
            | folks' Nix configs on GitHub (especially for specific
            | services you're wanting to use) is incredibly helpful
            | (mine is also linked in the post).
           | 
            | I certainly don't consider myself to be a Nix expert, but
            | the nice thing is you can do most things by using other
            | examples and modifying them till you feel good about it.
            | Then over time you just get more familiar with it and grow
            | your skill.
           | 
            | Oh man, having a 25U rack sounds really fun. I have a
            | moderate-size cabinet where I keep my server, desktop, a
            | UPS, a 10Gig switch, and my little fanless Home Assistant
            | box. What's yours look like?
            | 
            | I should add it to the article, but one of my anti-
            | requirements was anything in the realm of high
            | availability. It's neat tech to play with, but I can deal
            | with downtime for most things if the trade-off is
            | everything being much simpler. I've played a little bit
            | with Kubernetes at work, but that is a whole ecosystem
            | I've yet to tackle.
        
             | redrove wrote:
             | >The language itself is unusual and the error messages
             | leave much to be desired. And the split around Flakes just
             | complicates things further
             | 
             | Those are my chief complaints as well, actually. I never
             | quite got to the point where I grasped how all the bits fit
             | together. I understand the DSL (though the errors are
              | cryptic as you said) and the flakes seemed recommended by
              | everyone yet felt like an add-on that was forgotten about
              | (you needed to turn them on through some experimental
              | flag IIRC?).
             | 
             | I'll give it another shot some day, maybe it'll finally
             | make sense.
             | 
              | >Oh man, having a 25U rack sounds really fun. I have a
              | moderate-size cabinet where I keep my server, desktop, a
              | UPS, a 10Gig switch, and my little fanless Home
              | Assistant box. What's yours look like?
             | 
             | * 2 UPSes (one for networking one for compute + storage)
             | 
             | * a JBOD with about 400TB raw in ZFS RAID10
             | 
             | * a little intertech case with a supermicro board running
             | TrueNAS (that connects to the JBOD)
             | 
             | * 3 to 6 NUCs depending on the usage, all running Talos,
             | rook-ceph cluster on the NVMEs, all NUCs have a Sonnet Solo
             | 10G Thunderbolt NIC
             | 
             | * 10 Gig unifi networking and a UDM Pro
             | 
             | * misc other stuff like a zima blade, a pikvm, shelves,
             | fans, ISP modem, etc
             | 
             | I'm not necessarily thinking about downsizing but the NUCs
             | have been acting up and I've gotten tired of replacing them
             | or their drives so I thought I'd maybe build a new machine
             | to rule them all in terms of compute and if I only want one
             | host then k8s starts making less sense. Mini PCs are fine
             | if you don't push them to the brim like I do.
             | 
             | I'm a professional k8s engineer I guess, so on the software
             | side most of this comes naturally at this point.
        
               | bjoli wrote:
               | 400 TB?! do you collect Linux ISOs or are you doing
               | photography?
        
               | redrove wrote:
               | Linux ISOs and backups.
        
             | sgarland wrote:
             | > Oh man, having a 25U rack sounds really fun.
             | 
             | For certain definitions of the word "fun," yes. I have a
             | 35U (I don't need that many slots, but at the time I did
             | need it tall enough that my kids couldn't reach the top,
             | where I put the keys), with:
             | 
             | * 3x Dell R620
             | 
             | * 2x Supermicro (one X9, one X11)
             | 
             | * 1x APC UPS w/ external battery
             | 
             | * Unifi UDM Pro
             | 
             | * Unifi Enterprise 24-port switch
             | 
             | The Dells have Samsung PM863 NVMe drives which are used by
             | Ceph (managed by Proxmox), with traffic sent over an
             | Infiniband mesh network via Mellanox ConnectX3-Pro.
             | 
             | The Dells run K3OS in a VM, which is a dead project. Big
             | mistake there.
             | 
             | The Supermicros have various spinners, and are in a ZFS
             | pool. One of them is technically a backup that should power
             | up daily to ingest snapshots, then power off, but there's
             | been some issue preventing that, so...
             | 
             | It was all very fun to set up, and has been eminently
             | reliable, but it's a bit much. While you can in fact make
             | R620s relatively quiet, they're still 1U, and those little
             | 40mm fans are gonna whine. It's background noise to me, but
             | guests definitely mention it if we're close to the rack.
             | 
             | Also, I'm now in the uncomfortable position of being stuck
             | on Proxmox 7, because v8 (or more accurately, the
             | underlying Debian release) dropped support for my HBAs, so
             | the NAS would be dead in the water. I mean, I could compile
             | my own kernel, or perhaps leverage DKMS, but that somewhat
             | defeats the purpose of having a nice AIO like Proxmox.
              | Similarly, my choice of K3OS means at some point I need to
             | spend the time to rip everything out and start over with
             | Talos.
             | 
             | Or - just maybe - I've done enough playing, and I should
             | simply buy a JBOD chassis and a relatively new and quiet
             | server (4U under light load means you can get away with
             | much quieter fans), and just run stuff in Docker or _gasp_
             | systemd. Or, hell, single-node K8s. My point is that it is
             | fun, but eventually your day job becomes exhausting and you
             | tire of troubleshooting all the various jank accumulating
             | at home, and you stop caring about most of it.
        
         | raybb wrote:
         | Did you come across or consider using coolify at any point?
          | I've been using it for over a year and quite enjoyed it for
          | its Heroku-type ease of use and auto deployments from
          | GitHub.
         | 
         | https://coolify.io/
        
           | mirdaki wrote:
           | No I haven't heard of it before. I do like the idea though,
           | especially for side projects. Thanks for sharing, I'll look
           | more at it!
        
             | oulipo wrote:
             | Check out Dokploy too! https://dokploy.com/
        
               | un1970ix wrote:
               | Their license is still ambiguous, and I don't like how
               | they communicate with those who inquire about it.
        
         | colordrops wrote:
         | Hi! Really excited by your work! I'm working on a similar
          | project built on NixOS and curious what you think.
         | 
         | My goal is to have a small nearly zero-conf apple-device-like
         | box that anyone can install by just plugging it into their
         | modem then going through a web-based installation. It's still
         | very nascent but I'm already running it at home. It is a hybrid
         | router (think OPNSense/PFSense) + app server (nextcloud,
         | synology, yunohost etc). All config is handled through a single
         | Nix module. It automatically configures dynamic DNS,
         | Letsencrypt TLS certs, and subdomains for each app. It's got
         | built in ad blocking and headscale.
         | 
         | I'm working on SSO at the moment. I'll take a look at your work
         | and maybe steal some ideas.
         | 
         | The project is currently self-hosted in my closet:
         | 
         | https://homefree.host
        
           | mirdaki wrote:
            | Oh that sounds really rad! Certainly could have its use
           | cases. I really appreciate how NixOS enables projects like
           | this. Best of luck with it!
        
         | ultra2d wrote:
         | Do you use encrypted ZFS?
         | 
         | I have dabbled before with FreeIPA and other VMs on a Debian
         | host with ZFS. For simplicity, I switched to running Seafile
         | with encrypted libraries on a VPS and back that up to a local
         | server via ZFS send/receive. That local server switches itself
         | on every night, updates, syncs and then goes into sleep again.
         | For additional resiliency, I'm thinking of switching to ZFS on
         | Linux desktop (currently Fedora), fully encrypted except for
         | Steam. Then sync that every hour or so to another drive in the
         | same machine, and sync less frequently to a local server. Since
         | the dataset is already encrypted, I can either sync to an
         | external drive or some cloud service. Another reason to do it
         | like this is that storing a full photo archive within Seafile
         | on a VPS is too costly.
        
           | mirdaki wrote:
            | Yes! On top of the data safety features of ZFS, the fact
            | that you can encrypt a dataset and do incremental
            | send/receive is a fantastic ability.
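            | 
            | Roughly (pool/dataset/snapshot names are placeholders;
            | -w sends the raw, still-encrypted blocks):
            | 
            |     zfs create -o encryption=on \
            |         -o keyformat=passphrase tank/docs
            |     zfs snapshot tank/docs@today
            |     zfs send -w -i @prev tank/docs@today |
            |         ssh backup zfs receive backup/docs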
        
         | A4ET8a8uTh0_v2 wrote:
          | I appreciated the in-depth look, and while some ideas from
          | your setup will take more time to implement, I just added
          | Flame for the dashboard and will see how it fares with the
          | family.
        
           | mirdaki wrote:
           | Thank you! It's all a journey, hope flame works well for you!
        
       | tripdout wrote:
       | Read the first paragraph and knew you were gonna talk about Nix.
        
         | senectus1 wrote:
          | I love the idea of Nix... but I want an Ubuntu Nix or a
          | Fedora Nix :-P
         | 
         | It needs to be stupid easy and reliable.
        
           | mirdaki wrote:
           | I think that's a fair point. Kinda like with Arch, you do
           | have to know what you want to setup NixOS right now
           | 
           | I really like what's happening in the ublue space were folks
           | are tweaking and optimizing distros for specific use cases
           | (like bazzite for gaming) and then sharing them
           | 
           | NixOS does support that to an extent, but it certainly
           | doesn't have the same community movement behind it like those
        
         | mirdaki wrote:
         | Lol, yeah. It was a journey to get to it, and a slightly
         | shorter journey to feel comfortable with it, but it has won me
         | over
        
       | dr_kiszonka wrote:
        | I am curious: what are some good-enough cheapskate
        | self-hosting setups?
       | 
       | I want to self-host one of those floss Pocket replacements but I
       | don't want to pay more than what these projects charge for
       | hosting the software themselves (~$5). I am also considering
       | self-hosting n8n. I don't have any sophisticated requirements. If
       | it was possible I would host it from my phone with a backup to
       | Google Drive.
        
         | redrove wrote:
         | I would look up intel N100 mini PCs. Extremely low power and
         | fast enough (it's even got hardware decoding).
        
           | demaga wrote:
           | I built a NAS with N100 and am pretty happy with it. Price to
           | performance is really good. It runs several services, no
           | issues yet
        
             | redrove wrote:
             | Which board did you get?
        
               | demaga wrote:
                | ASUS PRIME N100I-D D4; put 16 GB of RAM in there, but
                | that was probably overkill.
        
         | qmr wrote:
         | Used NUCs, Raspberry Pi / pi zero.
         | 
         | Any old PC with low idle power draw.
        
         | solraph wrote:
         | Any of the 1L PCs from Dell, HP, or Lenovo. They sip power
          | (5~10 watts), and take up minimal space. I've got 6 or 7 VMs
          | running on one, and it barely breaks 5% CPU usage.
         | 
         | See https://www.servethehome.com/introducing-project-
         | tinyminimic... for a good list of reviews.
        
           | abeindoria wrote:
            | Seconded. A Dell OptiPlex Micro or HP ProDesk with a 7th
            | Gen or 8th Gen i5 is approx $40-55 on eBay if you look.
            | Works flawlessly.
        
             | mirdaki wrote:
              | Agree. If low cost and maximum value is your goal, grab
              | a used one of these or a similar-speed laptop (and you
              | sort of get battery backup in that case)
             | 
             | Really, any machine from the last decade will be enough, so
             | if you or someone you know have something lying around, go
             | use that
             | 
             | The two main points to keep in mind are power draw (older
             | things are usually going to be worse here) and storage
             | expandability options (you may not need much storage for
              | your use case though). Worst case you can plug in a USB
              | external drive, but bear in mind that USB connection
              | might be a little flaky
        
           | danieldk wrote:
            | This, when I was a student and had to live frugally
            | (2001-2008 or so), I got a second-hand Dell, put it on top
            | of a high
           | cupboard in my dorm room, and installed a bunch of services
           | (e.g. Trac was very popular in the day for hosting projects).
           | 
           | It won't give you 99.999% uptime, but for that stage in my
           | life it was just stellar. I even had an open source project
           | (Slackware fork) where I collaborated with someone else
           | through that little machine.
           | 
           | Second-hand hardware is also a great way to get high-quality
           | enterprise hardware. E.g. during the same time period I had a
           | Dell workstation with two Xeon CPUs (not multi-core, my first
           | SMP machine) and Rambus DRAM (very expensive, but the seller
           | maxed it out).
        
         | pedro_caetano wrote:
         | As a former firefox pocket user, what are the replacements?
         | 
         | I've looked into Wallabag but perhaps there are more I don't
         | know?
        
         | poulpy123 wrote:
         | I'm personally happy with my mini-pc+tailscale which is quite
         | cheap, although if it's just for one service and $5/month I
         | don't think it's worthwhile
        
       | sgc wrote:
       | How are you securing taris? Where is your local network firewall?
       | Which one are you using?
       | 
       | Why did you go with Nextcloud instead of using something more
       | barebones, for example a restic server?
        
         | mirdaki wrote:
         | This article (https://xeiaso.net/blog/paranoid-
         | nixos-2021-07-18/) walks through a lot of the steps I've done
         | on all my NixOS systems
         | 
          | As for Nextcloud vs a restic server, Nextcloud is heavier,
          | but I do benefit from its extra features (like Calendar and
          | Contact management) as well as use a couple of apps
          | (Memories for photos is quite nice). Plus it's much more
          | family-friendly, which was a core requirement for my setup.
        
           | sgc wrote:
           | As you said, everybody will have their own approach. For me,
           | setting up a dedicated firewall to protect the soft
           | underbelly of my less tech savvy family computers was the
           | primary motivation for starting a home lab. I could not
           | imagine a network architecture without it.
        
       | zer00eyz wrote:
       | It's nice to see a home lab on HN. Hardware has become a lost art
       | for many.
       | 
        | If you don't have a home lab, start one. Grab a 1L PC off of
        | eBay. A ThinkCentre M720q or M920q with an i5 is a great place
        | to start. It will cost you less than 200 bucks, and if you
        | want to turn it into a NAS or an OPNsense box later you can.
       | 
       | When it arrives toss Proxmox on it and get your toys from the
       | community scripts section... it will let you get set up on 'easy
       | mode'. Fair warning, having a home lab is an addiction, and will
       | change how you look at development if you get into it deeply.
        
         | leovander wrote:
         | Not sure if it happens to most, but I have looped back around
         | to not wanting to play sysadmin at home. Most of the stuff I
          | have running I haven't updated in a while, luckily since I own
         | it and it's all internal I don't need to worry about anyone
         | taking away my locally hosted apps. Thank the IT gods for
         | docker compose, and tools like portainer to minimize the amount
         | of fuddling around I have to do.
        
           | __turbobrew__ wrote:
           | Same, replaced the ISP router with my own and have a single
           | box which has storage and compute for running VMs and NFS and
           | that is it. Last thing I want to be doing on a Friday night
           | is debugging why my home network is broken.
        
         | nathan_douglas wrote:
         | I credit homelabbing through my twenties with just about
         | everything good that's happened to me in my career. I certainly
         | didn't end up being moderately employable because I'm smart,
         | charismatic, incisive, creative, lucky, educated, diligent,
         | connected, handsome, sanitary, interesting, or thoughtful; no,
         | it's because I have a tendency toward obsession, delusions of
         | grandeur, and absolutely terrible impulse control.
         | 
         | So I started buying junk on eBay and trying to connect it
         | together and make it do things, and the more frustrated I got,
         | the less able I was to think about literally anything else, and
         | I'd spend all night poking around on Sourceforge or random
         | phpBBs trying to get the damn things to compile or communicate
         | or tftp boot or whatever I wanted them to do.
         | 
         | The only problem was eventually I got good enough that I
         | actually _could_ keep the thing running and my wife and kid and
         | I started putting good stuff on my computers, like movies and
         | TV shows and music and pictures and it started to actually be a
         | big deal when I blew something up. Like, it wasn't just that I
         | felt like a failure, but that I felt like a failure AND my kid
          | couldn't watch _Avatar_ and that's literally all he wanted to
         | watch.
         | 
         | So now I have two homelabs, one that keeps my family happy and
         | one that's basically the Cato to my Clouseau, a sort of
         | infrastructural nemesis that will just actually try to kill me.
         | Y'know, for fulfillment.
        
       | jjangkke wrote:
        | I'm using Proxmox but struggling to set up subnets and VMs.
        | 
        | Should I be using Terraform and Ansible?
        | 
        | I'm using Cursor to SSH in, and it constantly needs to run
        | commands to get the "state" of the setup.
        | 
        | Basically I'm trying to do what I used to do on AWS: set up
        | VMs on a private network talking to each other, with one
        | gateway dedicated to the internet connection, but this is
        | proving to be extremely difficult with the bash scripts
        | generated by Cursor.
        | 
        | If anyone can help me continue my journey with self-hosting
        | instead of relying on AWS, that would be great.
        
         | sgc wrote:
          | > I'm using Proxmox but struggling to set up subnets and VMs
         | 
          | That is a pretty broad target. I would say start by setting
          | up an OPNsense VM; from there you can do very little to
          | start, just lock down your network so you can work in peace.
          | But it can control your subnet traffic, host your Tailscale,
          | DHCP server, AdGuard Home, etc.
         | 
         | As somebody who was quite used to hosting my own servers,
         | before I first set up my homelab I thought proxmox would be the
         | heart of it. Actually opnsense is the heart of the network,
         | proxmox is much more in the background.
         | 
         | I think proxmox + opnsense is great tech and you should not be
         | adding in terraform and ansible, but I am not sure that using
         | cursor is helping you. You need a really good grasp of what is
         | going on if your entire digital life is going to be controlled
         | centrally. I would lean heavily on the proxmox tutorials and
         | forums, and even more on the opnsense tutorials and forums.
         | Using cursor for less important things afterwards, or to
         | clarify a fine point every once in a while would make more
         | sense.
        
         | esseph wrote:
         | You don't need any scripts to do that.
         | 
         | Read the docs!
         | 
         | https://pve.proxmox.com/wiki/Network_Configuration#_choosing...
        
         | redrove wrote:
          | I agree Proxmox default networking is lacking/insufficient
          | at best. If you have VLANs, want to do LACP, or anything
          | more advanced than a simple interface, you'll run into the
          | limitations of the Proxmox implementation quite quickly.
         | 
         | I think the networking experience for hosts is one of the worst
         | things about Proxmox.
        
         | ethan_smith wrote:
         | Try using Proxmox's web UI to create a Linux Bridge for each
         | subnet, then attach VMs to appropriate bridges and configure a
         | VM with two interfaces as your router between networks.
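          | 
          | Each bridge the UI creates is just a stanza in
          | /etc/network/interfaces, roughly (names are illustrative;
          | no bridge-ports means no physical NIC attached):
          | 
          |     auto vmbr1
          |     iface vmbr1 inet manual
          |         bridge-ports none
          |         bridge-stp off
          |         bridge-fd 0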
        
         | mirdaki wrote:
         | I've found a lot of docs (Proxmox and TrueNAS are both guilty
         | of this) assume you have existing domain or tool knowledge. I'd
         | recommend checking out some videos from selfhosting YouTubers.
         | They often explain more about what's actually happening than
         | just what buttons to select
         | 
         | Also, I found TrueNAS's interface a little more understandable.
         | If Proxmox isn't jiving with you, you could give that a try
        
         | poulpy123 wrote:
          | Do you really need Proxmox? Wouldn't Docker be enough?
        
         | philjohn wrote:
          | Handle subnets on your router, then in Proxmox make the
          | primary network interface you'll be passing to VMs or
          | containers VLAN-aware, with the VLAN tags that it'll support
          | defined, and you're good to go.
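          | 
          | In /etc/network/interfaces that ends up looking roughly
          | like this (NIC name is illustrative):
          | 
          |     auto vmbr0
          |     iface vmbr0 inet manual
          |         bridge-ports eno1
          |         bridge-stp off
          |         bridge-fd 0
          |         bridge-vlan-aware yes
          |         bridge-vids 2-4094
          | 
          | then you set the VLAN tag per VM on its network device.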
        
       | jauntywundrkind wrote:
       | > _Relatively easy for family and friends to use_
       | 
       | > _This means keep one login per person, ideally with SSO, for as
       | many services as I can_
       | 
        | Truly an S-tier target. Incredibly hard, incredibly awesome.
       | 
        | I've said for a long time that Linux & open source is kind of
        | a paradox. It goes everywhere, it speaks every protocol. But
        | as a client, as an endpoint. The whole task of coordinating,
        | of groupware-ing, of bringing together networks: that's all
        | much harder, much more still to be defined.
       | 
       | Making the many systems work together, having directory
       | infrastructure: that stuff is amazing. For years I assumed that
        | someday I'd be running FreeIPA or some Windows-compatible
        | directory service, but it sort of feels like maybe some
        | OpenID-type world might possibly be gelling into place.
        
         | Abishek_Muthian wrote:
         | I completely agree with the paradox, just yesterday I posted
         | how FOSS is not accessible to non-techies on my problem
         | validation platform[1].
         | 
          | I've been wondering whether a platform which connects
          | techies to non-techies could help solve that, something
          | like a systems integrator for individuals.
         | 
         | [1] https://needgap.com/problems/484-foss-are-not-accessible-
         | to-...
        
           | udev4096 wrote:
            | It's not supposed to be. You put in time and use your
            | brain to understand the system. Even a non-techie can
            | easily understand OIDC and OAuth2; it's not that hard.
        
             | Thorrez wrote:
             | As a techie, experienced in security, reading the OIDC
             | spec... there are definitely some things I don't understand
             | in there. I'm not sure the authors even understand what's
             | going on.
             | 
             | On 2023-12-15 they published an update to OpenID Connect
             | Core 1.0, called "errata set 2". Previously it said to
             | verify an ID token in a token response, the client needs to
             | 
             | > * If the ID Token contains multiple audiences, the Client
             | SHOULD verify that an azp Claim is present.
             | 
             | > * If an azp (authorized party) Claim is present, the
             | Client SHOULD verify that its client_id is the Claim Value.
             | 
             | The new version is quite different. Now it says
             | 
             | > * If the implementation is using extensions (which are
             | beyond the scope of this specification) that result in the
             | azp (authorized party) Claim being present, it SHOULD
             | validate the azp value as specified by those extensions.
             | 
             | > * This validation MAY include that when an azp
             | (authorized party) Claim is present, the Client SHOULD
             | verify that its client_id is the Claim Value.
             | 
             | So core parts of the security of the ID Token are being
             | changed in errata updates. What was the old purpose of azp?
             | What is the new purpose of azp? Hard to tell. Did all the
             | OIDC implementations in existence change to follow the new
             | errata update (which didn't update the version number)? I
             | doubt it.
             | 
             | https://openid.net/specs/openid-connect-core-1_0.html
             | 
             | https://web.archive.org/web/20231214085702/https://openid.n
             | e...
             | 
             | Or how about a more fundamental question: Why does the ID
             | Token have a signature? What attack does that signature
             | prevent? What use cases does the signature allow? The spec
             | doesn't explain that.
        
               | dragonwriter wrote:
               | > Did all the OIDC implementations in existence change to
               | follow the new errata update (which didn't update the
               | version number)?
               | 
               | I mean, both the old and new version (at least, the parts
               | quoted upthread) are exclusively SHOULD and MAY with no
               | MUST, so (assuming, for the SHOULDs, the implementer had
               | what they felt was sufficiently good reason) literally
               | _any_ behavior is possible while following the spec.
        
               | 01HNNWZ0MV43FF wrote:
               | I think I could handle up to 20 users with `.htaccess`
               | and just handing out passwords to my friends, actually
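                | 
                | Roughly (Apache basic auth; the paths are
                | placeholders):
                | 
                |     # .htaccess
                |     AuthType Basic
                |     AuthName "Friends only"
                |     AuthUserFile /etc/apache2/.htpasswd
                |     Require valid-user
                | 
                | plus one `htpasswd /etc/apache2/.htpasswd alice`
                | per friend (-c the first time to create the file).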
        
               | maxwellg wrote:
               | > Why does the ID Token have a signature?
               | 
               | The ID Token can be passed from the Identity Provider to
               | the Relying Party (RP) in a few ways.
               | 
                | When `response_type=id_token` is used, the ID Token can
               | be passed in the front channel directly to the RP during
               | a browser redirect. Since the ID Token is coming from the
               | browser, it must be signed to ensure that a malicious
               | actor can't tamper with it. Otherwise an actor could swap
               | out a `sub` or an `email` claim and the RP would be none
               | the wiser.
               | 
               | The ID Token can also be returned from the `/token`
               | endpoint after exchanging an authorization code. Since
               | the `/token` endpoint is a back channel call over HTTPS,
               | the ID Token doesn't necessarily need to be signed here
               | to avoid tampering. The RP can trust that TLS gets the
               | job done. However, there are substantial benefits to
               | having it be signed:
               | 
               | - If ID tokens were only signed sometimes, we'd have two
               | different standards for how to construct and handle an ID
               | Token, which would be quite confusing.
               | 
               | - Signed ID Tokens can be passed around to assert
               | identity to other entities within the system. For
               | example, there are some promising draft specifications
               | that explore exchanging ID Tokens for access tokens in
               | other systems. This is only possible because the ID Token
               | cannot be tampered with.
               | 
               | https://datatracker.ietf.org/doc/draft-parecki-oauth-
               | identit...
        
             | nativeit wrote:
             | This seems like a very naive perspective.
        
         | mirdaki wrote:
         | Appreciate that! Simple login and access was certainly the
         | hardest requirement to hit, but it can be the difference
         | between people using something and not
         | 
         | And I agree with the feeling that open source is everywhere, up
         | until a regular user picks up something. I think part of the
         | paradox you mention is that every project is trying to work on
         | their own thing, which is great, but also means there isn't a
         | single entity pushing it all in one direction
         | 
         | But that doesn't mean we can't get to nice user experiences.
         | Just in the self-hosting space, things have gotten way more
         | usable in the last 5 years, both from a setup and usage
         | perspective
        
         | cycomanic wrote:
          | It's not really that hard, to be honest. If you are not dead
          | set on specific services, but make SSO compatibility the
          | main selection metric for the services, it's very feasible
          | and not that difficult. I had very little experience when I
          | set up my self-hosted system and was up and running very
          | quickly using Caddy and Authentik. Alternatively, YunoHost
          | is a very easy-to-use distribution that sets up everything
          | with SSO.
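          | 
          | For a sense of scale, the Caddy side of one service is only
          | a few lines (hostname and port are placeholders; Caddy
          | fetches the TLS cert itself):
          | 
          |     photos.example.com {
          |         reverse_proxy 127.0.0.1:2342
          |     }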
        
           | throwup238 wrote:
            | Agreed. I use Cloudflare Zero Trust for SSO authentication
            | (I use GitHub for myself and Google/Apple for family) and
           | Cloudflare Tunnels to expose homelab services behind NAT. It
           | took an afternoon to set up the first service and adding
           | services via IaC like terraform is easy.
           | 
           | The only time consuming thing since then has been figuring
           | out how to use the Cloudflare auth header to authenticate
           | with each individual app (and many don't have multiuser
           | capability so it's not a big deal).
        
           | jauntywundrkind wrote:
           | Yuno-host doing SSO is something I deeply deeply respect &
           | love about it! Yes!
           | 
           | I haven't pulled it down yet to find out just how extensive
           | the SSO is. My understanding is that it's not totally
           | universal... But that it is like ~50%, which is super not bad
           | given how many packages it has!
        
         | Bombthecat wrote:
          | I use Authentik with SSO via Google, Discord, or GitHub.
         | 
         | It's good enough for everyone.
        
       | xyzzy123 wrote:
       | Sometimes when I think about my home network, I think about it in
       | terms of what will happen when I die and what I will be
       | inflicting on my family as the ridiculous setups stop working. Or
       | like, how much it would cost a police forensics team to try to
       | make any sense of it.
       | 
       | I think "home labbing" fulfils much the same urge / need as the
       | old guys (I hate to say it but very much mostly guys) met by
       | creating hugely detailed scale model railways in their basement.
       | I don't mean that in a particularly derogatory way, I just think
       | some people have a deep need for pocket worlds they can control
       | absolutely.
        
         | udev4096 wrote:
         | Curious about your setup. Is it extremely unmanageable or have
         | you gone out of your way to make it so?
        
           | xyzzy123 wrote:
           | Unifi network; small proxmox vms for core services; big
           | truenas box for movies, storage, "apps ecosystem" stuff like
           | minecraft servers; baremetal 12 node k8s cluster on opi5s for
           | "research" (coz I do lots of k8s at work).
           | 
           | Each "stage" above is like incremental failure domains, unifi
           | only keeps internet working, core vms add functionality (like
           | unifi mgmt, rancher, etc), truenas is for "fun extras" etc.
           | k8s lab has nothing I need to keep on it because distributed
           | storage operators are still kind of explodey.
           | 
           | Like each part makes sense individually but when I look at
           | the whole thing I start to question my mental health.
        
           | ffsm8 wrote:
           | Let's explore the implied argument a lil:
           | 
            | Imagine the simplest possible deployment you've cooked up.
            | 
            | Now imagine explaining to your mother how to maintain it
            | after you're dead and she needs to access the files on the
            | service you set up.
            | 
            | Usually, self-hosting is not particularly _hard_. It's
            | just conceptually way beyond what the average joe is able
            | to do.
           | (Not because they're not smart enough, but simply because
           | they never learned to and will not learn now because they
           | don't want to form that skill set. And I'm not hating on
           | boomers, you can make the same argument with your
           | hypothetical kids or spouse. The parents are just an easy
           | placeholder because you're biologically required to have
           | them, which isn't the case for any other familial
           | relationship)
        
             | nothrabannosir wrote:
              | Why does it have to be a non-technical next of kin? Write
             | down the details for a technically inclined person to
             | follow, maybe a specific friend. Print at the top of the
             | page "show this to X". In the document explain how to
             | recover the necessary data and replace the setup with a
             | standard one.
             | 
             | I assume most people know at least one person who would do
                | this for them, in the event of their death?
        
               | whatevaa wrote:
                | Your assumption is wrong. Don't assume, verify.
               | Assumptions are the source of many evils.
        
               | nothrabannosir wrote:
               | How do you know? Did you verify or did you just assume?
        
         | zeagle wrote:
          | I have given this a lot of thought. I assume the NAS and its
          | Docker services won't boot and start everything up for
          | someone else. My offsite encrypted backup is probably not
          | recoverable without hiring someone. So:
          | 
          | - I have an NTFS-formatted external USB drive to which cron
          | copies a snapshot of what changed each day into a new folder
          | (roughly the crontab line sketched at the end of this
          | comment). Stuff like Paperless, a flat-file copy of Seafile
          | libraries. The size of that stuff is small, <50GB;
          | duplication is cheap. In the event of death or
          | dismemberment... that drive needs to be plugged into another
          | machine. There are also whole Seafile library copies on our
          | various laptops without the iterative changes. Sync
          | breaks... keep using your laptop.
         | 
         | - I've been meaning to put a small pc/rpi at a friend's
         | place/work with a similar hard drive.
         | 
          | - the email domain is renewed for a decade and is hosted on
          | iCloud for ease of renewal. Although I am not impressed that
          | it bounces emails when storage fills up with family members'
          | photos, which happens regularly, so I may switch back to
          | Migadu.
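          | 
          | The nightly copy in the first bullet is roughly one crontab
          | line (paths are illustrative; % has to be escaped in
          | crontab):
          | 
          |     # 2am: copy into a new dated folder on the USB drive
          |     0 2 * * * rsync -rt /srv/docs/ /mnt/usb/$(date +\%F)/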
        
         | mirdaki wrote:
         | I think planning for what happens once you aren't there to
         | manage the setup (whether it be a vacation, hospital stay, or
         | death) is important. It's not something I built specifically to
         | make easy and I should think more on it
         | 
         | The most important thing is to be able to get important data
         | off of it and have access to credentials that facilitate that.
          | You could set up something like Nextcloud to always sync
          | important data onto other people's devices, to make that
          | part easier.
         | 
         | But I think another important aspect is making folks invested
         | in the services. I don't expect my partner to care about or use
         | most of them, but she does know as much as I do about using and
         | automating Home Assistant (the little we've done). Things like
         | that should keep working because of how core they can become to
         | living our lives. It being a separate "appliance" and not a VM
         | will also help manage that
         | 
         | But also that's a lot of hope and guessing. I think sitting
         | down with whoever might be left with it and putting together a
         | detailed plan is critical to any of that being successful
        
         | Aeolun wrote:
         | I think the pocket railways are a lot more comprehensible than
         | my local network setup.
        
         | darkwater wrote:
         | Just ignore the useless threat scenario of someone stealing
         | your physical disks to obtain your precious family data and you
          | will be fine. In other words, just store all the photos and
          | important documents in the clear, leave some written-down
          | instructions, and you should be good to go.
         | 
         | I'm more worried by home automation in my case ^^;
        
           | dwedge wrote:
           | The chance of someone breaking in to steal your sensitive
           | files is next to nil I agree.
           | 
           | The chance of someone breaking into your house is sadly much
           | more likely, and them choosing to take any computers they see
           | is almost a certainty at that point.
           | 
           | Your drives are unencrypted. What's your next step if you
           | come home tonight and find the house ransacked and the server
           | gone?
        
             | TacticalCoder wrote:
             | > Your drives are unencrypted. What's your next step if you
             | come home tonight and find the house ransacked and the
             | server gone?
             | 
             | My drives are encrypted and so are my backups (with backups
             | everywhere). But they're symmetrically encrypted with a
             | password. The backup procedure contains a step verifying
             | that decryption works.
             | 
             | Family knows the password: password is stored at different
             | places on laminated paper (friends and family) but not
             | alongside the backups.
             | 
              | Decryption of the backups is one command at the CLI
              | (both my brother and my wife know how to use a CLI, and
              | soon the kid will too: already dabbling with it).
              | 
              | The one command is explained alongside the password, on
              | the same laminated paper.
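              | 
              | For reference, with something like gpg's symmetric mode
              | the one command can be as simple as (file name is a
              | placeholder)
              | 
              |     gpg --output backup.tar --decrypt backup.tar.gpg
              | 
              | with the matching encrypt step being
              | `gpg --symmetric --cipher-algo AES256 backup.tar`.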
             | 
             | Yup I did really think this out, including rehearsals where
             | I, literally, fake my own death (I fake a heart attack) in
             | front of my brother and wife and I have to _shut the fuck
              | up_ while they open a CLI, hook up one of the backup hard
              | disks and decrypt the backups.
             | 
             | Once a year we rehearse.
             | 
              | That way they are confident they can restore the
              | backups. I know they can and I don't need reassuring,
              | but they do (well, less and less, because now they've
              | begun realizing I really thought this out).
             | 
             | > The chance of someone breaking into your house is sadly
             | much more likely, and them choosing to take any computers
             | they see is almost a certainty at that point.
             | 
             | Got a house break in years ago, they stole no computers.
             | 
             | > What's your next step if you come home tonight and find
             | the house ransacked and the server gone?
             | 
              | Go to the bank, take out one of my backup hard drives. Buy a
             | computer, reinstall Proxmox, a VM, Docker CE, redeploy my
             | infra. They still don't have the Yubikeys on my keychain.
             | They still don't have what's on my phone.
             | 
             | Don't think some people here didn't plan for death / theft
             | / etc.
        
             | fmajid wrote:
             | It can be a side-effect. Francis Ford Coppola lost all his
             | family photos when his PC was stolen in a burglary while in
             | Argentina to shoot "Tetro". Of course, he didn't have
             | backups.
        
           | beala wrote:
           | I think a much more likely scenario is an unencrypted drive
           | fails and then what? Do you send it to the landfill
           | unencrypted? Or do you have some process to physically
           | destroy it? Encryption means you can just toss it and feel
           | reasonably confident the data isn't coming back to haunt you.
        
           | sneak wrote:
           | You should see the hilaribad basis given in affidavits for
           | search warrants that get rubber stamped by judges.
           | 
           | There is no burden of proof and no consequence for perjury.
           | 100% of the search or seizure warrants I have read have had
           | obvious perjury in them.
           | 
           | I encrypt my data at rest not because I fear a burglar
           | breaking in, but because I fear the FBI coming in the front
           | door on some trumped up bullshit. Everyone has a right to
           | privacy, even (and perhaps especially) if they are doing
           | nothing wrong.
           | 
           | I've read too many stories of writers and activists getting
           | bogus warrants and charges and arrests thrown at them to
           | inconvenience and harass them to ever have a single
           | unencrypted disk in my house.
        
         | deanc wrote:
         | There is a dead man's switch service [1] which can send an
         | email if you die. In theory if you self host you could trigger
         | something when their email arrives to an inbox you control.
         | 
         | I've been thinking of making a version of this that does a
         | webhook but it doesn't offer a huge amount of value over the
         | email method.
        
           | beala wrote:
           | Is the dead man's switch necessary? Unless your homelab
           | contains secrets you don't want revealed until after your
           | death, I'd just put this in a Google doc.
        
             | deanc wrote:
             | Depends how convoluted your setup is. For some use cases
             | releasing the location of passwords physically written down
             | might help matters or maybe trigger a process to export all
             | data and upload somewhere - somehow.
             | 
             | Seeing some of the discussions around home labs with server
             | racks and k8s doesn't fill me with confidence that for a
             | majority of use cases a family member would be able to get
             | the data if needed.
        
         | numb7rs wrote:
         | I'm glad to see this comment here. People build these projects
         | for family and friends - which is great - and encourage their
         | use, without considering what happens if the only sysadmin
         | suddenly dies. You wouldn't let one person at work hold all of
         | the keys, so the same should be true for your homelab.
         | 
         | While I haven't given all of my keys to my family, there's a
         | clear route for them to get them, and written instructions how
         | to do so. Along with an overview of the setup and a list of
         | friends and colleagues they can turn to, this is enough for
         | them to get access to everything and then decide if they want
         | to carry on using it, or migrate the data somewhere else.
        
           | JW_00000 wrote:
           | To be frank, if you die, isn't it much more likely your
           | friends and family will just stop using your homelab setup?
           | They'll switch back from Jellyfin to Netflix, replace the
           | smart light bulbs with regular ones, etc.
        
             | icedrift wrote:
             | Could be important data like family photos and financial
             | information in the system.
        
             | numb7rs wrote:
              | Yes, of course. They still need to get all the photos
             | and documents though.
        
         | beala wrote:
         | This applies to so many other things. Who in your house does
         | the taxes? If it's you, would your SO be able to pick up the
         | slack in the event of your death? Can they access all the
         | accounts? Do they even know what all the accounts are? I keep
         | telling myself I need to put together a "what to do if I'm
         | dead" Google doc, but haven't gotten around to it.
        
           | stephenlf wrote:
           | I pay $3/mo or whatever for Bitwarden family. It's wonderful.
           | My wife and I can access all our passwords (and OTP codes!)
           | in one spot. I grouped passwords into folders like "Health"
           | and "Finances". It has taken us far.
        
           | sneak wrote:
           | You can put it off indefinitely because nobody anticipates
           | their own death.
        
         | wkjagt wrote:
         | I have our family pictures on a RAID 1 array in my home lab.
         | Every night they are rsynced to an external drive on a little
         | computer at my in-laws. Both as a backup, and as an "if
         | something happens to me" easy access. My wife doesn't have any
         | interest in tech, so I wanted to make accessing it "just in
         | case" as straightforward as possible. I told her that that is
         | where all the photos are, and that it's just a USB drive she
         | can connect to her laptop in case something happens.
        
         | fmajid wrote:
         | You need to add to your threat model having a stroke, where you
         | no longer remember your passwords.
        
           | petee wrote:
           | Just a year before my dad's stroke, my parents documented
           | every account, password, service they had; it was incredibly
           | helpful after he passed with all the stress and chaos
        
           | zrail wrote:
           | 1Password is amazing for this, IMO. My spouse and I have been
           | using 1Password together for more than a decade. One of the
           | first things I set up is a "AAA Read Me First" note with
           | links to a bunch of other notes and documents, including our
           | estate planning stuff.
           | 
           | The biggest thing that makes me stick with 1Password, despite
           | the semi-recent VC shenanigans, is the fact that if for some
           | reason we fall behind on billing (for example, because the
           | credit card got cancelled because I died) the account goes
           | into read only mode forever. As long as 1P is a going concern
           | the data we choose to put there is safe from the biggest risk
           | in our threat model.
        
             | fmajid wrote:
             | Sounds like Stockholm Syndrome to me. If you used
             | KeepassXC, you wouldn't need to worry about this at all
                | since it is entirely free.
        
               | zrail wrote:
               | I have never used KeepassXC so I looked it up. It seems
               | like it solves a very different use case than what I
               | need:
               | 
               | * shared vault with my spouse's user in our organization
               | account
               | 
               | * multiplatform with sync
               | 
               | * most importantly, available without any of the hardware
               | that I manage being in a usable state
               | 
               | KeepassXC doesn't solve for any of those as far as I can
               | tell.
        
         | betaby wrote:
         | > what will happen when I die
         | 
          | Once a year I write a couple of DVDs with photos. That's
          | kind of archaic, but it's media that's easy to understand
          | and reason about.
          | 
          | Once every year or two I print some photos at a print shop.
        
         | BrandoElFollito wrote:
          | I have exactly the same thoughts and I wrote a document to be
         | used in case I die.
         | 
         | Part one is money and where the important papers are.
         | 
         | Part twonis hiw to dulb down my home. How to remove the smart
         | switches (how to wire back the traditionnal switches). How to
         | mive self hosted key services to the cloud (bitwarden, mostly)
         | and what to pay for (domain and mail). Hiw to remove the access
         | point and go back to the isp box.
         | 
         | My wife is not supportive of the smart stuff but now that she
         | knows she can dumb it down she is fine. Honestly she does not
         | realize what strp back the lack of all this stuff will be. But
         | at least it won't be my problem anymore:)
        
         | data-ottawa wrote:
         | A good exercise when I had my will written was to create a
         | document describing exactly what happens when I die.
         | 
         | It's probably worth it for most people to go through that
         | exercise.
        
       | perelin wrote:
       | Outside of the stated requirements because it's not fully open
       | source, but https://www.cloudron.io/ made all my self-hosting
       | pains go away.
        
       | burnt-resistor wrote:
       | Set up your own WireGuard rather than Tailscale; this is too much
       | like Authy delegating AAA to a third party. (A minimal config
       | sketch follows this list.)
       | 
       | - Store your SSH public keys and host keys in LDAP.
       | 
       | - Use real Solaris ZFS that works well or stick with
       | mdraid10+XFS, and/or use Ceph. ZoL bit me by creating unmountable
       | volumes and offering zero support when their stuff borked.
       | 
       | - Application-notified, quiesced backups to some other nearline
       | box.
       | 
       | - Do not give all things internet access.
       | 
       | - Have a pair (or a few) bastion jumpboxes, preferably one of the
       | BSDs like OpenBSD. WG and SSH+Yubikey as the only ways inside,
       | both protected by SPA port knocking.
       | 
       | - Divvy up hardware with a type 1 hypervisor and run Kubernetes
       | inside guests on it.
       | 
       | - Standardize as much as possible.
       | 
       | - Use configuration and infrastructure management tools checked
       | into git. If it ain't automated, it's just a big ball of mud no
       | one knows how to recreate.
       | 
       | - Have extra infrastructure capacity for testing and failure hot
       | replacements.
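       | 
       | As promised above, a minimal wg-quick-style sketch of the server
       | side (addresses and keys are placeholders; one [Peer] block per
       | client):
       | 
       |     # /etc/wireguard/wg0.conf
       |     [Interface]
       |     Address = 10.10.0.1/24
       |     ListenPort = 51820
       |     PrivateKey = <server-private-key>
       | 
       |     [Peer]
       |     PublicKey = <client-public-key>
       |     AllowedIPs = 10.10.0.2/32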
        
         | udev4096 wrote:
         | How can one run vanilla WireGuard and still get the features
         | headscale offers? At minimum you'd need a bunch of bash scripts
         | to do the same thing, and probably worse.
        
           | burnt-resistor wrote:
           | Don't do it with bash. You can at least use Ruby or Python to
           | make an API for it, or use configuration management. The
           | WireGuard folks really didn't think about runtime (local)
           | configurability for the dev/ops UX; they were too ultra *NIX
           | purist with the single-file plain-text configuration. At the
           | least it could have a plain-text watch directory, like
           | daemontools, for dynamic reconfiguration.
        
             | udev4096 wrote:
             | Headscale already has a clean API in Go, so why reinvent the
             | wheel? For fun, sure, but for production use I'm going to
             | stick with it.
        
         | OptionOfT wrote:
         | An annoying thing about WireGuard is their outdated and buggy
         | iOS client. When you set up a DNS name with A and AAAA records,
         | it'll prefer the A address even when you're on a 464XLAT
         | network, so that connection gets proxied and will time out
         | after a while.
        
           | burnt-resistor wrote:
           | Yep. That's one of the reasons I had to go IPv4-only for a
           | while despite everything else being dual stack. "Argh!" at
           | that one vendor who can't get their act together.
        
       | lofaszvanitt wrote:
       | Why people need these overly complicated setups, and why they
       | need an access point to reach their "den" from anywhere, is
       | beyond me. People and their digital gadget delusion.
       | 
       | Security paranoia, yet here are all the details of my home lab.
       | WHY? If, god forbid, someone gets in, they could identify the
       | target in an instant...
        
       | dedge wrote:
       | IMO this is too complicated. I think products like the Synology
       | Disk Station strike a better balance between ownership of data
       | and maintenance over time. Tailscale even publishes a client for
       | Synology products.
        
         | mirdaki wrote:
         | Everyone will have different goals and preferences. For
         | instance, my dad just wanted a way to back up and remotely
         | access some files, so we got him a Synology NAS. It's great for
         | its target users, and if you're one of them, awesome!
         | 
         | I just don't like the lock-in that you get with Synology. Plus
         | I do enjoy tinkering with these things, so I wanted to put
         | together something that balances usability and complexity while
         | minimizing that lock-in.
        
         | deanc wrote:
         | Is this the same Synology whose boxes numerous people had
         | ransomwared a few years ago when they were open to the public
         | internet? Synology continues to be shit value for the tools you
         | get, and as much as I want the convenience of ready-to-go
         | software and hardware, they cannot be relied on.
        
           | hum3hum3 wrote:
           | I am thinking about replacing my Synology, BUT I have had
           | three (and now one with ECC) over the last 20 years, and they
           | have done their job faultlessly. There are stupid things now,
           | like complaining about non-Synology RAM modules.
        
             | deanc wrote:
             | The latest news I heard is DSM complaining about
             | non-Synology hard drives too.
        
       | jancsika wrote:
       | Is there a home lab setup for an isolated LAN and
       | "self-sufficient" devices?
       | 
       | I want to have a block of gunk on the LAN, and to connect devices
       | to the LAN and be able to seamlessly copy that block to them.
       | 
       | Bonus: any gunk I bring home gets added to the block.
       | 
       | The first part works with Navidrome: I just connect from my phone
       | with Amperfy over the LAN and check the box to cache the songs.
       | Now my song gunk is synced to the phone before I leave home.
       | 
       | This obviously would fit a different mindset. Author has a setup
       | optimized for maximum conceivable gunk, whereas mine would need
       | to be limited to the maximum gunk you'd want to have on the
       | smallest device. (But I do like that constraint.)
        
       | sandreas wrote:
       | Nice writeup, thank you. I already thought about having NixOS on
       | my server, but currently I prefer Proxmox. There are projects
       | combining NixOS + Proxmox, but I have not tested them yet.
       | 
       | > My main storage setup is pretty simple. It's a ZFS pool with
       | four 10TB hard drives in a RAIDZ2 data vdev with an additional
       | 256GB SSD as a cache vdev. That means two hard drives can die
       | without me losing that data. That gives me ~19TB of usable
       | storage, which I'm currently using less than 10% of. Leaving
       | plenty of room to grow.
       | 
       | I would question this when buying a new system and not having a
       | bunch of disks lying around... a RAID-Z2 with four 10TB disks
       | offers the same usable space as a RAID1 mirror of two 20TB disks.
       | Since you don't need the space NOW, you could even go RAID1 with
       | two 10TB disks and grow it by replacing them with two 20TB disks
       | as soon as you need more. In my opinion this would be more cost
       | effective, since you only need to replace 2 disks instead of 4 to
       | grow. It would take less time, and since prices per TB tend to
       | drop over time, it could also save you a ton of money. I would
       | also say that being able to lose 2 disks won't save you from
       | needing a backup somewhere...
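       | 
       | Roughly, the two-disk mirror route looks like this (pool name and
       | device paths are placeholders); with autoexpand on, the pool
       | grows once both disks have been swapped for bigger ones:
       | 
       |     zpool create -o autoexpand=on tank mirror \
       |       /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
       |     # later, to grow: zpool replace tank ata-DISK_A ata-BIGGER_A
       |     # then repeat for the second disk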
        
         | mirdaki wrote:
         | Oh yeah, I don't think the way I went about it was necessarily
         | the most cost effective. I bought half of them on sale one
         | year, didn't get around to setting things up, then bought the
         | other two a year later on another sale once I finally got my
         | server put together. I got them before I had my current plan in
         | place. At one point I thought about having more services in a
         | Kubernetes cluster or something, but dropped that idea.
         | 
         | Also agree, RAID isn't a replacement for a backup. I have all
         | my important data on my desktop and laptop with plans for a
         | dedicated backup server in the future. RAID does give you more
         | breathing room if things go wrong, and I decided that was worth
         | it
        
           | sandreas wrote:
           | I went through the same situation and noticed that modern
           | hard drives are big enough to fit RAID1 to nearly every
           | homelab use case except high-res video footage (including a
           | Blu-ray movie collection).
           | 
           | Two drives are easy to replace, easy to keep a spare for,
           | consume less power and are quieter than 4+.
           | 
           | The only advantage I see in RAID5/6 is when you need on the
           | order of 25TB of storage within 3 years.
        
         | vladvasiliu wrote:
         | As OP says, I think this is the kind of thing that needs to be
         | considered whenever the decision is made.
         | 
         | As another data point, my NAS runs 4x4TB drives. When I bought
         | them new some 2-3 years ago, all at the same time, they were
         | cheaper than buying the equivalent 2x8TB.
         | 
         | My situation was somewhat different, though, since I'm running
         | raidz1. But I did consider running a mirror, specifically in
         | order to ease upgrading the capacity. However, I didn't expect
         | to fill them /that/ quickly and I was right: yesterday it was
         | still less than 70% full.
        
           | sandreas wrote:
           | You are right, but to be max cost effective you could have
           | gone 2 * 4tb or 2 * 6tb for 18 months and then sell the
           | drives still in warranty to upgrade to more storage...
           | 
           | Estimating storage growth is hard but when you monitor it
           | regularly, its saving you much money
        
             | vladvasiliu wrote:
             | Maybe... When I was younger, I used to buy and sell
             | computer stuff. Didn't have much money, so it kinda made
             | sense. But it required me to keep up to date with prices,
             | specs, figure out what's the best value, follow markets and
             | jump on a good deal, etc. It got old after a while. There's
             | also value in getting something that just works for you, if
             | it's not absurdly expensive, and forget about it. Do
             | something else with my time.
             | 
             | I still love to tinker and set up a homelab and whatnot,
             | but I don't care that much about hardware anymore. For my
             | needs, if it's at least a 6th gen Intel and I can't hear it
             | in my living room, it's good enough. The NAS lives in my
             | parents' basement, so it can be somewhat louder (with 4
             | drives instead of two).
             | 
             | For this particular setup, my initial usage was above 4 TB,
             | so I should have gone with 2x6, which was /maybe/ cheaper
             | (don't remember), but then it would have required me to
             | deal with selling used gear and go through the motions of
             | upgrading again. Doing this every 4-5 years? Sure. Every
             | year? Hell no.
        
       | meehai wrote:
       | Mine is much more barebones:
       | 
       | - one single machine
       | 
       | - nginx as a reverse proxy
       | 
       | - many services on the same machine; some are internal, some are
       | meant to be public, but all are accessible via the web!
       | 
       | - internal ones sit behind HTTP basic auth with a humongous
       | password that I store in an external password manager (Firefox's
       | built-in one); a rough nginx sketch is at the end of this comment
       | 
       | - public ones are either fully public or behind Google OAuth
       | 
       | I coded all of them from scratch as that's the point of what I'm
       | doing with homelabbing. You want images? Browsers can read them.
       | Videos? Browsers can play them.
       | 
       | The hard part is the backend for me. The frontend is very much
       | "90s html".
        
         | mirdaki wrote:
         | Nice! I have a friend who is starting to program his
         | infrastructure/services from scratch. It's a neat way to learn
         | and make things fit well for your own needs
        
         | qmr wrote:
         | HTTP sends passwords in cleartext. Better to at least use a
         | self-signed certificate.
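         | 
         | For example, a throwaway self-signed cert for a LAN-only name
         | (the hostname is a placeholder) can be generated with:
         | 
         |     openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
         |       -keyout internal.key -out internal.crt \
         |       -subj "/CN=service.home.example"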
        
       | Aeolun wrote:
       | I've got to appreciate putting the matrix server on Coruscant if
       | nothing else :)
        
         | mirdaki wrote:
         | Thank you! The naming adds a little bit of extra fun to it.
        
       | irusensei wrote:
       | > Authelia provides authentication and authorization for services
       | in a couple of ways. For services that support OpenID Connect it
       | provides a very simple single sign on experience. If not,
       | Authelia can integrate with my reverse proxy (nginx) and require
       | the user login before the reverse proxy allows access to a
       | service.
       | 
       | Recently I found out that Gitea and Forgejo can act as an OAuth
       | provider. And since they support LDAP, you can, for example,
       | deploy a Samba AD and set it up as an authentication source for
       | Gitea/Forgejo. If you enable the OAuth feature you can connect
       | stuff like Grafana and log in with your Samba AD credentials.
       | 
       | To me this is more convenient than running a dedicated auth
       | service, considering Forgejo can also provide git, a wiki, a
       | Docker registry (also authenticated) and other functions. It's
       | such an underrated piece of software and uses so few resources.
        
       | denkmoon wrote:
       | I woke up today with a plan of making my DNS at a separate site
       | work properly with ipv6, over my wireguard. I use ULAs for the
       | point to point wireguard link, and GUAs don't like routing to
       | ULAs. I figured the choice was between routing my two sites GUAs
       | over the wireguard when talking to each other, or deploy ULAs in
       | my networks. 4hrs later I had everything set up with ULAs. Had
       | lunch. Decided that was awful. 3hrs after that I've got my GUAs
       | going over the wireguard.
       | 
       | Homelabbing is fun :')
        
         | mirdaki wrote:
         | Yes it is, rock on!
        
       | noncoml wrote:
       | What's the power consumption?
        
         | mirdaki wrote:
         | That is a great question I don't actually know the answer to. I
         | need to grab something to track it
        
       | mmcnl wrote:
       | I too use LLDAP and Authelia. I use Caddy (not Traefik) as a
       | reverse proxy to protect my services using 2FA SSO. It's very
       | easy to use, and I can access all my services anywhere in the
       | world without bothering with a VPN.
        
       | dakiol wrote:
       | I like it. Why Flame, though? It's built using Node, React,
       | Redux... meaning you are bringing dozens (if not hundreds) of
       | third-party dependencies into your secure kingdom. I don't think
       | it's worth it for the start page (it could easily be a single
       | HTML page with hardcoded links).
        
         | mirdaki wrote:
         | It's entirely because I've used it before. I just wanted
         | something familiar to solve a problem quickly. I also think it
         | looks nice. I'm not too worried about the security
         | implications, since it is behind Tailscale and Authelia. I'm
         | not committed to it, and do want to explore other options in
         | the future
        
       | qiine wrote:
       | Very interesting write-up!
       | 
       | At this rate, if I keep seeing good articles about NixOS, I might
       | actually switch for real, haha!
        
       | fariszr wrote:
       | Great blog post, but unfortunately, from my experience with my
       | kinda tech-friendly family, I can tell you that not exposing
       | services publicly is an absolute UX killer.
       | 
       | Nobody uses the local Nextcloud because they just don't think
       | they can rely on it; it doesn't always work from their
       | perspective and is too finicky to use, because it needs an
       | external app (Tailscale).
       | 
       | This can only be fixed when the app itself can trigger a VPN
       | connection, and I don't think that is going to happen any time
       | soon.
        
         | mirdaki wrote:
         | I do have to sit down and walk folks through setting up
         | Tailscale, Nextcloud, etc on their devices. So far though, I
         | haven't had any complaints once that is done. Nextcloud just
         | syncs in the background and they can navigate to sites like
         | normal. But my family is probably more tech literate than most,
         | so that helps
        
           | fariszr wrote:
           | Yeah, but that means they have to be aware of the need to
           | activate Tailscale on their phones manually every time they
           | want to use your apps.
           | 
           | On PC I agree, you can just leave it running; on mobile,
           | though, it chews through the battery like it's nothing.
        
         | zbentley wrote:
         | An easy solve for this is to buy public domains for the sites
         | you want to use and run a static website on them that says
         | "turn on Tailscale to access this site; set that up here (link
         | to download a client preconfigured for my tailnet, invite only
         | of course)". Then use Tailscale DNS overrides to set up CNAMEs
         | for that public domain's subdomains which point to the
         | tailnet-internal service domains.
        
           | fariszr wrote:
           | This would work if they didn't use the apps, which in the
           | case of nextcloud they do.
        
       | nicomt wrote:
       | It's not open-source or self-hosted, but putting it out there:
       | Cloudflare Zero Trust is amazing and free. In my setup, I have a
       | cloudflared tunnel configured on my homelab machine and I expose
       | individual services without a VPN or opening up my firewall. You
       | can also set up authentication with SSO, and it happens before
       | traffic reaches the backend application, which makes it more
       | secure. This is easy for family and friends to use, because they
       | don't need to set up anything on their side; they just go to the
       | URL and log in.
       | https://developers.cloudflare.com/cloudflare-one/connections...
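       | 
       | As a rough sketch (the tunnel ID, hostnames and ports are
       | placeholders), the per-service routing lives in the cloudflared
       | config:
       | 
       |     # ~/.cloudflared/config.yml
       |     tunnel: <TUNNEL-UUID>
       |     credentials-file: /home/me/.cloudflared/<TUNNEL-UUID>.json
       |     ingress:
       |       - hostname: photos.example.com
       |         service: http://localhost:2283
       |       - hostname: files.example.com
       |         service: http://localhost:8080
       |       - service: http_status:404   # catch-all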
        
         | cromka wrote:
         | I seriously don't understand why people would choose this over
         | _not_ exposing anything at all except for a WireGuard port. I
         | have my client automatically connect to my home LAN when I'm
         | not on WiFi, and I get access to all my self-hosted services
         | without risking anything. You rely on a third-party solution
         | whose data may or may not be made available to government
         | agencies. You also need to trust that Cloudflare doesn't make
         | mistakes, either.
         | 
         | Also, how do you configure Cloudflare for a road-warrior setup?
         | How do you track ever-changing dynamic IPs? As mentioned, all
         | _I_ need is a WireGuard client and I'm golden.
        
           | nicomt wrote:
           | > You rely on a third-party solution whose data may or may not
           | be made available to government agencies.
           | 
           | That's a fair point, but for my use case I feel comfortable
           | enough with Cloudflare given the trade-offs.
           | 
           | > You also need to trust that Cloudflare doesn't make
           | mistakes, either.
           | 
           | I think the chances of Cloudflare making a mistake are much
           | lower than of me or any other individual developer making one.
           | 
           | > Cloudflare for a road-warrior setup? How do you track
           | ever-changing dynamic IPs?
           | 
           | I think you need to read the docs. All of that works without
           | any extra config when using tunnels.
        
         | javier2 wrote:
         | Cloudflare Zero Trust is very good, but I thought you need to
         | have Cloudflare as a man-in-the-middle on your domain for this
         | authentication flow to work? I.e. the TLS certs need to live
         | with Cloudflare.
        
           | nicomt wrote:
           | Yeah, that is how I use it. You can technically host any TCP
           | service, including end-to-end encrypted data, through
           | Cloudflare tunnels, but you need the cloudflared app installed
           | on the client side to access it (SSO still works even in this
           | case). I find having to manage certificates and install
           | cloudflared everywhere too much of a hassle. I understand that
           | proxying through Cloudflare gives them a lot of visibility and
           | control, but I find that risk acceptable given my application.
        
       | nitnelave wrote:
       | LLDAP author here. I'm happy to see LLDAP mentioned, even if only
       | in passing: the goal of the project was to have a simple LDAP
       | server that is easy to install and manage for self-hosters, with
       | no knowledge of LDAP required. Cheers and congrats on your setup!
        
         | mirdaki wrote:
         | Thank you for the work and the kind words! I've had a great
         | experience with LLDAP. Really appreciate it
        
       | threemux wrote:
       | I don't have a very complex setup but I've been super happy with
       | gokrazy for my rpis:
       | 
       | https://gokrazy.org/
       | 
       | OS upgrades are easy now and it's declarative but I don't have to
       | learn Nix
        
         | hum3hum3 wrote:
         | Me too. It works really well now that I have a silent version
         | with SSD and passive heat sink. This avoids my son turning it
         | off because the fan noise annoys him. I am thinking about
         | adding kubernetes for failure resilience but that is a work in
         | progress.
         | 
         | I am happy to start digging into Authelia.
         | 
         | Are you using the gokrazy router as well?
        
           | threemux wrote:
           | I'm not using the router, but I do like reading about
           | Stapelberg's quest for ridiculous home Internet speeds! I
           | make do with standard gigabit fiber haha
        
       | piyuv wrote:
       | Excellent write-up. Can I ask why you chose headscale instead of
       | plain WireGuard?
        
         | mirdaki wrote:
         | I found the Tailscale client experience quite nice, and
         | headscale had built-in OIDC support (so, easy auth for my
         | users).
         | 
         | If I were starting this setup now, I might have used Pangolin
         | instead, which also provides a nice management interface on top
         | of WireGuard: https://github.com/fosrl/pangolin
        
       | evnix wrote:
       | I wish I had the time to do any of this. I could probably do it
       | in a weekend, but maintaining it and upgrading it to keep up with
       | new releases is something I wouldn't have time for.
       | 
       | I end up just paying a cloud provider and forgetting about it.
       | 
       | Anyone else in the same boat? What has been your approach?
        
         | SomeoneOnTheWeb wrote:
         | Honestly, I self-host about a dozen services and upgrades take
         | me less than a minute per month usually.
         | 
         | I simply have one folder per service, each folder contains a
         | docker-compose stack and a storage directory. Updating is
         | simply a matter of running `docker compose pull` and `docker
         | compose up -d`. Nothing more.
         | 
         | Breaking updates that require tweaking the config are very
         | uncommon, and even when they happen it's only a few minutes of
         | checking the updated config and applying it.
         | 
         | IMO this is the simplest way to self-host. No VM, no complex
         | software install, nothing more than a simple Docker Compose
         | setup that's fully automated.
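         | 
         | The whole monthly pass is basically a loop over those folders
         | (the base path is a placeholder):
         | 
         |     for d in /srv/services/*/; do
         |       (cd "$d" && docker compose pull && docker compose up -d)
         |     done
         |     docker image prune -f  # optional: drop the superseded images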
        
           | doubled112 wrote:
           | That sounds similar to my setup, but each folder is a btrfs
           | subvolume and my update script takes a snapshot before
           | updating. I keep the Docker compose file together with the
           | volumes in that subvolume.
           | 
           | If something breaks I can decide to figure out why, or
           | revert.
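           | 
           | A minimal sketch of that pattern, not the actual script (the
           | paths are placeholders):
           | 
           |     btrfs subvolume snapshot -r /srv/services/app \
           |       /srv/snapshots/app-$(date +%F)
           |     cd /srv/services/app && docker compose pull && docker compose up -d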
        
         | beala wrote:
         | It's usually not a single weekend. If you're like me, it starts
         | out with thinking it'd be nice to install Plex on an old gaming
         | PC. A year later, it has organically grown into a Rube Goldberg
         | machine of Proxmox and home automation. Which I guess just
         | reinforces your point.
         | 
         | Joking aside, a minimal setup just using docker compose is
         | pretty manageable. Self hosting many projects is as easy as
         | 'docker compose up -d', and upgrades are straightforward as
         | others have pointed out.
        
         | mirdaki wrote:
         | With previous setups, I was certainly guilty of not upgrading
         | and doing the maintenance needed. That's one reason why I like
         | using NixOS and ZFS. Both provide really easy rollback options.
         | So all I need to do is run an update and rebuild. If things
         | work, no more for me to do. If things don't, I can try
         | debugging or just revert to the previous release until I have
         | time to.
         | 
         | But also I think using a cloud provider is fine if you're happy
         | with the experience. It is a time sink to get things set up,
         | and it's not zero maintenance afterwards. It's reasonable to
         | weigh those costs.
        
       | codethief wrote:
       | > Here is a diagram of where I've ended up:
       | 
       | In case the author is around: On mobile (Chrome on Android) the
       | screenshot is not readable at all and there is also no way to
       | open an enlarged version, let alone zoom into the page.
        
         | aembleton wrote:
         | Same on Firefox. Here's the diagram as a zoomable image
         | https://codecaptured.com/blog/images/ultimate-self-hosting/d...
        
         | mirdaki wrote:
         | Oh thanks for pointing it out! I've updated it so clicking on
         | the diagram opens it up directly
        
       | djhworld wrote:
       | I've been tempted to use NixOS for my self hosted setup but I
       | just can't bring myself to do it.
       | 
       | My setup is quite simple, it's just a few VMs with one docker
       | compose file for each. I have an ansible playbook that copies the
       | docker compose files across, and that's it. There's really
       | nothing more to it than that, and maintenance is just upgrading
       | the OS (Fedora Server) once the version reaches EOL. I tend to
       | stay one version behind the release cycle, so I upgrade whenever
       | that gets
       | bumped.
       | 
       | I do use nix-darwin on my macs so I do _see_ the value of using a
       | nix configuration, but I find it difficult to tell whether the
       | effort of porting my setup to Nix is worth it in the long run;
       | configuration files don't get written in a short time. Maybe LLMs
       | could speed this up, but I just don't have it in me right now to
       | make that leap.
        
         | Havoc wrote:
         | Explored it a bit, but found the incremental gain to be not
         | massive if you're already using IaC of some sort
        
         | entropie wrote:
         | > I've been tempted to use NixOS for my self hosted setup but I
         | just can't bring myself to do it.
         | 
         | I recently tried nixos and after spending a week trying it out,
         | I switched my home network and 2 production servers to nixos.
         | It has been running as expected for 3-4 months now and I LOVE
         | it. Migrating the servers was way easier than the workstations.
         | My home server was set up in a few hours.
         | 
         | I also recently bought a jetson orin nano to play and learn on
         | and I set up nixos with jetpack-nixos there too. I know with
         | gentoo this would have been a (much) more painful process.
         | 
         | I have used gentoo for over 20 years and have always felt very
         | much at home. What annoyed me was that the compile times on
         | older computers were simply unbearable. Compiling GHC on my
         | 2019 dell xps just takes 6 hours or something like that.
        
         | mirdaki wrote:
         | The big difference for me was NixOS provides really simple
         | rollbacks if something goes wrong, whereas with Ansible and
         | compose files that's possible, but you have to do it yourself.
         | 
         | But also, if your setup is working for you, I think that's
         | great! It sounds like you have a good system in place.
        
       | beala wrote:
       | A pain point you mention is that everyone must run the tailscale
       | client. Have you considered exposing everything on the public
       | internet using something like Cloudflare Tunnels? You can have
       | cloudflare handle auth on their edge network, which mitigates the
       | worry about having to deal with 0-days on a self hosted auth
       | solution. You have a pretty sophisticated directory setup tho so
       | I'm not sure how well this would fit in with the existing infra.
        
         | mirdaki wrote:
         | It is something I considered. Ultimately I didn't want to
         | depend on Cloudflare (or any other provider) for something as
         | core to my setup as my remote access.
         | 
         | But it's a totally valid option, just not one that fit with my
         | preferences
        
       | master_crab wrote:
       | Why bother with SSO? If your family and closest friends use
       | something like a wireguard client (iOS for example has a very
       | good one that takes only a minute to configure permanently), the
       | users simply switch a toggle and they are now on your private
       | network and don't need to SSO to anything (provided you have left
       | everything open).
       | 
       | For a small home network the pros of that approach vastly exceed
       | the cons.
        
         | haswell wrote:
         | I self host about 20 separate apps. I'm in the middle of an SSO
         | implementation project because I do not want to continue
         | managing credentials for 20 separate apps.
         | 
         | I've considered opening some of these apps to family members,
         | and having one place to deal with any auth issues is a high
         | priority for me.
         | 
         | I can't agree with your conclusion.
        
           | master_crab wrote:
           | That's ok. But step back further: do you need fine-grained
           | permissions for people on the majority of those apps? If you
           | don't, then SSO is more of a pain than it's worth. Simply
           | control the network access and leave the apps alone.
        
             | haswell wrote:
             | How are you defining fine grained permissions?
             | 
             | All I care about is having separate accounts for each
             | person who will log in, even if I'm the only person.
             | 
             | I again can't agree with your conclusion that this is more
             | pain than it's worth. But it's possible we just have
             | different priorities.
        
         | mirdaki wrote:
         | The services we use, like Nextcloud or Mealie, are designed for
         | folks to have their own user accounts. SSO means they can use
         | the same login across all of them without me having to manage
         | that for them (and also avoids me having to know their
         | passwords). It does complicate the setup, but not the
         | operation, and that makes it more likely folks will use the
         | services.
        
       | stephenlf wrote:
       | Great read. Thanks for sharing.
        
       | grep_name wrote:
       | This is kinda similar to something I'm trying to set up. I have
       | most of my self-hosted infrastructure running in Docker
       | containers, but I want to put some stuff on a NixOS EC2 instance.
       | Mostly services I want to never go down or be affected by my
       | local network (uptime kuma) and chat stuff (irc bouncer, conduit,
       | soju, etc etc).
       | 
       | I use nixOS on my laptop but don't make many nix projects, and
       | TBH I have no idea how to test this setup locally before
       | deploying it. I have some Nix stuff set up that spins up a VM and
       | exposes the ports on localhost, but it's brittle and rapidly
       | spaghettifying. Do you have any tips for testing this stuff as
       | part of a local project?
        
         | mirdaki wrote:
         | I've done two kinds of testing
         | 
         | On my NixOS laptop I can set up services I'm interested in
         | trying, but just run them locally. So I don't set up things
         | like SSL (you can; it sometimes just makes getting a new SSL
         | cert for that same domain take some time). I just update my
         | /etc/hosts to point at the local IP and can give that a go.
         | 
         | For trying out the more complicated setup parts, like SSL,
         | Tailscale, etc., I created a NixOS VM that I set up the same
         | way I wanted for my "production" use case. Once I have the
         | config file the way I want it, it's as simple as moving it to
         | my non-test VM (barring the previously mentioned SSL issues).
         | And I only tested one part at a time, adding them together as I
         | went.
         | 
         | But also, one of the great things about NixOS is it's really
         | easy to incrementally try things and roll back. Once I got the
         | skeleton of the setup working, I've mostly done my testing on
         | my "production" server without issue
        
       | weitendorf wrote:
       | I have been getting into this too. I caution anybody with self-
       | hosting/tinkering tendencies against starting a tech company
       | because it just makes it so much easier to justify this stuff...
       | 
       | Eventually serving a regular old container doesn't cut it anymore
       | and you find yourself needing to pay these weird newspapers
       | nobody reads to publish your business' alias because it's a
       | requirement for a legal DBA, which ARIN needs before it will let
       | you get your own IPv6 block, which you need to truly own your and
       | your customers' IPs (and it's not worth becoming an AS without
       | it), but then you can actually move towards physically becoming
       | your own ISP and then...
       | 
       | The ingress problem people solve with tailscale is one of the
       | hardest. I'm curious to see if it's possible to implement
       | STUN/TURN [0-1] with a generally good mechanism for exposing the
       | server to the Internet by caching all static files and blocking
       | dynamic access to the backend with a loginwall, which
       | authenticates allowed users with email "magic links" ->
       | nonrenewable access tokens. In theory it should not be
       | excessively difficult, expensive, or risky to do this.
       | 
       | It's just relevant enough to what we're doing with remote
       | development environments for me to justify another rabbit hole
       | 
       | [0]
       | https://en.wikipedia.org/wiki/Traversal_Using_Relays_around_...
       | 
       | [1] https://en.wikipedia.org/wiki/STUN
        
         | zrail wrote:
         | I have ingress set up with Fly.io.
         | 
         | Simple caching nginx config on the remote end with a Fly
         | Wireguard peer set up as an extra container in the appropriate
         | ingress pod.
         | 
         | It's not free but it's the least expensive way I can find to
         | get anycast ingress and not expose any ports to the internet
         | from my homelab.
        
       | sylens wrote:
       | Great write-up. I have been tinkering with Immich over the last
       | few months and go back and forth on whether I want to just limit it
       | to Tailscale for use away from home or if I want to go through
       | the trouble of setting up a reverse proxy on a VPS. I think my
       | biggest concern is finding a relatively user-friendly
       | monitoring/security solution to alert me if anybody is trying
       | some sort of attack against the VPS
        
         | clueless wrote:
         | That's a great concern, as I'm in the same boat. What have you
         | found with regards to a monitoring/security solution?
        
       | xyst wrote:
       | It's a shame he doesn't self-host an internal mail server, at
       | least with restricted outbound SMTP.
       | 
       | Something like this is very easy to set up with projects such as
       | Stalwart, which also offers CalDAV and CardDAV (think easy
       | synchronization of calendar and contacts without relying on the
       | "cloud").
       | 
       | He already has Tailscale + headscale; adding an internal-only
       | mail/collaboration server would be a win.
        
         | mirdaki wrote:
         | Hey, I ruled out an externally facing mail server, since I've
         | heard many people have issues with other providers (Gmail,
         | Outlook, etc.) randomly blocking email. I didn't feel I could
         | rely on it.
         | 
         | Having an internal only mail server for notifications is an
         | interesting idea. I've been using ntfy and Matrix to achieve
         | something like that, but not all services support those
         | notification methods. I'll keep that in mind!
        
       | ctkhn wrote:
       | Curious what the driver for nixos and packages over docker was.
       | Docker was the huge step up for me in making the homelab easy to
       | run, update, and recover when things failed. It also made
       | managing service endpoints and ports remotely easier than when
       | they all lived on the operating system. Wish this was delved into
       | a little more in the post.
        
         | mirdaki wrote:
         | I can touch on it more. Docker and compose files are great for
         | getting things going, contained, and keeping everything
         | declarative
         | 
         | But I found the more services I used with Docker, the more time
         | it took to update. I didn't want to just update to latest; I
         | wanted to update to a specific version, for better rollback.
         | That meant manually checking and updating every single service,
         | bringing each compose file down, and then back up. It's not
         | entirely
         | unmanageable, but it became enough friction I wasn't updating
         | things consistently. And yes, I could have automated some of
         | that, but never got around to it
         | 
         | NixOS, in addition to the things I mention in the post, is just
         | a two-step process to update everything (`nix flake update` and
         | `nixos-rebuild`). That makes updating my OS and every
         | package/service super easy. And provides built in rollback if
         | it fails. Plus I can configure things like my firewall and
         | other security things in NixOS with the same config I do
         | everything else
         | 
         | Also, Nix packages/services provide a lot of the
         | "containerization" benefits. It's reproducible. It doesn't have
         | dependency problems (see
         | https://nixos.org/guides/how-nix-works/ for more). And most
         | services
         | use separate users with distinct permissions, giving pretty
         | good security.
         | 
         | It's not that Docker can't do those things. It's that Nix does
         | those things in a way that works really well with how I think.
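         | 
         | Concretely, the whole update (and the escape hatch) is roughly
         | this (the host name is a placeholder):
         | 
         |     nix flake update                       # bump every pinned input
         |     sudo nixos-rebuild switch --flake .#server
         |     # if something misbehaves:
         |     sudo nixos-rebuild switch --rollback   # or pick an older generation at boot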
        
       | sn0n wrote:
       | > general approach, lists nixos first
       | 
       | *Leaves page* can't do it...
        
       ___________________________________________________________________
       (page generated 2025-07-19 23:01 UTC)