[HN Gopher] Ditching PaaS: Why I Went Back to Self-Hosting
       ___________________________________________________________________
        
       Ditching PaaS: Why I Went Back to Self-Hosting
        
       Author : shubhamjain
       Score  : 55 points
       Date   : 2024-01-18 18:20 UTC (4 hours ago)
        
 (HTM) web link (shubhamjain.co)
 (TXT) w3m dump (shubhamjain.co)
        
       | zelon88 wrote:
       | > PaaS is always going to offer less for the same price, but what
        | surprised me was how stark the difference can be. On Render.com,
       | a 4GB RAM and 2vCPU service costs $85/mo. The same spec costs
       | $14/mo on Scaleway (ARM).
       | 
        | I notice a trend: the people who scoff at hardware specs are
        | usually the same ones standing in line for 2 cores and 4GB of RAM
        | for $50+/month. They'll laugh when you suggest utilizing an
        | obsolete $50 computer with a decade-old CPU (that's just sitting
        | in a closet), but they're more than willing to spend $50/month on
        | similar-performing hardware from a cloud vendor.
        
         | anotherhue wrote:
         | Cue the downvotes but I imagine you could sort those people
         | into two buckets:
         | 
         | 1. Those who love their Macs a little too much
         | 
         | 2. Those who routinely build a PC.
         | 
          | i.e. whether you're familiar with the hardware market.
        
           | rapind wrote:
           | 3. Those who build their own hackintosh because they love OSX
           | too much.
        
           | jamil7 wrote:
           | I think people know they're paying a premium on hardware. The
            | point of PaaS isn't to get the best deal on specs; it's for
           | small teams to iterate quickly and focus on product. If
           | they're successful they'll outgrow it and hire people to
           | manage infra.
        
           | zelon88 wrote:
           | I'm not gonna downvote you. But to your point about the
            | hardware market, a mid-range server CPU of today will be
            | about as fast as an entry-model CPU next year. In terms of
            | actual performance, your old server can probably keep up
            | with a newer but slightly lower-end server just fine.
            | Obsolete !==
           | useless.
           | 
            | I'm the kind of person who would rather take the 3-year-old
            | server and recommission it as a lower-priority service than
            | just do a 1:1 replacement. "Old" computers aren't as useless
            | as they used to be. Computing power has advanced to the point
            | where computation is arbitrary. For all intents and purposes,
            | in most sectors, you can scale your compute capacity as high
            | as your budget allows and you won't hit any performance
            | barriers ever. There will always be more compute. It is a
            | commodity now.
           | 
            | My stance is this: sure, the new server is faster than the old
           | one, but you know what's even faster? Dividing the existing
           | workload between both of them.
        
         | system2 wrote:
          | It is not the hardware, actually. It is network reliability
          | that makes the cloud better.
        
           | GiorgioG wrote:
            | Colocation exists. It's a solved problem. It's just not in
            | vogue right now.
        
             | reactordev wrote:
              | and for $400/mo you can put your $50 old machine in someone
              | else's closet.
        
               | GiorgioG wrote:
               | And with my $50 old machine I can host $5,000/month worth
               | of "cloud" services.
        
               | tiffanyh wrote:
               | Hetzner & OVH gets you the best of both worlds.
               | 
                | Unbelievable (new) specs, hosted, and at price points
               | tough to beat doing it yourself.
               | 
               | E.g.
               | 
                | Hetzner
                |   AMD RYZEN(tm) 9 7950X3D (current gen Zen 4)
                |   16 physical cores (4.2 GHz)
                |   128GB DDR5 ECC RAM
                |   4TB NVMe
                |   completely hosted
               | 
               | For only ~$100/mo
               | 
               | https://www.hetzner.com/news/new-amd-ryzen-7950-server
        
               | jmaker wrote:
               | Both are quite happy to terminate users' accounts, but on
               | paper it's really compelling indeed.
        
               | tiffanyh wrote:
                | Have you seen them terminate accounts in unreasonable
                | situations?
                | 
                | They are normally trying to keep their IPs and network
                | clean of scammers abusing their resources, which
                | inevitably hurts all customers.
        
               | ayi wrote:
                | My 2024 goal was to create a website for myself and host
                | a copy of my newsletter there. I heard a lot of good
                | things about Hetzner and chose them. Right after signing
                | up, even though I added a credit card with 3DS, they
                | immediately banned me. I wrote to support, and they
                | forwarded me to a page. I uploaded my passport with my
                | face. But my account is still banned. And the worst thing
                | is I'm now a month late on my 2024 goal.
        
               | tiffanyh wrote:
                | > I wrote to support, and they forwarded me to a page. I
                | uploaded my passport with my face. But my account is
                | still banned.
               | 
               | Is there still more needed to complete their KYC process?
        
               | lstamour wrote:
               | Try racknerd... they have various Black Friday/new year
               | specials that rarely/never expire and the price can be as
               | low as $25/year for a decent 2-cpu VPS. Not the same
               | hardware as OVH, storage often limited, but still an
               | interesting price point and service. Biggest limitations:
               | no 2FA on the control panel and mounting an ISO requires
               | talking to customer service... oh and no automated
               | backups but they might have recently added snapshots.
        
               | jiripospisil wrote:
               | > We've mounted the AX102 with 128 GB of DDR5 ECC RAM. In
               | addition to the classic ECC (error correction code),
               | which protects data both on the memory module and during
               | transfer, the new DDR5 memory generation uses the on-die
               | ECC method. This method carries out independent error
               | correction directly on the DRAM chip, giving you greatly
               | optimized reliability and data integrity.
               | 
               | Isn't on-die ECC necessary just because DDR5 is less
               | reliable than previous generations, especially at higher
                | frequencies and density? Makes me wonder whether they
                | use actual ECC DDR5 memory.
        
               | jmaker wrote:
               | With comparable reliability, resilience, durability, and
                | latency? I doubt it's reasonable to ask for scalability
                | up or out, but correct me if I'm underestimating its
                | potential.
        
               | datadrivenangel wrote:
               | If we need to scale up, we drive to the nearby computer
               | store, give them several thousand dollars for a bigger
               | box, go home and install it!
        
               | GGO wrote:
                | I have 14 cores/128GB RAM, 2x900GB NVMe + 60TB spinning
                | rust with 33TB traffic in someone else's closet and am
                | paying only $60/mo ¯\_(ツ)_/¯. Prices vary a lot when it
                | comes to colocation.
        
               | jmaker wrote:
                | That $60 is for colocation only, right? How are you
                | amortizing the machine if that's the case? What SLAs do
                | you have with the closet owner?
        
               | GGO wrote:
                | Yep - $60 for colocation only. 99.9% uptime SLA
                | (power/network). I've had 5-minute blips 3 times in the
                | last 12 months. The hardware itself was bought used: $400
                | for the whole server (not including HDD). The HDD was
                | around $400 too. So over the last 2 years: (24 * $60 +
                | $800) / 24 = ~$94/month, and going down every month.
        
               | jmaker wrote:
               | That's totally efficient, awesome, thanks for sharing.
        
               | reactordev wrote:
               | Yup, I did this (not for $60!!) almost a decade ago with
               | 4 savemyserver HP 1U's and a switch. Each server cost me
               | about $200 without HDD. Another $200 in ECC memory. I
               | used it for a long time until colocation costs went up
               | because they were bought by L3Harris.
               | 
                | I run a couple of Ryzen mini PCs on Verizon Fios 1Gbps at
                | home now for my hosting needs, but I would jump at $60
                | colo in a heartbeat if I got even an unmetered 100Mbps
                | connection.
        
               | FpUser wrote:
                | I am renting a 16-core AMD with 128GB RAM and 2x2TB SSD
                | for less than $100/month in Canadian pesos on Hetzner.
               | 
               | I also pay for 1Gbps symmetric fiber with dedicated IP
               | going into my house ($140/month at the moment) so I also
               | host some stuff right from my own place.
        
               | schmookeeg wrote:
               | This is a bit exaggerated. I've had a 1U or 2U server box
               | colo'ed for the last 20+ years, and it's been
               | consistently in the $80-$100 range. Double my total costs
               | if you add that I'm replacing a $5K server box every 5-7
               | years or so.
               | 
               | The duct-tape-maintenance vs your-time criticisms are
               | 100% dead-on though.
        
             | jmaker wrote:
             | Depends on the industry I think
        
           | imglorp wrote:
           | Not just that, it's also all the legwork of managed hosting:
           | all that OS configuration, patching, redundancy, testing,
           | monitoring etc etc is someone else's job, and they are
            | accountable. Plus other managed services like LB, DB, auth,
            | etc. that you might not want to duct-tape together yourself
            | and manage every day. That cloud bill could be cheaper than
            | your time is
           | worth.
           | 
           | Plus flexibility of scale up / scale down as needed for load,
           | transient testing, etc.
           | 
           | Cloud is definitely not for everyone but it makes sense for
           | some.
        
             | rapind wrote:
             | > that OS configuration, patching, redundancy, testing,
             | monitoring etc.
             | 
             | For larger projects I guess, but I have a few VPSes that
             | have been running sites for a decade or more that require
             | almost no maintenance (occasional apt-get upgrade). In fact
             | it's the frameworks / languages these projects were built
             | in that cause most of the work (for which deployment target
             | makes no difference).
        
             | jmaker wrote:
             | With IaC it's become feasible. You kinda iteratively build
             | it up. With something more predictable like Nix it really
              | gets close to shops run by a single person. And once you
              | start to get managed infrastructure, the next question that
              | pops up is often which services you want to delegate, and
              | some folks wind up in serverless land--locked in after a
              | while, which might be totally fine.
        
           | wharvle wrote:
           | I've seen real-world connectivity, latency, and bandwidth
           | problems crop up with enough frequency to be _a real problem_
            | on a major budget "cloud" provider. It looked like they'd
           | badly cheaped out on their peering agreements.
           | 
           | Move the exact same workload to the industry's default but
           | much more expensive choice, and the problems vanish entirely.
           | 
           | This is, unfortunately, one of those things that's really
           | hard to judge about a hosting provider unless you have direct
           | experience using them "at scale", as they say. Nobody puts
           | that stuff on a sales page spec sheet or comparison grid.
           | 
           | Could I save money by hosting on real hardware at some
           | popular, cheapish server-leasing place? Or colocating at one?
           | Or hosting out of my own basement!? Maybe. Would it cause
           | some users to consistently see dial-up speeds and dropped
           | connections on gigabit Internet service because of either
           | some quirk of routing, or bad peering agreements? Who knows!
        
             | jmaker wrote:
             | I've observed those issues on the industry default CSPs.
             | Terrible latencies for some services. Some of those budget
             | CSPs simply resell AWS and Azure and GCP and OCP, while
              | others run on commodity VPS infrastructure. And honestly,
              | there's still too much ops work in running workloads that
              | aren't on-premise; it's wasted time, even if it's totally
              | entertaining sometimes.
        
       | throwawaaarrgh wrote:
       | You saved $35 a month but spent 3x as much time maintaining and
       | tweaking your self hosting. I guess we know how much your time is
       | worth!
        
         | starttoaster wrote:
         | They probably gained a lot of knowledge along the way, and if
         | that knowledge happens to come in handy in your career, can you
         | as easily put a price tag on that? I run services at home and
         | spin up infra on AWS, but I'd say I learn the most from what I
         | have at home.
         | 
         | That said, I barely need to maintain most of my home
         | infrastructure. I have CI/CD scripts do the bulk of
         | maintainership for me these days.
        
           | jmaker wrote:
            | That's knowledge forgotten in a week, unless you're an ops-
            | side professional. You kinda set it up once and hope to
            | forget about it so you can focus on your application layer.
        
             | PH95VuimJjqBqy wrote:
             | that hasn't been my experience.
        
             | starttoaster wrote:
             | It's much, much easier to recall knowledge that you've
             | gained once, than knowledge that you gained 0 times. It has
             | also not been my experience that I "hope to forget" that
             | knowledge either. Perhaps you go about learning a different
             | way than I do, though.
             | 
             | After all, can you really write a good application without
             | knowledge of how it will be deployed, and the challenges
             | users face deploying things on specific platforms (eg.
             | Windows, baremetal linux, kubernetes, docker, etc)? I would
             | argue that you'd often write naive applications that gimp
             | itself in unexpected ways depending on how the user intends
             | to use it, without that knowledge. Depending on what types
             | of applications you tend to write, this might be less of a
             | valuable point to you. For example a static site web dev
             | probably wouldn't be as interested in the infrastructure,
             | they just need a server that can bind on ports 80/443. But
             | I see a lot of incredibly naive applications written by
             | potentially naive software developers out there.
        
         | ozim wrote:
          | If he knew what he was doing all along, it was most likely
          | still cheaper.
        
         | stuartaxelowen wrote:
         | My preferred way of looking at this is "your project costs you
         | this much to keep being alive". An upfront cost means
         | maintenance costs are essentially zero, resulting in your
         | projects needing to hit a much lower bar to stay alive.
        
         | schmookeeg wrote:
         | It's zen-like "motorcycle maintenance" for me, and keeps me
         | abreast of what other Ops folks deal with in this space.
         | 
         | It has probably contributed zero directly to my software
          | engineering career; however, there are moments where a deep
         | understanding of the quicksand under my app foundations can
         | help, and shortcut strange debugging sessions and the like.
         | 
         | The time I spend is recreational for me. I can see where
         | others, particularly Ops professionals, are horrified at the
         | idea of doing lib and OS maintenance/updating for fun. It's a
         | very yuckable yum :D
        
       | ozim wrote:
        | IaaS also works better for me, but I would not call it self-
        | hosting. A VPS is also the cloud.
       | 
        | I get triggered by it because I get people in my company coming
        | to me saying we should switch to the cloud - but we are already
        | in the cloud, it's just IaaS.
        
       | davedx wrote:
       | I've recently gone the other way. I was self-hosting everything
       | on a DigitalOcean VPS, but keeping the OS maintained, and indeed
       | the headaches of configuring Nginx, letsencrypt, postgres and so
        | on, became more annoying, not less, each time I wanted to make
        | a new app, because every app was a bit different.
       | 
       | I'm now running my primary project on Fly.io and I'm pretty happy
       | with it overall.
       | 
       | "No matter how small is the service, no matter how rarely you
       | need it, you'd need to pay the full price."
       | 
       | On Fly.io I'm running an app server, a db server, and another app
       | server with a 1GB volume for a Discord bot. Everything fits in
       | the free plan.
       | 
       | The thing about PaaS is you really have to do your research. It's
       | not like VPS providers where all you really need to look at is
       | how much compute and storage you get for a monthly price. PaaS
       | have a lot more subtleties and yes, it happens that the startups
       | behind them sometimes blow up or get bought out by huge public
       | enterprise companies. VPS providers tend to be lower risk.
       | 
       | The tradeoff is worth it for me, but it really depends on your
       | skillset, your priorities and so on. I _can_ maintain a VPS, but
       | I have very limited time, so I want to focus every spare hour I
       | have on developing my product.
        
         | tronikel wrote:
         | Have you tried dokku?
        
           | wharvle wrote:
           | I've been amazed at the resilience and convenience of a
           | handful of shell scripts calling "docker" commands on Debian
           | for my server at home.
           | 
           | - Figured I'd need to screw with Systemd at some point. Nope,
           | whatever Docker's doing restarts my services on a system
           | restart, and auto-restarts if they break. I haven't had to
           | lift a finger for any of that. My services are always just
           | there, unless something really goes horribly wrong.
           | 
            | - Which directories I need to back up is documented in the
           | shell scripts themselves. Very clear and easy.
           | 
           | - Moving those directories and my shell scripts to another
           | server, potentially with a different distro, would be
            | trivial. Rsync two directories (I've put all the directories
            | I mount into the docker containers under a single directory
            | for convenience), shell in, run the scripts. Writing a meta-
            | script to run all of them would be easy. On a VPS I could
            | have _everything_ that mattered on a network drive, and
            | that'd make it even simpler. Mount network drive, run
            | script(s).
           | 
           | - Version updates are easy. I can switch between "use the
           | latest" and "use this specific version until I say otherwise"
           | at will. Rollbacks are trivial. If the services were public-
           | facing I could automate a lot of this with maybe an hour of
           | effort.
           | 
           | - Port mapping's covered by Docker. If these were public-
           | facing it'd be pretty easy to add one extra container for SSL
           | certs and termination (probably Caddy, because I'm lazy,
           | though historically my answer for this at paying gigs has
           | been haproxy). Like, truly, the degree to which I can
           | interact with and configure this system entirely by using
           | portable-everywhere docker commands & config is very high.
           | 
           | I've been running servers (sometimes private, sometimes
           | public) at home since like 2000, and this is easily my
           | favorite approach I've used so far.
           | 
           | I've used stuff like Dokku at work. I dunno--it's another
           | thing that can and does break. If you're just self-hosting a
           | few services and aren't trying to coordinate the work of
           | several developers, IMO it's simpler and not-slower to just
           | use Docker directly.
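            | 
            | To make that concrete, here is a minimal sketch of this kind
            | of per-service script (illustrative only; the service shown,
            | image tag, and paths are placeholders, not anyone's actual
            | setup):
            | 
            |   #!/bin/sh
            |   # run-web.sh -- one small script per service.
            |   set -eu
            | 
            |   # Everything worth backing up lives under one directory.
            |   DATA=/srv/selfhost/web
            |   mkdir -p "$DATA"
            | 
            |   # Recreate the container on every run. --restart
            |   # unless-stopped brings it back after crashes and reboots,
            |   # with no systemd units needed.
            |   docker rm -f web 2>/dev/null || true
            |   docker run -d \
            |     --name web \
            |     --restart unless-stopped \
            |     -p 8080:80 \
            |     -v "$DATA":/usr/share/nginx/html:ro \
            |     nginx:1.25
            | 
            | Pinning a tag like nginx:1.25 instead of :latest is what
            | makes "use this specific version until I say otherwise" and
            | rollbacks trivial.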
        
             | josegonzalez wrote:
             | Maintainer of Dokku here.
             | 
             | Would love to hear more about how Dokku broke as it will
             | help me polish the project further :)
        
         | takinola wrote:
         | Interestingly, I have a very different experience than you. I
         | simply have a script that sets up the server. I keep updating
         | the script to make it better each time I hit an edge case. At
         | this point, it's pretty bullet-proof. Updates are automatic and
         | so I can leave the server running for months without any
         | intervention.
         | 
          | Due to my love of shiny things, I keep wanting to find an
         | excuse to move to a PaaS but I can never find a sufficient
         | justification.
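          | 
          | For illustration, a sketch of what such a setup script might
          | look like, assuming a Debian/Ubuntu VPS (the package choices,
          | ports, and user name are placeholders, not the actual script):
          | 
          |   #!/bin/sh
          |   # setup-server.sh -- idempotent base setup for a fresh VPS.
          |   set -eu
          |   export DEBIAN_FRONTEND=noninteractive
          | 
          |   apt-get update
          |   apt-get install -y unattended-upgrades ufw
          | 
          |   # Automatic security updates, so the box can run for months
          |   # without manual apt-get runs.
          |   dpkg-reconfigure -plow unattended-upgrades
          | 
          |   # Basic firewall: SSH plus HTTP/HTTPS only.
          |   ufw allow OpenSSH
          |   ufw allow 80,443/tcp
          |   ufw --force enable
          | 
          |   # Unprivileged deploy user, created only if missing.
          |   id deploy >/dev/null 2>&1 || \
          |     adduser --disabled-password --gecos "" deploy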
        
       | _heimdall wrote:
       | What's the common definition of self-hosting these days?
       | 
       | I've always considered self-hosting to mean I'm managing
        | hardware, but it's clear the author here sees it more as a self-
       | managed OS and infrastructure.
       | 
       | It actually feels very similar to the whole MPA vs SPA debate in
       | web development. Maybe I'm just getting old, but self-hosting and
       | SPA have specific meanings I learned years ago that seem to be
       | getting redefined now rather than coming up with new names.
        
         | PH95VuimJjqBqy wrote:
         | that would imply renting a VPS isn't self-hosting, which I
         | think is clearly incorrect.
         | 
         | If you wanted to communicate that you're dealing with hardware
         | I would imagine you would say co-locating or talk about your
         | datacenter.
        
         | layer8 wrote:
         | Self-hosting doesn't necessarily imply that it's on-premises,
         | or that you own the hardware. It means that you are fully
         | managing the software side of things (everything that is "on
         | the host") and have full control over that.
        
           | paxys wrote:
           | This thread is the first time I'm hearing this definition.
           | Self hosting has always meant using your own hardware.
           | Cloud/VPS/VMs/shared hosting or whatever else have never
           | qualified.
        
             | layer8 wrote:
             | For me it's the opposite: VPS are the standard solution,
             | since it's difficult to self-host behind a consumer ISP.
             | Wikipedia seems to agree [0]: "The practice of self-hosting
             | web services became more feasible with the development of
             | cloud computing and virtualization technologies, which
             | enabled users to run their own servers on remote hardware
             | or virtual machines."
             | 
              | [0] https://en.wikipedia.org/wiki/Self-hosting_(web_services)
        
               | paxys wrote:
               | Going by this definition a company running their services
               | on a cluster of EC2 instances is also "self-hosting",
               | which makes the term meaningless.
        
               | layer8 wrote:
                | See this older version of the page:
                | https://web.archive.org/web/20170727020916/https://en.wikipe...
               | 
               | The original difference was between using a service like
               | WordPress vs. running an instance of the WordPress
               | software yourself. Who owns the hardware it's running on
               | or where it is located is largely irrelevant for the
               | definition.
               | 
               | Here is another reference:
               | https://www.computerhope.com/jargon/s/self-hosting.htm
        
       | atentaten wrote:
        | How about auto scaling? Is there something that makes auto
        | scaling easier when self-hosting?
        
         | the_gastropod wrote:
         | This presupposes that you _need_ autoscaling. I suspect the
         | vast majority of applications have pretty predictable resource
         | requirements  / growth characteristics. Over-provisioning your
         | hardware a bit to handle anticipated spikes is probably fine
         | for 80%+ of use-cases.
        
         | layer8 wrote:
         | Only a very small percentage of businesses need autoscaling.
        
       | rdoherty wrote:
       | Good on the author, but using a Virtual Private Server skirts
       | very close to not self-hosting. When I read 'self-hosting', I
       | imagined buying/building a physical server and either putting it
       | into a datacenter or running it in your home.
       | 
       | Lately I've been thinking of creating a bare-bones HTML website
       | of my own and maybe I'll run it on a Raspberry Pi at home. I
       | think that would qualify as 'self-hosting'.
        
         | arter4 wrote:
          | Between bots and the Reddit/HN hug of death, if you ever do
          | this, don't advertise your website (to avoid getting DoS'd),
          | and put a firewall in front.
        
         | thinkingkong wrote:
          | Moving from a PaaS to a VPS is 95% the same amount of effort
          | and energy as spinning your own rust under your desk. Semantics
          | matter, but holding the definition to "you need to suffer a
          | power supply failure for it to count" isn't really necessary.
        
         | layer8 wrote:
         | It makes little difference who owns the hardware in the data
         | center. The important thing is who selects, controls, and
         | manages the software that runs on it. Therefore that's the main
         | dividing line.
        
       | BrandoElFollito wrote:
       | I wonder what people host that require such specs.
       | 
        | I host on a Skylake that's about 7 years old, I think, with
        | 12 GB RAM. About 30 docker containers running Home Assistant,
        | Bitwarden, the
       | *arr suite, jellyfin, minecraft ... Nothing fancy and I have
       | heaps of free CPU and RAM.
       | 
        | I understand that one can easily load CPUs with compute
        | processing, or RAM with video transformation, but for a generic
        | self-hoster of typical services I am surprised by the typical
        | setup people have (which is great for them, I am just curious).
        
       | wrigglingworm wrote:
        | I'm surprised no one has brought up the power cost point yet. I
        | have a couple of services I'd like to be up 24/7, and paying
        | for hosting them is actually cheaper than running them from
        | home, just due to the cost of electricity where I live. Plus
        | I've
       | had quite a few ISP outages, but my provider, as far as I can
       | tell, has been up 24/7/365 in the few years I've been paying
       | them.
        
       | zzyzxd wrote:
       | > Self-hosting has Become Easier
       | 
       | > Self-hosting has become more reliable
       | 
        | Docker and Kubernetes really are the two best things that have
        | happened to me in my self-hosting journey. They made billion-
        | dollar enterprise-grade tech approachable in my homelab.
       | 
        | - I powered on a brand-new mini PC, and 10 minutes later it
        | showed up as a node in my cluster and started running some
        | containers.
       | 
       | - Two servers died but I didn't notice until a month later,
       | because everything just kept working.
       | 
       | - Some database file got corrupted but the cluster automatically
       | fixed it from a replica on another node.
       | 
        | - I've almost completely forgotten how to manage certs with
       | letsencrypt, because the system will never miss the renewal
       | window unless the very last server in my lab goes down.
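        | 
        | For the curious, the "power it on and it joins the cluster" flow
        | can be as small as the sketch below (assuming k3s; other
        | Kubernetes distributions differ, and the hostname and token are
        | placeholders):
        | 
        |   # On the existing control-plane node (run once):
        |   curl -sfL https://get.k3s.io | sh -
        |   # Print the join token for new nodes:
        |   cat /var/lib/rancher/k3s/server/node-token
        | 
        |   # On the brand-new mini PC: point the agent at the server and
        |   # it shows up as a node a few minutes later.
        |   curl -sfL https://get.k3s.io | \
        |     K3S_URL=https://homelab-server:6443 \
        |     K3S_TOKEN=<token-from-above> sh -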
        
         | paxys wrote:
          | Add VPN to that list. It used to be a monumental pain to set
          | up remote access to a home network, but now with something
          | like WireGuard/Tailscale/ZeroTier/Nebula you can do it in a
          | few clicks.
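          | 
          | As a rough sketch of how little is involved now, using
          | Tailscale as one example (the other tools listed work
          | differently):
          | 
          |   # On the home server and on each client device:
          |   curl -fsSL https://tailscale.com/install.sh | sh
          |   # Authenticate in the browser; the device joins the tailnet:
          |   sudo tailscale up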
        
       ___________________________________________________________________
       (page generated 2024-01-18 23:01 UTC)