[HN Gopher] Deployment and infrastructure for a bootstrapped web...
       ___________________________________________________________________
        
       Deployment and infrastructure for a bootstrapped webapp with 150k
       monthly visits
        
       Author : wolfspaw
       Score  : 170 points
       Date   : 2022-09-26 19:08 UTC (3 hours ago)
        
 (HTM) web link (casparwre.de)
 (TXT) w3m dump (casparwre.de)
        
       | keepquestioning wrote:
       | What is the app?
        
         | Quarrel wrote:
         | He does say, but:
         | 
         | https://keepthescore.co/
        
       | nurettin wrote:
       | I love this simple setup. Big fan. I also do everything simple.
       | Add more workers? Just enable another systemd service, no
       | containers. Let the host take care of internal networking. The
       | biggest cost is probably a managed db, but if you are netting $$
       | and want some convenience, why not?
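        | 
        | A minimal sketch of what "add another worker" can look like,
        | assuming a templated unit and a gunicorn-style app (paths, names
        | and ports here are illustrative):
        | 
        |     # /etc/systemd/system/app-worker@.service
        |     [Unit]
        |     Description=App worker %i
        |     After=network.target
        | 
        |     [Service]
        |     User=app
        |     WorkingDirectory=/srv/app
        |     ExecStart=/srv/app/venv/bin/gunicorn -b 127.0.0.1:800%i app:app
        |     Restart=on-failure
        | 
        |     [Install]
        |     WantedBy=multi-user.target
        | 
        | Adding a worker is then just "systemctl enable --now
        | app-worker@3".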
        
         | ed25519FUUU wrote:
         | How is adding another systemd service easier than starting
         | another container in this instance?
        
           | xani_ wrote:
           | Well, for one, you don't need docker in the first place
        
           | gwn7 wrote:
           | It's simpler, not easier. Both are easy. But one is a lot
           | less complicated in terms of the extra layers it introduces.
        
       | ksbrooksjr wrote:
       | Great write-up. Note that the author is only serving 70 requests
       | per second at peak with 15 thousand registered users and 3
       | thousand in revenue per month. This just shows that you don't
       | always have to plan to scale to thousands (or millions) of
       | requests per second.
       | 
       | This blue-green deployment workflow reminds me of a similar setup
        | used by the creator of SongRender[1], which I found out about via
        | the Running in Production podcast[2]. One thing to be aware of with
       | smaller VPS providers like Linode, DO, and Vultr is that they
       | charge per hour rather than per second. So if you boot up a new
       | VM every time you deploy you're charged for a full hour each
       | time.
       | 
       | [1] https://jake.nyc/words/bluegreen-deploys-and-immutable-
       | infra...
       | 
       | [2] https://runninginproduction.com/podcast/83-songrender-
       | lets-y...
        
       | londons_explore wrote:
        | When you're a one-man show, you need to save both your own time
        | and your compute costs.
       | 
       | To save your time, use the simplest thing you know how to use.
       | Whatever you can set up easily and gets you serving a hello world
       | on the internet.
       | 
       | To save compute, just don't do stupid things. In particular, you
        | should consider what happens if this project _isn't_ a runaway
       | success. One day, you might want to leave it running in
       | 'maintenance mode' with just a few thousand users. For that,
       | you'd prefer to be running on a $10/month VPS than a $2000/month
       | cloud setup which is going to require constant migrations etc.
       | 
        | Things like automatic failover and load balancing are seldom
        | required for one-man shows - the failure rate of the cloud
        | provider's hardware will be much lower than your own failure rate
        | from screwing up or being unavailable for some reason.
        
       | abledon wrote:
        | The FAQ says an upgraded board (~$10) lasts forever... surely it
        | won't be here in 2150 CE? I'm always curious about the word
        | 'forever' in legal agreements - when do we truly expect a 'forever'
        | product to go out of service?
        
       | endigma wrote:
        | I don't see why the author is so proud of avoiding tooling that
        | would make their build and deploy process simpler. Even something
        | like DigitalOcean's own buildkit-based "apps" would be an upgrade
        | here. Deploying your app using ssh and git is not magic or simple;
        | it's just a refusal to learn how actual software delivery is done.
        | 
        | Even totally dodging docker/k8s/nomad/dagger or anything that's
        | even remotely complicated, platforms like
        | AWS/DO/Fly.io/Render/Railway/etc obsolete this "simple" approach
        | with nothing but a config file.
        | 
        | I also theorize that the author is likely wasting a boatload of
        | money serving ~0 requests on the staging machine almost all the
        | time, since he's literally switching a floating IP rather than
        | using two distinctly specced machines for production and staging.
        
         | outworlder wrote:
         | > just a refusal to learn how actual software delivery is done.
         | 
         | One day you will be promoted to 'senior engineer' and revise
         | this statement.
         | 
         | Software exists to solve problems. Adding more complexity has
         | to serve a purpose. Just because you read somewhere that
         | Netflix does their deployment in some way, doesn't mean that
         | it's the right way for your environment.
         | 
          | The only thing I disagree with in their approach is that they do
          | "ssh <host> git pull". They should just "git push <host>" from
          | their CI machine (or laptop, since they have no CI/CD). There's
          | no reason to let the actual server pull the code when git is
          | distributed.
         | 
         | They could surely turn off the non-active server but they might
         | be using that for failover. In that case serving zero requests
         | is totally fine.
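          | 
          | A rough sketch of that push-based variant (all paths and names
          | here are made up):
          | 
          |     # on the server, once:
          |     git init --bare /srv/app.git
          |     cat > /srv/app.git/hooks/post-receive <<'EOF'
          |     #!/bin/sh
          |     GIT_WORK_TREE=/srv/app git checkout -f main
          |     systemctl restart app   # or whatever your restart step is
          |     EOF
          |     chmod +x /srv/app.git/hooks/post-receive
          | 
          |     # from the laptop or CI machine:
          |     git remote add production deploy@host:/srv/app.git
          |     git push production main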
        
           | endigma wrote:
            | Way past senior engineer. Not sure how git/ssh acrobatics are
            | less complex in your mind than "fly deploy" or clicking a
            | button in render/vercel/etc. Premade GitHub Actions / GitLab
            | pipelines exist for these things, and the author would be
            | paying less money for a more robust deployment process. DIY
            | with tools that aren't meant for the job isn't better just
            | because you already use git/ssh.
            | 
            | The author is running gunicorn/flask/nginx; this is trivial
            | for a buildpack and would take like 5-10 minutes tops to set
            | up a robust build/deploy flow with any recent PaaS.
        
           | [deleted]
        
         | arasx wrote:
          | I've actually used OP's system in way higher-traffic
          | environments and was happy to read I was not alone. Sometimes
          | all you need are simple tools, and if git pull/ssh covers your
          | bases, what's the big deal?
          | 
          | "How actual software delivery is done" is relative to the
          | systems, needs, and scale you have. You probably have not been
          | in environments where build/deploy tools are over-engineered to
          | death, to the point where adjusting them one bit brings the
          | entire delivery pipeline to a halt. If you were able to set
          | those up without such issues, kudos to you.
        
         | kjoedion wrote:
         | What?
         | 
         | Manual git pushes ensure the highest level of security and
         | error avoidance.
         | 
          | I routinely find myself having to do one-off post-deployment
          | things that would be a nightmare to try to script into the CI
          | flow.
        
           | [deleted]
        
           | endigma wrote:
           | https://fly.io/docs/flyctl/ssh/
           | 
           | https://render.com/docs/ssh
           | 
           | ... and others
        
           | bastardoperator wrote:
           | Say what? I can't tell if this is satire or not.
        
         | matai_kolila wrote:
         | AWS's CodeDeploy is magic for stuff like this. I got _so many_
          | things for free when I began using it for deployments, it's
         | kind of amazing...
        
         | guhidalg wrote:
          | I disagree; avoiding CI/CD tools is the right call until this
          | grows enough that you can't continue writing small scripts. I
          | think all the CI/CD tools are garbage in that, by being generic,
          | they do a poor job of addressing your specific needs.
         | 
         | I don't know who needs to hear this, but you don't always need
         | to start a project by reaching for the most complex tool
         | possible. Build up to it and then you won't need convincing to
         | use it.
        
           | endigma wrote:
           | Alternatively, learn how CI/CD works and have a comprehensive
           | tool in your toolbox basically forever
        
             | LAC-Tech wrote:
             | Are there any CI/CD tools that are likely to stand the test
             | of time in the same way the basic unix ones have?
             | 
             | I'm not against making things easier. But I feel like it's
             | so easy to over-engineer and rely on proprietary crapware.
             | 
             | Is there a conservative path forward here?
        
               | outworlder wrote:
                | No, they are all horrible monstrosities. The ones that
                | are hosted by others (GitHub Actions) are still
                | monstrosities; it's just that other people are taming the
                | monsters.
                | 
                | The best way I've found is to keep the CI/CD tools as
                | simple task runners. They should only have to figure out
                | when to run things, inject the necessary environment (and
                | secrets), figure out dependencies (if applicable), and run
                | those. Whatever they run, you should be able to run the
                | same scripts from your own laptop or a similar machine.
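                | 
                | E.g. a GitHub Actions workflow that does nothing but call
                | the same script you would run locally (script name and
                | secret are illustrative):
                | 
                |     # .github/workflows/deploy.yml
                |     on:
                |       push:
                |         branches: [main]
                |     jobs:
                |       deploy:
                |         runs-on: ubuntu-latest
                |         steps:
                |           - uses: actions/checkout@v3
                |           - run: ./scripts/deploy.sh
                |             env:
                |               SSH_KEY: ${{ secrets.SSH_KEY }}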
        
               | endigma wrote:
               | Alternatively
               | 
               | https://docs.dagger.io/1200/local-dev/
               | https://docs.drone.io/cli/drone-exec/
               | https://github.com/nektos/act
        
               | xani_ wrote:
                | Well, our Jenkins jobs still work a decade later. Jenkins
                | itself requires some care and feeding though, mostly
                | upgrades, especially if you use some plugins.
                | 
                | Then again, a Jenkins job can be as simple as "download a
                | repo and run some code in it" without anything fancy, so
                | even if SHTF you can migrate away relatively painlessly.
        
               | endigma wrote:
                | Jenkins/Travis are getting to this point, not that they
                | are the "cutting edge" or anything. I'd personally rely on
                | projects like woodpecker/drone/dagger so that there's no
                | real reliance on anything closed-source or proprietary.
        
             | guhidalg wrote:
             | I'm not saying never learn CI/CD tooling, I'm saying don't
             | use it until you need it. In an enterprise setting, yes
             | it's probably a good idea to use such tools from day 1
             | because you don't ever want to be solely responsible for
             | committing and shipping a bug. In other contexts, like the
             | author's one-man website, the cost is not worth it.
        
               | BlargMcLarg wrote:
               | You can't really say that without knowing the author's
               | skills and how much they lose in opportunity costs to
               | deploy manually.
               | 
               | If the author has already deployed 100-200 times and
               | knows enough to set up a basic build, it would be close
               | to break-even with a fairly wide margin.
        
         | [deleted]
        
         | xani_ wrote:
          | You're asking for net-zero benefits at the cost of adding
          | hundreds of thousands of lines of code to your deployment path.
          | 
          | That really smells of a desperate developer who needs to feel
          | relevant just because they use the latest thing.
          | 
          | Like, how the fuck would it be "simpler" when the deploy is
          | literally just running a single script?
        
           | endigma wrote:
           | I could fit an entire droneCI/fly deployment setup in a
           | single HN comment, no idea what you've been scarred by.
        
         | pcorsaro wrote:
          | "git push live" is pretty simple, I think. You're right that
          | ssh and git aren't magic, and that's why the author is using
          | them. I agree that they don't work in a team environment, but
          | for a one-man shop, I don't see why it's not OK.
        
         | datalopers wrote:
         | > how actual software delivery is done
         | 
         | the sheer arrogance and ignorance of this comment is remarkable
        
       | tonnydourado wrote:
        | Not gonna lie, I was triggered by the lack of CI/CD and the shared
        | database between staging and prod. But those concerns were very
        | satisfactorily addressed. I'd miss some form of CI/CD if it were a
        | team, but I suppose that for a single-person show, running tests
        | locally is enough.
        | 
        | I do miss any mention of infrastructure as code. If shit goes tits
        | up with the infrastructure and everything was set up by clicking
        | around in control panels, ad-hoc commands and maybe a couple of
        | scripts, your recovery time will be orders of magnitude longer
        | than it has to be.
        
       | jlundberg wrote:
       | Using two servers, one for production and the other for
       | staging/failover, then switching upon release is a neat
       | technique.
       | 
       | Been using it for our API backends for about ten years.
        
       | criley2 wrote:
        | It's a lot more work than, say, Next.js and Vercel, but I suppose
        | it costs a lot less cash to maintain (trading time for cash).
        
         | rsanheim wrote:
         | I'm sure everyone's mileage will vary based on experience /
         | comfort with the particular stack here... but in my experience
         | shipping a monolith (Python, Ruby, whatever) to a VPS is vastly
         | simpler than a comparable Next/Vercel setup, especially for a
         | CRUD app as the OP is talking about.
         | 
         | Dealing with the details of things like SSR vs client side,
          | Apollo caching (cuz w/ Next you are probably gonna go w/ GQL,
         | and so you probably use Apollo), and monitoring/observability
         | in the Next/Vercel world is hard. You don't have these problems
         | with a simple monolith, and you can sprinkle in your front end
         | magic with whatever lean JS approach you want as you need it.
         | 
         | And to be fair Vercel has some logging and datadog integration,
         | but it is very limited compared to what you can get and can
         | configure as-needed with real agent based monitoring.
         | 
         | I'm increasingly feeling like the 'benefits' of server-side
         | rendering in all its flavors for the big JS frameworks are just
         | a boondoggle and not worth the complexity and cost for _most_
          | apps, even ones where the cost of an SPA is worth it.
        
           | criley2 wrote:
           | - SSR vs client side: NextJs does this automatically.
           | https://nextjs.org/docs/basic-features/pages
           | 
           | - GQL: the only case where I'd use GQL is where he would
           | likely want to too: building native mobile apps. Otherwise,
           | I'd use tRPC and have fullstack type safety. This is included
           | in my 5 minute estimate because I'd use create t3 app, which
           | has this configured out of the box https://trpc.io/
           | https://github.com/t3-oss/create-t3-app
           | 
           | - monitoring/observability: not quite sure what you mean, but
           | it's only a few clicks in vercel to have a logdrain or metric
           | drain and configure alerts and dashboards in your tool of
           | choice. And if I wanted to roll the whole thing myself...
           | that's what he did. Rolled it all custom by hand. The
           | difference is I have the option of not reinventing the wheel.
           | 
           | At the end of the day I can get into composing a docker image
           | that is configured just like his prod servers if I need that
           | access, and there's likely already a docker image pre-done
           | that has 99% of what I need, including metrics/monitoring. At
           | which point I could launch my service on any number of hosts
           | that allow direct pushing of built docker images, if my build
           | process goes there.
           | 
           | To each their own, but I think my solution requires far less
            | time to set up and maintain, but costs more, as I said.
           | Manually engineering each system yourself might be fun to
           | you, but it's not to me.
        
             | rsanheim wrote:
             | > SSR vs client side: NextJs does this automatically.
             | https://nextjs.org/docs/basic-features/pages
             | 
             | Uh-huh. Do some simple stackoverflow searches for how to
             | configure a real Next/Vercel app with error-handling,
             | passing thru cookie/session state, or how to wire up third
             | party services or components. Or check out next.js issue
             | search for "window undefined" - https://github.com/vercel/n
             | ext.js/search?q=window+undefined&... -> 1667 issues right
             | now. Even if you discount 3/4 of those as duplicates,
             | developer error, etc, you have a ton of issues where
             | figuring out server vs client state is confusing for many
             | folks.
             | 
              | It sounds like it has worked great for you out of the box,
             | which is great! But that isn't the case for many other
             | developers.
             | 
             | This is not my definition of "automatic".
             | 
             | > I'd use tRPC
             | 
             | This looks pretty compelling if you are all-in with TS
             | everywhere, including your server/data access layer. Which
             | doesn't apply in the OP's post, since it's a traditional
              | CRUD app with Python in front of a good ole' PSQL DB.
             | 
             | > monitoring/observability: not quite sure what you mean,
             | but it's only a few clicks in vercel to have a logdrain or
             | metric drain
             | 
             | We are using Vercel routing its logs to datadog right now.
             | It works fine but is extremely limited. Want to wire up
             | full-stack transactions w/ the rest of your app, or modify
             | _what_ it logs? Good luck.
             | 
             | Also, trying to use a real APM (Datadog, Scout, open
             | source, whatever) is a complete non-starter. You need a
             | persistent process running somewhere to take the data,
             | buffer, and transport it _somewhere_, and so far you can't
             | do that with Vercel's custom server support. You _can_ do
             | this with just NextJS (no vercel), but it requires some
             | frightening hacks I would not be comfortable adding to apps
             | I maintain: https://jake.tl/notes/2021-04-04-nextjs-
             | preload-hack.
             | 
             | I get that Vercel provides a very quick setup and a smooth
             | dev experience, especially if you are in their sweet spot
             | of something like a static site or blog engine. It just
              | seems more trouble than it's worth for most CRUD apps,
             | especially anything that isn't 100% in the
             | javascript/typescript world.
        
         | iLoveOncall wrote:
         | It does not look like a lot of work at all? A few days of
         | initial setup and that's it.
        
           | criley2 wrote:
           | I can go from a blank folder to deploying a git hosted
           | next.js app on vercel in about 5 minutes, so comparatively,
           | it is to me.
        
             | matsemann wrote:
             | Quite weird to compare infrastructure of a static frontend
             | to a backend with db and stuff?
        
               | reducesuffering wrote:
               | Quite weird to rebuke the assessment with an objectively
               | wrong statement. Next.js isn't a static frontend. You can
               | statically generate pages w/ or w/o a DB, and server side
               | render dynamically. A DB is literally 3 clicks and a
               | secret string away (starting for free) with Planetscale
               | or Railway.
               | 
                | I've set up Django + Postgres on a VPS and Next on Vercel
               | w/ hosted DB. Next is leagues simpler and faster to spin
               | up.
               | 
               | Here's Django Postgres on Digital Ocean:
               | https://www.digitalocean.com/community/tutorials/how-to-
               | set-...
               | 
               | Next on Vercel is:
               | 
               | 1. npx create-next-app
               | 
               | 2. Click button for DB spinup
               | 
               | 3. Copy secret string to .env
               | 
               | 4. git push
        
               | criley2 wrote:
               | >Quite weird to compare infrastructure of a static
               | frontend to a backend with db and stuff?
               | 
               | I suppose you don't know what you're talking about, but
               | Next.js is a backend and frontend supporting a wide
               | variety of data sources. I personally like using Prisma
               | and postgres with my Next.Js deployments.
               | 
               | I'm literally rolling out a full stack database connected
               | react app in minutes that includes a full functional API
               | that can be accessed by a variety of clients.
        
               | matsemann wrote:
               | But then you suddenly have complexity again. In your
               | various comments here you're talking about ten different
               | tools and concepts being woven together. It's only fast
               | for you because you've done it before.
               | 
               | Not inherently faster than OP's approach.
               | 
               | Like, I can spin up a k8s cluster faster than I can even
               | understand what parts of Nextjs I should use. Doesn't
               | mean that me selecting k8s is the best choice for all
               | problems.
        
               | criley2 wrote:
                | What are you talking about? That's not complexity; those
                | are out-of-the-box features that come with a single script
                | run (create-next-app or create-t3-app). Vercel will even
                | launch that for you without a script, just a button click.
                | Five minutes to launch with my preferred stack, though.
               | 
               | And if we're really racing, Railway.app can spin up
               | deployments AND databases in under 60 seconds. If I tried
               | a speed run, I could get the time down.
               | 
                | If OP can hand-configure a Python app running on nginx on
                | manually managed DO droplets in 5 minutes... good on them!
                | The greatest devops of our era.
                | 
                | Also -- I guarantee that a newbie developer can launch a
                | Next.js app faster than hand-rolling some DO droplet with
                | Python and other technologies. I guarantee I can have a
                | bootcamp newbie with a working app on day 1 (since it's
                | literally a button click, if we want), while I doubt you
                | could get that newbie to a working app on OP's stack at
                | all.
        
               | matsemann wrote:
               | Why so hostile in your comments? It's not a competition
               | about being the fastest. If anything, it only shows
               | immaturity on your part. Time spent setting up a project
               | is negligible compared to other things surrounding it.
               | 
               | You're anyways missing my point. Nextjs isn't a silver
               | bullet, nothing is. And that how long it takes to set up
               | a project is a function of familiarity with the stack.
        
               | criley2 wrote:
               | "Why so hostile" says the person who continues to dismiss
               | every one of my comments while asking zero questions
               | about subjects they clearly are not informed about.
               | 
               | Why am I being hostile? Look in a mirror! Any normal
               | person would be frustrated by your misbehavior here.
               | 
               | I'm done with the free education session - I get paid to
               | do webapp bootcamps privately and to mentor Jr Eng at
               | work and I don't like wasting my time with petulant
               | students who don't know how to take off the "i know
               | everything so let me talk down to folks who do it for a
               | living" hat when talking about technologies they are
               | completely novice in. Have a good one!
        
       | nlstitch wrote:
        | I'm secretly a fan of the boring tech statement. Sometimes all of
       | the new containerization and microservice paradigms just feel
       | like an excuse to overengineer everything. Running solo and
       | starting from scratch (no code, no infra, no cloud subscriptions)
       | means you'll have to simplify and reduce moving parts.
        
         | outworlder wrote:
         | Microservices are difficult to justify with a small team (or a
         | single person team in this case). If your monolith has obvious
         | modules that can be carved out, you _need_ to scale them
          | independently, and for some reason you can't just deploy
         | multiple monolith copies, then it may be worth it to do
         | microservices.
         | 
         | At large companies, the calculations are different. Even when
         | monoliths would be the best fit, they may still carve out
         | microservices. How else would they be able to ship their org
         | chart? :)
        
         | guhidalg wrote:
         | Yes! Boring tech is working tech, never choose an untested
          | technology if there are any real stakes in the work.
        
       | pier25 wrote:
       | Why not simply run the app locally with the production database?
       | 
       | (either a copy of production or directly connecting to production
       | instance)
        
         | jaywalk wrote:
         | You lose the convenience of being able to immediately switch
         | back to the previous version as well as testing on identical
         | hardware and software.
        
           | pier25 wrote:
            | If you're running it locally, there's no need to switch back
           | to the previous version.
           | 
           | It's true about the hardware, but if you use a container it
           | can run on the exact same software.
        
       | chinathrow wrote:
       | Fascinating.
       | 
       | I have the exact same number of visits and run the site with a
       | boring PHP/MySQL setup from a cheap but very reliable 200EUR/y
        | shared host. Deployment via git over ssh as well.
        
         | jason-phillips wrote:
         | Well done! I work on antique brass clocks.
        
           | dijonman2 wrote:
           | Simplicity always wins over complexity. Complexity is a
           | liability, but it does make you feel smart.
        
             | geysersam wrote:
             | > simplicity always wins over complexity
             | 
              | S/he wrote on a hand-shaped microprocessor/wireless
              | transmitter connected to a billion servers via 7 layers of
              | network protocol. Ultimately her taps on the
              | micro-capacitor-infused glass surface resulted in light
              | pulses through cross-ocean cables encoding her (encrypted)
              | words, carried to hundreds of people all over the world.
        
           | silverfox17 wrote:
           | Just because it's an old method doesn't mean it's a bad
           | method.
        
       | hacknews20 wrote:
       | Love this: "absolutely No Kubernetes".
        
       | jonplackett wrote:
       | Something not mentioned but I'd like to know is the running cost?
        
         | iLoveOncall wrote:
          | He gives his server specs and his cloud provider; he most
          | likely pays a bit less than $100 per server per month:
         | https://www.digitalocean.com/products/droplets
         | 
         | Which is absolutely insane for a website that serves 200
         | visitors per hour.
        
           | rurban wrote:
            | Probably that's why he already has white hair.
        
           | ed25519FUUU wrote:
            | Just to be clear, are you saying the costs are HIGH or LOW?
           | That cost seems high given his relatively low traffic count
           | but I don't know what sort of processing is going on in the
           | background.
        
             | atmosx wrote:
             | High. Although the assumption here is that there are no
             | spikes... which might or might not be true, but for a web-
             | app, 200 visits per hour is not much.
        
               | chx wrote:
               | 70 requests per second at peak-time
        
           | yabones wrote:
            | 8c/16g seems absolutely bonkers for a web/app server. I could
            | understand if there was also a data & caching layer shoved
            | onto the same box, but that's a LOT of memory for a Python
            | app. That's more in line with "mystery meat ASP.NET app built
            | by a couple of guys in Slovenia 12 years ago".
        
         | swyx wrote:
          | According to his bio https://twitter.com/wrede he makes about
          | 1.5k a month, so the infra cost is hopefully not much more than
          | that. For sure the labor is the biggest part of this operation.
        
         | wolfspaw wrote:
         | "Around 500 USD. I also use Sentry.io, Papertrail, Twillio and
         | a few other tools that cost money."
         | 
         | From:
         | https://www.reddit.com/r/Python/comments/xobba8/comment/iq05...
        
           | pocketsand wrote:
           | This seems to me to be a hefty bill for the traffic received.
           | I run devops on a site with 1.5-2mm unique users a month,
           | bursty traffic, and our monthly bill is just a couple hundred
           | higher than his. We are overprovisioned.
        
             | jeroenhd wrote:
             | The author says the product is hosted on two VPS servers, a
             | managed database, and a floating IP; I doubt that'll be a
             | significant part of the costs.
             | 
             | If that bill includes (business) licenses for software and
             | subscriptions, it's pretty reasonable. I'm sure it can be
             | done cheaper and I'm sure the author has looked into this
             | as well. Maybe it's not worth the effort, or maybe there's
             | some other blocker preventing migration to cheaper
             | alternatives.
             | 
             | Maybe $2500 of monthly profit is enough for the author not
             | to bother messing with the code base? Maybe the fact the
             | reported revenue has doubled [1] over two months has
             | something to do with it?
             | 
             | [1]: https://casparwre.de/blog/python-to-code-a-saas/ vs
             | https://casparwre.de/blog/webapp-python-deployment/
        
         | [deleted]
        
       | homami wrote:
       | For what it is worth, I am handling about 130k views and
        | registering ~1k paying users per day with a t2.large instance
        | running node, redis and nginx behind a free-tier
       | Cloudflare proxy, a db.t2.small running postgres 14, plus
       | CloudFront and S3 for hosting static assets.
       | 
       | Everything is recorded in the database and a few pgcron jobs
       | aggregate the data for analytics and reporting purposes every few
       | minutes.
        
         | digdugdirk wrote:
         | ... Could you translate please?
         | 
         | As someone who isn't a programmer (mechanical engineer) but has
         | some programming ability, the idea of designing something like
         | the article author did and sharing it with the world intrigues
         | me.
         | 
         | How much does a setup like you described cost per month? (Or
         | per x/number of users, not sure how pricing works in this
         | realm)
        
           | mattmanser wrote:
            | The article describes a pretty simple design, and it's how
            | most people used to do it before cloud platforms. It's pretty
            | much
           | saying "you don't have to go AWS/Azure". Although back in the
           | day we didn't really have managed DB instances, you'd often
           | just run the DB on the same server as the app.
           | 
           | The parent, however, is paying a lot more for a t2.large.
           | 
           | For that kind of money you could almost get a dedicated
           | machine which would be 10x more powerful than a T2.large, but
           | with the hassle of maintenance (though it's not a lot of
           | hassle in reality).
           | 
           | The advantage of AWS is not price. AWS is quite a bit more
           | expensive if you want the same performance. It's either that
           | you can scale up and down on demand, or the built-in
           | maintenance or deployment pipelines.
           | 
           | So you can save money if you have bursty traffic, or save dev
           | time because it's super easy to deploy (well, once you learn
           | how).
           | 
           | Cloud platforms can also have weird limits and gotchas you
            | can accidentally hit, like your DB can suddenly start going
           | slowly because you've got a temporary heavy CPU query
           | running, as they don't actually give you very much CPU with
           | the cheaper stuff.
        
       | woopwoop24 wrote:
        | If it was mine I would most certainly opt for Ansible or
        | something similar; the overhead of logging into a machine and
        | doing all the things by hand is more complicated and error-prone
        | than a playbook would be (at least for me, since I keep forgetting
        | all the steps ^^).
        | 
        | But who are we to judge - impressive to earn 2.5k every month with
        | it, kudos.
        
       | saradhi wrote:
        | I came here to see more links to one-man-show websites that are
        | beating 150k visits a month.
        
         | [deleted]
        
       | petargyurov wrote:
       | > The trick is to separate the deployment of schema changes from
       | application upgrades. So first apply a database refactoring to
       | change the schema to support both the new and old version of the
       | application
       | 
       | How do you deploy a schema that serves both old and new versions?
       | Anyone got any resources on this?
        
         | rwiggins wrote:
         | The application needs to be developed with this constraint in
         | mind. I've spent quite a lot of time over the years doing
         | blue/green deployment advocacy and awareness for developers in
         | some orgs. It does sometimes substantially complicate the code
         | to build both backward and forward compatibility, but it is so
         | nice when done well.
        
         | david422 wrote:
          | You only do additive changes. Only add columns, etc., and never
          | remove them ... until all your old application code is no
          | longer running.
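          | 
          | A rough sketch of that expand/contract pattern in SQL (table and
          | column names are made up):
          | 
          |     -- expand: safe while old and new code both run
          |     ALTER TABLE players ADD COLUMN display_name text;
          |     UPDATE players SET display_name = name;  -- backfill
          | 
          |     -- contract: only once no old code is deployed anywhere
          |     ALTER TABLE players DROP COLUMN name;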
        
           | naasking wrote:
           | And you have to weaken your schema to allow nulls for the
            | duration that old and new code are both running (and accept
            | possible foreign key constraint failures).
        
           | petargyurov wrote:
           | Thanks. I thought it would be something... smarter. Not sure
           | I like that, seems to come with a lot of caveats.
        
             | xani_ wrote:
             | "smarter" means more code and that's rarely more
             | maintainable.
             | 
             | It's what you have to do if you want less downtime.
              | Normally it shouldn't span much more than 2-3 versions of
              | the application, and it's rare that the annoying kind (say,
              | writing the same data to both the "old" and "new" column)
              | happens.
        
           | outworlder wrote:
           | > until all your old application code is no longer running.
           | 
           | Or until it's been disabled by feature flags.
        
       | znpy wrote:
       | It's so simple, I love it.
        
       | ed25519FUUU wrote:
       | I think we can safely put docker images (not k8s) in the "boring
       | technology" category now. You don't need k8s or anything really.
       | I like docker-compose because it restarts containers. Doesn't
       | need to be fancy.
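        | 
        | A compose file for that kind of setup can stay tiny (image name
        | and port are placeholders):
        | 
        |     # docker-compose.yml
        |     services:
        |       web:
        |         image: myapp:latest
        |         # brought back on crash and on reboot
        |         restart: unless-stopped
        |         ports:
        |           - "127.0.0.1:8000:8000"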
        
         | stavros wrote:
         | I needed something that would restart containers automatically
         | when I pushed to a branch, so I wrote a few lines of code to do
         | it:
         | 
         | https://gitlab.com/stavros/harbormaster
         | 
         | As far as PaaSes go, it's probably the simplest, and works
         | really well.
        
           | geek_at wrote:
            | Funny, I did something similar but even simpler: a cron job
            | that ran every 5 minutes doing a git pull on the production
            | branch. Since it was a PHP stack, the new version was live
            | instantly without needing a rebuild or restart of any service.
            | 
            | It's basically a one-liner in bash. KISS.
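            | 
            | Something along these lines in the crontab (path and branch
            | are illustrative):
            | 
            |     */5 * * * * cd /var/www/app && git pull -q origin production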
        
             | stavros wrote:
             | That's good when you only want updates, but I wanted to be
             | able to launch new services on the server by just pushing a
             | new YAML file to git. Harbormaster does it wonderfully, and
             | it lets me see exactly what's running on the server and how
             | with one look at the YAML.
             | 
             | I can also back everything up by just copying one
             | directory, so that's great too.
        
           | ed25519FUUU wrote:
           | This to me is boring and simple technology. Well done.
        
             | stavros wrote:
             | Thank you! I really like it, it's been working great for a
             | while now, with no hiccups.
        
         | xani_ wrote:
          | For a few of my small, API-mostly apps I'm partial to just
          | embedding the statics into the binary and having one blob to run
          | and deploy; go:embed makes it particularly easy.
          | 
          | The container is then just a few files in /etc plus the binary
          | blob itself. Same with container-less deployment.
        
         | spyremeown wrote:
         | Agreed.
         | 
         | Containers... just work? Actually, _Docker_ just works. It is
          | rock solid and I'm very delighted I learnt so much of it.
         | 
         | The only thing I'm missing about Docker is a bigger focus on
         | on-the-fly image editing. Like, I love to have these container
         | development environments. I'd love to give some command inside
          | the container and have it magically appear in the Dockerfile if
         | it was successfully executed, so I don't have to manually copy
         | things over when I want to have reproducibility for later.
        
           | zegl wrote:
            | Docker is incredibly stable, and since it turns 10 next
           | year, OP will finally be able to migrate away from having a
           | build step on production servers! ;-)
        
         | yakshaving_jgt wrote:
         | > I like docker-compose because it restarts containers.
         | 
         | Still sounds like over-engineering to me. It's trivial to make
         | systemd supervise a service and automatically restart it on
         | failure.
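          | 
          | The supervision part is literally a couple of lines in the unit
          | file:
          | 
          |     [Service]
          |     Restart=on-failure
          |     RestartSec=2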
        
           | ed25519FUUU wrote:
           | Honestly they're both easy technologies to learn and master.
        
         | BlargMcLarg wrote:
          | Can say the same for CI/CD. The author talks about integration
          | tests; setting up a barebones "build this and deploy it"
          | pipeline is far more trivial and skips having to do it manually
          | every time.
        
           | ed25519FUUU wrote:
           | It's interesting that some engineers look at docker and ci/cd
           | and say "that's too complicated" and others look at a system
           | without it and say "it needs to be simplified"
        
             | BlargMcLarg wrote:
             | Inexperience or difference in experiences. I wouldn't
             | necessarily advocate containers as my experience has been
             | mostly "this takes far longer than the gain" (given the
             | OP's scenario).
             | 
              | Meanwhile, my experience is a minimal CI/CD pipeline takes
             | half an hour once you know manual deployment, and I lose
             | far more having to SSH and run scripts myself than telling
             | a machine to do it for me (not just time but mental
             | satisfaction too).
             | 
              | Though I wouldn't call either fancy this year - only past
              | the stage where the benefits drop sharply for individuals.
        
               | intrepidsoldier wrote:
               | Containers are more than just Kubernetes
        
       | dewey wrote:
       | I'm a strong advocate of boring technology too but I'm also very
        | much in favor of keeping things off my dev machine. In this case
        | they have to run ssh and git, plus a script to switch endpoints.
       | 
       | My current boring system for a simple Rails app
       | (https://getbirdfeeder.com) is that I push to GitLab, the CI
       | builds a docker image and does the ssh & docker-compose up -d
       | dance. That way I can deploy / roll back from anywhere even
       | without my computer as long as I can log into GitLab (and maybe
       | even fix things with their web editor). Seems a lot "more boring"
        | to me and having the deploy procedure codified in a .gitlab-
       | ci.yml acts as documentation too.
        
         | outworlder wrote:
         | My rule is: SSH to a machine and running manual commands for
         | any non-toy systems is an antipattern. Every time you SSH to do
         | something, it means you are missing automation and/or
         | observability.
         | 
         | In small teams it's bad because it's a waste of time. In large
         | teams it's bad because it creates communication overhead (it's
            | also a waste of time, but that can often be absorbed if things
         | don't get too crazy). For a single person team it is bad
         | because now they have to depend on their memory and shell
         | history.
        
           | emptysea wrote:
           | Yeah it's really straightforward to setup
           | Github/Gitlab/Circle/etc. to run your various deploy / ssh
            | commands for you, then you don't actually need access to the
           | box.
           | 
           | Really nice being able to open up GitHub and manually kick
           | off a deploy -- or have it deploy automatically.
        
           | ithrow wrote:
           | _For a single person team it is bad because now they have to
           | depend on their memory and shell history._
           | 
           | There is this thing called shell scripts.
        
       | 28304283409234 wrote:
        | Oh my, all the opinions again. Software is not, believe it or not,
       | a True or False game. TIMTOWTDI, folks. This guy rocks a solid
       | process that he is comfortable with and that works. I for one
       | applaud him for it.
        
       | bakugo wrote:
       | Maybe I'm missing something here, but what are the advantages of
       | having two identical servers with a floating IP that switches
       | between them instead of just running two instances of the app on
       | the same server and switching between them by editing the nginx
       | proxy config?
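        | 
        | (I.e. something like this, where a deploy would just flip the
        | upstream and reload nginx - ports made up:
        | 
        |     upstream app {
        |         server 127.0.0.1:8001;    # current build
        |         # server 127.0.0.1:8002;  # next build - swap and reload
        |     }
        | 
        | versus a whole second droplet.)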
        
         | noselasd wrote:
         | So you can do maintenance/kernel upgrades etc. on the server
         | that's not in production at the moment.
        
         | tjkrusinski wrote:
         | Less work. You'd need to have a different context for
          | environment variables, which adds another level of complexity.
         | Likewise if you had a dependency that needs updating (SSL,
         | etc), you can do it safely on the idle machine without worrying
         | about production traffic.
        
           | xani_ wrote:
            | Eh, it's pretty trivial to do. The reason for 2 servers is
            | high availability, not that it is easier in any meaningful
            | way.
            | 
            | I run something similar on my personal site and it's literally
            | just 2 identical configs with different ports/directories plus
            | some haproxy config.
        
       | wolfspaw wrote:
        | Reddit discussion:
       | https://www.reddit.com/r/Python/comments/xobba8/how_i_deploy...
        
       | moralestapia wrote:
        | Just a small comment: blue/green usually implies some sort of
        | load balancing, while here OP is just flipping a switch that
        | changes a hostname and swaps the blue/green roles between staging
        | and production.
        | 
        | Nothing wrong with that, though, and part of its genius is how
        | simple it is.
        
         | xani_ wrote:
          | You can still do that in this kind of setup: just put a load
          | balancer like HAProxy on both machines that sends traffic to
          | both nodes but prefers "the one that's local" in the default
          | config. Then you can use the LB to move traffic as fast or as
          | smoothly as you like, and it also masks failures like "app on
          | the current node dies", as the traffic near-instantly switches
          | to the other node, at some small latency cost.
          | 
          | This also gives you a bit more resilience: if the app on one
          | node dies (out of memory/disk/bug/whatever), the proxy will just
          | send the traffic to the other node without having to wait for
          | DNS to switch.
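          | 
          | Roughly this shape in haproxy terms (addresses and the health
          | check path are placeholders):
          | 
          |     backend app
          |         option httpchk GET /health
          |         server local  127.0.0.1:8000 check
          |         server remote 10.0.0.2:8000  check backup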
        
         | akoncius wrote:
          | Well, balancing is done at the DNS level, and it's not required
          | to have a literal load balancer.
        
           | moralestapia wrote:
           | Blue/green usually involves moving traffic from blue <->
           | green gradually. You usually do this with a load balancer.
           | That is all.
        
             | outworlder wrote:
             | Usually, yes. But now some[one|thing] will have to do the
             | monitoring and gradually ramp up traffic, rollback, etc.
             | 
             | It's perfectly ok to immediately flip the entire traffic if
             | you are a one man shop and your customers will tolerate it.
        
       ___________________________________________________________________
       (page generated 2022-09-26 23:01 UTC)