[HN Gopher] Best practices around creating production ready web ...
___________________________________________________________________
Best practices around creating production ready web apps with
Docker Compose
Author : nickjj
Score : 183 points
Date : 2021-06-01 18:25 UTC (4 hours ago)
(HTM) web link (nickjanetakis.com)
(TXT) w3m dump (nickjanetakis.com)
| oliwary wrote:
| I have really enjoyed working with docker compose for
| development. For production, there is a great guide by the
| creator of FastAPI on how to spin up a docker swarm cluster to
| use almost the same files here:
|
| https://dockerswarm.rocks/
| barbazoo wrote:
    | > web:
    |
    | > <<: *default-app
    |
    | > ...
    |
    | > worker:
    |
    | > <<: *default-app
|
    | This is always somewhat of a code smell to me. It makes me want
    | to dig deeper and see if the two use cases shouldn't have been
    | separated into separate Dockerfiles or even separate apps.
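    | For reference, a fuller sketch of the anchor pattern being
    | quoted might look like this (the service contents and the
    | worker command are illustrative, not from the article):

```yaml
# Extension field holding the shared service definition
x-default-app: &default-app
  build: .
  env_file: .env
  restart: unless-stopped

services:
  web:
    # Merge everything from the anchor, then add web-only keys
    <<: *default-app
    ports:
      - "8000:8000"

  worker:
    <<: *default-app
    # Same image and config, different entry point
    command: celery worker -l info
```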
| TameAntelope wrote:
| I tend to start with monolith and split things out when I get
| frustrated with how coupled they are.
| pentium166 wrote:
| It seems reasonable to me (for a compose-in-prod setup). The
| standard setup (at least for Rails, in my experience) is that
| the web server and background job processor will have the same
| dependencies and nearly identical configuration. Until the app
| has gone off the rails and I really do need to have the server
| and worker diverge, this approach seems like less to worry
| about than maintaining multiple images and duplicated but 98%
| identical container definitions.
| znpy wrote:
    | Pro-tip: nowadays podman can pretend to be a Kubernetes cluster
    | and launch Pod and Deployment definitions (in YAML, of course).
    |
    | That, of course, is way closer to the "production" in
    | "production ready".
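    | A minimal sketch of that podman workflow (the pod name and
    | file name here are hypothetical):

```shell
# Generate Kubernetes YAML from an existing pod
# (or write the Pod/Deployment YAML by hand)
podman generate kube mypod > mypod.yaml

# Recreate the pod from that YAML, the way a cluster would
podman play kube mypod.yaml
```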
| buildbot wrote:
| Without podman-compose too! I didn't know this was the case
| now: https://www.redhat.com/sysadmin/compose-kubernetes-podman
| ecliptik wrote:
| That sounds really cool. Do you happen to have a good link for
| this?
| buildbot wrote:
    | Not GP, but perhaps
    | https://www.redhat.com/sysadmin/compose-kubernetes-podman ?
| 2OEH8eoCRo0 wrote:
| Also daemonless and rootless
| nonameiguess wrote:
| I had remembered docker's own documentation in the past
| recommending against using compose at all in production, but that
| is apparently no longer the case:
| https://docs.docker.com/compose/production/
|
| I'm fairly surprised by this. You can deploy a fairly similar
| compose file into a Docker swarm cluster, but if you're doing a
| single-server deployment of everything, that is awfully brittle
| for production, to say nothing of considerations brought up in
| other comments where you may be using managed services from a
| cloud provider for things like database and caching in production
| but you're probably just spinning up a local postgres server on
| your laptop for development.
| throwaway2139 wrote:
    | https://github.com/docker/compose-on-kubernetes looks dead now,
    | maybe for misguided political reasons while they remain in
    | denial that swarm might compete. It's too bad; there's no
    | reason some k8s api/operator thing couldn't make this
    | frictionless.
|
| Still, someone already mentioned docker-desktop having some
| tricks for deploying straight to aws and [there is definitely a
| way to do this with
| ecs](https://aws.amazon.com/blogs/containers/deploy-
| applications-...). There's also tools like kompose which
| translate configs automatically and are suitable for use in a
| pipeline, so the only thing you need is a compose file in
| version control.
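    | The kompose pipeline step mentioned above might look roughly
    | like this (output directory is a placeholder):

```shell
# Translate the Compose file into Kubernetes manifests
kompose convert -f docker-compose.yml -o k8s/

# Apply the generated manifests to whatever cluster you have
kubectl apply -f k8s/
```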
|
| So yes, you can still basically use compose in production, even
| on a non-swarm cluster, and it's fine. A lot of people that
| push back against this are perhaps just invested in their own
| mad kubectl'ing and endless wrangling with esoteric templates
| and want to push all this on their teammates. From what I've
| seen, that's often to the detriment of local development
| experience and budgeting, because if even your dev environment
| always requires EKS, that gets expensive. (Using
| external/managed databases is always a good idea, but beside
| the main point here I think. You can do that or not with or
| without docker-compose or helm packages or whatever, and you
| can even do that for otherwise totally local development on a
| shared db cluster if you design things for multi-tenant)
|
| At this point I'll face facts that the simple myth here
| (docker-compose is merely a toy) is winning out over the
| reality (docker-compose is partly a tool but isn't a platform,
| and it's just a format/description language). But consider..
| pure k8s, k8s-helm, ECS cloudformation, k8s-terraforming over
| EKS, and docker-compose _all_ have pretty stable schemas that
| require almost all the same data and any one of them could
| pretty reasonably be considered as a lingua-franca that you
| could build the other specs from (even programmatically).
|
| From this point of view there's an argument that for lots of
| simple yet serious projects, docker-compose should win by
| default because it is at the bottom of the complexity ladder.
| It's almost exactly the minimal usable subset of the abstract
| description language we're working with, and one that's easy to
| onboard with and requires the least dependencies to actually
| run. For example: even without kompose it's trivial to automate
| pulling data _out_ of the canonical docker-compose yaml and
| injecting that _into_ terraform as part of CD pipeline for your
| containers on EKS; then you keep options for local-developer
    | experience open and you're maintaining a central source of
| truth for common config so that your config/platform is not
| diverging more than it has to.
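    | As a toy illustration of pulling data out of a compose file as
    | the source of truth: a small function (names and structure are
    | mine, not from the comment) that takes an already-parsed
    | compose mapping and flattens it into tfvars-style data you
    | could feed to terraform.

```python
import json

def compose_to_tfvars(compose: dict) -> dict:
    """Extract per-service image and port data from a parsed
    docker-compose mapping into a flat tfvars-style structure."""
    services = {}
    for name, svc in compose.get("services", {}).items():
        services[name] = {
            "image": svc.get("image", ""),
            # compose ports look like "8080:80"; keep the container port
            "container_ports": [
                int(p.split(":")[-1]) for p in svc.get("ports", [])
            ],
            "environment": svc.get("environment", {}),
        }
    return {"services": services}

# Example: a compose file already parsed into a dict (e.g. via PyYAML)
compose = {
    "services": {
        "web": {"image": "myapp:1.0", "ports": ["8000:8000"]},
        "worker": {"image": "myapp:1.0"},
    }
}

print(json.dumps(compose_to_tfvars(compose), indent=2))
```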
|
| I'm an architect who works closely with ops, and in many ways
| not a huge fan of docker-compose. But I like self-service and
    | you-ship-it-you-run-it kinds of things that are essential for
| scaling orgs. So for simple stuff I'd rather just use compose
| as the main single-source-of-truth than answer endless
| bootstrappy questions about k8s or ECS if I'm working with
| others who don't have my depth of knowledge. (Obviously compose
| has been popular for a reason, which is that kubernetes really
| is still too complicated for a lot of people and use-cases.)
| Don't like these ready-made options for compose-on-ECS, or
| compose-to-k8s via kompose? Ok, just give me your working
| docker-compose and I'll find a way to deploy it to any other
| new weird platform, and if I need some pull-values/place-
| values/render-template song and dance with one more weird DSL
| for one more weird target deployment platform, then so be it.
| I've often found the alternative here is a lot of junior devs
| deciding that deployment/dev-bootstrap is just too confusing,
| their team doesn't help them and pushes them to an external
| cloud-engineering team who doesn't want to explain this again
| because there's docs and 10 examples that went unfollowed, so
| then junior devs just code without testing it all until they
| have to when QA is broken. Sometimes the whole org is junior
| devs in the sense that they have zero existing familiarity with
| docker, much less kubernetes! Keep things as simple as
| possible, no simpler.
|
| Seen this argument a million times, and no doubt platform
| choices are important but even pivoting on platforms is
| surprisingly easy these days. When you consider that compose is
| not itself a platform but just basically a subset of a wider
| description language, this all starts to seem a bit like a json
| vs yaml debate. If you need comments and anchors, then you want
| yaml. If you need serious packaging/dependencies of a bunch of
| related microservices, and a bunch of nontrivial JIT value
| lookup/rendering, then you want helm. But beyond org/situation
| specific considerations like this, the difference doesn't
| matter much. My main take-away lately is that leadership needs
| to actually decide/enforce where the org will stand on topics
| like "local development workflows"; it's crazy to have a team
| divided where half is saying "we develop on laptops with
| docker-compose" and half is saying "we expect to deploy to EKS
| in the dev environment". In that circumstance you just double
    | your footprint of junk to support, and because everyone wants
    | to be perfectly pleased, everyone ends up annoyed.
| reportt wrote:
| Using the docker-compose.override.yml helps with that. In local
| you can just have those services/configurations added, and then
| the production environment doesn't even need to know they
| exist.
| wiktorcie wrote:
| just use dokku instead, it's much easier. Why bother with docker
| compose?
| clintonb wrote:
| Dokku has a lot of overhead to run.
| layoric wrote:
| Not for production, but used it for deployments to remote servers
| via SSH using GitHub Actions. Wrote a guide and video tutorial of
| how to as well, it uses nginx proxy and surprisingly updates were
| zero down time since the nginx proxy didn't change and handles
| switching app ports seamlessly. For low traffic app could be used
| for production but I wouldn't since it is a single server
| deployment setup. Good for dev/test environment, I use this
| pattern for 20+ live demo apps on a single server.
|
| Tutorial: https://docs.servicestack.net/do-github-action-mix-
| deploymen...
|
| Video: https://youtu.be/0PvzcnxlBvc
| peterthehacker wrote:
| Running docker compose in production is not a good practice from
| what I understand. The persistence layers, like MySQL or Redis,
| shouldn't be managed with local volumes. If you want to do
| something like this then kubernetes with persistent volume claims
| would be better, but running databases in k8s can be pretty
| tricky. Most of the time RDS is the best practice for your DB.
| zmmmmm wrote:
| Swapping out the persistence layer in prod vs dev is one of the
| topics covered by the article, for the reasons you mention.
| 0xbadcafebee wrote:
| Docker Compose now works natively with AWS ECS, so you could
| conceivably "scale" Docker Compose. Plus there's AWS's _ecs-cli_
| which can work, though IIRC you basically have to maintain two
| different configs, one for local development, one for remote.
|
| And that's the general flaw in using Compose outside of local
| development. It's simply not designed as a general purpose (and
| thus large-scale) deployment orchestrator. If you get it working
| locally, you _probably_ need a different setup for remote. That
    | affects things like testing, which means it's not really the
| same in dev as in prod. And even if it were, Compose isn't
| persistent, meaning it's just a script to start jobs on some
| other thing.
|
| Every time I've worked on a team that tried to do Compose in
| production (or even just for CI) it ended up a weird snowflake or
| with duplicate processes, and we replaced it with something more
| standard. My standard advice now is to model both production and
| local development to be as close as possible and then pick the
| simplest method to support that use case. In general this means
| going from "Compose on one host" to Swarm/Nomad/ECS on multiple
| hosts to K8s. And of course, try to make your CI use the same
| exact methods you use for dev and prod, so that using CI doesn't
| give you different results than in dev or prod.
| mdoms wrote:
| I don't understand his "Avoiding 2 Compose Files for Dev and Prod
| with an Override File" point. He just says it, but never
| justifies it.
| nickjj wrote:
| Hi,
|
    | It's mentioned in the video and briefly in the blog post. It's
    | so you don't have to duplicate a bunch of services between 2
    | files while you slightly change a bit of configuration for
    | those services depending on the environment you're in, or even
    | run different containers altogether (which is the problem the
    | override file solves).
| randompwd wrote:
| It's a terrible idea that's why.
|
| It comes with far more(&more dangerous) downsides than upsides.
|
| It's a rookie mistake for simple applications which most people
| learn to not do.
| reportt wrote:
| The idea is that you have a core structure that is used in both
| local/dev environment and the prod environment, with just the
| override applying changes. For example, your local override
| might have the "build" configuration, whereas the dev
| environment or production environment docker-compose should not
| have that configuration (assuming you are pulling from a
| private registry). That single core docker-compose.yml can be
| copied to any environment it needs and it should just run the
| app, but if you really, really need to override something for a
| specific reason, it's not hard to do it without modifying the
| core docker-compose.yml.
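    | Concretely, the split described above might look something
    | like this (file contents and registry name are illustrative):

```yaml
# docker-compose.yml -- the core file, copied to every environment
services:
  web:
    image: registry.example.com/myapp:latest
    env_file: .env
---
# docker-compose.override.yml -- kept local only; docker compose
# merges it in automatically, adding the build/bind-mount config
services:
  web:
    build: .
    volumes:
      - .:/app
```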
| trinovantes wrote:
| Probably so you don't have duplicate settings in docker-
| compose.prod.yml and docker-compose.dev.yml when the only
| difference is probably just 1 line specifying NODE_ENV etc.
| stevebmark wrote:
    | I haven't heard of using docker compose for production, only
    | for local development. In production I normally see people
    | building production images and spinning them up in
| their cloud with Terraform and whatever other infrastructure they
| need, especially for databases where your local MySQL container
| isn't going to be configured or provisioned like an AWS RDS
| instance. What do you hand a docker-compose file off to in
| production? Swarm? When would you choose this path over a
| separate system to provision your production resources (like
| Terraform)?
| symlinkk wrote:
| Why would you NOT choose this path? Local and prod are the same
| with this method, that's a big advantage.
| osrec wrote:
| We use docker compose in production, but have our own custom
| tooling based around it.
|
| K8s was a little too complex and docker swarm a little too
| simple. Rolling our own solution was a last resort, but it
| worked out well in the end - more control, deeper understanding
| and no nasty surprises in behaviour while scaling etc.
| dariusj18 wrote:
| I found that Hashicorp Nomad fit really well in between k8s
| and swarm for power to complexity. I ended up not being able
| to use it because they hadn't implemented the networking
| stuff that they eventually got, but it's pretty cool stuff.
| ensignavenger wrote:
| Docker Swarm is one option, but Docker Desktop has the ability
| to deploy from a Compose file directly to AWS. There are
| additional tools for other platforms. See
| https://aws.amazon.com/blogs/containers/deploy-applications-...
| for example.
| pricechild wrote:
| It's currently in preview, but https://docs.microsoft.com/en-
| us/azure/app-service/quickstar... doesn't look bad.
| cs02rm0 wrote:
| I don't think you would in prod, unless you were happy with it
| all running on one host.
|
| Shame really, I find compose simple and neat and k8s+terraform
| an unsettling mess.
| 411111111111111 wrote:
| Docker swarm deploys to multiple hosts and there is also
| kompose for k8s.
|
| I haven't used either of them in production though
| namdnay wrote:
| Same here, docker compose is a great little tool for local
| development. You can get your apps up and running with their
| respective elasticsearches and localstacks etc, then attach to
| process if needed to debug
|
| For anything more serious, Terraform makes much more sense IMHO
| nickjj wrote:
| OP here.
|
| A lot of my clients are happily using Docker Compose in
| production. Some even run a million dollar / year businesses on
| a single $40 / month server. It doesn't get handed off to Swarm
| or another service, it's literally docker-compose up --remove-
| orphans -d with a few lines of shell scripting to do things
| like pull the new image beforehand.
|
| You can still use Terraform to set up your infrastructure,
| Docker Compose ends up being something you run on your server
| once it's been provisioned. Whether or not you decide to use a
| managed database service is up to you, both scenarios work. The
| blog post and video even cover how you can use a local DB in
| development but a managed DB in production using the same
| docker-compose.yml file, in the end all you have to do is
| configure 1 environment variable to configure your database at
| the app level with any web framework.
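    | The "few lines of shell scripting" around that command could be
    | as small as this sketch (flags are from the comment; the rest
    | is my assumption):

```shell
#!/usr/bin/env bash
# Minimal single-server deploy along the lines described above
set -euo pipefail

# Pull the new image beforehand so the restart is quick
docker-compose pull

# Recreate only containers whose image or config changed, and
# clean up services that were removed from the file
docker-compose up --remove-orphans -d
```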
| merb wrote:
| how do you handle no downtime deployments? how do you backup
| data?
|
| I rather would use kompose+kustomize to convert it to k8s and
| than use k3s with ingress-nginx with hostPath and backup
| local volumes with velero.
| zmmmmm wrote:
| > I rather would use ...
|
| you'd rather use six complex things instead of one simple
| one?
| merb wrote:
| well I asked, because everybody answered either with the
| following:
|
| > we do not care about no downtime or stateless nodes and
| a lb in front
|
| and b:
|
| > backup with the standard tooling or stateless nodes
|
| so basically they just use it mostly to "package" the
| application and everything else is probably done with
| different tooling.
| vbsteven wrote:
| I'm not the parent but I'll pitch in on this as I have a
| few client projects using docker compose for production.
|
| No downtime deployments: I don't bother for these projects.
| A minute of downtime in off-peak hours for a restart is
| fine.
|
| Data backup: just like you would backup any other local
| data: cron, rsync, pg_dump. And there isn't much local data
| anyway as most services are stateless, using S3 or a
| database for long term storage.
|
| I choose docker compose for production when I want to keep
| complexity to a minimum: All I need is a cloud vm, an
| ansible playbook to install docker and configure backups.
| Typically in these scenarios I run the database as an OS
| service and only use docker for nginx/redis and application
| services.
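    | The cron + pg_dump + rsync approach mentioned above could be
    | sketched like this (database name, user, hosts and paths are
    | all placeholders):

```shell
#!/usr/bin/env bash
# Nightly backup: dump Postgres and sync the dumps off-host
set -euo pipefail

BACKUP_DIR=/var/backups/app
mkdir -p "$BACKUP_DIR"

# Dump the database in compressed custom format
pg_dump -U app -Fc appdb > "$BACKUP_DIR/appdb-$(date +%F).dump"

# Keep two weeks of dumps locally
find "$BACKUP_DIR" -name '*.dump' -mtime +14 -delete

# Ship everything to the backup host
rsync -az "$BACKUP_DIR/" backup@backup.example.com:/srv/backups/app/
```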
| angrais wrote:
| Why do you use the database as an OS service and not in
| docker?
|
| Using it in docker would allow easier local development,
| and shorter setup time.
| aprdm wrote:
| How's it different than running things with pure docker or
| before docker? There were backups and no downtime
| deployments before docker became a thing... docker compose
| is already a layer above docker (but really just docker).
| merb wrote:
| yeah but stuff like that is hard and not solved with
| compose, you can update your compose but that would
| result in downtime, so you can either do a/b (blue/green)
| deploys or you somehow reload the content inside the
| container?! everything that needs special tooling.
| mamcx wrote:
| You don't always need "no downtime deployments", few
| minutes are tolerated in many environments (I work in
| enterprise software, where it could be days!).
|
| Backups? You only need backups or the Db, and that have
| zero impact in the use of docker or not*
|
| *ie: I know what very complex setups the micro-service crow
| use for serve their smalls-size disruptive app. I still
| think cron + pg_dump + rsync is more than enough...
| merb wrote:
| well I'm asking because I have a docker-compose file and
| I use kustomize+kompose on it, because volume backup is a
| thing and the file comes from an upstream vendor and we
| can actually miss a few minutes/hours/days of uptime, but
| since the databases also use compose and sadly in a way
| that I can't remove it (migrate.sh script that makes use
| of compose commands), else I can use my kustomize+kompose
| setup which basically replaces the kustomize+kompose with
| a custom init container that I built, however everytime
| they change their shell scripts I need to fix my stuff...
| but they say that I should just backup my whole volume/vm
| via lvm snapshots or via vmware whatever. which I find a
| little bit akward and I do not want to backup my data
| with disk snapshots. another way would be to natively
| backup stuff via compose, but they use many databases and
| it's way harder than just use velero.
| mhuffman wrote:
| Another one here! I use docker-compose, normal periodic DB
| backups to S3 and a twilio script that can use a lambda
| function to reload the entire system or parts of it by
| simple phone commands when necessary. Site does millions of
| views per day.
| lmeyerov wrote:
| We've been doing this because alternatives like k8s burned
| out time, $, and other team members, and our on-prem
| customers like it operationally vs k8s. SW teams ran high
| uptime stuff before k8s, and you can still do most of those
| tricks. We're revisiting, but selectively and only b/c
| we're now getting into trickier scenarios like multi-az and
| potentially elastic GPUs for our SaaS offering
|
| Ex:
|
| - blue/green stateless nodes
|
| - DNS/load balancer pointer flips
|
| - Do the DB as a managed saas in the cloud and only ^^^ for
| app servers
|
| - Backups: We've been simply syncing (rsync, pg_dump, ..)
| w/ basic backup testing as part of our launch flow
| (stressed as part of blue/green flow), and cloud services
| automate blob backups. We're moving to DBaaS for doing that
| better, which involves taking the DB out of our app for our
| SaaS users
|
    | In practice, our service availability has been degraded more
    | by external infra factors, like Azure's DNS going down, than
    | by not using k8s/swarms for internal infra. Likewise by our
    | own logic bugs causing QoS issues that don't show up in our
    | 9's reports.
| zmmmmm wrote:
| > What do you hand a docker-compose file off to in production
|
| Why hand it off to anything?
|
| > When would you choose this path over a separate system to
| provision your production resources
|
| When you have a simple system with modest availability
| requirements and you don't like unnecessary complexity.
___________________________________________________________________
(page generated 2021-06-01 23:00 UTC)