[HN Gopher] Docker Compose best practices for dev and prod
___________________________________________________________________
Docker Compose best practices for dev and prod
Author : brycefehmel
Score : 244 points
Date : 2022-08-16 15:22 UTC (7 hours ago)
(HTM) web link (prod.releasehub.com)
(TXT) w3m dump (prod.releasehub.com)
| postalrat wrote:
| "August 18, 2022"
| qbasic_forever wrote:
| One of the coolest lesser known features of Docker is time
| travel.
| kh_hk wrote:
| docker compose is unnecessarily nerfed for use in development.
| I swear every company I work with has this problem and I always
| end up carrying along my hybrid bash-to-docker-compose cmd line
| tool that makes the dev experience under compose nice.
|
| Did you know that compose accepts stdin for `-f` files? Anything
| that outputs yaml is a valid "compose provider". Self plug, more
| on that, https://eskerda.com/complex-docker-compose-templates-
| using-b...
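A minimal sketch of the stdin trick (service and image names are illustrative, not from the linked post): any script that prints valid compose YAML can be piped to `docker compose -f -`.

```shell
# Minimal sketch of a "compose provider": a function that emits YAML.
gen_compose() {
  cat <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
EOF
}

# '-f -' tells compose to read the compose file from stdin, e.g.:
#   gen_compose | docker compose -f - up -d
gen_compose
```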
| jayd16 wrote:
| So what's the complaint? Compose seems pretty easy to work
| with to me.
| kh_hk wrote:
| For me the problem is that yaml (and the compose
| specification) is not self-sufficient to support complex
| scenarios. You will start by setting things and discovering
| that ENV vars * on the global scope of the compose
| specification are not supported, so you end up having to
| template and output yaml.
|
| Try having 40 microservices and quickly iterate between
| fixing bugs in them. Some you want mounted, others you want
| running unmounted. I understand the premise is already
| broken, you do not want 40 microservices, but that's not a
| precondition you can fix without a time machine.
|
| [* EDIT: by env vars on the global scope, I mean setting
| things on the global scope by using an env var, say, you want
| a certain volume named after an environment variable. You can
| only use ENVs as values.
|     # this works
|     foo: "${SOME_ENV}"
|     # this does not
|     ${SOME_ENV}: "foobar"
| lazide wrote:
| This is literally 'why kubernetes?'
| itintheory wrote:
| Your problem is that you can't set ENV variables at the
| global level? Have you looked at YAML anchors? Makes things
| more DRY for longer/complex compose files.
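For anyone unfamiliar, a sketch of what YAML anchors look like in a compose file (service names and values are invented for illustration):

```yaml
# Hypothetical compose file sharing common settings via an anchor.
x-common: &common          # '&common' defines the anchor
  restart: unless-stopped
  environment:
    LOG_LEVEL: info

services:
  api:
    <<: *common            # '<<: *common' merges the anchored mapping in
    image: example/api
  worker:
    <<: *common
    image: example/worker
```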
| kh_hk wrote:
| That's just the tip of the iceberg. My problem is that YAML
| is not a Turing-complete language (and dev environments are
| usually messy by their nature).
| terseus wrote:
| Given that YAML is a configuration language, I'd say that
| not being Turing complete is a feature, not a bug nor a
| limitation; I always want my configuration files to be
| declarative, to avoid the perils of logic.
|
| Edit: Also, I don't see the need for a turing complete
| language for something like docker compose, if you need
| something _really_ complex you can always script a
| docker-compose.yml generator with all the logic and
| complexity you need.
| benji-york wrote:
| You might find Cog (https://nedbatchelder.com/code/cog)
| helpful for generating YAML on the fly.
| SirSavary wrote:
| When I need a Turing-complete Compose file, I use Pulumi
| to spin up the Docker containers and wire things
| together: https://www.pulumi.com/blog/pulumi-and-docker-
| development-to...
| pkrumins wrote:
| Docker-compose just works and my team gets shit done. Does your
| team get shit done? If they do then what are you complaining
| about? If it hits production then it's all good!
| kh_hk wrote:
| All I am saying is:
|
| * docker compose could be better
|
| * docker compose is not improving much
|
| * by mixing docker compose with templates you can get complex
| stuff going, but it's not a well-advertised feature.
|
| Most of the "cool dev tool env" that appear are based on
| kubernetes, or something that is _not_ docker compose, which
| makes them difficult to adopt. I would like docker compose to
| improve on a direction that makes development easier, that's
| all.
| fio_ini wrote:
| You can accomplish the same thing using environment variables.
| You can pass them in at build time or deploy time.
| `ENVIRONMENT=dev docker-compose up` or `docker-compose build
| --build-arg ENVIRONMENT=dev` and in your case maybe you write
| some function `find_answer() { something-async ... };
| ENVIRONMENT=$(find_answer) docker-compose ...` You can also
| read in .env files... Which in my opinion is cleaner, since
| you can check the configurations into source control for YAML
| or .env files, which is easier to read than hundreds of lines
| of bash and its esoteric syntax. I love bash but I'm just
| sayin.
| kh_hk wrote:
| It's a simple example, maybe too simple to get the point.
| What I want is to conditionally set things in a yaml file. If
| you have ever used a templating language, you know this
| problem:
|     services:
|       redis:
|         ...
|         {{ if some_condition }}
|         volumes: bla bla
|         {{ end if }}
|       {{ if some_other_condition }}
|       another_service:
|         ...
|       {{ end if }}
|     volumes:
|       {{ for volume in volumes }}
|       - ...
|       {{ end for }}
| Gordonjcp wrote:
| Why do you want to set things conditionally like that?
|
| This feels like it's a very incorrect solution to a problem
| that you're making far more complicated than it needs to
| be.
| kh_hk wrote:
| I apply this solution exclusively on the development
| environment. I have found that having the power to bail
| out to a shell whenever docker compose does not support
| something helps me iterate faster on my local dev
| (without affecting others).
|
| The following is just a contrived example. Why would I do
| that? Because I can
|     # Bring up services without a command
|     #
|     # usage:
|     #   nocmd.sh up -d service_name
|     function _compose {
|         docker-compose -f docker-compose.yml $@
|     }
|     function nocmd {
|         cat << EOF
|     services:
|       $1:
|         command: tail -f /dev/null
|     EOF
|     }
|     function nocmd_compose {
|         local service=${@: -1}
|         _compose -f <(nocmd "$service") $@
|     }
|     nocmd_compose $@
| encryptluks2 wrote:
| Podman allows for Kubernetes templates, so you could
| potentially use Helm for this; otherwise, Minikube on
| Podman. I wish they had Go templating in the compose files
| though, that would be rad.
| [deleted]
| fio_ini wrote:
| no haha I get the point but it's just another example I see:
| someone over-engineering a problem where it should be
| declarative. In what use case would you dynamically
| provision some arbitrary n number of volumes without knowing
| the path names to them? For volumes, why wouldn't you just
| use "volume_name:/some/path" ...
| "another_volume:/another/path" and let docker manage the
| volume for you? You're going to end up writing
| "/some/path,/another/path" in a different file, so why not
| just put it in the compose file? Also, why wouldn't you just
| mount the root path and put n paths inside it instead of
| making them all volumes? I'm struggling to see that your use
| case actually needs such a feature; you're solving a problem
| for something that's actually broken somewhere else. You're
| adding a virtual layer that makes things harder to manage or
| understand because you don't understand all the features of
| docker compose, instead of fixing the root of the issue. I'm
| not convinced that your "feature" is necessary.
| kh_hk wrote:
| Everything is simple until someone asks you to try and
| diagnose (and hopefully fix) a bug on an upgrade between
| two different versions of a piece of software that can be
| deployed using 5 different strategies or that has 20
| moving services...
| fio_ini wrote:
| the point of docker-compose is partly so that your stack
| is ephemeral? you stand things up and shut it down with
| one simple switch and it can move between different
| environments and use the same declaration as the last
| environment. so now I gotta drag your bash script around
| and embed my yaml file in it? lol
| fio_ini wrote:
| your volume example is an anti-pattern in regard to
| something like the 12-factor app. You're conditionally
| changing and setting the volumes based on the environment
| you're in? Your code and service is gonna be busted. How
| would you even know if anything worked in prod if it's not
| reproducible in dev, since
|     # if dev then volume xyz else volume abc
| kh_hk wrote:
| It's an anti-pattern just because you say so. This is a
| simplified example of a complex problem that needs
| arbitrary volume mounts. Using volume mounts to run
| software inside containers is a common dev pattern in
| scenarios where rebuilding takes too much time to iterate
| quickly. I am sorry, but unfortunately I cannot expand on
| the issue further here to help you understand it, and I am
| happy, too, if you think I am over-engineering compose.
| fio_ini wrote:
| I honestly don't care if it's you misunderstanding or
| your teammates building a bad framework/foundation to
| iterate upon. Doesn't matter to me either way, bud. But
| you're going against the premise of docker and
| docker-compose, which is write once and run anywhere,
| because the services will be different every time you deploy
| since they'll have variable path and file locations etc.
| That's why it's an anti-pattern, not because "i say so".
| imwillofficial wrote:
| This was an awesome article.
|
| It lays out every single hard lesson I discovered when working
| with docker compose.
|
| Great minds think alike! (Or I just got lucky)
| edfletcher_t137 wrote:
| The article is titled "for dev and prod" (emphasis on the
| latter), yet the very first suggestion is to bind-mount your
| source outside the container so you can mess with it at any time?
| How is this good advice for production?! (Update: this is indeed
| great advice for development, and it's actually in a sub-section
| dedicated to that! My bad.)
|
| Update: reading comprehension, it's a thing! :) Thanks chrsig for
| pointing out that this is under an appropriate sub-heading. D'oh!
| chrsig wrote:
| it's under a subheading of
|
| > Docker Compose Best Practices for Development
| pcorsaro wrote:
| Just curious, why would a bind-mount be bad in production? I
| understand if you're running multiple app servers and need the
| code to be deployed in a lot of places that they'd be bad, but
| if you're just using a single server, what's the downside of
| using bind-mounts? You could use git to push code changes up to
| your server, and they'd be picked up immediately without having
| to rebuild and push an image.
| adamckay wrote:
| Usually for production you want to know with 100% certainty
| what version of the software is running. Whilst you can do
| that with Git commit hashes you can't be sure no one has
| performed a hotfix and modified code on the production server
| which is not committed to source control. Then there's also
| the potential problem of pulling the latest updates from
| source control but forgetting to restart the container so
| it's not obvious you're actually running outdated code.
|
| It's also a lot easier if there's a bad deploy to roll back
| the update by reverting the image tag in the compose file and
| restarting rather than checking out specific older commits
| and risk getting into a funky state with detached heads and
| the like.
| edfletcher_t137 wrote:
| This, 100%. Reproducibility, traceability and auditability
| _all_ go out the window when you allow source modification
| in production directly. You _never_ want to ask "what's
| running in production" only to be greeted by crickets in
| response, or worse, a lone "my branch from five days ago."
| gottie wrote:
| I find docker compose override files, whilst powerful,
| ultimately much more confusing and error-prone to work with.
| Instead I would recommend just duplicating your compose
| files and changing each separately, perhaps using a
| templating engine. It is often much easier to work with a
| single compose file containing all the config in one
| easy-to-read block, compared to having to do a yaml merge of
| all the various blocks in your head whilst reasoning about
| override files.
|
| Similarly yaml anchors are cool and can easily reduce duplication
| in your files. However I've found the majority of Devs and users
| find them confusing and unexpected in a compose file. As a result
| I would recommend not using them and just sticking with plain
| yaml for simplicity's sake and not being afraid of duplication.
|
| One other alternative to yaml anchors is instead using the
| env_file: property on your services
| (https://docs.docker.com/compose/environment-variables/#the-e...)
| . This way you can place all of your env variables in a
| single file and share the same set of env variables between
| many services without having to have large duplicate env
| blocks in your compose file.
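A sketch of that pattern (file and service names are invented for illustration): both services read the same env file instead of repeating an `environment:` block.

```yaml
# common.env (shared, hypothetical contents):
#   DB_HOST=db
#   LOG_LEVEL=info
services:
  api:
    image: example/api
    env_file: ./common.env
  worker:
    image: example/worker
    env_file: ./common.env
```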
| dijksterhuis wrote:
| I usually suggest passing `--env-file` to the actual compose
| binary. Gives you the "multiple deployment versions" without
| faffing about with external templating tools.
|     docker compose --env-file ${INSTANCE}.env -f docker-compose.yml up
|
| In the `*.env` file:
|     CONF_A_VALUE="some_value_a"
|     CONF_B_VALUE="some_value_b"
|     CONF_C_VALUE="some_value_c"
|     ...
|
| In the compose file:
|     services:
|       service-1:
|         # you need to be more explicit here
|         environment:
|           SOME_NAME_FOR_A: "${CONF_A_VALUE:?err}"
|           SOME_NAME_FOR_B: "${CONF_B_VALUE:?err}"
|         # but it gives you simple templating in other places
|         labels:
|           com.company.name.label.a: ${CONF_D_VALUE:?err}
|           com.company.name.label.b: ${CONF_E_VALUE:?err}
|           com.company.name.label.c: ${CONF_F_VALUE:?err}
|         ports:
|           - "${CONF_G_VALUE:?err}:8020"
|       service-2:
|         # also gives services more freedom to diverge
|         # from one another (if required)
|         environment:
|           MAYBE_ANOTHER_NAME_FOR_A_BECAUSE_COWBOY_TEAM_REASONS: "${CONF_A_VALUE:?err}"
|           SOME_NAME_FOR_C: "${CONF_C_VALUE:?err}"
|
| Where `?err` forces the variable to be set and non-empty:
| https://docs.docker.com/compose/environment-variables/#subst...
|
| It's just a shame that `docker stack` doesn't have an `--env-
| file` argument yet for swarm deploys.
| dijksterhuis wrote:
| I think the YAML anchors section would lead to an unparseable
| compose file?
|
| No `services` block above the `api` and `web` services, on the
| same level as `x-app:`.
|
| So the `<<: *default-app` anchor won't be recognised as the block
| that defines it is never terminated prior to use?
|
| (I've not tested as currently on the windows gaming box. this is
| purely from reading it, so happy to be corrected).
| duskwuff wrote:
| Yeah, the example in this blog post is nonsense. The services
| shouldn't be defined inside the same block as the defaults.
| newman314 wrote:
| Is there a "proper" way to have per host configs? I have a setup
| where I need to set "hostname: <HOSTNAME>" depending on the
| system I'm on and right now I'm having to resort to using "docker
| compose -f compose.<hostname>.yml up -d" where
| compose.<hostname>.yml extends compose.yml
| [deleted]
| azeirah wrote:
| Is there a reason why you wouldn't be able to use environment
| variables for this? Or am I misunderstanding your issue?
| koprulusector wrote:
| docker compose automatically looks for .env file with the
| format key=value and will auto populate variables in the
| docker-compose.yml with matches.
|
| Ex:
|
| docker-compose.yml:
|
| hostname: ${HOSTNAME}
|
| .env:
|
| HOSTNAME=foo.bar.com
| bacchusjackson wrote:
| biggerChris wrote:
| Docker is cool and all.
|
| Kids these days don't know anything about Tmux and port 433.
| Exposing localhost to the internet and getting your internet
| monitored by your ISP and getting letters in the mail; for
| hosting movies and Storage services.
|
| Lol.
| sureglymop wrote:
| Am kid, do know
| siftrics wrote:
| If it's going over port 443, how would your ISP know you're
| hosting movies?
|
| The way ISPs catch you is by seeing that you're connecting to
| IPs that are known to host content illegally. There's no way
| for your ISP to know what's going _out_ of your port 443
| without breaking TLS.
| jjice wrote:
| The post said 433, not 443. I read it as the HTTPS port as
| well at first.
| qbasic_forever wrote:
| Your ISP doesn't care about what you're connecting to and
| accessing, at least not until they get a court order
| demanding that info.
|
| They care a LOT about what you're serving to other people
| from their network though as there might be some legal issues
| directed back at them.
| pixl97 wrote:
| Your ISP may care about what you're connecting to very much,
| and is prepared to sell it to the highest bidder.
|
| https://arstechnica.com/information-
| technology/2015/03/atts-...
| pkrumins wrote:
| Love your comment! Come work for me!
| sgt wrote:
| Love your compliment and belief in other people. Come work
| for me.
| edfletcher_t137 wrote:
| Back in the day we used screen because tmux didn't exist yet.
| koprulusector wrote:
| What does tmux have to do with anything?
| Macha wrote:
| A common deployment strategy for small scale services,
| especially around the 2005-2015 era, was to just copy files
| to your VPS, ssh in and start it up with nohup or in a
| screen/tmux session so it stayed running when you closed your
| ssh session. screen/tmux were especially helpful for stuff
| like irssi or minecraft servers which had a server side
| console you might want to return to.
|
| This was a step up for novice sysadmins/developers from "FTP
| to the webroot" without having to learn real deployment
| tools, how to write init scripts, etc. Of course it wasn't
| good enough for anything business critical, but I'm sure it
| happened there too.
|
| Also, if you then wanted to access that service from your
| college network, who may have gone out of their way to block
| gaming, a common workaround was to host it on port 443. By
| 2010-2015, colleges were using DPI sophisticated enough to
| see that "that's not http traffic" on port 80, but tended to
| treat 443 as "that's encrypted, must be a black box". That's
| how I ran mine in ~2013.
| aidos wrote:
| What's port 433? You mean 443?
| dijksterhuis wrote:
| 433 for those who didn't know (like me)
|
| https://en.wikipedia.org/wiki/Network_News_Transfer_Protocol
|
| > The Network News Transfer Protocol (NNTP) is an application
| protocol used for transporting Usenet news articles (netnews)
| between news servers, and for reading/posting articles by the
| end user client applications.
|
| > Well-known TCP port 433 (NNSP) may be used when doing a
| bulk transfer of articles from one server to another.
| jjtheblunt wrote:
| on many unix variants, /etc/services lists the ports and
| normal uses
| biggerChris wrote:
| Yes. I meant port 443.
| fortyseven wrote:
| Maybe some "mobile responsiveness" best practices should be
| looked into next.
| Ajedi32 wrote:
| I like to divide up my compose files into three:
|
| 1. A `docker-compose.yaml` for basic configuration common among
| all environments
|
| 2. A `docker-compose.dev.yaml` for configuration overrides
| specific to development
|
| 3. A `docker-compose.prod.yaml` for configuration overrides
| specific to production
|
| Then I have `COMPOSE_FILE=docker-compose.yaml:docker-
| compose.dev.yaml` in my `.env` file in the root of the project so
| I can just run `docker compose up` in dev, and
| `COMPOSE_FILE=docker-compose.yaml:docker-compose.prod.yaml` in a
| `prod.env` file so I can run `docker compose --env-file=prod.env
| up` in production. Sometimes also a `staging.env` file which is
| identical to `prod.env` but with a few different environment
| variables.
|
| It's been working pretty well for me so far.
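The overrides themselves might look something like this sketch (contents are invented for illustration, not from the comment): the base file pins the image, and the dev file layers bind mounts and a debug port on top via compose's file merging.

```yaml
# docker-compose.yaml -- base, shared by all environments
services:
  app:
    image: example/app:latest

# docker-compose.dev.yaml -- dev-only overrides merged on top of the base:
# services:
#   app:
#     volumes:
#       - ./src:/app/src      # bind-mount source for live editing
#     ports:
#       - "9229:9229"         # expose a debug port
```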
| xorcist wrote:
| In my experience, the "base" configuration should always be in
| use somewhere. Otherwise you inevitably end up with cruft, when
| default values turns out to have no practical value.
|
| In your example, "dev" should probably be the default
| configuration, which "prod" overrides. It's a tiny detail but I
| think it pays over time.
|
| A similar situation is when you have lots of almost-identical
| settings between many (micro)services.
| lazide wrote:
| 100% agree - I find staging useful here. It's the default
| 'non-production', and has realistic data shared in a way that
| anyone in dev can access it. Think 'sacrificial prod with no
| actual real customers hitting it right now'. Depending on
| data sensitivity requirements, it might have a subset of real
| data, or no real data.
|
| Having prod be the default is a problem, as it's inevitable
| someone will accidentally break production/write junk to it
| when they didn't realize they were talking to prod.
|
| Having dev be the default inevitably results in broken prod
| pushes because no one has ever tried the change outside of a
| synthetic dev environment.
| Linda703 wrote:
| solardev wrote:
| If you happen to be unlucky enough to need Docker for LAMP/LEMP
| stacks, Lando is a neat open-source util one level of abstraction
| above Docker Compose: https://lando.dev/
|
| Like with `lando init` and then `lando start`, it'll fetch a LEMP
| "recipe" (say, for Wordpress or Drupal), configure all the Docker
| containers for you, set up all the networking between them, get
| PHP xdebug and such working automatically, and leave you with
| an IP and port where your app is ready. It supports a bunch of
| other common services too, like redis, memcached, varnish,
| postgres, mongo, tomcat, elasticsearch, solr, and only very
| rarely do you need to drop down to editing raw docker compose or
| config files.
|
| Having had to use the Docker ecosystem for a few years, Lando
| made my life way easier. Although these days I just tend to avoid
| Docker altogether whenever possible (opting for simpler/more
| abstracted stacks, often maintained by a vendor like Vercel,
| Netlify, or Gatsby).
| user3939382 wrote:
| I was kind of puzzled by your preamble:
|
| > If you happen to be unlucky enough to need Docker for
| LAMP/LEMP stacks
|
| We run Docker LAMP and have a new version underway that's LEMP,
| haven't run into any problems. xdebug works fine, I even got it
| working recently in ECS. I'm not sure what's different about
| LAMP/LEMP in Docker vs any other stack.
| solardev wrote:
| Sorry for being unclear. It's not that Docker doesn't work
| well with LEMP, it's that Lando in particular was designed
| around older stacks (LEMP/LAMP and caches) and wouldn't be
| appropriate if you were trying to spin up, say, Next.js or
| Deno or Cloudflare Workers or React Native stacks or
| whatever.
|
| The "unlucky" part is just having to work with a LEMP stack
| at all, and that's just my personal bias leaking through my
| post, sorry. Having grown up with that stuff and used it
| until just last year, I am so so grateful that I was able to
| finally move into a frontend job where I don't have to manage
| the stack anymore. A new generation of abstracted backend
| vendors (headless CMSes coupled with Jamstack hosts) makes it
| so that there is a sub-industry of web devs who never have to
| touch VMs directly anymore. I, for one, couldn't be happier
| about that.
|
| (But of course there will always be other use cases that
| require a fuller/closer to the metal stack, and also backend
| and ops people who love that work. I don't fault them in the
| least, I greatly respect them, I'm just glad I don't have to
| do that.)
| RandomBK wrote:
| One topic I'd like to see more discussion around is how best to
| set up staging environments for Docker Compose projects.
|
| Ever since the V2 migration, I've been banging my head against
| multiple bugs in Docker Compose itself, which has severely
| damaged my trust in it as a production-ready utility. I no longer
| feel like I can safely update Compose without running extensive
| testing on each release.
|
| For example, my journey since migrating to V2 has been:
|
| * Originally on 1.29.2
|   - Stable and I did not encounter any issues.
|
| * Swapped to 2.3.3.
|   - As part of the migration from V1, docker-compose lost
|     track of several networks and started labeling them as
|     external. Not a huge deal.
|   - docker-compose stopped updating containers when the
|     underlying image or configuration changed (#9291). This
|     broke prod for a while until I noticed nothing was
|     getting updated.
|
| * Updated to 2.7.0, which was the latest release at the time.
|   - This introduced a subtle bug with env_file parsing
|     (#9636), which again broke prod in subtle ways that took
|     a while to figure out.
|
| * Updated to 2.8.0, which was the latest release at the time
|   and before the warning was put up.
|   - This broke a lot of things as it renamed containers and
|     generated duplicate services, etc.
|
| * Finally swapped to 2.9.0. No issues yet, but I'm keeping an
|   eye out.
| CincinnatiMan wrote:
| I recently learned that moving the compose file around and
| running commands on it from different directories puts
| things into a bad state. This is because it apparently tags
| images and containers with context on where the compose file
| was run from. However, you can negate this by always passing
| in a custom project name with -p.
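A sketch of that workaround (project name is invented): with `-p` pinned, container and network naming no longer depends on the directory the compose file lives in.

```shell
# Hypothetical: pin the compose project name so renaming or moving the
# directory does not orphan containers, networks, or volumes.
project="myapp"
compose_cmd="docker compose -p ${project} -f docker-compose.yml"

# These would then behave identically regardless of where the
# compose directory sits or what it is called:
#   $compose_cmd up -d
#   $compose_cmd down
echo "$compose_cmd"
```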
| switchbak wrote:
| There's something about that decision that violates my mental
| model. Feels similar to a referential transparency issue: the
| state of the system should be the same regardless of the name
| of the directory something lives in.
| Gordonjcp wrote:
| You can specify a project name, but giving things rigidly-
| defined names breaks the whole idea of it.
|
| Why would you want two different instances of the same
| project to have the same name?
| lazide wrote:
| Who said it's a different instance?! Renaming a directory
| structure doesn't have side effects like this in pretty
| much any other infra stuff. It's like nc or curl deciding
| what server to connect to based on the current directory
| name. Whacky.
| mirekrusin wrote:
| It's a great convention-over-configuration decision (that
| you can override with -p) that trivially solves the
| multi-tenancy problem. Similarly, you'll run into problems
| when renaming $HOME to something else; for curl it's more
| like being surprised that it uses a different source ip
| address and gives a different result when called from a
| host in China than in Europe.
| Gordonjcp wrote:
| That is specifically how it's designed. If you want to
| create a bunch of identical but differently-named
| projects, put them in separate directories.
|
| Why on earth would you want to have lots of different
| directories that all point to the same instance? That's
| simply idiotic.
| pbalau wrote:
| Which is true if you pass -p. The current directory is merely
| the default.
| turtlebits wrote:
| While that may be true, you should never destroy things that
| are out of your scope.
| lazide wrote:
| Docker has tons of smells/irritations like this. Overall, it
| is worth it, but damn is it weird sometimes.
| kossae wrote:
| To mitigate this, I always require a .env file in
| docker-compose-containing directories with, at minimum, a
| `COMPOSE_PROJECT_NAME` envvar to prevent these naming
| issues. As long as the env file travels with the directory
| there are no issues.
| remram wrote:
| It would probably break any bind-mounted volume too, like
|     volumes:
|       - ./data:/use/lib/data
|
| I don't know why anyone would expect this to work.
| dugmartin wrote:
| I think it is just the parent directory name plus the service
| name (at least that is what I've seen at work where we have
| many docker-compose files with common "app"/"db" service
| names).
| sandGorgon wrote:
| docker compose was one of the most spectacular and developer
| friendly deployment tool i have used.
|
| unfortunately the ecosystem is on kubernetes. you want to
| integrate with spot instance bidding on AWS...u need kubernetes.
| you want monitoring tools...u need kubernetes, etc
|
| the Compose spec is now open and standardised -
| https://www.compose-spec.io/
|
| a kubernetes distro that can be managed _entirely_ using compose
| files will be a massively impactful project. Not like Kompose
| which converts Compose files to k8s yml files....but entirely on
| Compose files.
|
| I really hope someone does a startup here.
| jeremy_k wrote:
| When you say 'a kubernetes distro that can be managed entirely
| using compose files will be a massively impactful project. Not
| like Kompose which converts Compose files to k8s yml files....'
|
| Do you mean that, from your perspective, you set up your
| docker-compose and this startup ends up deploying pods into
| Kubernetes based on your compose file? Basically cutting out
| the middleman of you having to deal with Kubernetes yaml
| files?
| frenchman99 wrote:
| What do you mean by monitoring tools? You could deploy
| Prometheus/Grafana or an ELK stack on Compose, so I'm not
| sure I understand what you mean.
| sandGorgon wrote:
| im talking about managed tools... not self hosted ones.
|
| e.g. datadog, etc https://www.datadoghq.com/blog/monitoring-
| kubernetes-with-da...
|
| that said, Grafana's own hosted solution "Grafana Cloud"
| gives Kubernetes operators...but not for anything else.
| https://grafana.com/docs/grafana-cloud/kubernetes-
| monitoring...
|
| the entire ecosystem is supporting k8s. there's not much
| choice here.
| jaimehrubiks wrote:
| Do people here set CPU limits? (in either docker or kube)
|
| In my company we only set CPU requests + Memory req+limits. I
| read somewhere that CPU limits are kinda broken and are not
| reliable. In some experiments CPU limits tend to cause
| unnecessary throttling, which is why some people don't recommend
| them.
|
| However, I've also found some cases where I can't make the
| app use only some of the CPUs; some frameworks read the
| number of total available CPUs on the node and try to use
| them all.
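In compose-file terms, the pattern described (CPU and memory requests plus a memory limit, but no CPU limit) might be sketched like this; the service name and numbers are invented, and note that `deploy.resources` is honored by swarm and compatible runtimes rather than plain `docker compose up` in all versions.

```yaml
services:
  app:
    image: example/app
    deploy:
      resources:
        reservations:        # analogous to k8s "requests"
          cpus: "0.50"
          memory: 512M
        limits:              # memory limit only; no CPU limit,
          memory: 512M       # to avoid throttling
```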
| Mavvie wrote:
| In our k8s clusters we generally set both requests and limits
| on CPU and memory. We haven't noticed many problems, except for
| when some apps didn't even have CPU requests and dominated all
| the CPU on a cluster (which didn't even autoscale because the
| scheduler doesn't know about the problem).
|
| I think CPU limits (and also, choosing to set memory limits
| equal to the requests or higher) depends on your goal. If you
| want to get the best use out of your hardware, set low requests
| and high limits and you'll be able to minimize "wasted"
| scaling. But if your goal is consistent performance and
| availability, setting high requests and high limits (usually
| equal to each other) saves you from a whole host of problems.
|
| Just my take. I have a fair bit of experience but am far from
| an expert on the topic. And my experience is only on the scale
| of dozens of applications/dozens of nodes...people with higher
| scale experience may feel differently.
| hu3 wrote:
| In Windows dev machines, under wsl2, I always set CPU and RAM
| limits by creating a file in %USERPROFILE%/.wslconfig with
| something like:
|     [wsl2]
|     memory=10GB
|     processors=4
|     swap=2GB
|
| By limiting wsl2 I also limit the docker containers that run
| inside wsl2.
| JadoJodo wrote:
| One of the things we've really struggled with is sharing dev
| compose files between Windows and macOS. We were unable to make
| bind mounts work correctly with Windows, due to performance
| issues (this used to be documented in the Docker docs, but I
| can't seem to locate it now). This resulted in us falling back to
| VSCode's remote container tooling.
|
| Has anyone else had this issue with Windows? The last time I
| looked into the issue was about 8 months ago, so it's possible
| the issues have been addressed since then.
| tmpz22 wrote:
| You're trying to create a uniform developer experience across
| two platforms that are wildly different.
|
| If you find yourself spending a ton of time on a one-size-
| fits-all-disappoints-everybody solution, maybe it's time to
| build two effective solutions instead.
| bluedino wrote:
| That's kind of the point of containers, isn't it?
| DrBenCarson wrote:
| No definitely not. In fact to this day containers only run
| on Linux. Your Mac or PC just runs a Linux VM in order to
| be able to run any containers.
|
| Containers are not meant to make anything OS-agnostic.
| Containers are just a way of running Linux.
| tmpz22 wrote:
| Oh god no, Linux containers were not intended to be the
| solution for cross compatibility issues between Windows and
| MacOS
| nickjj wrote:
| > We were unable to make bind mounts work correctly with
| Windows, due to performance issues (this used to be documented
| in the Docker docs, but I can't seem to locate it now)
|
| Docker Desktop volume performance on Windows is great if you're
| using WSL 2 as long as your source code is in the WSL 2 file
| system. It also works great with WSL 1. I've been using it for
| a long time for full time dev.
|
| In fact, volume performance on Windows tends to be near native
| Linux speeds. It's macOS where volume speeds are really slow
| (even on a new M1 using Virtiofs). Some of our test suites run
| in 30 seconds on Windows but 3 minutes on macOS.
|
| But the Docker Compose config is the same in both. If you want,
| I currently maintain example apps for Flask, Rails, Django,
| Node and Phoenix at https://github.com/nickjj?tab=repositories&
| q=docker-*-exampl..., I use some of these exact example apps at
| work and for contract work. The same files are used on Windows,
| macOS and Linux.
|
| The only time issues arise is when developers on macOS forget
| that macOS' file system is case insensitive whereas Linux's is
| not. Docker volumes take on properties of their host, so this
| sometimes ends up being a "but it works on my machine!" issue
| specific to macOS. CI running on Linux always catches this,
| though.
| KronisLV wrote:
| > Docker Desktop volume performance on Windows is great if
| you're using WSL 2 as long as your source code is in the WSL
| 2 file system. It also works great with WSL 1. I've been
| using it for a long time for full time dev.
|
| My experience actually was the exact opposite. This one
| project had horrible IO performance on Windows, when running
| a bunch of PHP containers, with the WSL2 integration in
| Docker Desktop set to enabled.
|
| It was bad to the point of the app taking half a minute to
| just load a CRUD page with some tables, whereas after
| switching to Hyper-V back end for running Docker, things sped
| up to where the page load was around 3 seconds.
|
| I have no idea why that was, but after switching to something
| other than WSL2, things did indeed improve by an order of
| magnitude.
| 0xbadcafebee wrote:
| If you don't use the same Docker Compose file for Production,
| don't use it for Development. Your two different systems will
| diverge in behavior, leading you to troubleshoot two separate
| sets of problems, and testing being unreliable, defeating the
| whole "it just runs everywhere" premise.
|
| Docker Compose is fine for running smoke tests/unit tests, but if
| Dev and Prod run differently, there's really no point to using
| Docker Compose other than developing without internet access.
|
| (It goes without saying that Docker Compose only works on _a
| single host_ , so that won't work for Prod if you need more than
| one host, but single-host-everything seems to be HN's current
| fetish)
| dijksterhuis wrote:
| > defeating the whole "it just runs everywhere" premise.
|
| Straight up 100% hard agree.
|
| > It goes without saying that Docker Compose only works on a
| single host, so that won't work for Prod if you need more than
| one host, but single-host-everything seems to be HN's current
| fetish)
|
| Technically this is only correct for compose file versions up
| to and including 3.8.
|
| Part of the push with the new mainline compose V2 plugin has
| included the updated compose specification [0], which seems to
| unify quite a lot of the differences between the "compose"
| file for compose deployments and the "stack" file for swarm
| deployments.
|
| FYI: you can switch over to the new version with the compose V2
| plugin by removing any `version` key from your compose file.
|
| Although the `deploy: restart_policy:` values are currently not
| unified, much to my sadness.
|
| [0]: https://docs.docker.com/compose/compose-file/
| twblalock wrote:
| Also, don't use Docker Compose in dev if you use Kubernetes in
| prod. Just use the same thing everywhere. Otherwise the fact
| that something works in dev tells you nothing about whether it
| would work in prod.
| frenchman99 wrote:
| Running a local k8s cluster doesn't guarantee that a remote
| cluster on some cloud provider will work the same either.
|
| I've heard of and worked with multiple dev teams that develop
| with compose and deploy a totally different way (k8s, AWS
| auto scaling groups, heroku, etc), to great success.
|
| Devs know about compose and understand it. As long as the
| containers behave as planned, the DevOps people can figure
| out how to deploy them on whatever production environment.
| 0xbadcafebee wrote:
| ^ 1000%
| shroompasta wrote:
| This is a questionable comment
|
| >If you don't use the same Docker Compose file for Production,
| don't use it for Development. Your two different systems will
| diverge in behavior, leading you to troubleshoot two separate
| sets of problems, and testing being unreliable, defeating the
| whole "it just runs everywhere" premise.
|
| Anyone ever developing a react project on docker compose would
| never run a build step for development.
|
| In fact for most setups, I probably wouldn't even run react on
| docker-compose for development, and just use the dev server
| straight up on my machine.
|
| Furthermore, working with any python http servers is
| substantially easier when working with a dev server in
| development, rather than running your server through gunicorn
| or other wsgi servers - not to mention the ability to hot
| reload for development.
|
| I'm sure that there are many other cases where it's necessary
| and convenient to separate production and development docker
| compose files.
|
| I don't think it's fair to say that there is no point in having
| separate docker-compose files as the lion's share of
| dependencies that need to be consistent is inside each
| container, not on the docker-compose configuration level.
| zoomzoom wrote:
| Totally agree that in practice you often need several
| configurations for the same systems across different
| environment types. For example, you probably also need a
| different docker-compose for running in CI when compared to dev
| or a deployed environment. The same applies for k8s and
| terraform, and many others.
|
| Managing all these different configurations and the matrix of
| interactions between them is why we're trying to integrate the
| SDLC into one configuration at Coherence (withcoherence.com).
| [disclosure... I'm a cofounder]
| nicoburns wrote:
| Meanwhile here I am developing on macOS (no Docker or VMs
| involved) and deploying to linux. We do run tests on a linux CI
| runner, and we have a staging environment that's identical to
| prod. But really cross-platform support tends to be really good
| these days. I see no need for identical environments for
| development.
| bananaoomarang wrote:
| Compose also works well with Swarm in my experience.
| rdsubhas wrote:
| > ... if Dev and Prod run differently, there's really no point
| to ...
|
| "If dev and prod run differently" -- Why is there an "if" here
| at all? Isn't it obvious to anyone with two eyes in the real
| world? Is there any company in the world that runs production
| from somebody's laptop at home? Or distributes server racks for
| people to take home? It reads like an extremely delusional
| question, out of touch with reality.
|
| 1. For development, engineers want to run a local throwaway
| mysql/redis. In production, engineers want to use a proper
| managed mysql/redis. The compose file, and the envvars/links,
| will be different. This is extremely normal; it makes sense.
|
| 2. Development laptops will always be less powerful than
| servers. The cpu/memory requirements will be different. Again,
| makes sense.
|
| 3. For development, engineers will build and run the docker
| image locally. In production, these are separate, you would
| build and publish the image separately in CI, and run only
| published images in prod. Again, makes sense.
|
| These are all perfectly logical things that make practical
| sense.
|
| Trying to share/ reuse/ resemble is fine. But saying
| "dev===prod" is just a delusion, out of touch with reality.
|
| Edit: For context, yes I do docker for a living for last 10
| years, since v0.4 (lxc) days. I've been to and presented in
| meetups. If someone says they're trying to get dev as close
| functionally to prod, by doing X, Y, Z and sharing practical
| tips -- you know you can listen to them. If someone says
| "dev===prod" and when you ask how they talk abstract
| principles, you quietly get out of there.
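Point 1 above is usually handled with Compose's multi-file merging rather than one shared file; a minimal sketch, with made-up names, where a dev override adds a local build and a throwaway database:

```yaml
# docker-compose.yml -- shared base
services:
  app:
    image: example/app:latest
    environment:
      DATABASE_URL: ${DATABASE_URL}

# docker-compose.dev.yml -- dev-only override (a separate file,
# shown here as comments):
# services:
#   app:
#     build: .                 # build locally instead of pulling
#     volumes:
#       - .:/app               # bind-mount source for quick iteration
#   db:
#     image: mysql:8           # throwaway local database
#     tmpfs:
#       - /var/lib/mysql       # data vanishes on teardown
```

Dev then runs `docker compose -f docker-compose.yml -f docker-compose.dev.yml up` (later `-f` files are merged over earlier ones), while prod uses the base file alone or its own override.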
| 0xbadcafebee wrote:
| > For development, engineers want to run a local throwaway
| mysql/redis. In production, engineer want to use a proper
| managed mysql/redis.
|
| Bad idea. You develop against a completely different local
| system, then you push to prod, and "oh no it doesn't work".
| Yeah - because you're developing against a completely
| different system! It behaves differently. It causes different
| bugs. You take a ton of time setting up _both different
| systems differently_ , duplicating your effort, duplicating
| your bugs, having to fix things twice for no reason, and
| causing production issues when expectations from local dev
| don't match prod reality.
|
| Every experienced systems engineer has been repeating this
| for decades. Listen to them. They all say exactly. the. same.
| thing. _" Make all your systems as identical as possible."_
| That doesn't mean make all your systems different whenever it
| is more convenient for you. It means that if it's possible
| for you to be developing using the same tech you use in prod,
| you should be doing that, unless it is impossible. If you
| think it's not possible or practical, you simply haven't
| thought hard enough about it.
|
| The whole selling point of the cloud is that you can spin up
| environments and spin them down in minutes and pay pennies
| for it. Not a shared environment where everyone's work is
| constantly conflicting, but dedicated, ephemeral environments
| that actually match production. Literally the only example I
| know of where this doesn't work is with physical devices
| where you have a lab of 10 experimental pieces of test gear
| and you have to reserve them to test code on them. That can
| make remote development difficult. Still not impossible
| though.
|
| > The cpu/memory requirements will be different.
|
| Unless you're developing using the same servers as prod.
| There is no reason that you have to use your laptop. If you
| think "it's more convenient!", that's only because you've put
| zero effort into making Prod-like ephemeral environments more
| convenient.
|
| > For development, engineers will build and run the docker
| image locally. In production, these are separate, you would
| build and publish the image separately in CI, and run only
| published images in prod. Again, makes sense.
|
| That is the antithesis of what Docker was created for. The
| point of containers was to say "it works on my machine", and
| use that exact same image in production. Not a different
| image in CI that nobody developed their code against. It's
| just going to result in bugs that the developer didn't see
| locally, so they will need to spend more time to get it
| working. Your laptop can do the same build CI can on the same
| branch and push the same image that CI can.
|
| What I'm saying is nothing new. Look into cloud-based
| development environments and the dozen different solutions
| made solely for rendering your local code in a remote K8s pod
| as soon as you write to a local file. Imagine a world where
| you don't waste your time testing something twice and fixing
| bugs twice.
| satyrnein wrote:
| If the dev environments are on AWS with real S3 buckets
| (etc), and everything runs as an auto scaling group, is it
| not possible to use the same exact docker images (and
| terraform) in dev and prod?
|
| We haven't achieved it (yet?) but that's the direction I was
| aspiring to!
| rdsubhas wrote:
| I was responding to the comment parent. Your article was
| practical and spot on.
| strzibny wrote:
| It is. Convenience. I mean, you replaced perhaps running stuff
| locally that also didn't match production in the first place...
|
| I am now using this setup[0] for development, but it doesn't
| really match production 1:1.
|
| [0] https://nts.strzibny.name/hybrid-docker-compose-rails/
| tjoff wrote:
| That is an absurdly black-and-white view.
| [deleted]
| dijksterhuis wrote:
| I have lost count of the times I've heard "but it works fine
| on MY machine, it must be the CI/CD that's broken"
|
| I then find out that no-one bothered to run the compose file
| that matches what gets put into prod on their local machines.
|
| My point is: your view might become more black and white on
| this matter after the N-th "but it works fine on MY machine"
| comment.
|
| EDIT: Where N = your personal tolerance level of bullshit.
| remram wrote:
| Sounds like this was easily caught by CI, which does run
| the prod compose file, and everything was fine.
| dijksterhuis wrote:
| This wasn't a real example. Just a generic thing that
| happens everywhere I go.
|
| There's always one in the crowd that refuses to give up
| local development.
| remram wrote:
| As long as you have CI or staging with the proper config,
| what does it matter how the devs write code?
| tjoff wrote:
| That might be your experience; I don't share it.
|
| But even so, how you can go from that to:
|
| > _[...] there 's really no point to using Docker Compose
| other than developing without internet access_
|
| boggles my mind.
|
| EDIT: apologies, mixup on my end and quote is from someone
| else.
| dijksterhuis wrote:
| Not the parent! Made no comment on developing without
| internet access.
|
| EDIT: I usually try to build, test, inspect, push and
| deploy from the same compose file. You can deploy to ECS
| direct from compose using an AWS context:
| https://aws.amazon.com/blogs/containers/deploy-
| applications-...
| rubyist5eva wrote:
| docker compose is old hat, using podman to generate Kubernetes
| YAML for pods is where it's at
| bdcravens wrote:
| Surprised it didn't mention profiles, which let you launch only
| part of your "stack" (think background processing in addition to
| web app, etc)
|
| https://docs.docker.com/compose/profiles/
|
| Prior to that feature, I had a hand-rolled approach using bash
| scripts and using overrides. Worked well enough, but felt like a
| dirty dependency.
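For anyone unfamiliar, a sketch of what profiles look like in a compose file (service and image names are invented):

```yaml
services:
  web:
    image: example/web:latest     # no profile: starts on a plain `up`
  worker:
    image: example/worker:latest
    profiles: ["background"]      # starts only when the profile is active
  db-shell:
    image: postgres:14
    profiles: ["debug"]           # a debugging/maintenance helper
```

Here `docker compose up` starts only `web`; `docker compose --profile background up` adds the worker; and `docker compose run db-shell` needs no flag at all, since running a profiled service by name activates its profile implicitly.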
| Ajedi32 wrote:
| One thing I really like about profiles is that they get
| activated implicitly if you reference one of their services
| when running docker compose. So you can define development,
| debugging, or maintenance tools in your compose file and just
| run them with `docker compose run some-random-tool` without
| having to worry about them interfering with the standard
| `docker compose up` workflow.
| xwowsersx wrote:
| Wow, did not know about this. Thanks for the tip.
| Cyph0n wrote:
| Wow, this is a pretty cool feature. Looks similar to Ansible
| tags.
| _joel wrote:
| Neat, this would have saved some headaches in the past (about
| 2018). Was it a feature then? If so I wasted some time
| reinventing the wheel, it seems!
| pbalau wrote:
| It's quite new, I'm still waiting for some free time to ruin
| it.
| bdcravens wrote:
| It was added in early 2021 (1.28)
| glintik wrote:
| I'm surprised the author didn't touch on such an important
| thing as backups of running containers.
| lazyant wrote:
| what's the use case for backing up running containers? the code
| in the docker image is immutable (or should be), config goes
| into .env file and permanent data should go to a mounted volume
| in the host or an external database, so what do you want to
| backup?
| glintik wrote:
| ok, how do you back up your database volumes while the db is
| running?
| lazyant wrote:
| the same as if the database binary were running on the host;
| no difference with the db binary running in a container.
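The usual answer here is a logical dump through the running container rather than copying live volume files; a sketch assuming a Compose service named `db` running Postgres (service, user, and database names are assumptions):

```shell
# Consistent logical backup while the database is running;
# pg_dump snapshots via MVCC, so mid-traffic dumps are safe.
docker compose exec db pg_dump -U postgres appdb > backup.sql

# For a file-level copy of the volume, stop writes first:
docker compose stop db
docker run --rm --volumes-from "$(docker compose ps -q db)" \
  -v "$PWD:/backup" alpine \
  tar czf /backup/db-volume.tgz /var/lib/postgresql/data
docker compose start db
```

Copying the raw data directory while the db is writing can yield a corrupt backup, which is the parent comment's point.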
___________________________________________________________________
(page generated 2022-08-16 23:00 UTC)