[HN Gopher] Rails on Docker
       ___________________________________________________________________
        
       Rails on Docker
        
       Author : mikecarlton
       Score  : 220 points
       Date   : 2023-01-26 16:32 UTC (6 hours ago)
        
 (HTM) web link (fly.io)
 (TXT) w3m dump (fly.io)
        
       | brundolf wrote:
       | I used to think I hated Docker, but I think what I actually hate
       | is using Docker locally (building/rebuilding/cache-busting
       | images, spinning containers up and down, the extra memory usage
       | on macOS, etc etc). I don't need all that for development, I just
       | want to run my dang code
       | 
       | But I've been really enjoying it as a way of just telling a PaaS
       | "hey here's the compiler/runtime my code needs to run", and then
       | mostly not having to worry about it from there. It means these
       | services don't have to have a single list of blessed languages,
       | while pretty much keeping the same PaaS user experience, which is
       | great
        
         | MuffinFlavored wrote:
         | > I don't need all that for development, I just want to run the
         | dang code
         | 
         | Have you ever worked somewhere where in order to run the code
         | locally on your machine, it's a blend of "install RabbitMQ on
         | your machine, connect to MS-SQL in dev, use a local Redis,
         | connect to Cassandra in dev, run 1 service that this service
         | routes through/calls to locally, but then that service will
         | call to 2 services in dev", etc?
        
           | brightball wrote:
           | Docker compose for those services while the language itself
           | runs natively has been the best solution to this problem for
           | me in the past. Docker compose for redis, postgres, elastic,
           | etc.
           | 
            | IMO Docker for local dev is most beneficial for Python,
            | where local installs are so all over the place.
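            | 
            | A minimal sketch of that kind of compose file (image
            | versions here are just examples):
            | 
            |     # docker-compose.yml -- dependencies only; the app runs natively
            |     services:
            |       postgres:
            |         image: postgres:15
            |         environment:
            |           POSTGRES_PASSWORD: postgres
            |         ports:
            |           - "5432:5432"
            |       redis:
            |         image: redis:7
            |         ports:
            |           - "6379:6379"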
        
             | brundolf wrote:
             | Yeah, and python has been most of my exposure to local
             | Docker, so that may be coloring my experience here. Running
             | a couple off-the-shelf applications in the background with
             | Docker isn't too bad, but having it at the center of your
             | dev workflow where you're constantly rebuilding a container
             | around your own code is just awful in my experience
        
             | ywain wrote:
             | This is exactly what I recently set up for our small team:
             | use Docker Compose to start Postgres and Redis, and run
             | Rails and Sidekiq natively. Everyone is pretty happy with
             | this setup, we no longer have to manage Postgres and Redis
             | via Homebrew and it means we're using the same versions
             | locally and in production.
             | 
             | If anyone is curious about the details, I simply reused the
             | existing `bin/dev` script set up by Rails by adding this to
              | `Procfile.dev`:
              | 
              |     docker: docker compose -f docker-compose.dev.yml up
             | 
             | The only issue is that foreman (the gem used by `bin/dev`
             | to start multiple processes) doesn't have a way to mark one
             | process as depending on another, so this relies on Docker
             | starting the Postgres and Redis containers fast enough so
             | that they're up and running for Rails and Sidekiq. In
              | practice it means that we need to run the `docker compose`
              | command manually the first time (and, I suppose, every time
              | we update to new versions) so that Docker downloads the
              | images and caches them locally.
        
               | markhesketh wrote:
               | I do the same thing and it works well.
               | 
               | For your issue could you handle bringing up your docker-
               | compose 'manually' in bin/dev? Maybe conditionally by
               | checking if the image exists locally with `docker
               | images`. Then tear it down and run foreman after it
               | completes?
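                | 
                | Something like this at the top of `bin/dev` could work
                | (a sketch; `--wait` blocks until the containers are
                | running, or healthy if they define healthchecks):
                | 
                |     #!/usr/bin/env sh
                |     # start dependencies first, pulling images on the first run
                |     docker compose -f docker-compose.dev.yml up -d --wait
                |     exec foreman start -f Procfile.dev "$@"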
        
               | brightball wrote:
               | Yea, typically I just do the step manually to start
               | dependencies and then start the rails processes.
        
           | [deleted]
        
           | keb_ wrote:
           | I certainly have. And there is often a Dockerfile or a
           | docker-compose file. And the docs will say "just run docker-
           | compose up, and you should be good to go."
           | 
           | And 50% of the time it actually works. The other 50% of the
           | time, it doesn't, and I spend a week talking to other
           | engineers and ops trying to figure out why.
           | 
           | Not a jab at Docker, btw.
        
             | jayd16 wrote:
             | What are some common error cases you see this way?
        
             | tomhallett wrote:
              | The thing I've been trying: instead of the docs saying
              | "just run docker-compose up" (and a few other commands),
              | put all of those commands in a bash script (sketched
              | below). That single bootstrap command should: docker
              | compose down, docker compose build, docker login,
              | [install gems/npm modules], rake db:reset (delete +
              | create), docker compose up, rake
              | generate_development_data.
             | 
             | This way each developer can/should tear down and rebuild
             | their entire development environment on a semi-frequent
             | basis. That way you know it works way more than 50% of the
             | time. (Stretch goal: put this in CI.... :)
             | 
              | The other side benefit: if you reset/restore your
              | development database frequently, you are incentivized (or
              | forced) to add any necessary dev/test data into "rake
              | generate_development_data", which benefits all team
              | members. (I have thought about a
              | "generate_development_data.local.rb" approach where each
              | dev could extend the generation with data they shouldn't
              | commit upstream, but haven't done that by any means....)
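              | 
              | A sketch of that bootstrap script (the Rails-flavored
              | commands here are illustrative):
              | 
              |     #!/usr/bin/env bash
              |     # bin/bootstrap -- tear down and rebuild the whole dev environment
              |     set -euo pipefail
              |     docker compose down
              |     docker login               # if images live in a private registry
              |     docker compose build
              |     bundle install && npm install
              |     docker compose up -d       # dependencies must be up before db tasks
              |     bin/rake db:reset          # delete + create
              |     bin/rake generate_development_data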
        
             | brundolf wrote:
             | And to borrow from the classic regex saying: "Now you have
             | two problems"
        
         | jwcooper wrote:
         | Docker can be a pain sometimes (especially docker for mac!),
         | but in moderately complex apps, it's so nice to be able to just
         | fire up 6 services (even some smaller apps have redis,
         | memcached, pg/mysql, etc) and have a running dev environment.
         | 
          | If someone updates Node.js, just pull in the latest
          | Dockerfile. Someone updates to a new Postgres version? Same
          | thing. It's so much better than managing native dependencies.
        
         | bradgessler wrote:
         | The best description I heard about Docker was from a fella I
         | worked with a while back who said, "Docker is terrible, but
         | it's less worse than Puppet, etc." after another co-worker
         | challenged our decision to move forward with Docker in our
         | production infrastructure.
        
         | jchw wrote:
         | > the extra memory usage on macOS
         | 
          | It's worth noting that the Docker experience is _very_
          | different across platforms. If you just run Docker on Linux,
          | it's basically no different than just running any other binary
          | on the machine. On macOS and Windows, you have the overhead of
          | a VM and its RAM to contend with at _minimum_, but in many cases
         | you also have to deal with sending files over the wire or
         | worse, mounting filesystems across the two OSes, dealing with
         | all of the incongruities of their filesystem and VFS layers and
         | the limitations of taking syscalls and making them go over
         | serialized I/O.
         | 
         | Honestly, Docker, Inc. has put entirely too much work into
         | making it decent. It's probably about as good as it can be
         | without improvements in the operating systems that it runs on.
         | 
         | I think this is unfortunate because a lot of the downsides of
         | "Docker" locally are actually just the downsides of running a
         | VM. (BTW, in case it's not apparent, this is the same with
         | WSL2: WSL2 is a pretty good implementation of the Linux-in-a-VM
         | thing, but it's still _just_ that. Managing memory usage is, in
         | particular, a sore spot for WSL2.)
         | 
          | (Obviously, it's not _exactly_ like running binaries directly,
          | due to the many different namespacing and security APIs Docker
          | uses to isolate the container from the host system, but it's
          | not meaningfully different. You can also turn these things off
          | at will, too.)
        
           | philwelch wrote:
           | The solution I've settled on lately is to just run my entire
           | dev environment inside a Linux VM anyway. This solves a
           | handful of unrelated issues I've run into, but it also makes
           | Docker a little more usable.
        
           | brundolf wrote:
           | Yeah I'm aware, which is why I qualified that part of the
           | statement. But every company I've ever worked at has issued
           | MacBooks (which I'm not complaining about- I definitely
           | prefer them overall), so it is a pervasive downside
           | 
           | Though also, the added complexity to my workflow is an order
           | of magnitude more important to me than the memory
           | usage/background overhead (especially now that we've got the
           | M1 macs, which don't automatically spin up their fans to
           | handle that background VM)
        
           | kitsunesoba wrote:
           | I don't do backend work professionally so my opinion probably
           | isn't worth much, but the way Docker is so tightly tied to
           | Linux makes me hesitant to use it for personal projects.
           | Linux is great and all but I really don't like the idea of so
           | explicitly marrying my backend to any particular platform
           | unless I really have to. I think in the long run we'd be
           | better served figuring out ways to make platform irrelevant
           | than shipping around Linux VMs that only really work well on
           | Linux hosts.
        
             | treis wrote:
              | It's more or less impossible in practice to be OS-agnostic
              | for a backend with any sort of complexity. You can choose
              | layers that try to abstract the OS away, but sooner or
              | later you're going to run into a part of the abstraction
              | that leaks. That, plus the specialty nature of Windows/Mac
              | hosting, means your backend is gonna run on Linux.
             | 
              | It made sense at one point to use Macs, but these days
              | pretty much everything is Electron or web-based or has a
              | native Linux binary. IMHO backend developers should use
              | x64 Linux. That's what your code is running on, and using
              | something different locally is just inviting problems.
        
             | earthling8118 wrote:
             | It's not as if you're locked to Linux though. Most if not
             | all of my applications would run just fine on Windows if I
             | wanted to. It's just that when I run them myself I use a
             | container because I'm already choosing to use a Linux
              | environment. That doesn't mean the application couldn't be
              | shipped differently; it's just an implementation detail.
        
             | jchw wrote:
              | IMO, shipping OCI images doesn't tether your backend to
              | Docker any more than shipping IDE configurations in your
              | Git repository tethers you to an IDE. You could tightly
              | couple some feature to Docker, but the truth is that most
              | of Docker's interface bits are actually pretty standard
              | and therefore things you could find anywhere. The only
              | real reason why Docker can't be "done" the way it is on
              | Linux elsewhere is the unusual stable syscall interface
              | that Linux provides; it allows the userlands run by
              | container runtimes to run directly against the kernel
              | without caring too much about the userland being
              | incompatible with the kernel. This doesn't hold for macOS,
              | other BSDs, or Windows (though Windows _does_ neatly
              | abstract syscalls into system libraries, so it's not
              | really that hard to deal with this problem on Windows,
              | clearly.)
             | 
             | Therefore, if you use Docker 'idiomatically', configuring
             | with environment variables, communicating over the network,
             | and possibly using volumes for the filesystem, it doesn't
             | make your actual backend code any less portable. If you
             | want to doubly ensure this, don't actually tether your
             | build/CI directly to Docker: You can always use a standard-
             | ish shell script or another build system for the actual
             | build process instead.
        
             | brundolf wrote:
             | I don't think Linux is going away as the main server OS
             | anytime soon, if ever. So that just leaves local dev
             | 
             | To that end- I prefer to just stick with modern languages
             | whose first-party tooling makes them not really have to
             | care what OS they're building/running on. That way you can
             | work with the code and run it directly pretty much
             | anywhere, and then if you reserve Dockerfiles for
             | deployment (like in the OP), it'll always end up on a Linux
             | box anyway so I wouldn't worry too much about it being
             | Linux-specific
        
               | kitsunesoba wrote:
               | Yeah I'm not too worried about what's on the deployment
               | end, but rather the dev environment. I don't want to
               | spend any time wrestling Docker to get it to function
               | smoothly on non-Linux operating systems.
               | 
               | Agree that it's a strong argument for using newer
               | languages with good tooling and dependency management.
        
               | skrtskrt wrote:
               | the dev workflow usage of docker is less about developing
               | your own app locally and more about being able to
               | mindlessly spin up dependencies - a seeded/sample copy of
               | your prod database for testing, a Consul instance, the
               | latest version of another team's app, etc.
               | 
               | You can just bind the dependencies that are running in
               | Docker to local ports and run your app locally against
               | them without having to docker build the app you're
               | actually working on.
        
               | throwanem wrote:
               | I've run Docker devenvs on Linux, Windows (via WSL2 so
               | also Linux but only kinda) and Mac.
               | 
                | The closest I've come in years to having to really
                | wrestle with it was the URL hacking needed to find the
                | latest version willing to run on my 10-year-old MBP,
                | which I expect would run badly on anything newer than
                | 10.13 - the installer was _there_ on the website, just
                | not linked, I guess because they don't want the support
                | requests. Once I actually found and installed it, it's
                | been fine, except that it still prompts me to update and
                | (like every Docker Desktop install) bugs me excessively
                | for NPS scores.
        
           | cutler wrote:
           | Honestly, to an outsider Docker sure sounds like a world of
           | pain.
        
             | vxNsr wrote:
             | To a developer it probably is, as a user, it's much easier
             | to install self hosted server apps with minimal effort.
              | Especially because the Dockerfile usually already has sane
              | defaults set, while the binary requires more manual
              | config.
        
               | throwanem wrote:
               | It's not too bad as a developer, either, especially when
               | building something that needs to integrate with
               | dependencies that aren't just libraries.
               | 
               | It may be less than ideally efficient in processor time
               | to have everything I work on that uses Postgres talk to
               | its own Postgres instance running in its own container,
                | but it'd be a lot more inefficient in _my_ time to
                | install and administer a pet Postgres instance on each of
                | my development machines - especially since whatever I'm
                | building will ultimately run in Docker or k8s anyway, so
                | it's not as if handcrafting all my devenvs 2003-style is
                | going to save me any effort in the end.
               | 
               | I'll close by saying here what I always say in these
               | kinds of discussions: I've known lots of devs, myself
               | included, who have felt and expressed some trepidation
               | over learning how to work comfortably with containers.
               | The next I meet who expresses regret over having done so
               | will be the first.
        
             | jchw wrote:
             | If all you need is a statically linked binary running in a
             | Screen session somewhere, then without question, you're
             | going to find Docker to be esoteric and pointless.
             | 
             | Maybe you've dealt with Python deployments and been bitten
             | by edge cases where either PyPI packages or the interpreter
             | itself just didn't quite match the dev environment, or even
             | other parts of the production environment. But still, it
             | "mostly" works.
             | 
             | Maybe you've dealt with provisioning servers using
             | something like Ansible or SaltStack, so that your setup is
             | reproducible, and run into issues where you need to delete
             | and recreate servers, or your configuration stops working
             | correctly even though you didn't change anything.
             | 
             | The thing that all of those cases have in common is that
             | the Docker ecosystem offers pretty comprehensive solutions
             | for each of them. Like, for running containers, you have
             | PaaS offerings like Cloud Run and Fly.io, you have managed
             | services like GKE, EKS, and so forth, you have platforms
             | like Kubernetes, or you can use Podman and make a Systemd
             | unit to run your container on a stock-ish Linux distro
             | anywhere you want.
             | 
             | Packaging your app is basically like writing a CI script
             | that builds and installs your app. So you can basically
             | take whatever it is you do to do that and plop it in a
             | Dockerfile. Doesn't matter if it's Perl or Ruby or Python
             | or Go or C++ or Erlang, it's all basically the same.
             | 
             | Once you have an OCI image of your app, you can run it like
             | _any other_ application in an OCI image. One line of code.
             | You can deploy it to _any_ of the above PaaS platforms, or
             | your own Kubernetes cluster, or any Linux system with
              | Podman and Systemd. Images themselves are immutable,
              | containers are isolated from each other, and resources
              | (like exposed ports, CPU or RAM, filesystem mounts, etc.)
              | are granted explicitly.
             | 
             | Because the part that matters for you is in the OCI image,
             | the world around it can be standard-issue. I can run
             | containers within Synology DSM for example, to use Jellyfin
             | on my NAS for movies and TV shows, or I can run PostgreSQL
             | on my Raspberry Pi, or a Ghost blog on a Digital Ocean VPS,
             | in much the same motion. All of those things are one
             | command each.
             | 
             | If all you needed was the static binary and some init
             | service to keep it running on a single machine, then yeah.
              | Docker is unnecessary effort. But in most cases, the
              | problem is that your applications aren't simple, and your
              | environments aren't homogeneous. OCI images are extremely
             | powerful for this case. This is exactly why people want to
             | use it for development as well: sure, the experience IS
             | variable across operating systems, but what doesn't change
             | is that you can count on an OCI image running the same
             | basically anywhere you run it. And when you want to run the
             | same program across 100 machines, or potentially more, the
             | last thing you want to deal with is unknown unknowns.
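              | 
              | As a sketch of that last option (image name illustrative),
              | Podman can even emit the systemd unit for you:
              | 
              |     podman run -d --name myapp -p 8080:8080 registry.example.com/myapp:latest
              |     podman generate systemd --new --name myapp \
              |       > /etc/systemd/system/myapp.service
              |     systemctl daemon-reload && systemctl enable --now myapp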
        
         | jrochkind1 wrote:
          | As someone who has not yet adopted Docker, even though that
          | seems to be what is done on contemporary best-in-class PaaS...
         | 
          | I don't totally understand the maintenance story. On Heroku
          | with buildpacks, I don't need to worry about OS-level security
          | patches or patches to anything that was included in the base
          | stack provided by the PaaS; they are responsible for those.
          | Which I consider part of the value proposition.
         | 
         | When I switch to using "Docker" containers to specify my
         | environment to the PaaS... if there is a security patch to the
         | OS that I loaded in one of my layers... it's up to me to notice
         | and update my container? Or what? How is this actually handled
         | in practice?
         | 
         | (And I agree with someone else in this thread that this fly.io
         | series of essays on docker is _great_, it's helping me
         | understand docker better whether or not I use fly.io!)
        
           | jayd16 wrote:
           | I think the idea is you have a CI/CD pipeline that will
           | periodically rebuild and redeploy with the latest base images
           | along with your code updates and such.
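            | 
            | A sketch of that as a scheduled GitHub Actions job (names
            | and the deploy step are illustrative):
            | 
            |     # .github/workflows/rebuild.yml
            |     on:
            |       schedule:
            |         - cron: "0 4 * * 1"   # weekly rebuild against fresh base images
            |     jobs:
            |       rebuild:
            |         runs-on: ubuntu-latest
            |         steps:
            |           - uses: actions/checkout@v3
            |           - run: docker build --pull --no-cache -t myapp .
            |           - run: flyctl deploy   # or push to your registry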
        
           | ehhthing wrote:
           | For what it's worth, the vast majority of vulns in a web app
           | are in its code or dependencies rather than in the base OS. I
           | haven't actually seen any real-world cases of getting hacked
           | because your docker base OS image was out of date. The only
           | exception I would give would be for like language runtime
           | version, which can occasionally be an attack vector.
           | Switching runtime version usually requires manual testing
           | regardless, so I wouldn't really consider it a docker-only
           | problem.
           | 
           | If you're really concerned, just have a CI job that rebuilds
           | and tests with newer base image versions.
        
           | bvirb wrote:
           | Same boat here. I'm doubtful that we could take on the
           | additional maintenance work for less than the Heroku premium
           | we pay.
           | 
           | I wouldn't be surprised if I'm wrong and newer tools bridge
           | the gap for a lower price and/or time investment, but I also
           | wouldn't be surprised if I'm right and many places using
           | Docker could save time/money offloading the maintenance to
           | something more like a managed PaaS.
        
             | jrochkind1 wrote:
             | Agreed. I actually have not too much problem with heroku
             | pricing (I could complain about some areas, but it's
              | working for us) -- I'm just worried that Heroku seems to be
              | slowly disintegrating through lack of investment, and that
              | there seem to be no other realistic options for that level
              | of service! There don't seem to be other reliable options
              | for a managed PaaS that takes care of maintenance in the
              | same way.
        
           | mike_hearn wrote:
           | Yeah the fly.io stuff is great, beautiful artwork too, top
           | class job.
           | 
           | Re: security updates. It's not handled. There are companies
           | that will scan your infrastructure to figure out what's in
           | your containers and find out of date OS base images.
           | 
           | One thing I'm currently experimenting with is a product for
           | people who would like an experience _slightly_ closer to
            | traditional Linux. The gist is, e.g.:
            | 
            |     include "#!gradle -q printConveyorConfig"   // Import Java server
            | 
            |     app {
            |       deploy.to = "vm-frontend-{lhr,nyc}{1-10}.somecloud.com"
            |       linux {
            |         services.server {
            |           include "/stdlib/linux/service.conf"
            |         }
            |         debian.control.Depends = "postgres (>= 14)"
            |       }
            |     }
           | 
           | Then you run "conveyor push" from any OS and it downloads a
           | Linux JVM, minimizes it for your app, bundles it with your
           | JARs, produces a DEB from that, sftps it to the server,
           | installs it using apt, that package integrates the server
           | with systemd for startup/shutdown and dynamic users, it
           | healthchecks the server to ensure it starts up and it can do
           | rolling upgrades/downgrades. And of course the same for Go or
           | NodeJS or whatever other server frameworks you like. So the
           | idea is that if you use a portable runtime you can develop
           | locally on macOS or Windows and not have to deal with Linux
           | VMs, but deployment is transparent.
           | 
           | SystemD can also run containers and "portable services", as
           | well as using cgroups for isolation and sandboxing, so
           | there's no need to use debs specifically. It just means that
           | you can depend on stuff that will get upgraded as part of
           | whole OS system upgrades.
           | 
           | We use this to maintain our own servers and it's quite nice.
           | One of the things that always bugged me about Linux is that
           | deploying servers to it always looks like either "sftp a
           | tarball and then wire things up yourself using vim", or
           | Docker which involves registries and daemons and isolated OS
           | images, but there's nothing in between.
           | 
           | Not sure whether it's worth releasing though. Docker seems so
           | dominant.
        
             | jrochkind1 wrote:
             | > It's not handled.
             | 
             | So... what do actual real people _do_ in practice? I am
             | very confused what people are actually doing here.
             | 
             | LOTS of people seem to have moved to this kind of docker-
             | based deploy for PaaS. They can't all just be... ignoring
             | security patches?
             | 
             | I am very confused that nobody else seems to think the
             | "handle patches story" is a big barrier to moving from
             | heroku-style to docker-style... that everyone else just
             | moves to docker-style? What are they actually doing to deal
             | with patches?
             | 
             | I admit I don't really understand your solution -- I am not
             | a sysadmin! This is why we deploy to heroku and pay them to
             | take care of it! It is confusing to me that none of the
             | other PaaS competitors to heroku -- including fly.io --
             | seem to think this is something their customers might
             | want...
        
               | brundolf wrote:
               | It seems not that hard to just manually bump your version
               | periodically, right? I get that it's not as nice as
               | having it taken care of for you, but it's not like you're
               | having to manage a whole system by hand
        
               | jrochkind1 wrote:
               | Assume I've never used Docker before, can you tell me
               | what "manually bump your version periodically" means with
               | regard to this question? Can anyone give me concrete
               | instructions for what this looks like in a specific
               | context? Like, say the first Dockerfile suggested in OP?
               | 
               | The Dockerfile has in it:
               | 
               | > ARG RUBY_VERSION=3.2.0
               | 
               | > FROM ruby:$RUBY_VERSION
               | 
                | Which it says "gets us a Linux distribution running
                | Ruby 3.2." (Ubuntu?) The version of that "Linux
                | distribution" isn't mentioned in the Dockerfile. How do I
                | "bump" it if there is a security patch affecting it?
               | 
               | Then it has:
               | 
                | > RUN apt-get update -qq && apt-get install -y
                | build-essential libvips && \ ...
               | 
               | Then I just run `fly launch` to launch that Dockerfile on
               | fly.io. How do I "bump" the versions of those apt-get
               | dependencies should they need them?
        
               | hakunin wrote:
               | If you're basing on a lang-specific container like ruby,
               | then it's the version of that container in the FROM line.
               | Notice how ruby images come in various versions of OS
               | (https://hub.docker.com/_/ruby). You can specify that as
               | part of the FROM string. However, they also let you drop
               | the OS part, and only specify ruby version. This will
               | usually default to an image with the latest OS provided
               | by that docker repo. Nothing to bump in this case, just
               | occasionally rebuild your own image from scratch, instead
               | of from cache, to make sure it downloads the latest base
               | image.
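                | 
                | Concretely, the knob is just the tag in the FROM line
                | (tags from https://hub.docker.com/_/ruby):
                | 
                |     FROM ruby:3.2.0           # floats to the repo's current default OS
                |     FROM ruby:3.2.0-bullseye  # pins the Debian release as well
                | 
                | and a cache-skipping rebuild picks up the newest base
                | image:
                | 
                |     docker build --pull --no-cache -t myapp .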
        
           | regularfry wrote:
           | The answer to that is "something that is not Docker handles
           | it". Wherever you're hosting your images there'll be
           | something platform-specific that handles it. Azure, AWS, and
           | GCP all have tools that'll scan and raise a notification. If
           | you want that to trigger an automatic rebuild, the tools are
           | there to do it. Going from there to "I don't need to think
           | about any CVE ever" is a bit more of a policy and opinion
           | question.
        
             | jrochkind1 wrote:
              | I hadn't even thought I would be "hosting my images" on
              | something like Azure, AWS, or GCP to deploy to fly.io.
             | Their examples don't mention that. You just have a
             | Dockerfile in your repo, you deploy to fly.io.
             | 
             | But it sounds like for patch/update management purposes,
             | now I need to add something like that in? Another
             | platform/host to maintain/manage, at possibly additional
             | price, and then we add dealing with the specific mechanisms
             | for scanning/updating too...
             | 
             | Bah. It remains mystifying to me that the current PaaS
             | docker-based "best practices" involve _quite a bit_ more
             | management than heroku. I pay for a PaaS hoping to not do
             | this management! It seems odd to me that the market does
             | not any longer seem to be about providing this service.
        
               | regularfry wrote:
               | You can run something like trivy locally, which will do
               | that particular job. Fly.io might add that later.
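                | 
                | For example, pointed at a built image (the tag is
                | illustrative):
                | 
                |     trivy image myapp:latest   # scans OS packages and app deps for CVEs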
               | 
               | It is odd, but the general direction of the market (at
               | least at the top) is becoming less opinionated, which
               | means you need to bring your own. Not sure I like that
               | myself.
        
         | kugelblitz wrote:
         | I work a lot on modernizing codebases. Many times it will be
         | codebases without any proper testing, different versions of
         | frameworks and programming languages, etc.
         | 
          | Currently, I'm working on 3 projects at once, plus my own.
          | Even though it's all PHP, the projects are PHP 8.1 / Symfony
          | 6.2, PHP 8.2 / Symfony 6.2, and PHP 8.1 / Laravel 8, and the
          | newest project I joined is PHP 7.4 / Symfony 4. I have to
          | adhere to different standards and different setups; I don't
          | want to have to switch the language and the yarn or npm
          | versions or the Postgres or MySQL versions and remember each
          | one. Some might use RabbitMQ, another Elasticsearch.
         | 
         | Docker helps me tremendously here.
         | 
         | Additionally, I add a Makefile for each project as well and
         | include at least "make up" (to start the whole app), "make
         | down", "make enter" (to enter the main container I'll be
         | working with), "make test" (usually phpunit) and perhaps "make
         | prepare" (for phpunit, phpstan, php-cs-fixer, database
         | validate) as a final check before adding a new commit.
         | 
          | Locally, I also have aliases in my shell: "t" for "make
          | test", "up" for "make up".
         | 
         | So now I just have to go to a project folder, write "up" and am
         | usually good to go!
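          | 
          | A sketch of that kind of Makefile (the service name and
          | tools are illustrative):
          | 
          |     up:      ## start the whole app
          |     	docker compose up -d
          |     down:
          |     	docker compose down
          |     enter:   ## shell into the main container
          |     	docker compose exec app bash
          |     test:
          |     	docker compose exec app vendor/bin/phpunit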
        
           | brundolf wrote:
           | Sure, but that feels like a band-aid on an existing problem.
           | And sometimes a band-aid is what you need, especially with
           | legacy systems, so I'm not judging that. But I think for a
           | new project, these days, it's a mistake to plan on needing
           | Docker from the outset.
           | 
           | Modern languages with modern tooling can automatically take
           | care of dependency management, running your code cross-
           | platform, etc [0]. And several of them even have an upgrade
           | model where you should never need an older version of the
           | compiler/runtime; you just keep updating to the latest and
           | everything will work. I find that when circumstances allow me
           | to use one of these languages/toolsets, most of the reasons
           | for using Docker just disappear.
           | 
           | [0] Examples include Rust/cargo, Go, Deno, Node (to an
           | extent)
        
             | kugelblitz wrote:
             | How do you mean?
             | 
              | If a new project without any tests has a PHP 7.2 and
              | MySQL installation and I need to upgrade it to PHP 8.2, I
              | first need to write tests, and I can't use PHP 8.2
              | features until I've upgraded.
             | 
             | Composer is the dependency manager, but I still need PHP to
             | run the app later on. And a PHP 7.2 project might behave
             | differently when running on PHP 7.2 or PHP 8.2. And
             | sometimes PHP is just one part of the equation. There might
             | be an Angular frontend and a nginx / php-fpm backend, maybe
             | Redis, Postgres, logging, etc. They need to be wired
             | together as well. And I'm into backend web development and
             | do a bit of frontend development, but whoa, I get confused
             | with all the node.js, npm, yarn versioning with corepack
             | and nvm and whatnot. Here even a "yarn install" behaves
             | differently depending on which yarn version I have. I'd
             | rather have docker take care of that one for me.
             | 
             | I feel like "docker (compose)" and "make" are widely
             | available and language-agnostic enough and great for my use
             | cases (small to medium sized web apps), especially since I
             | develop on Linux.
             | 
             | Something language-specific like pyenv might work as well,
             | but might be too lightweight for wiring other tools. I used
             | to work a lot with Vagrant, but that seems to more on the
             | "heavy" side.
             | 
             | Edit: I just saw your examples, unfortunately I've only
             | dabbled a bit with Go and haven't worked with Rust yet, so
             | I can't comment on that, but would be interested to know,
             | how they work re: local dev setup.
        
               | mkraft wrote:
               | I _do_ work with Go and it doesn't preclude the utility
               | of containers in development for out-of-process services.
        
               | brundolf wrote:
               | > and I can't use PHP 8.2 features until I've upgraded
               | 
               | Yeah- so in Rust, the compiler/tooling never introduces
               | breaking changes, as (I think) a rule. For any collection
               | of Rust projects written at different times, you can
               | always upgrade to the very latest version of the compiler
               | and it will compile all of them.
               | 
               | The way they handle (the very rare) breaking changes to
               | the language itself is really clever: instead of a
               | compiler version, you target a Rust "edition", where a
               | new edition is established every three years. And then
               | any version of the Rust compiler can compile all past
               | Rust editions.
               | 
               | Node.js isn't quite as strict with this, though it very
               | rarely gets breaking changes these days (partly because
               | JavaScript itself virtually never gets breaking changes,
               | because you never want to break the web). Golang
               | similarly has a major goal of not introducing breaking
               | changes (again, with some wiggle-room for extreme
               | scenarios).
               | 
               | > I get confused with all the node.js, npm, yarn
               | versioning with corepack and nvm and whatnot. Here even a
               | "yarn install" behaves differently depending on which
               | yarn version I have.
               | 
               | Hmm. I may be biased, but I feel like the Node ecosystem
               | (including yarn) is pretty good about this stuff. Yarn
               | had some major changes to how it works underneath between
               | its major versions, but that stuff is mostly supposed to
               | be transient/implementation-details. I believe it still
               | keys off of the same package.json/yarn.lock files (which
               | are the only things you check in), and it still exposes
               | an equivalent interface to the code that imports
               | dependencies from it.
               | 
               | nvm isn't ideal, though I find I don't usually have to
               | use it because like I said, Node.js rarely gets breaking
               | changes. Mostly I can just keep my system version up to
               | date and be fine, regardless of project
               | 
               | Configuring the Node ecosystem's build tools gets really
               | hairy, but once they're configured I find them to mostly
               | be plug and play in a new checkout or on a new machine
               | (or a deployment); install the latest Node, npm install,
               | npm run build, done. Deno takes it further and mostly
               | eliminates even those steps (and I really hope Deno
               | overtakes Node for this and other reasons).
               | 
               | > maybe Redis, Postgres, logging, etc. They need to be
               | wired together as well
               | 
               | I think this - grabbing stock pieces off the shelf - is
               | the main place where Docker feels okay to use in a local
               | environment. No building dev images, minimal
               | state/configuration. Just "give me X". I'd still prefer
               | to just run those servers directly if I can (or even
               | better, point the code I'm working on at a live
               | testing/staging environment), but I can see scenarios
               | where that wouldn't be feasible
        
         | dmitryminkovsky wrote:
          | I don't use Docker locally except for building/testing
          | production containers. I've also found containers unhelpful
          | for development.
         | 
         | That said, I recently discovered that VS Code has a feature
         | called "Dev Containers"[0] that ostensibly makes it easy to
         | develop inside the same Docker container you'll be deploying. I
         | haven't had a chance to check it out, but it seems very cool.
         | 
         | [0] https://code.visualstudio.com/docs/devcontainers/containers
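          | 
          | The setup is driven by a `.devcontainer/devcontainer.json`
          | in the repo; a minimal sketch (field values illustrative):
          | 
          |     {
          |       "name": "rails-dev",
          |       "build": { "dockerfile": "../Dockerfile" },
          |       "forwardPorts": [3000]
          |     }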
        
       | agtorre wrote:
        | This is really cool. Is the Rails server production ready? I
        | was always under the impression you had to run it with Unicorn
        | or similar, although I haven't been following Rails development
        | recently.
        
         | dewey wrote:
          | Yes, though usually you still serve your assets (images, ...)
          | through nginx or something similar.
        
           | ye-olde-sysrq wrote:
           | This has been my experience too, and I've never even used
           | Rails at scale. Puma will struggle to serve "lots" (tens on a
           | single page) of (for example) static images (though the
           | actual user experience varies across browsers - safari does
           | okay but firefox chokes and chromium is in the middle). This
           | is with puma on a decent intel prosumer CPU (an i7 iirc)
           | using x-sendfiles through nginx. So puma isn't even actually
           | pushing the image bytes here.
           | 
           | I replaced it with nginx intercepting the image URLs to serve
           | them without even letting puma know and it was instantly
           | zippy. I still use sendfile to support when I'm doing dev and
           | not running nginx in front of it, and I'm not happy with the
           | kind of leaky abstraction there, but damn are the benefits in
           | prod too difficult to ignore.
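            | 
            | The nginx side of that interception is roughly (a sketch;
            | paths are illustrative):
            | 
            |     # serve image URLs straight from disk; puma never sees them
            |     location /images/ {
            |         root /var/www/app/public;
            |         expires max;
            |     }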
        
         | grk wrote:
          | The default Gemfile now bundles Puma in, so yeah, it's
          | production ready.
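          | 
          | (Concretely, a fresh `rails new` app ships with this in its
          | Gemfile; the exact version constraint varies by Rails
          | version:)
          | 
          |     # Use the Puma web server [https://github.com/puma/puma]
          |     gem "puma"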
        
       | marcopicentini wrote:
        | Deploying with Cloud66 is a lot easier. Just git push.
        | 
        | I can choose my hosting provider and switch whenever I want.
        
       | quest88 wrote:
       | But how does this work when talking to dev databases on my
       | machine?
        
       | revskill wrote:
        | This is really good news. And to be frank, the easiest way to
        | become mainstream is to have a guide on how to deploy a Rails
        | application in 1 click on any cloud hosting service.
        
       | dewey wrote:
       | This is long overdue. Rails got very nice updates in the past
       | years to make it easier to handle JS and other assets.
       | 
       | Deploying it was still always a hassle and involved searching for
       | existing Dockerfiles and blog posts to cobble together a working
        | one. At the beginning I always thought I was doing something
        | wrong, as it's supposed to be easy and do everything nicely out
        | of the box.
       | 
        | And dhh apparently agrees (now, at least:
        | https://dhh.dk/posts/30-myth-1-rails-is-hard-to-deploy), as
        | there's now a default Dockerfile, and also this project he's
        | working on, which will make things a lot nicer and more
        | polished: https://github.com/rails/mrsk
        
         | cromulent wrote:
         | Great name. I wonder if Maersk (the Danish shipping container
         | company) will note the hat-tip, and if it will annoy their
         | lawyers.
        
       | bradgessler wrote:
       | I have a proposal out for making the deployment story even easier
       | by providing RubyGems a way to describe the packages they depend
       | on to operate properly at https://community.fly.io/t/proposal-
       | declare-docker-images-de...
       | 
       | The idea is I could run a command like `bundle packages
       | --manager=apt` and get a list of all the packages `apt` should
       | install for the gems in my bundle.
       | 
       | Since I know almost nothing about the technicalities and
       | community norms of package management in Linux, macOS, and
       | Windows, I'm hoping to find people who do and care about making
       | the Ruby deployment story even better to give feedback on the
       | proposal.
        
         | regularfry wrote:
         | One problem you're likely to run into is that systems using the
         | same packaging lineage cut the same dependency up in different
         | ways. The "right name" for a dependency can change between
         | Ubuntu and Debian, between different releases of Ubuntu, and
         | different architectures. It very quickly gets out of hand for
         | any interesting set of dependencies. Now it might be that
         | there's enough stability in the repositories these days that
         | that's less true than it was, but I remember running into some
         | _really annoying_ cases at one point when I had a full gem
         | mirror to play with.
         | 
         | This is one of those problems that sounds easy but gets really
         | fiddly. I had a quick run at it from a slightly different
         | direction a looooong time ago: binary gems
         | (https://github.com/regularfry/bgems although heaven knows if
         | it even still runs). Precompiled binary gems would dramatically
         | speed up installation at the cost of a) storage; and b) getting
         | it right _once_. The script I cobbled together gathers the
         | dependencies together into a `.Depends` file which you can just
         | pipe through to the package manager, and could happily use to
         | strap together a package corresponding to the dependency list.
         | 
         | I've never really understood why a standard for precompiled
         | gems never emerged, but it turns out it's drop-dead simple to
         | implement. The script does some linker magic to reverse
         | engineer the dpkg package dependency list from a compiled
         | binary. I was quite pleased with it at the time, and while I
         | don't think it's bullet-proof I do think it's worth having a
          | poke at for ideas. Of course it can only detect _binary_
          | dependencies, not data dependencies or anything more
          | interesting, so there's still room for improvement.
        
       | hit8run wrote:
       | I use dokku. Works really well.
        
         | ezekg wrote:
         | IIRC, Dokku can only manage 1 server, so it's essentially
         | useless for anything except small side projects that don't need
         | to scale horizontally.
        
           | josegonzalez wrote:
           | Maintainer of Dokku here:
           | 
           | We support Kubernetes and Nomad as deployment platforms, and
           | folks are welcome to build their own builder plugins if they
           | need to support others.
        
       | wefarrell wrote:
       | I do miss the pre-docker days of using capistrano to deploy rails
       | projects. Most deploys would take less than two minutes in the CI
       | server and most of that was tests. The deploys were hot and
       | requests that happened during the deploy weren't interrupted. Now
       | with Docker I'm seeing most deploys take around ten minutes.
       | 
       | The downside of capistrano was that you'd be responsible for
       | patching dependencies outside of the Gemfile and there would be
       | occasional inconsistencies between environments.
        
         | 5e92cb50239222b wrote:
          | Why are you using Docker then? I have little experience with
          | Ruby, but for PHP projects we're still using ye olde way.
          | Deploys are
         | hot and do not interrupt requests, just as you described (it
         | pretty much comes down to running `composer install && rsync`
         | in CI -- there are more complicated solutions like capistrano,
         | which I've also used, but they don't seem to provide enough
         | features to warrant increased complexity).
        
         | freedomben wrote:
         | I hear you, I love(d) capistrano and do miss it quite a bit.
         | 
          | That said, the cap approach was easy for a static target, but
          | a horizontally scalable one (particularly with autoscaling)
          | was an utter nightmare. That's where having Docker really
          | shines, and it's IMHO why cap has largely been forgotten.
        
           | laneystroup wrote:
            | I still use Capistrano and have full load balancing and
            | auto-scaling implemented via the elbas gem. I would be
            | happy to share my configuration if anyone needs it.
        
         | viraptor wrote:
         | This experience is mostly about how you see docker used, not
         | any specific property of it. Both Capistrano and Docker can be
         | used for deploys with or without interruptions. Both can be
         | immediate or take minutes. The tool itself won't save you from
         | bad usage.
        
       | jchw wrote:
       | > Everything after that removes the manifest files and any
       | temporary files downloaded during this command. It's necessary to
       | remove all these files in this command to keep the size of the
       | Docker image to a minimum. Smaller Dockerfiles mean faster
       | deployments.
       | 
        | It isn't explicitly explained, but the reason why it must be in
        | this command and not separated out is that each command in a
        | Dockerfile creates a new "layer". Removing the files in another
       | command will work, but it does nothing to decrease the overall
       | image size: as far as whatever filesystem driver you're using is
       | concerned, deleting files from earlier layers is just masking
       | them, whereas deleting them before creating the layer prevents
       | them from ever actually being stored.
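        | 
        | A minimal illustration of the difference:
        | 
        |     # Two layers: the delete only masks the files; the image stays big
        |     RUN apt-get update && apt-get install -y build-essential libvips
        |     RUN rm -rf /var/lib/apt/lists/*
        | 
        |     # One layer: the files are never stored in the image at all
        |     RUN apt-get update && apt-get install -y build-essential libvips && \
        |         rm -rf /var/lib/apt/lists/*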
        
         | rendaw wrote:
          | Self-promotion here: I put this together to make it easier to
         | generate single (extra) layer docker images without needing a
         | docker daemon, capabilities, chroot, etc:
         | https://github.com/andrewbaxter/dinker
         | 
         | Caveat: it doesn't work on Fly.io. They seem to be having some
         | issue with OCI manifests:
         | https://github.com/containers/skopeo/issues/1881 . They're also
         | having issues with new docker versions pushing from CI:
         | https://community.fly.io/t/deploying-to-fly-via-github-actio...
         | ... the timing of this post seems weird.
         | 
         | FWIW the article says
         | 
         | > create a Docker image, also known as an OCI image
         | 
         | I don't think this is quite right. From my investigation,
          | Docker and OCI images are basically content-addressed trees,
          | starting with a root manifest that points to other files and
          | their hashes (root -> images -> layers -> layer configs +
          | files). The OCI manifests and configs are separate from the
          | Docker manifests and configs, and basically Docker will
          | support both side by side.
        
           | mrkurt wrote:
           | The newest buildkit shipped a regression. This broke the
           | Docker go client and containerd's pull functionality. It has
           | been a mess: https://community.fly.io/t/deploying-to-fly-via-
           | github-actio...
        
         | emerongi wrote:
         | If you only care about the final image size, then `docker build
         | --squash` squashes the layers for you as well.
        
           | jchw wrote:
            | Definitely, although it's worth noting that while the image
            | size will be smaller, it will get rid of the benefits of
            | sharing base layers. Having fewer redundant layers lets you
            | save most of the space without losing any of the benefits
            | of layer sharing. I think that is the main reason why this
            | is not usually done.
        
           | gkfasdfasdf wrote:
            | --squash is still experimental; I believe multi-stage
            | builds are the new best practice here.
        
             | 411111111111111 wrote:
              | In case anyone doesn't know what that means, it's
              | basically this kind of Dockerfile:
              | 
              |     FROM the_source_image AS builder
              |     RUN build.sh
              | 
              |     FROM the_source_image
              |     COPY --from=builder /app/artifacts /app/
              |     CMD ....
              | 
              | I'm not sure if you can really call it the new best
              | practice though; it's been the default for... a very long
              | time at this point.
        
               | jchw wrote:
               | Typically I wind up using a different source image for
               | the builder that ideally has (most of) the toolchain bits
               | needed, but the same runtime base as the final image.
               | (For Go, go:alpine and alpine work well. I'm aware
               | alpine/musl is not technically supported by Go, but I
               | have yet to hit issues in prod with it, so I guess I'll
               | keep taking that gamble.)
        
               | [deleted]
        
               | KronisLV wrote:
               | I take advantage of multi-stage builds, however I still
               | think that the layer system could have some nice
               | improvements done to it.
               | 
               | For example, say I have my own Ubuntu image that is based
               | on one of the official ones, but adds a bit of common
               | configuration or tools and so on, on which I then build
               | my own Java image using the package manager (not unlike
               | what Bitnami do with their minideb, on which they then
               | base their PostgreSQL and most other container images).
               | 
               | So I might have something like the following in the
               | Ubuntu image Dockerfile:                 RUN apt-get
               | update && apt-get install -y \         curl wget \
               | net-tools inetutils-ping dnsutils \         supervisor \
               | && apt-get clean && rm -rf /var/lib/apt/lists
               | /var/cache/apt/*
               | 
                | But then, if I want to install additional software, I
                | need to fetch the package list anew downstream:
                | 
                |     FROM my-own-repo/ubuntu
                | 
                |     RUN apt-get update && apt-get install -y \
                |         openjdk-17-jdk-headless \
                |         && apt-get clean \
                |         && rm -rf /var/lib/apt/lists /var/cache/apt/*
               | 
                | As opposed to being able to just leave the cache files
                | in the previous layers/images, then remove them in a
                | later layer, and just do something like:
                | 
                |     docker build -t my_optimized_java_image \
                |       -f java.Dockerfile --purge-deleted-files .
                | 
                | or maybe:
                | 
                |     docker build -t my_regular_java_image -f java.Dockerfile .
                |     purge-deleted-files -t my_regular_java_image \
                |       -o my_optimized_java_image
               | 
                | This would then work backwards from the last layer and
                | create copies of all of the layers whose files have been
                | removed/masked in later layers, to use instead of the
                | originals. Thus, if I had 10 different images that need
                | apt to install stuff while building, I could leave the
                | cache in my own Ubuntu image and only remove it for
                | whatever I consider the "final" images that I'll ship,
                | altering the contents of the included layers to purge
                | deleted files.
               | 
               | There's little reason why these optimized layers couldn't
               | be shared across all 10 of those "final" images either:
               | "Hey, there's these optimized Ubuntu image layers without
               | the package caches, so we'll use it for our .NET, Java,
               | Node and other images" as opposed to --squash which would
               | put everything in a single large layer, thus removing the
               | benefits from the shared layers of the base Ubuntu image
               | and so on.
               | 
               | Who knows, maybe someone will write a tool like that some
               | day.
        
               | Too wrote:
                | You will be happy to hear that this already exists. Read
                | up on Docker BuildKit and the RUN --mount option.
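                | 
                | A minimal sketch for the apt case above (roughly the
                | documented pattern; the docker-clean removal is needed
                | because the official images otherwise delete the cache
                | after every install):
                | 
                |     # syntax=docker/dockerfile:1
                |     FROM ubuntu:22.04
                |     # Stop the image's apt hook from wiping the cache
                |     RUN rm -f /etc/apt/apt.conf.d/docker-clean && \
                |         echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' \
                |           > /etc/apt/apt.conf.d/keep-cache
                |     RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
                |         --mount=type=cache,target=/var/lib/apt,sharing=locked \
                |         apt-get update && apt-get install -y openjdk-17-jdk-headless
                | 
                | The cache mounts live outside the image, so nothing has
                | to be purged in a later layer.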
        
         | freedomben wrote:
          | Yeah, this is a widely misunderstood or unknown thing about
          | Dockerfiles. After the nth time explaining it to somebody, I
         | finally threw it into a short video with demo to explain how it
         | worked: https://youtu.be/RP-z4dqRTZA
        
         | bradgessler wrote:
         | I wanted to explain layers really bad, Sam Ruby even put
         | comments in there about it, but I stayed away from it for the
         | sake of more clearly explaining how Linux was configured for
         | Rails apps.
         | 
          | It is really weird looking at Dockerfiles for the first time
          | and seeing all of the `&& \` bash commands chained together.
        
         | e12e wrote:
         | Note that the explicit "apt-get clean" is probably redundant:
         | 
         | https://docs.docker.com/develop/develop-images/dockerfile_be...
         | 
         | > Official Debian and Ubuntu images automatically run apt-get
         | clean, so explicit invocation is not required.
        
           | yrro wrote:
            | 'apt-get clean' doesn't clear out /var/lib/apt/lists. It
            | removes cached downloaded debs from /var/cache/apt, but
            | you'll still have hundreds of MiB of package lists on your
            | system after running it.
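            | 
            | Hence the usual pattern of doing the install and the lists
            | cleanup in a single RUN, so the package lists never survive
            | into the layer:
            | 
            |     RUN apt-get update && apt-get install -y curl \
            |         && rm -rf /var/lib/apt/lists/*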
        
         | viraptor wrote:
         | If you want to check your images for some common leftover files
         | in all the layers, I made an app for that:
         | https://github.com/viraptor/cruftspy
        
         | Too wrote:
          | The new best practice is to use the RUN --mount cache options,
          | which make the removal of intermediate files unnecessary and
          | speed up builds too. Surprised to see so few mentions of it.
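          | 
          | For a Rails image this might look something like the
          | following (a sketch; it assumes the official ruby image,
          | where downloaded .gem files land in /usr/local/bundle/cache):
          | 
          |     # syntax=docker/dockerfile:1
          |     COPY Gemfile Gemfile.lock ./
          |     # The gem download cache persists across builds but
          |     # never ends up in a layer.
          |     RUN --mount=type=cache,target=/usr/local/bundle/cache \
          |         bundle install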
        
           | zbentley wrote:
           | As someone whose week was ruined by an overwhelming
           | proliferation of `--mount type=cache` options throughout a
           | complex build system, I'm not so sure.
           | 
           | Externally managed caches don't have a lifecycle controlled
           | or invalidated by changes in Dockerfiles, or by docker cache-
           | purging commands like "system prune".
           | 
           | That means you have to keep track of those caches yourself,
           | which can be a pain in complex, multi-contributor
           | environments with many layers and many builds using the same
           | caches (intentionally or by mistake).
        
             | TylerE wrote:
             | Could you have the first step of the dockerfile hash itself
             | and blow away the cache(s) if the hash changes?
        
           | skrtskrt wrote:
            | It's hard enough to stay synced with all the latest docker
            | changes, let alone at an organizational level.
           | 
           | Example: in the last few years docker compose files have gone
           | from version 2 to version 3 (missing tons of great v2
           | features in the name of simplification) to the newest,
           | unnamed unnumbered version which is a merging of versions 2
           | and 3.
        
           | samruby wrote:
           | At the moment Rails is focused on simplicity/readability.
           | I've got a gem that I'm proposing (and DHH is evaluating)
           | that adds caching as an option:
           | https://github.com/rubys/dockerfile-rails#overview
        
         | dclowd9901 wrote:
         | Such a helpful comment. Just in this one note, I learned a few
         | things about Docker that I had no idea about:
         | 
         | 1) a layer is essentially a docker image in itself and
         | 
         | 2) a layer is as static as the image itself
         | 
         | 3) Docker images ship with all layers
         | 
         | Thanks jchw!
        
         | vidarh wrote:
          | An alternative to removing files or going through contortions
          | to stuff things into a single layer is to use a builder image
          | and copy the generated artefacts into a clean image:
          | 
          |     FROM foo AS builder
          |     .. build steps
          | 
          |     FROM foo
          |     COPY --from=builder generated-file target
          | 
          | (I hope I got that right; on a phone and it's been a while
          | since I did this from scratch, but you get the overall point)
        
           | d3nj4l wrote:
           | This isn't really useful for frameworks like rails, since
           | there's nothing to "compile" there. Most rails docker images
           | will just include the runtime and a few C dependencies, which
           | you need to run the app.
        
             | bdcravens wrote:
             | Pulling down gems is a bit of a compilation, which could
             | benefit, unless you're already installing gems into a
             | volume you include in the Docker container via docker
             | compose etc. Additionally, what it does compile can be
             | fairly slow (like nokogiri).
        
             | vidarh wrote:
             | There are however temporary files being downloaded for the
             | apt installation, and while in this case it's simple enough
             | to remove them in one step that's by no means always the
             | case. Depending on which gems you decide to rely on you may
             | e.g. also end up with a full toolchain to build extensions
             | and the like, so knowing the mechanism is worthwhile.
        
               | theptip wrote:
               | How would you go about copying something you installed
               | from apt in a build container?
               | 
               | Say `apt install build-essential libvips` from the OP,
                | it's not obvious to me what files libvips is adding. I
               | suppose there's probably an incantation for that? What
               | about something that installs a binary? Seems like a pain
               | to chase down everything that's arbitrarily touched by an
               | apt install, am I missing some tooling that would tame
               | that pain?
        
               | vidarh wrote:
                | It's a pain, hence for apt, as long as it's the _same_
                | packages, just cleaning is probably fine. But e.g. build-
                | essential is there to handle building extensions pulled
                | in by gems, and _that_ isn't necessary in the actual
                | container if you bring over the files built and/or
                | installed by rubygems, so the set of packages can be
                | quite different.
        
               | viraptor wrote:
               | Run "dpkg -L libvips" to find the files belonging to that
                | package. This doesn't cover what's changed in post-
                | install hooks, but for most docker-relevant things, it's
                | good enough.
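                | 
                | One way to use that from a builder stage (a sketch;
                | symlinks and post-install changes are not handled):
                | 
                |     FROM ubuntu:22.04 AS builder
                |     RUN apt-get update && apt-get install -y libvips
                |     # Archive the package's regular files for the next stage
                |     RUN dpkg -L libvips \
                |         | while read f; do [ -f "$f" ] && echo "$f"; done \
                |         | tar -cf /tmp/libvips.tar -T -
                | 
                |     FROM ubuntu:22.04
                |     COPY --from=builder /tmp/libvips.tar /tmp/
                |     RUN tar -xf /tmp/libvips.tar -C / && rm /tmp/libvips.tar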
        
           | orf wrote:
           | Unfortunately this messes with caching and causes the builder
           | step to always rebuild if you're using the default inline
           | cache, until registries start supporting cache manifests.
        
             | sequoia wrote:
             | A tiny bit more on this topic:
             | https://sequoia.makes.software/reducing-docker-image-size-
             | pa...
        
             | vidarh wrote:
             | How so? I just tested a build, and it used the cache for
             | every layer including the builder layers.
        
               | orf wrote:
               | You did this on the same machine, right? In a CI setting
               | with no shared cache you need to rely on an OCI cache.
               | The last build image is cached with the inline cache, but
               | prior images are not
        
               | vidarh wrote:
               | Ah, sorry I misunderstood you. Yes, I don't tend to care
               | about whether or not the steps are cached in my CI setups
               | as most of the Docker containers I work on build fast
               | enough that it doesn't really matter to me, but that will
               | of course matter for some.
        
               | claytonjy wrote:
               | I never got around to implementing it but I wonder how
               | this plays with cross-runner caches in e.g. Gitlab, where
               | the cache goes to S3; there's a cost to pulling the
               | cache, so it'll never be as fast as same-machine, but
               | should be way faster for most builds, right?
        
               | korijn wrote:
               | You can build the first stage separately as a first step
               | using `--target` and store the cache that way. No
               | problem.
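                | 
                | Roughly like this (image names are placeholders):
                | 
                |     # Build and push the builder stage so its layers are
                |     # available as a cache source for later CI runs
                |     docker build --target builder \
                |       --cache-from registry.example.com/app:builder \
                |       --build-arg BUILDKIT_INLINE_CACHE=1 \
                |       -t registry.example.com/app:builder .
                |     docker push registry.example.com/app:builder
                | 
                |     docker build \
                |       --cache-from registry.example.com/app:builder \
                |       --cache-from registry.example.com/app:latest \
                |       --build-arg BUILDKIT_INLINE_CACHE=1 \
                |       -t registry.example.com/app:latest .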
        
         | sequoia wrote:
         | Is there some reason not to use multi-stage builds for this?
         | Blogspam: https://sequoia.makes.software/reducing-docker-image-
         | size-pa...
        
         | JimDabell wrote:
         | If all you want to do is run a series of commands while only
         | creating a single layer, heredocs are probably the simplest /
         | most readable approach:
         | 
         | https://www.docker.com/blog/introduction-to-heredocs-in-dock...
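          | 
          | For example, several commands collapsed into one layer
          | (requires BuildKit; the syntax line opts into the newer
          | Dockerfile frontend):
          | 
          |     # syntax=docker/dockerfile:1
          |     FROM ubuntu:22.04
          |     RUN <<EOF
          |     apt-get update
          |     apt-get install -y libvips
          |     rm -rf /var/lib/apt/lists/*
          |     EOF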
        
           | tomca32 wrote:
           | Why didn't I ever think of this.
           | 
           | This is great and really makes things way simpler. Thanks!
        
             | jbott wrote:
             | Unfortunately this syntax is not generally supported yet -
             | it's only supported with the buildkit backend and only
             | landed in the 1.3 "labs" release. It was moved to stable in
             | early 2022 (see
             | https://github.com/moby/buildkit/issues/2574), so that
              | seems to be better, but I think it may still require a
              | syntax directive to enable.
             | 
             | Many other dockerfile build tools still don't support it,
             | e.g. buildah (see
             | https://github.com/containers/buildah/issues/3474)
             | 
              | It's useful now if you have control over the environment
              | your images are built in, but I'm excited for the future
              | where it's commonplace!
        
               | Groxx wrote:
               | And AFAIK buildkit is still a real pain to troubleshoot,
               | as you can't use intermediate stages :/
               | 
               | You can put stuff in a script file and just run that
               | script too.
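                | 
                | i.e. something like:
                | 
                |     COPY setup.sh /tmp/setup.sh
                |     RUN /tmp/setup.sh && rm /tmp/setup.sh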
        
               | cpuguy83 wrote:
               | You can! It's experimental but
               | https://github.com/docker/buildx/pull/1168
               | 
               | Tracking for this:
               | https://github.com/docker/buildx/issues/1104
        
           | gitgud wrote:
           | Nice syntax, but I like the caching that comes with creating
           | each layer.
           | 
           | If you want to reduce layer/image size, then I think [1]
           | "multi-stage builds" is a good option
           | 
           | [1] https://docs.docker.com/build/building/multi-stage/
        
         | mattficke wrote:
          | The older "Docker without Docker" blog post linked from there
          | goes into that; it's one of the best deep dives into
          | containers (and honestly one of the best pieces of technical
          | writing, full stop) that I've come across.
          | https://fly.io/blog/docker-without-
         | docker/
        
       | sdwolfz wrote:
        | I have my own local development Rails setup and template files
        | that can be dropped into any project with minimal changes
        | (mostly around configuring the db connection):
       | 
       | - https://gitlab.com/sdwolfz/docker-projects/-/tree/master/rai...
       | 
       | Haven't spent the time to document it. But the general idea is to
       | have a `make` target that orchestrates everything so `docker-
       | compose` can just spin things up.
       | 
        | I've used this sort of thing for multiple types of projects, not
        | just Rails; it can work with any framework provided you have the
        | right docker images.
       | 
        | For deployment I have something similar that builds upon the
        | same concepts (with ansible instead of make, focused on multi-
        | server deploys, and terraform for setting up the cloud
        | resources), but it's not open sourced yet.
       | 
       | Maybe I'll get to document it and post my own "Show HN" with this
       | soon.
        
       | ginzunza wrote:
        | Here is a guide to using that image with Redis and Postgres:
       | https://medium.com/@gustavoinzunza/rails-redis-pg-with-docke...
        
       | [deleted]
        
       | throwaway23942 wrote:
        | I use FROM scratch. The app itself is built with a binary packer
        | and the result is copied in.
       | 
       | Still, I'd much rather use Nix/Guix over Dockerfile.
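        | 
        | The whole Dockerfile then ends up being roughly (binary name
        | assumed):
        | 
        |     FROM scratch
        |     COPY app /app
        |     ENTRYPOINT ["/app"]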
        
       | iamflimflam1 wrote:
        | What kills Rails for me at the moment is the time it takes to
        | start up. I'd love to be able to use it on top of things like
        | Cloud Run, where resources ramp down to zero when there are no
        | requests, but the startup time makes this very difficult.
        
         | freedomben wrote:
         | I dislike that too. I've started using Sinatra for ruby apps
         | instead of rails. You end up writing a lot more boilerplate but
         | startup times are near instant and the API is great. Also it's
         | highly stable. I haven't had to update my Sinatra apps (beyond
         | updating dependent gems) in many years.
        
         | the_gastropod wrote:
         | (I have not actually used this myself). The folks over at
         | CustomInk maintain Lamby, a project to run Rails in a quickly-
         | bootable Lambda environment. Might be worth checking out, if
         | you otherwise do enjoy working with Rails:
         | https://lamby.custominktech.com
        
         | gls2ro wrote:
          | I don't think Rails is fit for this. It is a full app, so I
          | think by design it does not fit into Cloud Run's architecture
          | of cloud functions.
         | 
         | May I suggest you take a look at Roda or Hanami?
        
           | regularfry wrote:
           | While I've not tried it personally, Django can be run like
           | this. It being a "full app" doesn't preclude it from having a
           | fast enough startup to allow for cloud function deployment.
        
       | laneystroup wrote:
       | I still use Capistrano. In fact, I like Capistrano so much that I
       | have full Load Balancing, Auto-Scaling, and End-to-End encryption
       | enabled for my projects on AWS via the elbas gem.
       | 
        | My primary use case for this is PCI compliance. While PCI DSS
        | and/or HIPAA do not specifically rule out Docker, the principle
        | of isolation leans heavily toward requiring that web hosts run
        | on a private virtual machine.
       | 
        | This rules out almost all docker-based PaaS providers (including
        | Fly.io, Render.com, AWS App Runner, and Digital Ocean), as these
        | run your containers on general Docker hosts. In fact, the only
        | PaaS provider I can find advertising PCI compliance is Heroku,
        | which now charges $1,800+/month for Heroku Private to achieve
        | it.
       | 
       | I would love to share my configuration with anyone that needs it.
        
         | atmosx wrote:
          | Fly.io will convert your Docker image to run under
          | Firecracker[^1], a microVM engine based on KVM.
         | 
         | [^1]: https://firecracker-microvm.github.io/
        
         | [deleted]
        
         | amarshall wrote:
         | Did you read the OP?
         | 
         | > Fly.io doesn't actually run Docker in production--rather it
         | uses a Dockerfile to create a Docker image, also known as an
         | OCI image, that it runs as a Firecracker VM
        
         | dylanz wrote:
         | If you could share your configurations that would be great!
          | (see email in my profile). I used Capistrano for many, many
         | years but haven't kept up with it in quite a while. I've had to
         | tackle HIPAA deployments in k8s and it's quite the ordeal for a
         | small team. I miss the days of "cap deploy".
        
       | bploetz wrote:
       | It's been a while since I've done any Ruby/Rails development, but
       | just curious why they chose to use a Debian/Ubuntu based image in
       | the default Dockerfile instead of an Alpine based image?
        
         | freedomben wrote:
         | Alpine has some razor edges in it. I would never default to it.
         | Always test your app thoroughly. musl doesn't implement
         | everything that glibc does and some of the differences can
          | cause big problems. This is not purely theoretical: I once
          | spent a week debugging a database corruption issue that
          | happened because of a difference in musl's UTF-8 handling.
         | 
         | Use Alpine liberally for local images if you like, but don't
         | use it for production.
        
           | bploetz wrote:
           | "Use Alpine liberally for local images if you like, but don't
           | use it for production."
           | 
           | We take the exact opposite approach: default to Alpine based
           | images, only use another base OS if Alpine doesn't work for
           | some reason. The majority of our underlying code base isn't
           | C-based, so maybe that's why Alpine has been successful for
           | us, but as always, everyone's situation is different and
           | YMMV.
        
         | faebi wrote:
          | I assume because Debian/Ubuntu is more likely to work out of
          | the box. I tried to use Alpine but ran into various issues in
          | our sad big corporate setup. Additionally, ruby provides base
          | docker images for these too.
        
       | alberth wrote:
        | Am I the only person who struggles to deploy Rails apps?
        | 
        | It's a super productive framework to develop in, but deploying
        | an actual Rails app - after nearly 20 years of existence - still
        | seems way more difficult than it should be.
       | 
       | Maybe it's just me.
        
         | aantix wrote:
         | Try Hatchbox https://hatchbox.io/
         | 
         | Makes deployment super easy.
        
           | excid3 wrote:
           | Thanks @aantix :D
        
         | ksajadi wrote:
          | I feel your pain and, at the risk of a shameless plug: we
          | built a company to solve this problem, as it was the only
          | thing we really didn't like about Rails.
          | 
          | Check out Cloud 66!
        
           | mfkp wrote:
           | I've been using Cloud66 since 2015 for my rails deployments,
           | makes my life much easier!
        
           | cobrabyte wrote:
           | +1 for Cloud66
        
         | brundolf wrote:
         | I've never used it, but I'm curious what makes it difficult?
        
         | pelasaco wrote:
        | I'm not sure how it's harder than Python + Django, node.js, or
        | PHP. I used to deploy with capistrano, then we moved to
        | deploying from a single rpm file, and now we just use our
        | pipeline to prepare pods and push them to k8s - continuous
        | deployment straight from the gitlab pipeline. Not sure how it
        | can be easier than that.
        
         | Deinumite wrote:
         | I think this is one of the things that Java really solved even
         | though the actual implementation is a bit weird.
         | 
         | Being able to generate a war or jar as a released binary is
         | something that would be cool to see in the ruby world.
        
         | 0x457 wrote:
         | Rails is a huge framework. I remember using capistrano to
         | deploy it on many servers, and that was much harder than with
         | containers today.
         | 
          | The issue is how much RoR does and how tightly coupled it is
          | to its build toolchain - gems often require ways of building C
          | code, whatever you use to build assets has its own set of
          | requirements, and there is often an ffmpeg or imagemagick
          | dependency. In my opinion, a lot of the issues come from Ruby
          | itself being Ruby.
          | 
          | I agree that it's silly for such a productive framework to be
          | such a PITA to deploy. To be fair, I'd pick RoR deployment over
          | node.js deployment any day. I'm still not sure how to package
          | TS projects correctly.
        
           | ravedave5 wrote:
            | I still have battle scars from capistrano. What a pain. So
            | much easier with docker.
        
         | kdasme wrote:
         | I've never experienced issues supporting rails deployments, and
         | in fact find them quite easy to roll out, just as any other
         | service written in any other language. I prefer Kubernetes
         | though, which provides abstractions.
        
         | berkes wrote:
         | Not just you.
         | 
          | Whenever you've covered your bases, Rails grows in complexity.
          | Probably necessary complexity to keep up with the modern
          | world.
          | 
          | Solved asset pipeline pain? Here's webpacker. No, let's swap
          | that out for jsbundling.
          | 
          | Finally tamed the timing of releasing that db migration
          | without downtime? Here's sidekiq, requiring you to synchronize
          | restarts of several servers. Oh, wait, we now ship activejob.
          | 
          | Managed to work around threads eaten by websocket connections
          | in unicorn? Nah, puma is the default now. Oh, and here's
          | ActionCable, so you may need to fix all your server magic.
          | 
          | Rails is opinionated. And that is good at times. But it also
          | means a lot of annoying plumbing has to be rebuilt for a new
          | or shifted opinion. Work on plumbing is not work on your
          | actual business core.
        
           | viraptor wrote:
           | > Finally tamed the timing of releasing that db migration
           | without downtime? Here's sidekiq, requiring you to
           | synchronize restarts of several servers
           | 
           | To be fair, this was already an issue whenever you have more
           | than one instance of anything. Whether it's an extra sidekiq
           | or two web servers or anything else, you have a choice of:
           | stop everything and migrate, or split the migration into
           | prepare, update code, finalise migration.
        
         | jrochkind1 wrote:
         | You are not!
         | 
          | That is in fact, I think, why fly.io is investing in trying to
          | make it easier to deploy Rails on their platform.
         | 
          | They are also contributing to the effort to make Rails itself
          | come with a Dockerfile solution, which the Rails team is
          | accepting into Rails core because, I'd assume/hope, they
          | realize it can be a challenge to deploy Rails and are hoping
          | that the Dockerfile generation solution will help.
          | 
          | Heroku remains, IMO, the absolute easiest way to deploy Rails,
          | and doesn't really have a lot of competition -- although some
          | are trying. Unfortunate, because heroku is also a) pricey and
          | b) its owners seem to be into letting it kind of slowly
          | disintegrate.
         | 
          | I'm really encouraged that fly.io seems to be investing in
          | trying to match that ease of deployment for Rails specifically
          | on their platform.
        
         | danmaz74 wrote:
         | If you control your own server, it's very easy to deploy a
         | Rails application. The downside is that you need to keep up
         | with patching, ensuring that the disk doesn't become full, etc.
         | etc.
         | 
         | Deploying Rails to PAAS solutions is also easy but used to be
         | expensive with Heroku. I very recently started using
         | DigitalOcean Apps for a personal project and it's been very
         | easy, while costs look acceptable.
        
         | bradgessler wrote:
         | Nope! You're not alone. This has been an issue in the Rails
         | community for a while. It's why DHH started working on it,
         | which Sam Ruby noticed and moved forward in a really big way.
         | 
          | It's still going to be somewhat challenging for some folks as
          | this Dockerfile makes its way through the community, but this
          | is a really small step in the right direction for improving
          | the Rails deployment story. There's now at least a "de facto
          | standard" for people to build tooling around.
         | 
         | Fly.io is going to switch over to the official Rails Dockerfile
         | gem for pre-7.1 apps really soon, so deploying a vanilla rails
         | app will be as simple as `fly launch` and `fly deploy`.
        
         | granshaw wrote:
          | Just use Heroku and move on with your life... it's not THAT
          | expensive :) And if it is for you, you're probably at a point
          | with the product where you can afford it.
        
           | dwheeler wrote:
           | I've been deploying to Heroku and there it's been incredibly
           | easy.
        
       | rcarmo wrote:
       | > RAILS_SERVE_STATIC_FILES - This instructs Rails to not serve
       | static files.
       | 
       | ...but... it's set to True....
        
         | bradgessler wrote:
          | Open a PR! This will fix the problem if it's a bug/oversight,
          | or, if it's intended to be that way, force somebody to comment
          | on it for a future person who is puzzled by this choice.
        
           | byroot wrote:
            | It's intended. Yes, serving static files from Ruby is slower
            | than from nginx or whatever, but otherwise you'd need to
            | embed nginx in the image, run both processes, etc.
            | 
            | The assumption here is that there is a caching proxy in front
            | of the container, so Rails will only serve each asset once
            | and performance isn't critical.
        
         | jrochkind1 wrote:
         | I saw that too -- I think it's a typo in the OP, that the code
         | is what they intended to recommend, and the comment is wrong.
         | 
         | Standard heroku Rails deploy instructions are to turn on
         | `RAILS_SERVE_STATIC_FILES` but also to put a CDN in front of
         | it.
         | 
         | My guess is this is what fly.io is meaning to recommend too.
         | But... yeah, they oughta fix this! A flaw in an otherwise very
         | well-written and well-edited article, how'd it slip through
         | copy editing?
        
       | thrownaway561 wrote:
       | Though this is a great step... you still have to edit the
       | Dockerfile if you are using Postgres and Redis.
        
       | dieselgate wrote:
       | I've found the "Docker for Rails Developers" book by Rob Isenberg
       | [1] to be a great resource for getting started using Rails and
       | Docker. It's only a couple years out of date at this point but
       | should still be highly relevant for anyone trying to get started.
        | The only issue I've had with Rails and Docker is serving the
        | container on a 1GB DigitalOcean droplet - the node-sass gem is
        | usually a sticking point for a lot of people, and I believe a
        | 1GB droplet is just too small to host a dockerized rails app.
        | But the benefits of using docker for development are still
        | overwhelmingly worth the effort of containerizing things.
       | 
        | It's super cool that rails 7.1 is including a Dockerfile by
        | default - not that rails apps need more boilerplate though...
       | 
       | [1]: https://pragprog.com/titles/ridocker/docker-for-rails-
       | develo...
        
       ___________________________________________________________________
       (page generated 2023-01-26 23:00 UTC)