[HN Gopher] How to optimize the security, size and build speed of Docker images
___________________________________________________________________
How to optimize the security, size and build speed of Docker images
Author : mshekow
Score : 53 points
Date : 2022-02-20 14:44 UTC (8 hours ago)
(HTM) web link (www.augmentedmind.de)
(TXT) w3m dump (www.augmentedmind.de)
| adamgordonbell wrote:
| To make OCI images start faster, use stargz. See the image here:
|
| https://github.com/containerd/stargz-snapshotter
|
| It's a lazy file system for images.
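To make the suggestion concrete: the stargz-snapshotter README describes registering it with containerd as a proxy plugin. A minimal sketch of that configuration (the socket path shown is the default suggested by the project's docs; verify it for your setup):

```toml
# /etc/containerd/config.toml -- register stargz-snapshotter as a
# remote snapshotter so containerd can lazily pull eStargz images
[proxy_plugins]
  [proxy_plugins.stargz]
    type = "snapshot"
    address = "/run/containerd-stargz-grpc/containerd-stargz-grpc.sock"
```

With the snapshotter running, a client such as nerdctl can then select it, e.g. `nerdctl --snapshotter=stargz run ...`.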
| newman314 wrote:
| A few more things to consider:
|
| * I've been playing with checkov recently as a way to track
| Dockerfile quality and best practices
|
| * If you use GitHub, here are some additional considerations
|
| * Use image digests for base images and configure Dependabot to
| keep them updated
|
| * Look into implementing OpenSSF Scorecard and Allstar
|
| * Supply chain security is hot right now. Look into cosign
| (signing) and syft (SBOM)
|
| * Step Security has a GitHub action to harden the runner. Think
| of it as Little Snitch for runners
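On the digest-pinning point above: the Dockerfile would pin the base image by digest (e.g. `FROM python:3.10-slim@sha256:<digest>`, digest elided here), and Dependabot can then refresh that pin. A minimal, hypothetical configuration:

```yaml
# .github/dependabot.yml -- have Dependabot bump pinned base-image digests
version: 2
updates:
  - package-ecosystem: "docker"   # scans Dockerfiles for FROM lines
    directory: "/"                # where the Dockerfile lives
    schedule:
      interval: "weekly"
```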
| dlor wrote:
| I would disagree with "Use Docker Content Trust for Docker Hub".
|
| Docker hasn't been signing official images for the last several
| years, so turning this on means you'll get the last correctly
| signed images, which happen to be years out of date.
| returningfory2 wrote:
| > 9. Use docker-slim to remove unnecessary files
|
| Doesn't this, in practice, make the Docker image size situation
| worse? Docker caches images in layers and reuses e.g. base layers
| for all operations. Creating a custom single-layer image for each
| of your binaries negates all the benefits of the layered caching.
| You have to download the full image on each pull, rather than
| just the diffs.
|
| Conversely, when I pull the Docker image for an updated version
| of my software, I typically only have to pull the last few small
| layers because the base image hasn't changed.
| KronisLV wrote:
| > ... I typically only have to pull the last few small layers
| because the base image hasn't changed.
|
| That probably depends on your circumstances! For example, you
| could use a particular OS image as your base, software with
| updates as a set of intermediate layers and your software and
| whatever else you need as the last ones. That way, leaving the
| layers as they are would indeed result in some pretty good
| efficiency, since only the changed layers would need to be
| pulled.
|
| Whereas if you base your software on a particular runtime
| image, e.g. OpenJDK or one of its varieties, then it's unlikely
| that you'll see such nice benefits, at least if you
| regularly update the version of the base image that you're
| using. Now, whether you should update everything that often in
| the absence of any serious security vulnerabilities is
| another question.
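The layer ordering described above can be sketched as a Dockerfile (image names and paths here are illustrative, not from the thread):

```dockerfile
# Slowest-changing content first, so unchanged layers stay cached on pull
FROM eclipse-temurin:17-jre          # base/runtime layer: updated occasionally
WORKDIR /app
COPY lib/ ./lib/                     # third-party dependencies: change rarely
COPY app.jar .                       # application code: changes every release
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```

With this ordering, a release that only changes `app.jar` invalidates just the final layer, so clients pull a small diff rather than the whole image.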
| mshekow wrote:
| I agree. I would say that the reason for using docker-slim
| should be motivated more by security considerations than by
| trying to reduce the overall image size. If you want to
| uphold the highest security, you would very regularly (e.g.
| every couple of days) invalidate the very first (or second)
| layer, because you would be re-pulling the latest base
| image, and additionally run something like "apt-get update &&
| apt-get upgrade".
|
| So, in the end, using docker-slim does make image downloads
| (and container start-up time) _less_ efficient in those
| specific cases where you are releasing new images very often
| (e.g. daily, or even multiple times per day), assuming that
| the base image is released less often (e.g. weekly or
| monthly, as is e.g. the case for Python).
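The "regularly invalidate the first layer" idea can be sketched as a Dockerfile pattern (a hypothetical example, not from the article). Because a changed build argument invalidates the cache for the RUN instructions that follow its declaration, bumping it in CI forces the upgrade step to re-run instead of being served from cache:

```dockerfile
FROM python:3.10-slim
# Bump REFRESHED_AT (e.g. docker build --build-arg REFRESHED_AT=$(date +%F))
# every few days to bust the cache so the upgrade below actually re-runs
ARG REFRESHED_AT=unset
RUN apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*
```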
___________________________________________________________________
(page generated 2022-02-20 23:00 UTC)