[HN Gopher] MinIO passes 1B cumulative Docker Pulls
___________________________________________________________________
MinIO passes 1B cumulative Docker Pulls
Author : edogrider
Score : 61 points
Date : 2022-09-21 18:41 UTC (4 hours ago)
(HTM) web link (blog.min.io)
(TXT) w3m dump (blog.min.io)
| WFHRenaissance wrote:
| I used MinIO exactly once, and overall it was a joy to use. Happy
| to hear it's getting some amount of traction.
| superb-owl wrote:
| This is a weird metric, but sadly one of the only ways of
| measuring OSS usage.
| vbezhenar wrote:
| It's a pity that S3 is becoming some kind of de facto standard
| object storage protocol. I think it's overcomplicated.
| diroussel wrote:
| In what way do you think it's overcomplicated? Genuinely
| interested.
|
| Not sure about Minio, but most S3 clones don't clone all API
| operations, just the ones they and their customers/users need.
| dividedbyzero wrote:
| And well-earned, too. MinIO is a really neat solution for keeping
| local data available via the same protocol as remote S3
| buckets. I do wish there were something below it, though,
| complexity-wise, just a little application that serves a
| directory as an S3-compatible bucket. MinIO can feel a little
| much for simple testing scenarios and the like.
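|
| For what it's worth, the bare-bones version of that idea fits in
| a few dozen lines. A minimal sketch, read-only and GET-object
| only (no auth, no ListObjects; the directory name and port are
| made up, and this is not anything MinIO ships):
|
|   # tiny_s3.py -- serve a local directory tree as a read-only,
|   # vaguely S3-shaped endpoint: GET /<bucket>/<key> returns the
|   # file at ./buckets/<bucket>/<key>. For simple local testing
|   # only; there is no signing, listing, or uploading.
|   import mimetypes
|   from http.server import BaseHTTPRequestHandler, HTTPServer
|   from pathlib import Path
|
|   ROOT = Path("./buckets").resolve()  # hypothetical data dir
|
|   class Handler(BaseHTTPRequestHandler):
|       def do_GET(self):
|           key = self.path.split("?", 1)[0].lstrip("/")
|           target = (ROOT / key).resolve()
|           # refuse path traversal and missing keys
|           if ROOT not in target.parents or not target.is_file():
|               self.send_error(404, "NoSuchKey")
|               return
|           body = target.read_bytes()
|           ctype = (mimetypes.guess_type(target.name)[0]
|                    or "application/octet-stream")
|           self.send_response(200)
|           self.send_header("Content-Type", ctype)
|           self.send_header("Content-Length", str(len(body)))
|           self.end_headers()
|           self.wfile.write(body)
|
|   if __name__ == "__main__":
|       HTTPServer(("127.0.0.1", 9000), Handler).serve_forever()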
| msarrel wrote:
| Thanks!
| snthpy wrote:
| What about Localstack?
|
| I haven't used it myself yet but discovered it the other day
| and want to use it for a test harness for an application I'm
| building.
|
| https://hub.docker.com/r/localstack/localstack
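|
| From the docs, pointing a client at it looks like it's mostly a
| matter of overriding the endpoint URL. A rough sketch with
| boto3, assuming LocalStack's default edge port 4566 and its
| dummy "test" credentials (adjust for your setup):
|
|   # Talk to a locally running LocalStack instead of real AWS.
|   import boto3
|
|   s3 = boto3.client(
|       "s3",
|       endpoint_url="http://localhost:4566",  # LocalStack edge
|       aws_access_key_id="test",
|       aws_secret_access_key="test",
|       region_name="us-east-1",
|   )
|
|   s3.create_bucket(Bucket="test-bucket")
|   s3.put_object(Bucket="test-bucket", Key="hello.txt",
|                 Body=b"hi")
|   print(s3.get_object(Bucket="test-bucket",
|                       Key="hello.txt")["Body"].read())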
| thesimon wrote:
| The paywall decisions are sometimes kinda odd though. With
| SES you can only use the older API version; SESv2 is Pro
| only.
| pixl97 wrote:
| Um, sorry that was me when I left a script running in an infinite
| loop.
| packetlost wrote:
| I wonder how many of those are violating the AGPL license
| rglullis wrote:
| Are you implying that merely using Minio is enough to force
| people to provide sources of their own services? If so, you
| need to be disabused of this misconception.
|
| To be in violation of the AGPL, someone would have to be
| providing a customized version of Minio to end users without
| providing the source.
| athorax wrote:
| This might be a bit pedantic, but the trigger isn't necessarily
| that they provide a modified minio to users; it could also (or
| instead) be that they are running a service that uses a
| modified minio under the hood, right?
| tpmx wrote:
| I wonder what Docker's peak bandwidth usage is, and how they make
| it work, financially.
|
| Just imagine the vast number of poorly cached CI jobs pulling
| gigabytes from Docker hub on every commit, coupled with naive
| approaches to CI/CD when doing microservices, prod/dev/test
| deployments, etc.
| 5d8767c68926 wrote:
| I have long thought there needs to be a much more trivial
| plug-and-play caching solution that works for the major
| services: npm, pypi, cargo, docker, etc. Right now, it is just
| annoying enough to set up that nobody worries about it until
| they are squandering terabytes of bandwidth or have been hit by
| an external outage.
| pepemon wrote:
| Please clarify what "poorly cached" means here and what your
| solution for better caching would be in this context. It looks
| like you're overestimating blindly.
| mh- wrote:
| a sibling comment to yours[0], from 'treesknees, addresses
| what a decent caching setup for this looks like.
|
| _> We've since started caching our images locally using
| Sonatype Nexus Repository Manager plus hosting our own
| registry for some simple things we used to be pulling from
| Docker Hub._
|
| [0]: https://news.ycombinator.com/item?id=32931044
| yjftsjthsd-h wrote:
| If you run two jobs right after each other and the machine
| tries to pull the image separately for each one and
| redownloads the whole thing, then that's a poor cache. Modern
| practices make this relatively easy to accomplish by
| accident.
| treesknees wrote:
| It's been 2 years now but Docker has started cracking down on
| this [1]. They introduced pull limits for anonymous and free
| accounts. Our company actually hit the anonymous limit
| regularly (several separate teams using docker/CI with the same
| public IP from our datacenter). We've since started caching our
| images locally using Sonatype Nexus Repository Manager plus
| hosting our own registry for some simple things we used to be
| pulling from Docker Hub.
|
| As for the financials, Docker now charges for enterprise use of
| Docker Desktop, which we've also started paying for. But I'm
| sure the bandwidth for running Docker Hub isn't cheap.
|
| [1] https://docs.docker.com/docker-hub/download-rate-limit/
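|
| For anyone curious where they stand, the doc in [1] also
| describes checking your remaining allowance by requesting an
| anonymous pull token and reading the rate-limit headers on a
| manifest request. A rough sketch of that check (endpoints and
| header names as documented there; they may change):
|
|   # Check Docker Hub's pull rate limit for the current IP or
|   # account by HEAD-ing the special ratelimitpreview manifest.
|   import requests
|
|   TOKEN_URL = ("https://auth.docker.io/token"
|                "?service=registry.docker.io"
|                "&scope=repository:ratelimitpreview/test:pull")
|   MANIFEST_URL = ("https://registry-1.docker.io/v2/"
|                   "ratelimitpreview/test/manifests/latest")
|
|   token = requests.get(TOKEN_URL).json()["token"]
|   resp = requests.head(
|       MANIFEST_URL,
|       headers={"Authorization": f"Bearer {token}"})
|   # e.g. "100;w=21600" means 100 pulls per 21600s (6 hours)
|   print("limit:    ", resp.headers.get("ratelimit-limit"))
|   print("remaining:", resp.headers.get("ratelimit-remaining"))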
___________________________________________________________________
(page generated 2022-09-21 23:00 UTC)