[HN Gopher] Show HN: S3mini - Tiny and fast S3-compatible client...
___________________________________________________________________
Show HN: S3mini - Tiny and fast S3-compatible client, no-deps,
edge-ready
Author : neon_me
Score : 203 points
Date : 2025-06-11 08:55 UTC (14 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| hsbauauvhabzb wrote:
| I found the words used to describe this jarring - to me it makes
| sense to have an S3 client on my computer, but less so client-
| side in a webapp. On further reading it makes sense, but
| highlighting what problem this package solves in the first few
| lines of the readme would be valuable for people like me, at
| least.
| willwade wrote:
| I have a strong suspicion this has been written with help from
| an LLM. The heavy use of emojis and hyper-confident language is
| the giveaway. Proof: my own repos look like this after they've
| had the touch of Cursor / Windsurf etc. Still, that doesn't
| take away from whether the code is useful or good.
| gchamonlive wrote:
| > to me it makes sense to have an s3 client on my computer, but
| less so client side on a webapp
|
| What do you mean by a webapp?
| neon_me wrote:
| He expected it to be an S3 client on a desktop/local machine.
| gchamonlive wrote:
| It's a TypeScript client, it seems. While you can bundle it
| in a webapp, TypeScript applications go beyond just web
| applications, which is why I was confused.
| neon_me wrote:
| tbh - English is not my mother tongue, so I do help myself
| with the copy and typos ... but if it feels uncomfy, please
| feel free to open a PR - I want it to be as reasonable as
| possible.
| JimDabell wrote:
| I think "for node and edge platforms" and "No browser support!"
| makes this pretty clear? Those are in the title and first
| paragraph.
| hsbauauvhabzb wrote:
| I think if you asked the average IT person what those
| buzzwords mean, you'd find the answer unclear...
| dev_l1x_be wrote:
| for Node.
|
| These are nice projects. I had a few rounds with Rust S3
| libraries, and having a simple low- or no-dependency client is
| much needed. The problem is that you start to support certain
| features (async, HTTP/2, etc.) and your nice no-dep project
| starts to grow.
| terhechte wrote:
| I had the same issue recently and used
| https://crates.io/crates/rusty-s3
| maxmcd wrote:
| also: https://crates.io/crates/object_store
| pier25 wrote:
| for JS
|
| > _It runs on Node, Bun, Cloudflare Workers, and other edge
| platforms_
| spott wrote:
| But not in the browser... because it depends on node.js apis.
| pier25 wrote:
| Cloudflare Workers don't use any Node apis afaik
| kentonv wrote:
| Cloudflare Workers now has extensive Node API
| compatibility.
| pier25 wrote:
| huh TIL!
| everfrustrated wrote:
| Presumably smaller and quicker because it's not doing any
| checksumming
| neon_me wrote:
| does it make sense or should that be optional?
| tom1337 wrote:
| Checksumming does make sense because it ensures that the file
| you've transferred is complete and what was expected. If the
| checksum of the file you've downloaded differs from the one the
| server gave you, you should not process the file further and
| should throw an error (the worst case would probably be a man-
| in-the-middle attack, less severe cases being packet loss, I
| guess).
| neon_me wrote:
| yes, you are right!
|
| On the other hand, S3 uses checksums only to verify the
| expected upload (on the write from client -> server) ... and
| surprisingly you can do that in parallel after the upload - by
| checking the MD5 hash of the blob against the ETag (*with some
| caveats)
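|
| For single-part PutObject uploads without SSE-KMS, the ETag is
| usually just the hex MD5 of the body, so that check is roughly
| this (a sketch in Node; the helper name is made up):
|
|     import { createHash } from "node:crypto";
|
|     // caveat: only holds for single-part, non-KMS-encrypted objects;
|     // multipart ETags are "md5-of-part-md5s-<partcount>" instead
|     function etagMatchesBody(body: Buffer, etag: string): boolean {
|       const md5Hex = createHash("md5").update(body).digest("hex");
|       return etag.replace(/"/g, "") === md5Hex;
|     }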
| supriyo-biswas wrote:
| > checksumming does make sense because it ensures that the
| file you've transferred is complete and what was expected.
|
| TCP has a checksum for packet loss, and TLS protects
| against MITM.
|
| I've always found this aspect of S3's design questionable.
| Sending both a content-md5 AND an x-amz-content-sha256
| header and taking up gobs of compute in the process,
| sheesh...
|
| It's also part of the reason why running minio in its
| single-node, single-drive mode is a resource hog.
| dboreham wrote:
| Well known (apparently not?) that applications can't rely
| on TCP checksums.
| alwyn wrote:
| In my view one reason is to ensure integrity down the
| line. You want the checksum of a file to still be the
| same when you download it maybe years later. If it isn't,
| you get warned about it. Without the checksum, how will
| you know for sure? Keep your own database of checksums?
| :)
| supriyo-biswas wrote:
| If we're talking about bitrot protection, I'm pretty sure
| S3 would use some form of checksum (such as crc32 or
| xxhash) on each internal block to facilitate the Reed-
| Solomon process.
|
| If it's about verifying whether it's the same file, you can
| use the ETag header, which is computed server-side by S3.
| Although I don't like this design, as it ossifies the
| checksum algorithm.
| everfrustrated wrote:
| You may be interested in this
| https://aws.amazon.com/blogs/aws/introducing-default-
| data-in...
| lacop wrote:
| I got some empirical data on this!
|
| Effingo file copy service does application-layer strong
| checksums and detects about 4.5 corruptions per exabyte
| transferred (figure 9, section 6.2 in [1]).
|
| This is on top of TCP checksums, transport layer
| checksums/encryption (gRPC), ECC RAM and other layers
| along the way.
|
| Many of these could be traced back to a "broken" machine
| that was eventually taken out.
|
| [1] https://dl.acm.org/doi/abs/10.1145/3651890.3672262
| vbezhenar wrote:
| TLS ensures that the stream was not altered. Any further
| checksums are redundant.
| tom1337 wrote:
| That's true, but wouldn't it still be required if you have an
| internal S3 service which is used by internal services and
| does not have HTTPS (as it is not exposed to the public)? I
| get that the best practice would be to also use HTTPS there,
| but I'd guess that's not the norm?
| vbezhenar wrote:
| Theoretically TCP packets have checksums, but they're fairly
| weak. So for HTTP, additional checksums make sense. Although
| I'm not sure if there are any internal AWS S3 deployments
| working over HTTP, or why they would complicate their protocol
| for everyone to help such a niche use case.
|
| I'm sure they have reasons for this whole request signature
| scheme over a traditional "Authorization: Bearer $token"
| header, but I never understood it.
| formerly_proven wrote:
| Because a bearer token is a bearer token to do any
| request, while a pre-signed request allows you to hand
| out the capability to perform _only that specific
| request_.
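|
| For example, with the v3 SDK (a sketch - bucket, key, and
| region are made up), the signature covers exactly this one GET
| and its expiry, so the URL can't be reused for any other
| request:
|
|     import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
|     import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
|
|     const s3 = new S3Client({ region: "us-east-1" });
|     // URL is valid for 15 minutes and only for GET of this key
|     const url = await getSignedUrl(
|       s3,
|       new GetObjectCommand({ Bucket: "my-bucket", Key: "report.pdf" }),
|       { expiresIn: 900 },
|     );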
| degamad wrote:
| Bearer tokens have a defined scope, which could be used
| to limit functionality in a similar way to pre-signed
| requests.
|
| However, the s3 pre-signed requests functionality was
| launched in 2011, but the Bearer token RFC 6750 wasn't
| standardised until 2012...
| easton wrote:
| AWS has a video about it somewhere, but in general, it's
| because S3 was designed in a world where not all
| browsers/clients had HTTPS and it was a reasonably
| expensive operation to do the encryption (like, IE6
| world). SigV4 (and its predecessors) are cheap and easy
| once you understand the code.
|
| https://youtube.com/watch?v=tPr1AgGkvc4, about 10 minutes
| in I think.
| huntaub wrote:
| This is actually not the case. The TLS stream ensures
| that the packets transferred between your machine and S3
| are not corrupted, but that doesn't protect against bit-
| flips which could (though, obviously, shouldn't) occur
| from within S3 itself. The benefit of an end-to-end
| checksum like this is that the S3 system can store it
| directly next to the data, validate it when it reads the
| data back (making sure that nothing has changed since
| your original PutObject), and then give it back to you on
| request (so that you can also validate it in your
| client). It's the only way for your client to have
| bullet-proof certainty of integrity the entire time that
| the data is in the system.
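|
| With the v3 SDK that end-to-end flow looks roughly like this
| (a sketch; bucket and key are made up):
|
|     import {
|       S3Client,
|       PutObjectCommand,
|       GetObjectCommand,
|     } from "@aws-sdk/client-s3";
|
|     const s3 = new S3Client({ region: "us-east-1" });
|     const body = Buffer.from("example payload");
|
|     // the SDK computes a SHA-256 over the body; S3 verifies it on
|     // write and stores it next to the object
|     await s3.send(new PutObjectCommand({
|       Bucket: "my-bucket",
|       Key: "data.bin",
|       Body: body,
|       ChecksumAlgorithm: "SHA256",
|     }));
|
|     // on read, ask S3 to return the stored checksum so the client
|     // can validate it end to end
|     const res = await s3.send(new GetObjectCommand({
|       Bucket: "my-bucket",
|       Key: "data.bin",
|       ChecksumMode: "ENABLED",
|     }));
|     console.log(res.ChecksumSHA256);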
| Spooky23 wrote:
| Not always. Lots of companies intercept and potentially
| modify TLS traffic between network boundaries.
| 0x1ceb00da wrote:
| You need the checksum only if the file is big and you're
| downloading it to disk, or if you're paranoid that some
| malware with root access might be altering the contents of
| your memory.
| arbll wrote:
| I mean, if malware is root and altering your memory, it's
| not like you're in a position where this check is
| meaningful haha
| lazide wrote:
| Or you really care about the data and are aware of the
| statistical inevitability of a bit flip somewhere along
| the line if you're operating long enough.
| nodesocket wrote:
| Somewhat related, I just came across s5cmd[1] which is mainly
| focused on performance and fast upload/download and sync of s3
| buckets.
|
| > 32x faster than s3cmd and 12x faster than aws-cli. For
| downloads, s5cmd can saturate a 40Gbps link (~4.3 GB/s), whereas
| s3cmd and aws-cli can only reach 85 MB/s and 375 MB/s
| respectively.
|
| [1] https://github.com/peak/s5cmd
| rsync wrote:
| s5cmd is built into the rsync.net platform. See:
|
| https://news.ycombinator.com/item?id=44248372
| tommoor wrote:
| Interesting project, though it's a little amusing that you
| announced this before actually confirming it works with AWS?
| neon_me wrote:
| Personally, I don't like AWS that much. I tried to set it up,
| but found it "terribly tedious", dropped the idea, and instead
| focused on other platforms.
|
| Right now, I am testing/configuring Ceph ... but it's open
| source! Every talented weirdo with free time is welcome to
| contribute!
| leansensei wrote:
| Also try out Garage.
| zikani_03 wrote:
| Good to see this mentioned. We are considering running it
| for some things internally, along with Harbor. The fact
| that the resource footprint is advertised as small enough
| is compelling.
|
| What's your experience running it?
| yard2010 wrote:
| Tangentially related: Bun has a built-in S3-compatible client.
| Bun is a gift; if you're using npm, consider making the switch.
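|
| Rough sketch of the API from memory (check the Bun docs; the
| endpoint and credentials here are made up):
|
|     import { S3Client } from "bun";
|
|     const client = new S3Client({
|       accessKeyId: "AKIA...",
|       secretAccessKey: "...",
|       bucket: "my-bucket",
|       endpoint: "https://s3.example.com", // any S3-compatible endpoint
|     });
|
|     const file = client.file("notes/hello.txt");
|     await file.write("hello world");
|     console.log(await file.text());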
| neon_me wrote:
| is there a way to wrap their s3 client for use in HonoJS/CF
| workers?
| oakesm9 wrote:
| No. It's implemented in native code (Zig) inside bun itself
| and just exposed to developers as a JavaScript API.
|
| Source code: https://github.com/oven-
| sh/bun/tree/6ebad50543bf2c4107d4b4c2...
| neon_me wrote:
| 10/10 Loving it (and how fast it is!) - it's just not the
| use case that fits my needs.
|
| I want maximum ability to "move" my projects among
| services/vendors/providers.
| ChocolateGod wrote:
| I tried to go the route of using Bun for everything
| (Bun.serve, Bun.s3, etc.), but was forced to switch back to
| Node.js proper and Express/aws-sdk due to Bun not fully
| implementing Node's APIs.
| biorach wrote:
| What were the most significant missing bits?
| eknkc wrote:
| The worst thing is issues without any visibility.
|
| The other day I was toying with the MCP server
| (https://github.com/modelcontextprotocol/typescript-sdk). I
| default to Bun these days, and the HTTP-based server simply
| did not register in Claude or any other client. No error
| logs, nothing.
|
| After fiddling with my code I simply tried node and it just
| worked.
| zackify wrote:
| It definitely works in Bun just fine. I have a production MCP
| server with auth built and running under Bun.
|
| Now, if you convert the request/response types to the native
| Bun server, it can be finicky.
|
| But it works fine using Express under Bun with the official
| protocol implementation for TypeScript.
|
| I'm actually writing a book about this too and will be using
| Bun for it:
| https://leanpub.com/creatingmcpserverswithoauth
| tengbretson wrote:
| Not sure about the specific underlying apis, but as of my
| last attempt, Bun still doesn't support PDF.js (pdfjs-
| dist), ssh2, or playwright.
| ChocolateGod wrote:
| localAddress is unsupported on sockets, meaning you can not
| specify an outgoing interface, which is useful if you have
| multiple network cards.
| pier25 wrote:
| Providing built-in APIs so you don't have to rely on NPM is
| one of the most interesting aspects of Bun IMO.
| greener_grass wrote:
| Can someone explain the advantage of this?
|
| If I want S3 access, I can just use NPM
|
| If I don't want S3 access, I don't want it integrated into my
| runtime
| pier25 wrote:
| Would you rather use an officially maintained solution or
| some random package by a random author who might abandon
| the project (or worse)?
| greener_grass wrote:
| The S3 packages on NPM are maintained by AWS
| pier25 wrote:
| Indeed but I was arguing about a general point.
|
| I'd be surprised if any of your Node projects had fewer than
| 100 total deps, of which a large number will be maintained by
| a single person.
|
| See Express for example. 66 total deps with 26 deps
| relying on a single maintainer.
|
| https://npmgraph.js.org/?q=express
|
| But even in the case of the official aws-sdk they
| recently deprecated v2. I now need to update all my not-
| so-old Node projects to work with the newer version.
| Probably wouldn't have happened if I had used Bun's S3
| client.
| greener_grass wrote:
| So let's put every package under the sun into the client?
|
| This approach does not scale. We should make NPM better.
| pier25 wrote:
| How do you make NPM better?
|
| BTW I'm not saying we should kill NPM. What I'm saying is we
| should reduce our dependence on random packages.
|
| Bun doesn't need to add everything into the core engine. E.g.
| when using .NET you still add plenty of official Microsoft
| dependencies from NuGet.
| zackify wrote:
| I came here to say the same thing.
|
| I'd rather ship oven/bun through Docker and have a 90 MB
| container vs using Node.
| akouri wrote:
| This is awesome! Been waiting for something like this to replace
| the bloated SDK Amazon provides. Important question-- is there a
| pathway to getting signed URLs?
| neon_me wrote:
| For now, unfortunately, no - signed URLs are not supported. It
| wasn't my focus (use case), but if you find a
| simple/minimalistic way to implement it, I can help you
| integrate it.
|
| From my helicopter perspective, it adds extra complexity and
| size, which could maybe be ideal for a separate fork/project?
| mannyv wrote:
| Signed URLs are great because they allow you to give third
| parties access to a file without them having to authenticate
| against AWS.
|
| Our primary use case is browser-based uploads. You don't want
| people uploading anything and everything, like the WordPress
| upload folder. And it's timed, so you don't have to worry
| about someone recycling the URL.
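|
| Server side, that's roughly this with the v3 SDK (a sketch -
| bucket, key, and limits are made up); the browser then POSTs
| the file straight to S3 using the returned url and fields:
|
|     import { S3Client } from "@aws-sdk/client-s3";
|     import { createPresignedPost } from "@aws-sdk/s3-presigned-post";
|
|     const s3 = new S3Client({ region: "us-east-1" });
|     const { url, fields } = await createPresignedPost(s3, {
|       Bucket: "uploads-bucket",
|       Key: "user-123/avatar.png", // fixed key, no arbitrary names
|       Conditions: [["content-length-range", 0, 5 * 1024 * 1024]], // <= 5 MiB
|       Expires: 600, // URL stops working after 10 minutes
|     });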
| ecshafer wrote:
| You can just use S3 via REST calls if you don't like their SDK.
| nikeee wrote:
| I've built an S3 client with similar goals to TFA, but it
| supports pre-signing:
|
| https://github.com/nikeee/lean-s3
|
| Pre-signing is about 30 times faster than the AWS SDK and is
| not async.
|
| You can read about why it looks like it does here:
| https://github.com/nikeee/lean-s3/blob/main/DESIGN_DECISIONS...
| e1g wrote:
| FYI, you can add browser support by using noble-hashes[1] for
| SHA256/HMAC - it's a well-done library, and gives you
| performance that is indistinguishable from native crypto on
| any scale relevant to S3 operations. We use it for our in-
| house S3 client.
|
| [1] https://github.com/paulmillr/noble-hashes
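|
| For example, the SigV4 signing-key derivation is just chained
| HMAC-SHA256, which with noble-hashes looks roughly like this
| (a sketch; import paths may differ between noble-hashes
| versions):
|
|     import { hmac } from "@noble/hashes/hmac";
|     import { sha256 } from "@noble/hashes/sha256";
|
|     const enc = new TextEncoder();
|     const mac = (key: Uint8Array, msg: string) =>
|       hmac(sha256, key, enc.encode(msg));
|
|     // kSigning = HMAC chain: date -> region -> service -> "aws4_request"
|     function signingKey(secret: string, date: string, region: string) {
|       let k = mac(enc.encode("AWS4" + secret), date);
|       k = mac(k, region);
|       k = mac(k, "s3");
|       return mac(k, "aws4_request");
|     }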
| continuational wrote:
| SHA256 and HMAC are widely available in the browser APIs:
| https://developer.mozilla.org/en-
| US/docs/Web/API/SubtleCrypt...
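|
| e.g. a minimal SHA-256 / HMAC-SHA-256 sketch - note that both
| calls are async, which is the catch raised in the reply below:
|
|     const enc = new TextEncoder();
|     const key = await crypto.subtle.importKey(
|       "raw", enc.encode("secret-key"),
|       { name: "HMAC", hash: "SHA-256" },
|       false, ["sign"],
|     );
|     const signature = await crypto.subtle.sign(
|       "HMAC", key, enc.encode("string-to-sign"),
|     );
|     const digest = await crypto.subtle.digest(
|       "SHA-256", enc.encode("payload"),
|     );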
| e1g wrote:
| SubtleCrypto is async, and the author specifically said
| they want their API to be sync.
| shortformblog wrote:
| This is good to have. A few months ago I was testing an S3
| alternative but ran into issues getting it to work. It turned
| out that AWS had made changes to the tool that had the effect
| of blocking non-first-party clients. Just sheer chance on my
| end, but I imagine that was infuriating for folks who have to
| rely on that client. There is an obvious need for a compatible
| client like this that AWS doesn't manage.
| _1 wrote:
| Same as this https://github.com/minio/minio ?
| carlio wrote:
| minio is an S3-compatible object store; the linked s3mini is
| just a client for S3-compatible stores.
| arbll wrote:
| No this is an S3-compatible client, minio is an S3-compatible
| backend
| EGreg wrote:
| You know what would be really awesome? Making a FUSE-based
| drop-in replacement for mapping a folder to a bucket, like
| goofys. Maybe a Node.js process could watch files, for
| instance, and back them up, or even better, it could back the
| folder and not actually take up space on the local machine
| (except for a cache).
|
| https://github.com/kahing/goofys
| arbll wrote:
| This seems completely unrelated to the goal of OP's library?
| EGreg wrote:
| It seems to be related to what a lot of people want, and it's
| low-hanging fruit now that he has this library!
| TuningYourCode wrote:
| You mean like https://github.com/s3fs-fuse/s3fs-fuse ? It's
| so old that even debian has precompiled packages ;)
| EGreg wrote:
| I was talking about goofys because it is not POSIX
| compliant, so it's much faster than s3fs-fuse
|
| But either one can only work with s3. His library works
| with many other backends. Get it? I'm saying he should
| consider integrating with goofys!
| cosmotic wrote:
| > https://raw.githubusercontent.com/good-lly/s3mini/dev/perfor...
|
| It gets slower as the instance gets faster? I'm looking at
| ops/sec and time/op. How am I misreading this?
| xrendan wrote:
| I read that as the size of the file it's transferring, so each
| operation would be bigger and therefore slower.
| math-ias wrote:
| It measures PutObject[0] performance across different object
| sizes (1, 8, and 100 MiB)[1]. It seems to be an odd screenshot
| of terminal text.
|
| [0] https://github.com/good-
| lly/s3mini/blob/30a751cc866855f783a1... [1]
| https://github.com/good-lly/s3mini/blob/30a751cc866855f783a1...
| arianvanp wrote:
| libcurl also has AWS auth with --aws-sigv4 which gives you a
| fully compatible S3 client without installing anything! (You
| probably already have curl installed)
| impulser_ wrote:
| Yeah, but that will not work on Cloudflare, Vercel, or any
| other serverless environment, because at most you only have
| access to Node APIs.
| busymom0 wrote:
| Does this allow generating signed URLs for uploads with size
| limit and name check?
| brendanashworth wrote:
| How does this compare to obstore? [1]
|
| [1] https://developmentseed.org/obstore/latest/
| linotype wrote:
| This looks slick.
|
| What I would also love to see is a simple, single-binary S3
| server alternative to MinIO. Maybe with a small built-in UI
| similar to the DuckDB UI.
| koito17 wrote:
| > What I would also love to see is a simple, single binary S3
| server alternative to Minio
|
| Garage[1] lacks a web UI but I believe it meets your
| requirements. It's an S3 implementation that compiles to a
| single static binary, and it's specifically designed for use
| cases where nodes do not necessarily have identical hardware
| (i.e. different CPUs, different RAM, different storage sizes,
| etc.). Overall, Garage is my go-to solution for object storage
| at "home server scale" and for quickly setting up a real S3
| server.
|
| There seems to be an unofficial Web UI[2] for Garage, but
| you're no longer running a single binary if you use this. Not
| as convenient as a built-in web UI.
|
| [1] https://garagehq.deuxfleurs.fr/
|
| [2] https://github.com/khairul169/garage-webui
| dzonga wrote:
| this looks dope.
|
| but has anyone done a price comparison of edge computing vs,
| say, your boring Hetzner VPS?
___________________________________________________________________
(page generated 2025-06-11 23:00 UTC)