[HN Gopher] Lambda on hard mode: serverless HTTP in Rust
___________________________________________________________________
Lambda on hard mode: serverless HTTP in Rust
Author : pierremenard
Score : 32 points
Date : 2024-03-16 18:03 UTC (4 hours ago)
(HTM) web link (modal.com)
(TXT) w3m dump (modal.com)
| nathancahill wrote:
| My first impression is that this is serverless done right. My
| second impression is I'd love to work there.
| akira2501 wrote:
| Lambda is fundamentally a request/response architecture and is
| meant to be tied together with several other AWS services. As
| such, I don't think Modal's offering is really comparable, nor is
| "lambda on hard mode" a particularly good description for what
| they've made.
|
| Perhaps "EC2 on easy mode" is more like it.
| jvanderbot wrote:
| I've deployed Lambdas written in Rust, mostly because I needed
| good C interop and didn't want to mess around with the AWS C++
| SDK (been down that road; without native JSON support it's a pain).
|
| The Rust Lambda SDK is just fine. You can write your endpoint
| in, e.g., Axum, then deploy straightaway.
|
| I run a few full-fledged APIs with just one Lambda this way.
| It's not hard mode at all.
| maerF0x0 wrote:
| Plus, because you can dockerize and run on Lambda, you can
| essentially run almost anything these days (most things I've
| encountered are reasonably easy to dockerize; I'm sure there are
| exceptions, but in the main it's easy).
| maerF0x0 wrote:
| > a request/response architecture
|
| Fundamentally it's an HTTPS server too; you can actually invoke
| them with direct HTTPS calls, no SDK required. [1]
|
| [1]: https://docs.aws.amazon.com/lambda/latest/dg/urls-
| invocation...
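Editor's note: as the comment says, a Lambda function URL is invoked with a plain HTTPS request, no AWS SDK involved. A minimal Python sketch of constructing such a request; the function URL below is hypothetical, and with the `AWS_IAM` auth type the request would additionally need SigV4 signing:

```python
import json
import urllib.request

# Hypothetical function URL; a real one is created via the Lambda
# function URL feature and has this general shape.
FUNCTION_URL = "https://abc123xyz.lambda-url.us-east-1.on.aws/"

payload = json.dumps({"name": "world"}).encode()

# A plain HTTPS POST is all it takes -- no SDK required.
req = urllib.request.Request(
    FUNCTION_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would send it over the network.
```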
| j4ah4n wrote:
| This looks amazing, nice write up!
|
| Anyone know if there's anything like this, but using Python and
| self-hostable?
| WiseWeasel wrote:
| You can migrate a Django project between serverless and self-
| hosted: https://dev.to/vaddimart/deploy-django-app-on-aws-
| lambda-usi...
| voiceblue wrote:
| As the other commenter points out, this offering isn't quite
| comparable to Lambda directly. This ends up comparing apples to
| oranges here and there, but overall I was able to get a good idea
| of the choices made and the tradeoffs involved. Nice work! I do
| have a complaint about the comparison table that shows 'convert
| HTTP to function calls' as an alternative to load balancers and
| reverse proxies: as we see later in the article, there is still a
| load balancer involved, and that table creates a false impression
| that there isn't.
| maerF0x0 wrote:
| Their quoted limits on AWS seem quite off.
|
| > As of 2024, they can only use 3 CPUs (6 threads) and 10 GB of
| memory
|
| Actually you get 1 vCPU (i.e. a hyperthread) per 1,769 MB of
| memory [1]
|
| [1]: https://docs.aws.amazon.com/lambda/latest/dg/configuration-f...
|
| > Response bandwidth is 2 Mbps
|
| This is shockingly low, and I wouldn't believe it without data.
| 16 Mbps (2 MB/s) would be more believable. In my experience you
| can reliably get 25 MB/s (~200 Mbps) in the network layer of
| things in AWS.
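Editor's note: the proportional CPU allocation the comment cites can be sketched as simple arithmetic. Numbers come from the linked AWS docs (1 vCPU per 1,769 MB, 10,240 MB memory ceiling); the function name is mine:

```python
# Lambda allocates CPU in proportion to configured memory:
# roughly 1 vCPU per 1,769 MB, up to the 10,240 MB ceiling,
# which works out to about 6 vCPUs at maximum memory.
MB_PER_VCPU = 1769
MAX_MEMORY_MB = 10240

def approx_vcpus(memory_mb: int) -> float:
    """Approximate vCPU share for a given Lambda memory setting."""
    if not 128 <= memory_mb <= MAX_MEMORY_MB:
        raise ValueError("Lambda memory must be between 128 and 10,240 MB")
    return memory_mb / MB_PER_VCPU

print(round(approx_vcpus(1769), 2))   # 1.0 vCPU at the documented threshold
print(round(approx_vcpus(10240), 2))  # 5.79 vCPUs at maximum memory
```

So the article's "3 CPUs (6 threads)" is roughly right only if you count physical cores; in vCPU (hyperthread) terms the ceiling is close to 6.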
| arrakeenrevived wrote:
| You're correct, it is 2 MB/s. The actual bandwidth from the AWS
| Lambda docs is:
|
| > Uncapped for the first 6 MB of your function's response. For
| responses larger than 6 MB, 2 MB/s for the remainder of the
| response.
|
| Some of the other numbers in the article are also incorrect.
| Lambda functions using containers can use a 10 GB container
| image (the article claims 50 MB), and container images are
| actually the faster/preferred way to do it these days.
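Editor's note: a back-of-the-envelope sketch of what that documented limit (first 6 MB uncapped, then 2 MB/s) means for large responses; the helper name and constants layout are mine:

```python
# Per the quoted Lambda docs: the first 6 MB of a response is
# uncapped, and anything beyond that streams at 2 MB/s.
UNCAPPED_MB = 6
THROTTLE_MB_PER_S = 2

def throttled_seconds(response_mb: float) -> float:
    """Seconds spent in the 2 MB/s throttled portion of a response
    (treating the first 6 MB as effectively instant)."""
    return max(0.0, response_mb - UNCAPPED_MB) / THROTTLE_MB_PER_S

print(throttled_seconds(6))   # 0.0 -- fits entirely in the uncapped window
print(throttled_seconds(50))  # 22.0 -- (50 - 6) MB at 2 MB/s
```

So a 50 MB response spends about 22 seconds in the throttled region, which is why the limit matters for anything beyond small payloads.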
| teaearlgraycold wrote:
| Something that turned me off of GPU lambda services is that they
| don't offer a way to run everything locally. I have an instance
| with a GPU where I do my dev. I'm running Postgres, the front
| end, back end (Node), and my GPU worker thread (Python) on that
| box. Replicate and other offerings do not let me run the full
| hosted environment on my machine (you can run the service
| container but it behaves differently as there's no included
| router/scheduler). It feels wrong to use a magically instantiated
| box for my GPU worker.
|
| Really all that I want is for Render.com to have GPU instances (I
| saw fly.io now has GPUs which is great, but I've both heard bad
| things about their stability and don't care for their rethinking
| of server architecture). Please will someone give me PaaS web
| hosting with GPU instances?
|
| I'm a simple man. If there's a fundamental shift in hosting
| philosophy I will resist that change. I have loved docker and
| PaaS as revolutions in development and hosting experiences
| because the interface is at some level still just running Linux
| processes like I do on my computer. You can tell me that now my
| code is hosted in a serverless runtime. But you need to give me
| that runtime so that I can spin it up on my own computer, on EKS,
| or whatever if need be.
___________________________________________________________________
(page generated 2024-03-16 23:00 UTC)