[HN Gopher] Should you use a Lambda Monolith, a.k.a. Lambdalith,...
___________________________________________________________________
Should you use a Lambda Monolith, a.k.a. Lambdalith, for your API?
Author : e2e4
Score : 53 points
Date : 2023-10-31 09:29 UTC (13 hours ago)
(HTM) web link (rehanvdm.com)
(TXT) w3m dump (rehanvdm.com)
| giorgioz wrote:
| We usually use a Lambda monolith: a single Lambda containing an
| entire REST JSON API node.js server. It works very well. I feel
| one of the reasons Lambda did not gain fast market traction is
| that all the initial tutorials focused on having an endpoint
| for each lambda, which is actually a special optimization edge
| case that can come later.
|
| Services like Vercel just wrap the Lambda and indeed deploy an
| entire server to the Lambda, and they gained a lot of users
| this way. Lambda monoliths are a good replacement for the old
| Heroku dynos, and they are super cheap for B2B SaaS startups
| with little traffic.
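|
| The wiring for this is tiny. A minimal sketch (illustrative,
| not our exact code) with the serverless-http package:
|
|     import express from 'express';
|     import serverless from 'serverless-http';
|
|     const app = express();
|     app.get('/users/:id', (req, res) => {
|       // a normal Express route; the whole API lives in one app
|       res.json({ id: req.params.id });
|     });
|
|     // serverless-http turns the API Gateway event into an HTTP
|     // request and feeds it to the Express app - no listener
|     export const handler = serverless(app);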
| intrasight wrote:
| >Lambda did not gain fast market traction is because...
|
| I'll extend this to all three major clouds, and dispute the
| assertion. Developers are smart and figured out that these
| serverless platforms are great for APIs even if the initial
| tutorials didn't present that scenario. Also, the platforms
| just got better at this over time while those early tutorials
| were somewhat cast in stone. I've not looked lately, but I
| would hope that they now have tutorials that show monolith
| scenarios.
|
| >Lambda monolith are super cheap for B2B SaaS startups
|
| Again, I'll extend this to Azure Functions and Cloud Run. Truly
| an amazing place we now sit. My first public-facing API
| required me to build a physical machine and place it in a colo
| facility in town. Now I just run a command or check in my
| code and my API is live.
| LudwigNagasena wrote:
| > I feel one of the reason why Lambda did not gain fast market
| traction is because all the initial tutorials focused on having
| an endpoint for each lambda
|
| A big reason is that lambdas were half-baked and gradually
| became better. They are still limited.
|
| Sending a binary blob? You have to convert it to base64.
|
| Streaming a response? You can only do that on the nodejs
| runtime.
|
| Establishing a short-term ws connection? Hard/impossible.
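|
| For example, returning a binary blob from a proxy-integration
| handler means the base64 dance (sketch; assumes API Gateway is
| configured for binary media, file path illustrative):
|
|     import { promises as fs } from 'fs';
|
|     export const handler = async () => {
|       const png = await fs.readFile('/tmp/chart.png');
|       return {
|         statusCode: 200,
|         headers: { 'Content-Type': 'image/png' },
|         // the payload itself must travel as base64 text
|         body: png.toString('base64'),
|         isBase64Encoded: true,
|       };
|     };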
| intrasight wrote:
| The question strikes me as odd. I can't imagine why one would
| split an API across separate isolated compute nodes. I've used
| isolated functions in the context of "message orchestration"
| scenarios, but I'd never do that for an API.
| anentropic wrote:
| Can't help thinking that a lot of best practice advice from AWS
| coincidentally maximises your AWS lock-in and billing spend.
| mastazi wrote:
| Here's an example: I have one workload that needs more RAM
| because it will parse some big CSV. So this is in its own
| lambda with different memory settings.
|
| Another example could be permissions granularity (each lambda
| can have its own IAM role). For example one of your lambdas
| could have access to something in Secrets Manager.
|
| There are also disadvantages to this approach, for example
| longer build times and deployment times.
| oulipo wrote:
| The issue for me (using GCloud functions) was cold boot
| latency. Is this more or less fixed now?
| andrewstuart wrote:
| Why lambda?
|
| Just run nodejs.
|
| The cloud has made otherwise smart people into unthinking drones.
|
| You don't need _any of the cloud stuff_, except a Linux virtual
| machine. Stop drinking the cloud Kool-Aid; it's making things
| much more complex and expensive for questionable gain.
|
| Learn how to, you know, run software on a Linux computer.
| fuzzy2 wrote:
| And suddenly you have to manage an entire Linux server. (-:
| andrewstuart wrote:
| Oh the burden.
| HyprMusic wrote:
| There are plenty of cases where lambda is a better fit than
| hosting a server. Just like there are plenty of cases where
| it's not an appropriate fit.
|
| By flat out refusing to consider it, you become as much of a
| "drone" as the people you call out.
| andrewstuart wrote:
| >> There are plenty of cases where lambda is a better fit
| than hosting a server.
|
| Such as?
|
| If it's regional latency then run a machine in the region.
| diesal11 wrote:
| Out of the box: lambda costs $0 when not in use, built-in
| autoscaling, automatic runtime security updates, stdout
| straight into CloudWatch, docker support... the list goes
| on.
| jddj wrote:
| Uncomplicated image conversion / thumbnailing / pdf report
| generation / whatever that you don't want to keep a whole
| container or separate long running task around for.
|
| That's all. The rest goes in VMs.
| theshrike79 wrote:
| You can have a dev team of 20, everyone has their own
| Lambda function running at all times so they can run their
| own branch without affecting others.
|
| When people go home, the dev Lambda cost instantly becomes
| zero.
|
| You can't do that with (virtual) servers; the cost is exactly
| the same 24/7. In some cases that's a good thing, in some
| it's not.
|
| You also can't trivially handle Hug of Death level load
| spikes with servers.
|
| Not to mention Lambda uptime vs everything else.
| ezfe wrote:
| Simple endpoints that have extremely low-volume traffic. If
| I have a project that has low usage and requires basic
| endpoints to handle server-side logic, lambdas are by far
| the cheapest way to do it and fully performant.
|
| I use Cloudflare Workers on hobby projects to provide server-
| side logic for things that have low volume, because the price
| is so low per user.
|
| It's true that Lambdas will become pricier with consistent
| high volume usage, but for extremely variable usage or low
| usage, lambdas are a great option.
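|
| A whole low-volume endpoint can be this small (modules-syntax
| Worker; names illustrative):
|
|     export default {
|       async fetch(request: Request): Promise<Response> {
|         // tiny bit of server-side logic for a hobby project
|         const { searchParams } = new URL(request.url);
|         const name = searchParams.get('name') ?? 'world';
|         return Response.json({ hello: name });
|       },
|     };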
| Cyriou wrote:
| Say I am building a marketplace which I expect will get many
| users. Also, we are only 2 people building it.
|
| I am pretty sure it is way more expensive to spend engineering
| time making it possible for server(s) to scale using kubernetes
| and stuff than to just use serverless.
|
| And later, with many users, we can estimate what would be
| better implemented manually to save money, because at that
| point it would actually be cheaper to employ a new engineer
| than to keep paying for serverless at that number of users.
|
| Don't you think serverless makes sense?
| andrewstuart wrote:
| Why would you run kubernetes?
|
| Do you really feel that serverless is somehow faster than a
| reasonably modern computer?
|
| Nodejs on a single machine can manage tens of thousands of
| connections if not hundreds of thousands.
|
| The vast, vast majority of sites would never be able to
| consume 50% of the computing power in a modern gaming PC.
| Cyriou wrote:
| I don't think it is faster, I think it is way more convenient.
| OK, say I don't need kubernetes.
|
| I would still do it in AWS, as it is like a walled garden: you
| don't need to deep dive into security measures, to manage
| authorizations manually, or to handle database sharding and
| optimizations; you get notification systems, ML, data streams,
| and everything just connects together.
|
| From there, using lambda is way faster than developing your
| own API with caching and making sure it is 100% optimized so
| that 10 thousand people can connect, when in fact you might
| change it tomorrow.
| brap wrote:
| It's not all or nothing, there are in-between approaches.
|
| For example you can write your portable app in Node.js (with
| or without containerization) and push it to a "serverless"
| PaaS, without running your own linux machine or worrying
| about k8s.
|
| e.g. Fly.io, Render, Railway...
| scarface_74 wrote:
| Yes I'm going to tell my CTO to use any of those solutions
| and immediately be laughed out of the room.
| brap wrote:
| You might not have a good CTO then
| dpats wrote:
| You can scale without learning kubernetes. Heroku is mature
| and easy to use.
| gadflyinyoureye wrote:
| Here's my use case for a lambda monolith. We have a distributed
| process. It's very low activity; probably only a few hundred
| calls a day. However, it's a few hundred million in value over
| the year. We have 30-some-odd services using NestJS to make
| Lambdas easier to manage and also to run locally along with
| localstack.
|
| If each of these services was 1 server, and each server cost
| around $20, we'd have to spend $600/month. I know for a fact
| that there are times when each service request can consume 80%
| of the memory. For safety we'd want at least twice the memory
| or number of units, which functionally doubles the cost. That
| $600 can easily balloon to $1,200 or $2,400 per month if I were
| to host the services. Now add another $600 per month for the
| test environment. So we'd be looking at $1,200-$3,000 a month
| if we kept things really lean.
|
| Now here's the lambda side. Most are about $3 per month (KMS is
| the lion's share of the cost) for test & prod. So $90 a month
| in service hosting. Yearly lambda spend is $1,080. Yearly self-
| hosting/EC2/droplets is $14,000-36,000. Now repeat this pattern
| across 5 teams with similar approaches.
|
| The savings add up for us. Now is this a universally true
| statement? Nope. But there are valid reasons.
|
| Fun fact: if we ever needed to move off of lambdas, we can,
| because NestJS sits on Express. As long as we can reach the
| datastores, there is minimal work to move.
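|
| For reference, the NestJS-on-Lambda wiring is roughly the
| standard serverless-express pattern (sketch; AppModule is your
| own module):
|
|     import { NestFactory } from '@nestjs/core';
|     import serverlessExpress from '@vendia/serverless-express';
|     import type { Handler } from 'aws-lambda';
|     import { AppModule } from './app.module';
|
|     let cached: Handler;
|
|     async function bootstrap(): Promise<Handler> {
|       const app = await NestFactory.create(AppModule);
|       await app.init();
|       // NestJS sits on Express, so hand the Express instance
|       // to the lambda adapter
|       return serverlessExpress({
|         app: app.getHttpAdapter().getInstance(),
|       });
|     }
|
|     export const handler: Handler = async (event, ctx, cb) => {
|       cached = cached ?? (await bootstrap());
|       return cached(event, ctx, cb);
|     };
|
| Moving off lambda later is mostly deleting this file and
| calling app.listen() instead.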
| diesal11 wrote:
| My company doesn't make more $$$ because I decided to spend a
| day setting up an EC2 over Lambda or a similar PaaS solution.
| If anything I've lost valuable time that could be spent
| elsewhere. Especially when stuff starts going wrong or I have
| to schedule time to manually apply OS/runtime security updates.
|
| > The cloud has made otherwise smart people into unthinking
| drones.
|
| The cloud has allowed smart people to focus their time on
| unsolved problems, not waste hours/days setting up linux for
| the umpteenth time.
| andrewstuart wrote:
| It's fiction that configuring the cloud is easier than
| configuring a computer.
|
| I've worked at big companies with smart people who burn days
| and weeks trying to get IAM, gateways, vpcs, firewalls and
| lambda to play together. Let alone the ongoing nightmare of
| ops/dev interaction.
|
| Complete cloud fiction.
|
| The worst problem is the giant pile of cloud spaghetti you
| end up with and no one has any idea what connects to what and
| what depends on what. It's easier to just accumulate more and
| more resources, which cloud companies love.
|
| Just run a computer, it's easier.
| alex_lav wrote:
| > It's fiction that configuring the cloud is easier than
| configuring a computer.
|
| > Just run a computer, it's easier.
|
| Statements made confidently while also being totally
| untrue.
|
| > I've worked at big companies with smart people who burn
| days and weeks trying to get IAM, gateways, vpcs, firewalls
| and lambda to play together
|
| Working with incompetent people is not an excuse.
| diesal11 wrote:
| To be blunt that sounds like operator inexperience. Throw
| someone who's spent their life setting up Windows servers
| on a Linux box and you'd hear similar resentments.
|
| At the end of the day you still need to configure the
| instances for things like auto scaling, security patches,
| logging and so on. IAM & VPC still come into the mix when
| running on EC2, so you've avoided nothing.
| jvanderbot wrote:
| > It's fiction that configuring the cloud is easier than
| configuring a computer.
|
| We're arguing opinions and trying to apply logic.
|
| Some people find lambda easier and it must be true that
| lambda fits certain workloads better. Some people prefer
| VMs or on-prem or other long-running services. I prefer
| both in different cases.
|
| > The worst problem is the giant pile of cloud spaghetti
| you end up with and no one has any idea what connects to
| what and what depends on what.
|
| Yes, it takes discipline to use the best tool for the job.
| "You should do X for everyting" is not the right approach,
| however. This argument is moot.
|
| Right now I support:
|
| * Lambdas for some very expensive infrequent number
| crunching
|
| * Lambda-like on edge for fast response services that
| require low latency
|
| * VMs for always-on services
|
| * Computer in a closet for backups, logging, metrics, etc.
| scarface_74 wrote:
| Your people may be "smart". But they aren't "experienced".
|
| Did they use the CDK or even SAM?
| baq wrote:
| I've experienced ansible for managing a fleet of several
| hundred on-prem servers and now I'm experiencing CDK for
| managing a large infrastructure.
|
| Both suck real bad.
|
| Infrastructure is hard, thankless work. Complexity blows
| up whatever you do.
| scarface_74 wrote:
| You can literally tell ChatGPT to create a CDK TypeScript
| app that deploys a Lambda + API Gateway where the lambda
| handles GET requests against a DynamoDB table, with
| permission to read and write to the table, and it will get
| you 95% of the way there.
|
| Edit: I just did it with ChatGPT 4 expecting it to just
| create the CDK app. It actually created inline Node
| sample code as part of the construct for the actual
| lambda to read from the database.
|
| The last time I did that as a sample to show other
| developers, I still had a little additional prompting to
| do.
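|
| For context, the generated app is roughly this (CDK v2 sketch;
| names and paths are illustrative):
|
|     import * as cdk from 'aws-cdk-lib';
|     import { aws_apigateway as apigw, aws_dynamodb as dynamodb,
|              aws_lambda as lambda } from 'aws-cdk-lib';
|
|     const app = new cdk.App();
|     const stack = new cdk.Stack(app, 'ApiStack');
|
|     const table = new dynamodb.Table(stack, 'Items', {
|       partitionKey: { name: 'pk',
|                       type: dynamodb.AttributeType.STRING },
|     });
|
|     const fn = new lambda.Function(stack, 'ApiFn', {
|       runtime: lambda.Runtime.NODEJS_18_X,
|       handler: 'index.handler',
|       code: lambda.Code.fromAsset('dist'),
|       environment: { TABLE_NAME: table.tableName },
|     });
|
|     // the one-liner that writes the IAM policy for you
|     table.grantReadWriteData(fn);
|
|     // proxy every route to the single function
|     new apigw.LambdaRestApi(stack, 'Api', { handler: fn });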
| icedchai wrote:
| I've found that "permissions" are what bites most
| developers. It's always either IAM or security groups...
| scarface_74 wrote:
| Using the ChatGPT prompt I mentioned above, it did the
| permissions correctly:
|     table.grantReadWriteData(lambdaFunction)
|
| Just as an experiment, I've thrown Lambda code I've
| written from scratch into ChatGPT and asked it what
| permissions it needed. It got it right.
|
| ChatGPT is well trained on everything AWS related. It can
| transform CloudFormation to idiomatically correct CDK or
| Terraform.
|
| I hate to say this because it sounds like an appeal to
| authority. But I really want to set context. I used
| ChatGPT for projects while I was working at AWS ProServe
| and since I left. They were generic utility scripts with
| no proprietary business related code.
| icedchai wrote:
| Neat! I personally hate writing Terraform (Does anyone
| like it?)
| withinboredom wrote:
| They probably do make more money... actually. At least in net
| profit.
|
| Cost in time to set up a bare metal, production quality k8s
| cluster with all the bells and whistles from the cloud: 2
| weeks, and a skill I will have forever. In fact, the second
| cluster took 4 hours.
|
| Monthly cost: $240 per month.
|
| Time to spin up a new worker, using cloud compute for elastic
| load beyond base load: 5 minutes.
|
| Base load capabilities: 120ish cores, 1tb ram, ridiculous
| terabytes of storage.
|
| Cost to run on managed k8s in the cloud: $5-20k per month.
| diesal11 wrote:
| > They probably do make more money... actually. At least in
| net profit.
|
| It's my company so I think I'd know :)
|
| Also what exactly are your estimates here? Where are you
| getting that kind of capacity for $240 a month?
| theamk wrote:
| We are talking lambdas here, not 1TB-RAM workers.
|
| I had to run an authenticated webhook forwarder outside of
| our firewall. Yes, I could have set up a real machine,
| but I made a lambda instead. It's costing us less than
| $2/month and I spend zero energy on maintenance, security
| checks, etc. And all config is in a single git repo that
| anyone can read and understand - there is zero chance of
| someone ssh'ing into the server for a quick fix, then
| forgetting to record what they did.
|
| Real machines are great for heavy loads, but you just
| cannot beat lambdas for the lightweight stuff.
| jvanderbot wrote:
| I have done both, for many years.
|
| I hate managing a server, unless it's running a DB, relay, or
| other long-running service.
|
| For most of my backend work, it's "Please calculate this", and
| that fits so well into a lambda that I don't understand why
| people hate it so much.
|
| Zero maintenance, zero costs until used, much less security to
| manage, etc. It all works very well for me.
|
| I still run all kinds of stuff on a linux server, but the
| comment seems a little gatekeepy. There's nothing strictly
| better (or even simpler) about running on a server.
| scarface_74 wrote:
| How does your proposal add business value - ie "how does it
| make the beer taste better?"
| solatic wrote:
| If you want to run node, why a full-blown VM and not a
| container?
|
| > Learn how to, you know, run software on a Linux computer.
|
| I know how to, I run NixOS at home. If you own the VM, you own
| the maintenance on the VM. Optimizing for the cost of setup is
| usually wrong, most people are better off optimizing for
| running costs and maintenance.
| meowtimemania wrote:
| I really like lambda for all my personal projects. A Linux
| virtual machine means paying a monthly fee. For me, Lambda is
| free.
|
| The main benefit I see to a Linux machine is not having to
| deal with deprecated runtimes. With Lambda, every few years I
| have to redeploy my Lambdas when their runtime gets
| deprecated.
| otteromkram wrote:
| What's with all the anti-microservices posts lately? Is there a
| mounting campaign to put software devs/engineers out of work?
| philwelch wrote:
| At least someone finally admits microservices are just a jobs
| program for software devs.
| TYPE_FASTER wrote:
| If you're going to put everything in a single lambda function
| that's continually serving HTTP requests, you might as well look
| at ECS.
| theshrike79 wrote:
| ECS doesn't a) scale to zero or b) scale up to infinity near-
| instantly.
|
| Source: I've run both a Lambda monolith and ECS in production
| and would pick Lambda any day.
| baq wrote:
| Do you scale to zero for the resume achievement or to
| actually save money?
| grose wrote:
| I like to use it for little side projects, then I can leave
| them alone forever and not have to shut them down.
| theshrike79 wrote:
| To actually save money. Mostly due to not needing to have a
| 24/7 team managing it.
|
| AWS Lambda NEVER goes down. And if it does, it's most
| likely someone messing with DNS or backbone routing again
| and half the world is broken anyway.
|
| It does need skill though, the billing model isn't as clear
| as "you pay me X euros a month for a server, I don't care
| what you do with it", but it's not dark magic either.
| goostavos wrote:
| Just FYI: You can totally scale ECS down to zero. Our ECS
| bill hovers around $1/mo because we spin it up on demand
| based on queue size (It takes a painfully long time to spin
| up, though).
| mastazi wrote:
| > ECS doesn't [...] b) scale up to infinity near-instantly.
|
| Yes, we have some very spiky workloads and we got bitten by
| that; subsequently we moved away from ECS for this exact
| reason.
| nickdothutton wrote:
| The distant hum of hornets grows louder as you approach the HN
| thread. There is no substitute for architecting your service
| for the market and with knowledge of the computing machinery
| (which boils down to physics) that will underpin it. By which
| I mean: understanding how it will be used, how it will grow,
| which parts of it will require "structural strength" and which
| are "decorative", and how what you are developing will align
| both with the customers' requirements and with what the
| physics of available computing machinery can provide.
| cr3ative wrote:
| Ah, I see
| https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...
| is still working.
| report-to-trees wrote:
| I would argue that instead of starting with a Lambda Monolith and
| splitting routes out when they need to scale separately, you
| should be starting with an actual monolith and using Lambdas
| _only_ when a single route needs to scale separately (in a way
| that fits lambda). The Lambda Monolith is an unnecessary
| architecture as far as I'm concerned.
| theshrike79 wrote:
| So a separate server running a monolith is not "unnecessary
| architecture", but a simple Lambda function is?
|
| With a Lambda function you can have a dozen different versions
| of the same code running simultaneously with zero extra cost
| and none of them will affect each other's performance. Every
| one of them will be versioned and you can instantly roll back
| to any version or pick any version to be "production".
| dgroshev wrote:
| You can't if you have even moderately complex storage (like
| an SQL database). There is only one version of that, and
| while you can make sure that you can run one other version in
| parallel, it's just one version and a lot of extra
| complexity.
| theshrike79 wrote:
| It depends on your schema. I've worked with systems where
| we had multiple clients in a single system; the data was
| separated with per-client views.
|
| And it also depends on whether the database is MSSQL
| ($2k/month on AWS) or something like Aurora.
| report-to-trees wrote:
| If you need multiple versions of something running
| simultaneously then ya lambda might be simpler.
|
| In my experience, running a single monolith server will be
| much simpler than 20+ lambda "monoliths" that call each
| other. I think the simplicity of lambdas vs a persistent
| server looks good on paper but falls apart when you have
| multiple times more deployments to manage.
| theshrike79 wrote:
| No no, you're doing it wrong if you've got Lambdas calling
| Lambdas. That's not a monolith, that's a shitty
| microservice that'll get really expensive real fast :D
| contravariant wrote:
| Surely if you're reimplementing an HTTP server inside a function
| running on an HTTP server then something somewhere has gone
| horribly wrong.
| jvanderbot wrote:
| I _just recently_ ported a hobby fastapi project into a lambda
| container and I can't speak highly enough about it. My costs
| went to zero, deployment is just as easy, performance went up
| (lambda is pretty good once warm), and my local dev build
| matches the API perfectly now. I could get some of that on
| vercel, but not the docker containerization, which makes all
| the local/remote stuff work the same.
| Spivak wrote:
| Do you feel the same about a load balancer being an HTTP server
| proxying requests to another HTTP server?
|
| Because it doesn't have to be this way; there's php-fpm and
| such that could eliminate the http->http step.
| contravariant wrote:
| I'm confused, how does php-fpm connect someone to the right
| machine? It just looks like basic multiprocessing to me,
| which should be a basic feature of any decent http server.
|
| Unless you're referring to the 'load balancer' in the
| comments where someone builds their own multiprocessing by
| launching multiple servers and putting nginx in front of it.
| I mean do what you need to do but at least be aware you're
| hacking together something that should be a basic feature of
| the http server.
|
| If you wish to push me I'd be forced to admit that I don't
| like load balancers either, but that's because they're doing
| routing on the application layer which is not ideal and
| creates a single point of failure. That ship has sailed
| though.
| Aeolun wrote:
| Maybe when you decided to use lambda in the first place, for
| sure. But when you are stuck with lambda, consolidation is
| best.
| scarface_74 wrote:
| I can tell you a good reason for it. At one company I worked
| for, we had an API that we sold external access to as the
| back end for large health care providers' websites and
| mobile apps.
|
| But we also used it internally for batch processes.
|
| In one case latency was important so we hosted the app in an
| auto scaling ECS Fargate cluster behind Nginx and in the other
| case throughput was important so we could just deploy the same
| API to Lambda.
|
| Yes I know about reserved concurrency.
| time0ut wrote:
| The way I have done this in the past is to use an adapter
| library that maps from the incoming lambda event to whatever
| framework you are using (like fastify or jax-rs, for example).
| Your code looks like the normal http request handler stuff you
| would normally write, but it isn't running an http server
| inside the lambda, beyond whatever stuff AWS does under the
| covers of course. I've deployed many Lambda-based HTTP APIs in
| this pattern and it's my preferred approach to using Lambda,
| though I would rather target a standard container orchestrator
| if available.
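|
| For the fastify case the adapter wiring is roughly (sketch,
| using @fastify/aws-lambda):
|
|     import Fastify from 'fastify';
|     import awsLambdaFastify from '@fastify/aws-lambda';
|
|     const app = Fastify();
|     app.get('/ping', async () => ({ pong: true }));
|
|     // maps the API Gateway event onto fastify's router; no
|     // TCP listener ever starts inside the lambda
|     export const handler = awsLambdaFastify(app);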
| snicker7 wrote:
| I find managing AWS API Gateway too cumbersome. We use a
| proxy lambda that handles authentication and has routing
| logic to other lambdas (or SQS, or a Step Functions state
| machine).
| theshrike79 wrote:
| Yes, you should. Especially if you want to scale up without tons
| of extra work.
|
| The model of "every function should be a separate Lambda" is
| just moronic - I've seen people going all-in on this model run
| into hard AWS limits.
|
| Instead, build a single executable/library and deploy it as a
| Lambda function.
|
| Now you have automatic versioning: just push and it's a new
| version. You can label versions and invoke them by label.
|
| Deploying to prod is just swapping the prod label to a new
| version. Want to roll back? Swap it back.
|
| Credentials: I ran a top-10 mobile game backend on AWS Lambda
| with zero issues using a C# monolith.
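|
| The label mechanics are Lambda versions + aliases. The prod
| swap is one call (sketch with the JS SDK v3; function name and
| version numbers illustrative):
|
|     import { LambdaClient, UpdateAliasCommand }
|       from '@aws-sdk/client-lambda';
|
|     const client = new LambdaClient({});
|
|     // point the "prod" alias at the new version; rolling back
|     // is the same call with the previous version number
|     await client.send(new UpdateAliasCommand({
|       FunctionName: 'my-api',
|       Name: 'prod',
|       FunctionVersion: '42',
|     }));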
| Aeolun wrote:
| AWS limits are there to be worked around! Have too many lambdas
| to fit in a single CloudFormation stack? Just split it up
| into 'per family' stacks. Enjoy your nested CloudFormation
| stacks.
|
| Oh, you say you have more than 200 endpoints in your REST API
| gateway, and suddenly your CloudFormation stops working? Yes,
| unfortunately CF only loads 8 pages of resources in
| alphabetical order when starting, and your root resource
| starts with a 'Y'.
|
| No problem, implement a new custom resource that just creates
| and discards API gateways until you have one with your desired
| root resource starting with 'A-F'. Did I mention the 'delete a
| rest api' API is limited to a single call per minute? The fun
| doesn't end. Now you may be able to grow to 600 resources
| before you run into the limits once again.
|
| You say you think this is bollocks? Of course, just switch to
| an Application Load Balancer. A 100-rule maximum means you'll
| only need to nest about 4 of them to fit all your API
| endpoints.
|
| And people wonder why some give up on the cloud. Any $5 VPS has
| higher limits than any of these AWS services...
| theshrike79 wrote:
| 200 endpoints? Sheeesh. I've never seen anything over 50.
|
| That must've been a proper monolith then? =)
| rco8786 wrote:
| I worked on a small SaaS that deployed in this way. The AWS bill
| was about 10x what it would have been if they had just deployed
| nodejs on an EC2 instance.
| lsaferite wrote:
| There's obviously a static load level where it makes more sense
| to have an always-on server vs. a FaaS and you need to monitor
| that boundary point.
|
| The point of doing something like this "Lambda Monolith" is
| that transitioning to an always-on server is trivial. I've been
| doing this exact setup for a while and our static costs
| basically evaporated. Seeing your costs drop below a
| dollar/month on a project you are building is amazing. As soon
| as one of our projects is large enough, we move it to a
| dedicated instance or cluster. This allows us to delay that
| point until we have enough revenue to justify the costs.
| fidrelity wrote:
| I just can't get my head around this argument of 'scales to
| zero'.
|
| If you have one or more professional developers working on a
| project how can you be bothered about an extra $20-$200/month
| when you know you'll also be spending a couple of engineering
| hours (worth more than $200) on migrating later?
| rco8786 wrote:
| always-on EC2 instances start at like $20/mo? I don't
| understand this argument whatsoever.
|
| > Seeing your costs drop below a dollar/month on a project
| you are building is amazing.
|
| Honestly, this just means you don't have any users
| lsaferite wrote:
| > Honestly, this just means you don't have any users
|
| Which is where many small ideas start.
|
| My buddy had an idea for a few SaaS products and I've
| helped him develop them. The initial costs were small, but
| given it was an expense with no income in the near future,
| finding a way to scale to zero was the only way this became
| something he'd consider. It's got _very_ sporadic use
| currently and his bill for each service is literally around
| $1/month right now. Even the cheapest instance would be
| more than he's paying.
| knodi wrote:
| Also, lambda will limit the tech you can use in the backend,
| such as stacks that need long-lived connections (redis,
| postgres).
| bob1029 wrote:
| Yes, but not only API...
|
| We figured out how to get Azure Functions to serve a complete
| webapp without any extra crap wrapped around it. No
| gateways/proxies/etc. One function that serves the root URL[0]
| and has conditionals for GET/POST methods. Beyond this point,
| anyone with old-school mastery of HTML/CSS/JS can get the rest of
| the job done with basic-ass SSR and multipart form posts.
|
| For data, we just use Azure SQL Server. Authentication is like
| sticking legos together now. 100% managed identities, no magic
| strings, etc. Our functions get the user's claims principal right
| in the arg list. We have zero lines of authentication code in our
| latest products.
|
| This stuff is absolutely the future for a vast majority of boring
| old business.
|
| One thing to think about: If your FaaS offering is capable of
| serving content-type of application/json, is it not also capable
| of serving content-type of text/html? Do your providers go out of
| their way to prevent you from making a simple solve on 99% of B2B
| bullshit?
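|
| (Our functions are C#, but to illustrate the point: in the
| Node.js v3 model, serving text/html is roughly this sketch.)
|
|     // function.json binds an httpTrigger "req" and an http
|     // output "res"; the same function can branch on req.method
|     module.exports = async function (context, req) {
|       context.res = {
|         status: 200,
|         headers: { 'Content-Type': 'text/html' },
|         body: '<!doctype html><h1>Hello from a Function</h1>',
|       };
|     };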
|
| [0]: Note that serving the root URL on an Azure Function is an
| undocumented hack-around. Worst case, we would serve the /app
| route and our users (B2B) wouldn't really care. My suspicion
| is that Microsoft noticed our use case stole a LOT of thunder
| from a wide array of other Azure offerings (API Gateway, Front
| Door, et al.), so making it slightly more complicated (at
| first glance) is likely intentional.
| scarface_74 wrote:
| Why would you ever do this instead of just storing your static
| assets in the Azure equivalent of S3 and let it do the heavy
| lifting along with the CDN?
| bob1029 wrote:
| We are B2B at the scale of 1-10k users. There is no "heavy
| lifting" to do in our contexts of use. Serving static assets
| along with the SSR content is totally acceptable. They aren't
| even served as separate files. We inline everything into 1
| final HTML payload always.
|
| Our largest assets are SVG logos that our clients bring to
| the party. We do work with dynamic PDF documents, but these
| are uncachable for a _number_ of reasons.
|
| Hypothetically, if we had millions of users and static asset
| IO was _actually causing trouble_, we'd consider using a CDN
| or other relevant technology.
| Mortiffer wrote:
| I bet you were running C#. I've been pulling my hair out with
| the Python support in Azure Functions lately (it starts with
| no local emulation on Mac, but on the plus side this got me to
| start using VS Code server and https://github.com/coder/coder).
| bob1029 wrote:
| Yes. C#/V4 functions. I've been working with the in-process
| model, but close to validating our app works on the isolated
| worker model too.
| anon25783 wrote:
| Sounds like a cool hack, but why not just self-host with on-
| premises hardware, or rent a cloud server from a hosting
| provider like e.g. Digitalocean? Then you wouldn't have to
| worry so much about what Microsoft wants, no?
| icedchai wrote:
| I've seen it both ways. A lambda monolith is much simpler to work
| with.
| hk1337 wrote:
| No. It's a gross misuse and misunderstanding of serverless and
| lambda functionality.
| meowtimemania wrote:
| How so? A Lambdalith means simpler, faster deployments, and
| it's easier to run locally.
| mastazi wrote:
| with lambdalith you miss out on some things like granular IAM
| roles and granular Memory/Cores config. Also with lambdalith
| you probably end up needing an application framework as
| opposed to just using mostly vanilla features of your
| language of choice. Of course I agree that lambdalith has
| advantages as well.
| meowtimemania wrote:
| In most cases I prefer monolithic lambda functions. It means
| faster deploys, and it's easier to reason about.
|
| I create separate lambdas only when I have some large dependency
| that can easily be separated from the rest of the application.
| lstodd wrote:
| OMG. An application server reinvented.
|
| Another couple of years and they will reinvent microservices on
| top of that monolith thingy.
| kennu wrote:
| Typical usage is one Lambda per service. Then use switch
| (event.routeKey) { case 'GET /path': ... } to serve the
| different API Gateway routes attached to the Lambda. It
| doesn't have to be very complicated.
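|
| Fleshed out a little (HTTP API payload v2 sketch; routes are
| illustrative):
|
|     export const handler = async (event: any) => {
|       switch (event.routeKey) {
|         case 'GET /users/{id}':
|           return {
|             statusCode: 200,
|             body: JSON.stringify({
|               id: event.pathParameters.id }),
|           };
|         case 'POST /users':
|           return { statusCode: 201, body: '{}' };
|         default:
|           return { statusCode: 404, body: 'not found' };
|       }
|     };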
| mastazi wrote:
| There is a third approach besides "lambdalith" and "one lambda
| per route", it's making one lambda for a group of routes, for
| example you could group them by first segment of their path (all
| routes starting with /users/* in one lambda, all /orders/* in
| another, etc). Then, inside the lambda handler, you can use a
| routing library to select the right method.
|
| This worked for us because it mitigated the shortcomings of the
| other two approaches, for example:
|
| - one lambda per route: very long build times and long deploy
| times (due to the high number of lambdas)
|
| - lambdalith: lack of permission granularity (as opposed to
| creating different IAM roles for different lambdas); lack of
| granularity for memory/cores configuration means you are
| missing out on cost optimisation; also, as the API grows, it
| soon becomes necessary to adopt some framework like Express or
| Fastify to keep things tidy.
|
| EDIT: reworded last paragraph for clarity.
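|
| To make the grouping concrete, here is a sketch of what the
| /users/* lambda's dispatcher can look like (hand-rolled here;
| a routing library does the same job):
|
|     // payload v2 event from an HTTP API; one lambda owns
|     // every route under /users/*
|     export const handler = async (event: any) => {
|       const { method, path } = event.requestContext.http;
|       if (method === 'GET' && /^\/users\/[^/]+$/.test(path)) {
|         const id = path.split('/')[2];
|         return { statusCode: 200,
|                  body: JSON.stringify({ id }) };
|       }
|       if (method === 'POST' && path === '/users') {
|         const user = JSON.parse(event.body ?? '{}');
|         return { statusCode: 201,
|                  body: JSON.stringify(user) };
|       }
|       return { statusCode: 404, body: 'not found' };
|     };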
| danielrhodes wrote:
| I've used Lambda at some scale. I think Lambda is pretty great.
| You can really be batting far above your weight by utilizing AWS
| services. It's especially great and cost effective for spiky
| workloads, with the exception of a few services like DynamoDB.
| But there are trade offs that I learned. You might not see this
| with a small setup, but you definitely will the bigger you get.
|
| A few that come to mind:
|
| 1) AWS services are not easy to develop with locally. This
| incentivizes a lot of testing in production, which in turn can
| slow down development speed and increase defects. Yes there are
| some wrapper libraries and helpers, but it's not 1:1. (This same
| thing applies to Cloudflare workers as well).
|
| 2) Lambdas have constraints such as execution time limits, and a
| lack of fine-grained control over resource usage. This makes it
| hard to use things like long-lived requests and WebSockets. But
| it can also mean you could end up paying a lot more compared to a
| fixed price setup, such as an instance in EC2. The flip side is
| also true: it can be an incredibly good deal as well.
|
| 3) There's a lot of tooling and knowledge around Lambda which
| takes some time to learn. You have to be quite careful around
| things like DB connections, logging, concurrency, and so on
| (see the connection-reuse sketch at the end of this comment).
| This overhead might ultimately make things more complicated
| than a more traditional setup.
|
| 4) There is a maximum package size for a lambda. When you include
| all your dependencies in NodeJS and so on, it can become very
| tricky to keep things under this size in even a medium-sized
| code base. A larger size also affects your cold start time.
| There isn't a lot you can do to change this.
|
| 5) Lambda's easy integration with other AWS services is both a
| blessing and a curse. AWS is not cheap, and given how trivial it
| can be to add on services, it can require a considerable amount
| of time to reduce spend. Lambda's integration with non-AWS
| services is very hit or miss (e.g. Kafka). You only find this out
| by getting very burned.
|
| 6) If you require anything like C libraries, be prepared to spend
| a lot of time working on builds and deployment - or sometimes it
| just magically works. You are operating inside a somewhat opaque
| environment, and it can require a lot of fiddling. Additionally,
| using tools like CloudFormation does not scale very far, and
| can sometimes result in quite bad outcomes. This stuff can
| steal a lot of time.
|
| 7) There are hidden costs everywhere. For example: Are you being
| a good engineer and adding a lot of visibility by logging and
| using tools such as CloudWatch? Be prepared to pay an
| extraordinary amount relative to the value you get.
|
| 8) Using lambda as a processing pipeline? Easy to get started,
| hard to get reliable. AWS does not have good inexpensive tools to
| manage these well, and you can end up with a very leaky and
| expensive system.
|
| 9) You would be surprised at how many bugs and weird behaviors
| exist when these AWS systems play together. In some cases, the
| only recourse was to delete the entire stack and redeploy because
| something got stuck. That could be quite bad depending on the use
| case.
|
| 10) Using lambdas over data engineering tools (e.g. Kinesis Data
| Streams vs Spark) might actually be more complicated. You
| should be familiar with and have some experience in these
| tools before making a decision when it's time to build.
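|
| On point 3, the classic example is initializing clients
| outside the handler so warm invocations reuse them (sketch
| with node-postgres; env var name illustrative):
|
|     import { Pool } from 'pg';
|
|     // created once per execution environment and reused across
|     // warm invocations; keep it tiny, since every concurrent
|     // lambda instance holds its own connections
|     const pool = new Pool({
|       connectionString: process.env.DATABASE_URL,
|       max: 1,
|     });
|
|     export const handler = async () => {
|       const { rows } = await pool.query('SELECT now() AS ts');
|       return { statusCode: 200,
|                body: JSON.stringify(rows[0]) };
|     };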
| locustmostest wrote:
| We use what might be called a microlith pattern with Lambda for
| our "infra meets code" document handling project
| (https://github.com/formkiq/formkiq-core).
|
| There's nothing stopping someone from taking what we have and
| porting it to an EC2 instance, but for most use cases, this
| pattern seems to work best.
|
| We break our Lambdas up by how they are used, to keep sizes down,
| with the option to break up further if we need to.
| joshstrange wrote:
| I actually agree with a lot of what they said here, but the
| last half or so of the blog post was incredibly hard to
| follow. The gray blocks were what he was refuting/replying to?
| The X/check emojis meant he agreed or disagreed, I guess? It
| made me think the emoji applied to the paragraph it was part
| of, as in "I agree with this ->" or "I disagree with this ->",
| when I think it meant "I agree/disagree with the gray block
| _above_". It was hard to keep reading and so I stopped.
|
| > Now the big one, reusing code is by far easier in a monolith.
|
| > Consider the scenario for patching an error in a library that
| is used in all your single-purpose functions:
|
| > You have to publish shared code into packages that need to be
| used by all of your routes
|
| > You would have to context switch, make the change in the
| library, merge, publish the package
|
| > Then context switch again and update the version in each of
| your 100's of routes to use the new version
|
| > Then you need to test and deploy all the 100's of routes
|
| I don't use a monolith mainly due to size. I had enough problems
| packaging all my functions in 1 archive and hitting size limits
| that I can't imagine a monolith would do anything but lock me
| into that issue with no solution other than splitting it up into
| smaller services. Currently I package all my functions
| individually so they only contain the code/deps that that
| specific function needs. This works very well and keeps my
| function sizes small (except for the Prisma engine, stay away
| from this if you can). Also I have shared code that multiple
| functions use (wrapping code for catching errors, DB access,
| various other reusable services); it all works just fine and I
| don't package/publish my shared code, my packaging grabs what
| each function needs and includes it.
|
| Everything else he says, especially about getting /more/ warm
| starts and being able to really take advantage of provisioned
| concurrency, is true, but the size issue steered me away from
| a monolith. If size wasn't a problem I'd probably be running
| express in my lambda, but instead I have 1 function per
| endpoint and that seems to work pretty well for my company.
___________________________________________________________________
(page generated 2023-10-31 23:01 UTC)