___________________________________________________________________
Launch HN: SST (YC W21) - A live development environment for AWS
Lambda
Hi HN, we are Jay and Frank, and we are working on SST
(https://github.com/serverless-stack/serverless-stack). SST is a
framework for building serverless apps on AWS. It includes a local
development environment that allows you to make changes and test
your Lambda functions live. It does this by opening a WebSocket
connection to your AWS account, streaming any Lambda function
invocations, running them locally, and passing back the results.
This allows you to work on your functions without mocking any AWS
resources or having to redeploy them every time you want to test
your changes. Here's a 30s video of it in action --
https://www.youtube.com/watch?v=hnTSTm5n11g

For some background, serverless is an execution model where you
send a cloud provider (AWS in this case) a piece of code (called a
Lambda function). The cloud provider is responsible for executing
it and scaling it to respond to traffic, and you are billed for
the exact number of milliseconds of execution.

Back in 2016, we were really excited to discover serverless and
the idea that you could just focus on your code. So we wrote a
guide to show people how to build full-stack serverless
applications -- https://serverless-stack.com/#guide. But we
noticed that most of our readers had a really hard time testing
and debugging their Lambda functions.

There are two main approaches to local Lambda development:

1) Locally mock all the services that your Lambda function uses.
For example, if your Lambda functions are invoked by an API
endpoint, you'll run a local server mocking the API endpoint,
which invokes the local version of your Lambda function. This idea
can be extended to services like SQS (queues), SNS (message bus),
etc. However, if your Lambda functions are invoked as part of a
workflow that involves multiple services, you quickly end up going
down the path of having to mock a large number of these services,
effectively running a mocked local version of AWS. There are
services that use this approach (like LocalStack), but in practice
they end up being slow and incomplete.

2) Redeploy your changes to test them. This is where we, and most
of our readers, eventually end up. You'll make a change to a
Lambda function, deploy that specific function, trigger your
workflow, and wait for the CloudWatch logs to see your debug
messages. Deploying a Lambda function can take 5-10s, and it can
take another couple of seconds for the logs to show up. This
process is really slow, and it also requires you to keep track of
the functions that have been affected by your changes.

We talked to a bunch of people in the community about their local
development setup and most of them were not happy with what they
had. One of the teams we spoke to mentioned that they had toyed
with the idea of using something like ngrok (or tunneling) to
proxy the Lambda function invocations to their local machine. And
that got us thinking about how we could build that idea into a
development environment that did that for you automatically.

So we created SST. The `sst start` command deploys a small _debug_
stack (a WebSocket API and a DynamoDB table) to your AWS account.
It then deploys your serverless app and replaces the Lambda
functions in it with _stub_ Lambda functions. Finally, it fires up
a local WebSocket client and connects to the _debug_ stack. Now,
when a Lambda function in your app is invoked, the stub calls the
WebSocket API, which streams the request to your local WebSocket
client. The client runs the local version of the Lambda function,
sends the result back through the WebSocket API, and the _stub_
Lambda function responds with the result.
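
To make the flow concrete, here's a rough sketch of what the local
side of that loop could look like. This is for illustration only --
it is not SST's actual code, and the endpoint variable and message
fields (requestId, handler, event, context) are hypothetical.

    // Hypothetical local client: receive invocations from the debug
    // stack, run the handler locally, and send the result back.
    import WebSocket from "ws";
    import * as path from "path";

    const endpoint = process.env.DEBUG_ENDPOINT!; // wss://... (debug stack)
    const ws = new WebSocket(endpoint);

    ws.on("message", async (data) => {
      const { requestId, handler, event, context } = JSON.parse(data.toString());

      // "src/lambda.handler" -> load ./src/lambda and call its `handler` export
      const [file, fn] = handler.split(".");
      const mod = require(path.resolve(file));

      let reply;
      try {
        reply = { requestId, result: await mod[fn](event, context) };
      } catch (err) {
        reply = { requestId, error: String(err) };
      }

      // The debug stack relays this back to the stub Lambda, which
      // returns it to whatever invoked the function in AWS.
      ws.send(JSON.stringify(reply));
    });
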
This approach has a few advantages. You can make changes to your
Lambda functions and test them live. It supports all the Lambda
function triggers without having to mock anything. Debug logs are
printed right away in your local console. There are also no
third-party services involved. And since the _debug_ stack uses a
serverless WebSocket API and an on-demand DynamoDB table, it's
inexpensive, and you are not charged when it's not in use.

SST is built on top of AWS CDK, which lets you use standard
programming languages to define your AWS infrastructure. We
currently support JavaScript and TypeScript, and we'll be adding
support for other languages soon.
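
For example, defining a Lambda function in TypeScript looks roughly
like this. The sketch below uses plain CDK constructs for
illustration; SST builds on constructs like these and swaps in the
live-debugging stubs during `sst start`.

    // Define a Lambda function with the AWS CDK (TypeScript).
    import * as cdk from "@aws-cdk/core";
    import * as lambda from "@aws-cdk/aws-lambda";

    class MyStack extends cdk.Stack {
      constructor(scope: cdk.App, id: string) {
        super(scope, id);

        new lambda.Function(this, "MyFunction", {
          runtime: lambda.Runtime.NODEJS_14_X,
          handler: "lambda.handler",          // exports.handler in lambda.js
          code: lambda.Code.fromAsset("src"), // directory with the handler
        });
      }
    }

    const app = new cdk.App();
    new MyStack(app, "my-app-dev");
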
You can read more about SST over on our docs
(https://docs.serverless-stack.com), and have a look at our public
roadmap to see where the project is headed
(https://github.com/serverless-stack/serverless-stack/milesto...).
Thank you for reading about us. We'd love for you to give it a try
and tell us what you think!
Author : jayair
Score : 94 points
Date : 2021-03-02 14:23 UTC (8 hours ago)
| jedberg wrote:
| How does this differ from 'serverless invoke local'?
|
| I use that model and it's pretty good. I can output local log
| lines and see them on the console.
| jayair wrote:
| Yeah `sls invoke local` works well for testing individual
| functions where you are mocking the specific inputs or
| triggers. But there are a couple of cases where it can become
| hard to use. For example, if you want to test a Lambda function
| that's invoked by an API that needs authentication. Or if
| Lambda A publishes to an SNS topic that invokes Lambda B and
| you want to test the entire flow.
|
| To fix this, SST deploys your entire stack and lets you
| trigger the workflow live. So you can hit the endpoint just as
| your frontend would, or trigger the entire SNS workflow. Except
| all the Lambda function invocations will be streamed to your
| local machine and the responses will be sent back.
|
| This approach allows you to get the best of both worlds: the
| speed of an `sls invoke local`, without having to mock
| anything.
| jedberg wrote:
| > This approach allows you to get the best of both worlds:
| the speed of an `sls invoke local`, without having to mock
| anything.
|
| But I have to deploy stub functions and have a different
| deployment model for prod. :) So there are definitely
| tradeoffs.
|
| I like this idea and will certainly try it out!
|
| What I was really hoping for is someone who could figure out
| a way to launch a terminal _in Lambda_ so that I can modify
| my functions in place and view the logs in place, and then
| when I'm done, save my work to git, like I used to be able
| to do when I had a fleet of servers and could just log onto
| one of them in prod and muck around with logs (and yes I know
| this is a bad idea but it's also how things get done
| quickly).
| jayair wrote:
| Awesome!
|
| While this doesn't quite do the edit-in-place, and I
| wouldn't recommend running it against prod with live traffic,
| we do use it to run some scripts against prod. You get an
| environment where you can make changes quickly, as opposed
| to having to deploy your function to run them.
| thdxr wrote:
| Been using this for a few months and it has been excellent!
| Highly recommend this for full-stack serverless deployment.
| jayair wrote:
| Thank you! Glad to hear that it's been working well.
| worldsoup wrote:
| Congrats Jay and Frank! It's been fun watching your progress and
| this certainly feels like the natural product evolution for you.
| Best of luck in the future!
| jayair wrote:
| Thanks Nick! Appreciate it.
| FanaHOVA wrote:
| How do you think about SST compared to projects like
| arc.codes/Begin.com in the long run? Do you want to expand to do
| a lot more around serverless? I always think about Rails vs
| Sinatra, and how both can make you happy :) The demo gave me
| Sinatra vibes in that sense (minimal code, no boilerplate
| generators, it just works)
| pranavtad wrote:
| Wow. I tend not to comment much on HN, but this is another level.
| I'm trying it out now -- if it works like the video, you've
| unlocked days of my time over the next year.
|
| Really excited about this and it has been a LONG time coming.
| jayair wrote:
| Thanks! SST takes a little bit to start up when you run it the
| first time. But it's pretty fast after that.
| rundmsef wrote:
| I've been migrating a small Serverless Framework project over to
| SST and I've been loving it. The team's commitment to clear and
| useful documentation is second to none. The support they provide
| in various channels is top notch.
|
| Wonderful tooling, I've been enjoying it quite a bit. Keep it up!
| I'm excited to see the feature set grow.
| jayair wrote:
| Appreciate it!
|
| After having struggled with all the confusing serverless
| related content out there, the docs are a big point of emphasis
| for the project.
| [deleted]
| fakedang wrote:
| Maybe I'm missing something, but what's the difference between
| this and Zappa?
| jayair wrote:
| The key difference here is that Zappa and the other frameworks
| emulate the API (or SNS, SQS) service on your local machine.
| This means that if you want to test an API that requires
| authentication or if your Lambda function is triggered by some
| other async event, you cannot test it locally. You'll need to
| deploy it first.
|
| SST, on the other hand, deploys your app, and when a deployed
| Lambda function gets invoked remotely, it'll stream that
| request to your local machine, execute it, and send the results
| back to AWS. So you get the advantage of working against real
| infrastructure while still making changes locally.
|
| The video helps illustrate it a little better --
| https://www.youtube.com/watch?v=hnTSTm5n11g
| f6v wrote:
| I don't want to be that guy, but this sounds incredibly
| complicated. What happened to spinning up a local environment
| with "docker compose up"? At this point I need i) a SaaS to
| deploy lambdas and ii) a SaaS for live-testing lambdas.
|
| It's a cool business though - selling shovels.
| jayair wrote:
| Yeah the internals of the `sst start` command are a little
| complicated. But it only needs to set things up initially; from
| that point on it simply connects to the WebSocket API.
|
| SST is open source and runs on your AWS account. We don't
| charge you for it and it's most likely going to be within the
| free tier on AWS.
| abrgr wrote:
| This is amazing. Congratulations! I've been thinking along
| similar lines and so happy this exists.
|
| Have you considered extracting the lambda portion into a lambda
| extension so that you can deploy this as part of any
| cdk/cloudformation stack or standalone lambda and toggle on/off
| local debugging based on some registration in dynamo (e.g. store
| a predicate applied to the event and proxy to local if the
| predicate evaluates true)? This would make something like this
| hugely useful in mitigating outages.
|
| I didn't catch it in the repo yet, but if you're proxying local
| AWS calls through the Lambda to allow quickly catching IAM and
| security group/routing issues, that would be incredible.
|
| Happy to help with this stuff and will definitely be a user!
| jayair wrote:
| Thanks!
|
| I hadn't thought of using something like this for mitigating
| outages. Do you mind elaborating on that a bit more?
|
| Currently there isn't a simple way to switch off the debug
| mode. So if `sst start` deploys the Lambda with the debug mode,
| `sst deploy` will disable it by putting the original Lambda
| back. But your idea of switching it on and off should work!
|
| For the IAM portion, we don't use the IAM credentials of
| the local machine while executing the Lambda. We use the
| credentials of the original Lambda. This allows for testing
| against the right credentials.
|
| Appreciate the feedback! If you have any questions or are
| curious about the internals, I'd love to hear from you at
| jay@anoma.ly
| offtop5 wrote:
| How does this handle dependencies? For example, I'm working on
| a Python project and I need to run a pip install. I will give you
| money if you can automate getting those python dependencies up
| into a lambda function.
|
| If you can do this reliably, and pass corporate vetting, you
| easily have a product worth millions.
|
| On the other hand, if this gets popular Amazon can easily just
| clone the product and integrate it directly. You'll know it's a
| good idea then.
|
| Edit: MIT! Awesome, thanks for the donation to the open source
| community. If I ever use this in a real project I'll go ahead and
| donate a few bucks.
| jayair wrote:
| We'll be looking at Python in the coming weeks, currently it's
| JS and TS only.
|
| But the way this works locally (for Node) is that you do an npm
| install just as you would and it executes your application
| locally. The packaging part comes into play when deploying
| these functions to AWS. And I agree that portion even for Node
| isn't bulletproof. It's something we want to do a better job
| of.
| offtop5 wrote:
| Wait, so you automatically deploy the Lambda dependencies for
| me? The other issue I run into is that occasionally you have to
| split npm packages into different layers when you deploy
| lambdas. I don't have time personally to contribute, but I
| hope you're able to develop this into something which becomes
| a part of my workflow.
| jayair wrote:
| So the `sst start` command fires up a local environment but
| it doesn't deploy your functions. Instead, it runs each function
| locally when it gets invoked.
|
| But when you `sst deploy` it, we'll package your functions.
| To do this we use esbuild (https://esbuild.github.io), which is
| like Webpack but 10x faster. It'll generate a single JS file
| that should be fairly small, so you shouldn't have to use
| Layers.
|
| However, this isn't bulletproof. There are some
| dependencies that are not compatible with esbuild/webpack,
| and you'll end up having to zip them up as a directory.
| That's something we are going to work to improve in the
| future.
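|
| For anyone curious, bundling a handler with esbuild looks roughly
| like this (an illustrative config, not SST's exact settings; the
| file paths are hypothetical):
|
|     // Bundle the handler into a single file for deployment.
|     import { build } from "esbuild";
|
|     build({
|       entryPoints: ["src/lambda.js"], // hypothetical handler entry point
|       bundle: true,                   // inline node_modules into one file
|       platform: "node",
|       target: "node14",
|       outfile: ".build/lambda.js",    // single file to zip and upload
|       external: ["aws-sdk"],          // provided by the Lambda runtime
|     }).catch(() => process.exit(1));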
| rirze wrote:
| Not to downplay Serverless Stack, but the AWS CDK does this for
| you: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-
| lambda-p...
| JackMcMack wrote:
| you can do 99% with pipreqs . && pip install -t . -r
| requirements.txt
| time0ut wrote:
| Please forgive me if I am misunderstanding the problem, but we
| approach this using Docker now that Lambda supports container
| images for packaging. Our process is to just put all that sort
| of stuff in the Dockerfile, run docker build, push the result
| up to ECR, and point Lambda to the new version.
| holler wrote:
| Lambda layers is another option.
| [deleted]
| ranman wrote:
| What made you pick the websocket connection over something more
| like a pure TCP connection or something like ngrok?
| jayair wrote:
| Yeah it's a good question. The serverless WebSocket API allows
| us to keep the debug stack entirely serverless. This makes it
| really cheap (basically free) and it's very reliable because we
| don't have to manage starting and stopping the servers.
| nozepas wrote:
| Your project looks fairly similar to Chalice but using TS. Am I
| correct? How do you compare yourselves to Chalice?
| jayair wrote:
| The big difference is that Chalice (from what I remember) runs
| a local server where you test your functions and once you are
| ready, you'll deploy it to AWS.
|
| SST deploys it to AWS first, and when a function gets invoked
| remotely, streams that request to your local machine where
| it'll execute that function and send the results back to AWS.
| You basically get to test and develop against your real
| infrastructure.
| wyck wrote:
| Debugging Lambda is a pain. Your site is amazing, so clear and
| easy to follow.
| jayair wrote:
| Appreciate it! Yeah we really want to fix this, given how much
| time we spend working on it!
| vincentmarle wrote:
| Very interesting. It's open source so what's your business model?
| jayair wrote:
| Yeah, the framework itself is completely open source and is
| meant for local development.
|
| We have a separate service that's a CI/CD pipeline for
| serverless apps. We use a SaaS model there -- https://seed.run.
| It supports SST out of the box. But it also supports Serverless
| Framework, the other popular option out there.
| jasonhansel wrote:
| Though it seems like a decent approach, I find it sad that this
| is necessary, since this slows down dev cycles. If AWS services
| were built on open standards, it would be easy to write local
| emulators and this whole problem would be fixed. Furthermore, if
| people avoided the nanoservice/serverless architecture the issue
| would be much less severe.
| nickthemagicman wrote:
| The idea sounds cool. No servers to manage. I don't know what
| to think though. A true language framework for backends is just
| so much better for me.
|
| There's Apache OpenWhisk and OpenFaaS, which are both open
| standards for serverless.
| jayair wrote:
| I partly agree that it's weird that it takes something
| complicated to solve local development. That said, we've
| explored a wide range of options over the past few years and
| the local emulation route can be really hard to set up and ends
| up being pretty slow.
|
| I've personally gone back and forth on the idea of purely local
| vs cloud based development. And fwiw, SST strikes a very
| interesting balance between the two. You get the feel of a
| local environment, while still connecting to every cloud
| resource.
| benatkin wrote:
| It is only necessary if you choose "serverless" (somebody
| else's servers). Hey (the email service) has what should be an
| ideal use case for it but they're using Kubernetes instead.
| dempseye wrote:
| Good luck with this. It solves the most annoying problem with
| serverless development.
| jayair wrote:
| Thank you!
| pritambarhate wrote:
| Right now it looks like an MIT-licensed project. What's your
| monetisation strategy?
| jayair wrote:
| Yup, we have a separate SaaS product for teams, a CI/CD
| pipeline for deploying serverless apps. It supports SST and
| Serverless Framework.
|
| https://seed.run
| shortlived wrote:
| BRAVO! Just tried it locally and it's like driving a Tesla for
| the first time.
| jayair wrote:
| Thank you, that's awesome to hear!
| debarshri wrote:
| In my opinion this defeats the purpose of serverless. Call me a
| shitty engineer or a hacky dev, but I see the purpose of
| serverless as quick iterations: get your idea out as fast as you
| can, develop in production mode, the way people did in the
| Google App Engine days. I'm not sure why people don't realise
| that, in a lot of ways, the whole serverless infra from a
| business value standpoint is very similar to Google App Engine,
| which orgs try to get out of once they grow out of it. Building
| a serious CI/CD or dev stack is a waste of precious resources
| that startups or the mid-market should not invest in.
| jayair wrote:
| The interesting thing is that it lets you develop against your
| real infrastructure. This speeds up your iteration cycle
| drastically when compared to the other serverless tools.
|
| But I do feel your pain, building a CI/CD for serverless is
| quite annoying and is a big waste of time for startups. I would
| not recommend it.
|
| And quite frankly, the serverless ecosystem needs to improve
| before it can compete with App Engine when it comes to developer
| experience. We care a great deal about it and
| that's what we are working on.
| palbrattberg wrote:
| SST is amazingly great! Being able to develop serverless
| applications with live development support while also accessing
| the proper AWS runtime is such a game changer!
|
| Very developer-friendly founders as well, being ever helpful over
| Slack and more.
|
| Highly recommended!
| jayair wrote:
| Appreciate the support Pal! All your feedback has been really
| helpful.
___________________________________________________________________
(page generated 2021-03-02 23:01 UTC)