[HN Gopher] Netlify Edge Functions: A new serverless runtime pow...
___________________________________________________________________
Netlify Edge Functions: A new serverless runtime powered by Deno
Author : csmajorfive
Score : 213 points
Date : 2022-04-19 15:18 UTC (7 hours ago)
(HTM) web link (www.netlify.com)
(TXT) w3m dump (www.netlify.com)
| markhaslam wrote:
| What are the cold start times like? Compared to say Cloudflare
| Workers where they claim you can have no cold starts?
| lucacasonato wrote:
| Deno Deploy (https://deno.com/deploy) uses the same
| optimizations as CFW to achieve effectively 0ms cold starts.
|
| Netlify Edge Functions are still in beta and don't have all of
| the same optimizations yet, but we're going to be working with
| Netlify over the next few months to enable these optimizations
| to Netlify Edge Functions too.
| markhaslam wrote:
| Thank you! That's so impressive that you are able to achieve
| 0 ms cold starts in Deno Deploy. That and CFWs are game
| changers.
| [deleted]
| mfsch wrote:
| Does anyone know how those compare to regular Netlify Functions,
| other than running on the edge nodes? The main difference I've
| found is that they have much stricter CPU time budgets, but it
| seems to me that the use cases overlap quite a bit.
| the_common_man wrote:
| This is a new concept for me, what is the use case for edge
| functions?
| tppiotrowski wrote:
| You have a hosted static web app but want to dynamically change
| the <meta> tags in your index.html to provide a unique url
| preview for each route (/about, /careers, etc)
| the_common_man wrote:
| Why would one want to do this? For analytics?
| tppiotrowski wrote:
| When you share a url like a news article on Twitter or send
| it on Slack, iMessage etc, you get a small preview widget
| showing you the headline and a photo from the story.
|
| The way this is generated is Twitter/iMessage scrapes the
| HTML of the url you are sharing and looks for <meta
| name="og:image" content="https://imgur/adfstdd"> and
| displays whatever image is in the content field. Similarly
| the widget title is populated from <meta name="og:title"
| content="About Us">
|
| In a static html site there is only one index.html so all
| routes have the same meta tags unless you overwrite the
| index.html file using an edge worker.
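|
| Roughly, such an edge function fetches the static index.html
| for the route and swaps the tags before returning it. A quick
| sketch (the titles map is made up, and context.next() is
| assumed to hand back the underlying static response; the
| Context import is omitted as in the Netlify example later in
| this thread):
|
|     export default async (request: Request, context: Context) => {
|       const response = await context.next();   // the static index.html
|       const html = await response.text();
|       const path = new URL(request.url).pathname;
|       // Hypothetical per-route metadata
|       const titles: Record<string, string> = {
|         "/about": "About Us",
|         "/careers": "Careers",
|       };
|       const rewritten = html.replace(
|         /<meta name="og:title" content="[^"]*">/,
|         `<meta name="og:title" content="${titles[path] ?? "Home"}">`,
|       );
|       return new Response(rewritten, {
|         headers: { "content-type": "text/html; charset=utf-8" },
|       });
|     };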
| dom96 wrote:
| > In a static html site there is only one index.html so
| all routes have the same meta tags unless you overwrite
| the index.html file using an edge worker.
|
| I guess this is specifically for SPA static sites? I
| would expect a static html site to have different html
| files for each page, like blog/2020-09-09-title.html and
| that could then have its own metatags, so no need for
| this dynamic rewriting, unless I'm missing something.
| tppiotrowski wrote:
| Correct. This is only for SPAs that use client side
| routing like react-router. React SPAs are quite common
| with startups in my experience.
| brycewray wrote:
| For one (minor) thing, they're a great way to add certain HTTP
| headers which can't be handled through other means. I use a
| Cloudflare Worker to give my site the necessary headers for its
| Content Security Policy (some parts of which aren't to be added
| via a <meta> tag[0]), as well as the nonces[1] for that CSP.
| This only scratches the surface, of course.
|
| [0]: https://content-security-policy.com/examples/meta/
|
| [1]: https://content-security-policy.com/nonce/
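|
| A sketch of what such a header-adding Worker might look like
| (the policy and nonce handling below are placeholders, not my
| actual setup):
|
|     export default {
|       async fetch(request: Request): Promise<Response> {
|         const response = await fetch(request);   // origin response
|         const nonce = crypto.randomUUID().replaceAll("-", "");
|         const headers = new Headers(response.headers);
|         headers.set(
|           "Content-Security-Policy",
|           `script-src 'nonce-${nonce}' 'strict-dynamic'; frame-ancestors 'none'`,
|         );
|         // A real Worker would also inject the nonce into the page's <script> tags
|         return new Response(response.body, { status: response.status, headers });
|       },
|     };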
| anyfactor wrote:
| I was told to use Firebase Cloud Functions literally
| yesterday.
|
| You can pre-parse and pre-process JSON responses to minimize
| the payload size and customize it for your frontend needs.
| Makes dealing with client secrets and configuration easier too
| I believe. I didn't want to rewrite a bunch of backend code so
| this was one of the simplest solutions.
| valbaca wrote:
| The first link in the article points to this:
| https://www.netlify.com/products/#netlify-edge-functions
|
| How to use them: Drop JavaScript or TypeScript functions inside
| an edge-functions directory in your project.
|
| Use cases: Custom authentication, personalize ads, localize
| content, intercept and transform requests, perform split tests,
| and more.
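|
| A minimal example of that drop-in model (the file path is an
| assumption based on the quote above):
|
|     // netlify/edge-functions/hello.ts
|     export default async (request: Request) => {
|       // Runs at the edge for whatever route it is mapped to
|       const path = new URL(request.url).pathname;
|       return new Response(`Hello from the edge, you asked for ${path}`);
|     };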
| nawgz wrote:
| > Use cases
|
| > A bunch of server functionality
|
| Why is that the use case? I don't see how an edge function
| can be faster than a centralized server endpoint if it has to
| reach out to literally any other component of the system
| involved in auth / persistence
| FinalBriefing wrote:
| It's just serverless architecture. It may not fit for every
| project.
|
| We use the concept to add meta tags to our client-side-
| rendered webapp for search indexing. We can decouple our
| client app from our server and deploy each separately. We
| use "serverless functions" to add some meta tags that need
| to be added server-side.
| nawgz wrote:
| I mean, it's incredibly easy to deploy client apps
| separately from the server in a lot of ways
|
| To me, the only value I see for Edge compute is when some
| chunk of data requires processing going one way across
| the network, and that processing can be done entirely
| locally. I suppose what you describe with the meta tag
| qualifies in this case, otherwise I think serverless
| architecture looks like a pretty sweet deal for the cloud
| companies promoting it.
| skybrian wrote:
| Even if it doesn't reduce latency, it could reduce
| bandwidth and load on other servers. (Or increase it, if
| it's used badly.)
| nawgz wrote:
| I can't imagine a successful argument that you use less
| compute cycles spinning up Edge functions to interact
| with your app than just hitting an already-running API
| endpoint on an already-running server. And rearranging
| your load just because you can isn't worth consideration.
| qbasic_forever wrote:
| Serverless API functions, like if you were going to use Amazon
| AWS lambda functions to add interactivity or simple APIs to a
| site without having to manage and run a full server.
| pcj-github wrote:
| Does this complement or compete with Deno Deploy?
| TobyTheDog123 wrote:
| Netlify pricing has always been confusing to me, but I'm not
| entirely sure why. I guess I'm more accustomed to pay-as-you-go
| in this space (CFW) than tiered plans (Netlify bundles their
| features into starter/pro/business).
|
| It seems that the free plan is 3M invocations/mo, starter is
| 15M/mo, and business is 150M/mo, but there aren't any ways to
| increase those limits (business says to contact them for higher
| limits).
|
| Personally I'd prefer true pay-as-you-go without hard limits,
| even if it's a bit more expensive. To me the point is to sign-up-
| and-forget-it without having to worry if I'm within those
| limitations.
| phphphphp wrote:
| I have no special insight into Netlify, so this is (educated)
| speculation: there's an important difference between pay-as-
| you-go compute providers, like AWS, and Netlify: Netlify is a
| platform, their value is not derived from the workloads they
| process, so charging (or not charging) based on compute doesn't
| align with their value proposition. The value of Netlify is
| that it's an end to end platform, taking a business from having
| some code to having a live website, where compute is just one
| component of the entire value proposition.
|
| The marginal cost of a request is probably negligible, hence
| the tens of millions of requests included, but there _is_ a
| cost associated with each user making use of their platform
| because it includes a lot more than just compute, and that's
| the value they're charging for.
|
| I think if you're looking for a compute provider that offers
| pay as you go billing in order to minimise your costs, then
| Netlify probably isn't the platform for you, and you'd be
| better off using their service provider directly (in this case,
| Deno, but many Netlify alternatives use Lambda, Cloudflare
| Workers etc.).
| mavsman wrote:
| Different strokes for different folks.
|
| This has been one of the big knocks on AWS, that a poor little
| old lady can set up a "free" AWS account, then when her website
| (and accompanying Lambda function) goes viral she gets hit with
| a $100k bill from uncle Jeff.
| rmbyrro wrote:
| Would the Lady prefer to let her site go down?
|
| I don't understand this way of thinking. One of the main
| benefits of serverless is scalability, peace of mind for
| precisely when you go viral.
|
| If you're doing something good, especially if you're selling
| something good, all you want is to go viral. And if you went
| viral, you don't mind paying the AWS costs, which should be
| tiny compared to your revenue. Just need to care about your
| unit economics.
| yurishimo wrote:
| Sure, but I think the GP was referring to a situation where
| revenue doesn't match the traffic.
|
| Imagine the grandma scenario. Let's say that she cooks and
| sells artisanal jams and jellies. With the help of her
| granddaughter, she creates a TikTok that goes viral. Her
| web store immediately sells out, and the traffic from the
| video hammers her website. She cannot react fast enough to
| enable back orders and so most of those visits go to waste.
|
| Putting aside the technical absurdities (why is she hosting
| on a lambda, etc), in this scenario, grandma is up a creek.
|
| If this scenario were real, I would feel really bad for the
| grandma with a huge bill and not enough revenue to cover
| it, but I would be livid at whatever imbecile decided to
| set her up with such a ridiculous hosting paradigm.
|
| "But it only costs pennies a month to run!*"
|
| Yeah, until she goes viral. This scenario right here is why
| services like Squarespace et al are still valuable. You'll
| pay a few extra bucks a month, but if you go viral, you
| won't go bankrupt when the bill is due.
| webmobdev wrote:
| > Personally I'd prefer true pay-as-you-go without hard limits,
| even if it's a bit more expensive.
|
| Please, a hard no to that. That's the worst aspect of AWS,
| Azure and all those new huge hosting centers - it's hard to
| calculate the real cost and set a budget.
|
| I don't know about Netlify, but the old Linode (before it got
| acquired), was flexible with its "hard" limits in a plan - for
| example, if your site got slashdotted / Digged (or was that
| dug?) and suddenly saw a spike in its resource usage, exceeding
| the limits, they were quite accommodating in not charging their
| users for the unexpected extra usage. Linode wouldn't even mind
| an occasional surge in resources a few times a year. But if it
| happened more frequently, they would recommend that you upgrade
| to a more suitable plan. They earned a lot of goodwill that way
| from their clients who really appreciated that their server /
| site wasn't unexpectedly taken offline because of a resource
| crunch they hadn't paid for and / or anticipated.
| TobyTheDog123 wrote:
| Like another comment said, different strokes for different
| folks.
|
| I would much rather pay overage fees than have my site go
| down due to a hard limit, but I would also like the option to
| choose the opposite.
|
| That way you appeal to both sides of the scalability-vs-
| predictability crowd.
| foxbarrington wrote:
| Netlify's creative pricing is what lost me as a customer. They
| decided to start reading our git commits to decide how much to
| charge us. Instead of charging for usage of bandwidth and build
| minutes they decided to charge based on how many different
| authors we had--even though those people never interacted with
| Netlify or even knew how we were deployed. If we hadn't hurried
| up and migrated to Render.com, this would have taken our bill
| from $1.5k/year to over $25k/year.
| realmod wrote:
| > Personally I'd prefer true pay-as-you-go without hard limits,
| even if it's a bit more expensive. To me the point is to sign-
| up-and-forget-it without having to worry if I'm within those
| limitations.
|
| Sure, if you can set a max budget. Otherwise, you'd constantly
| have to worry about the unbounded cost.
| lewisjoe wrote:
| This is great news. I'm really rooting for a successful trend of
| Serverless runtimes, mainly as a weapon against rising cloud
| deployment costs.
|
| While the general trend today is to back the serverless
| environment with JavaScript runtimes (Cloudflare runs its edge on
| top of V8, Netlify uses Deno, most other serverless runtimes use
| Node.js), I'm optimistic that WebAssembly will take over this
| space eventually, for a bunch of reasons like:
|
| 1. Running a WASM engine on the cloud means running user code
| with all security controls, but with a fraction of the overhead
| of a container or Node.js environment. Even the existing
| JavaScript runtimes come with WebAssembly execution support out
| of the box, which means these companies can launch support for
| WASM with minimal infra changes.
|
| 2. It unlocks the possibility of running a wide range of
| languages. So there's no lock-in with the language that the
| Serverless provider mandates.
|
| 3. Web pages as ancient as the early 90s are perfectly
| rendered even today in the most modern browsers because the group
| behind the web standards strives for backward compatibility.
| WebAssembly's specifications are driven by those same folks -
| which means WASM is the ultimate format for any form of code to
| exist. Basically, it means a WASM binary is future proof by
| default.
|
| I've published my (ranty) notes on why Serverless will eventually
| replace Kubernetes as the dominant software deployment technique,
| here -
| https://writer.zohopublic.com/writer/published/nqy9o87cf7aa7...
| adam_arthur wrote:
| Also a common spec for serverless is badly needed. Serverless
| code should be portable between different cloud providers,
| otherwise there's vendor lock in and much greater opportunity
| to price gouge.
|
| Anybody know if a common API for serverless components is being
| worked on?
| gardnr wrote:
| There's the serverless framework which is easy to use and
| there is terraform which is more powerful.
|
| Relevant: https://xkcd.com/927/
| huijzer wrote:
| It doesn't avoid vendor lock-in. Instead of a cloud
| provider, it is now Terraform.
| phphphphp wrote:
| Check out knative, it's used for Google Cloud Run:
| https://knative.dev/docs/
| dmitriid wrote:
| > mainly as a weapon against rising cloud deployment costs.
|
| Cloud Functions is literally code you're running in the cloud.
| And the moment you approach their limit(ation)s, you will see
| the same "rising cloud deployment costs"
|
| > Running a WASM engine on the cloud means ... a fraction of
| the overhead of a container or nodejs environment
|
| You do realise that there are other languages than JavaScript
| in Node.js? That there are other environments than cloud functions?
| And that you can skip that overhead entirely by running with a
| different language in a different environment? Or even run Rust
| in AWS Lambda if you so wish?
|
| > so there's no lock-in with the language that the Serverless
| provider mandates.
|
| And at the same time you're advertising for a runtime lock-in.
| This doesn't compute.
|
| > Web pages that are as ancient as the early 90s are perfectly
| rendered even today... Basically, it means a WASM binary is
| future proof by default.
|
| It's not future proof.
|
| Web pages from the 90s are not actually rendered perfectly
| today because browsers didn't agree on a standard rendering
| until the late 2000s, and many web pages from the 90s and 2000s
| were targeting a specific browser's feature set and rendering
| quirks. Web pages from the 90s are rendered well enough (and
| they had few things to render to begin with).
|
| As the web's standards approach runaway asymptotic complexity,
| their "future-proofness" is also questionable. Chrome
| broke audio [1], browsers are planning to remove
| alert/confirm/prompt [2], some specs are deprecated after
| barely seeing the light of day [3], some specs are just shitty
| and require backtracking or multiple additional specs on top to
| fix the most glaring holes, etc.
|
| > I've published my (ranty) notes on why Serverless will
| eventually replace Kubernetes as the dominant software
| deployment technique
|
| "Let's replace somewhat unlimited code with severely limited,
| resource constrained code running in a slow VM in a shared
| instance" is not a good take.
|
| [1] https://www.usgamer.net/articles/google-chromes-latest-
| updat...
|
| [2] https://dev.to/richharris/stay-alert-d
|
| [3] https://chromestatus.com/feature/4642138092470272 and
| https://www.w3.org/TR/html-imports/
| atoav wrote:
| I sought to understand "serverless", but every deployment
| diagram I have ever seen shows things that look an awful lot
| like they run on... servers?
|
| Maybe I don't get the idea (and honestly I was too lazy to
| put in the legwork), but when I hear something like
| "serverless" I imagine some p2p javascript federated
| decentralized beast where the shared state is stored through
| magic and tricks with the users clients and there is
| literally _no server_ anywhere to be found.
|
| Instead it seems like a buzzword (?) for a weirdly niche way
| of running things that someone with a 4 Euro/Month nginx
| instance that hosts 10 websites will probably never
| understand.
|
| Maybe I also don't need to understand because I know how to
| leverage static content, caching, fast Rust reverse proxy
| services and client-side JavaScript to develop fast web
| stuff that gets the job done.
| ithkuil wrote:
| "Server" can mean many things. A few (obvious) meanings:
|
| * A role in the client-server model. One machine makes
| requests and asks for info or things to be done, and the
| other end executes the request.
|
| * The physical (or virtual) machine that runs the processes
| that fulfill the server role
|
| * A class of "serious" machines that do important stuff
| somewhere not directly facing the end user (desktop or
| mobile device).
|
| * A unit of administration: the thing you log in with ssh,
| install/update software, rotate logs, organize user
| accounts on, create groups, handle file permissions, craft
| backup scripts, cron jobs, handle disk space, and generally
| care about filesystem health etc etc etc
|
| As you correctly pointed out, the "serverless" buzzword
| clearly talks about code that has to run on some machine,
| which is not your desktop or mobile device, so it's still
| about the client-server model and it's still running on
| servers, which ultimately have to be administered by
| somebody somehow.
|
| The "less" suffix in the buzzword means that that person is
| not you. You don't have to manage the server.
|
| It's hard to find a better word. Sysadminless? NoPet?
| JustRunIt? FocusOnMyCode?
|
| All names have their drawbacks. My main qualm with
| serverless is not that there are servers involved, but that
| it's not clear what is serverless and what it's not.
|
| For example, is Kubernetes a serverless platform? As a user
| of it (not an admin) you don't need to worry about any of
| the good old sysadmin chores, i.e. you don't manage the
| actual servers where your code runs.
|
| Otoh generally when people talk about serverless they don't
| talk about abstractions like Kubernetes but usually about
| going one step further up the abstraction ladder and imagine
| a world that's not only "serverless" but "processless",
| where you don't build software and deploy it somewhere but
| where you write some "functions" and map them to some
| endpoint and the system takes care of figuring out how to
| build, deploy and manage the full lifecycle.
| paulgb wrote:
| > Instead it seems like a buzzword (?) for a weirdly niche
| way of running things that someone with a 4 Euro/Month
| nginx instance that hosts 10 websites will probably never
| understand.
|
| To me, serverless means that I as the developer don't have
| to do ongoing server maintenance work. A 4 Euro/month setup
| sounds great, until you find out that you never enabled log
| rotation and filled up the disk space, or your certificate
| refresh was improperly configured and now you don't have
| SSL, or your site gets popular for a day and the site slows
| to a crawl unless you add an instance.
|
| The dream of serverless is that I can deploy code in a "set
| it and forget it" manner. Stuff can still break at the
| application layer, but should work the same at the
| infrastructure layer in a year as it does today, and auto-
| scaling happens automatically.
| deckard1 wrote:
| > but should work the same at the infrastructure layer in
| a year
|
| Serious doubt, there. This brave new world seems to be
| entirely focused on making it an _incredibly_ fragile
| world with your code scattered to the winds. It's bad
| enough dealing with library semver breakages in a
| monolithic app. I can't imagine tracking a dozen
| serverless functions running god-knows-where with
| whatever resources some cloud service decides to allocate
| for you today. Billing is opaque as a black hole. Which
| I'm sure is more a feature than a bug, for these cloud
| providers.
|
| > and auto-scaling happens automatically.
|
| _wheeze_
| paulgb wrote:
| > but should work the same at the infrastructure layer in
| a year
|
| It's been my experience. For example, I've had periodic
| data fetching jobs last for years without giving them any
| thought. In some cases I've gone back years later and
| found them still chugging away, obediently putting data
| where I told them to years earlier. The one exception I
| can think of is when Lambda EOL'd Python 2.7, but that
| happened about 12 years after Python 3's initial release.
|
| I've found the same to be true of web services. I have
| one that's been running continuously for 5+ years that I
| actually forgot about until just now.
|
| > _wheeze_
|
| Why?
| atoav wrote:
| But where does the code run physically? Of course on a
| server, otherwise it wouldn't be reachable from the net.
| But who maintains those servers? Is there some contract
| with those who maintain it?
|
| In my experience if you run things professionally you
| have to set up log rotation purely for legal reasons
| anyways. Is serverless without logs? Or how would you
| there ensure to log privacy relevant data only for the
| legally allowed periods?
|
| How do you do SSL on serverless and who is in control of
| the certs that guarantee safe communications between you
| and your customers? If it is not you, are they somehow
| contractually bound to keep your user data private?
| paulgb wrote:
| > In my experience if you run things professionally you
| have to set up log rotation purely for legal reasons
| anyways. Is serverless without logs? Or how would you
| there ensure to log privacy relevant data only for the
| legally allowed periods?
|
| AFAIK the big providers use log rotation by default. I
| just know that I've been running some low-stakes
| serverless projects for years and have always been able
| to access recent logs, and never worried about disk
| space. Privacy laws are a good point I hadn't thought of
| (in the context of these projects), though.
|
| > How do you do SSL on serverless
|
| In the case of Netlify, they already manage the
| certificate if you point your domain at them and click a
| button, so it works automatically with their functions.
| Same story with Cloudflare. AWS and Google make you jump
| through a few more hoops, or you can host the endpoint
| from one of their domains and piggyback on their
| certificate.
|
| I imagine the security practices of all four would hold
| up to the security practices of a 4 Euro / month VPS
| host.
| dmitriid wrote:
| > Instead it seems like a buzzword (?) for a weirdly niche
| way of running things that someone with a 4 Euro/Month
| nginx instance that hosts 10 websites will probably never
| understand.
|
| That's exactly it. It's a way to intermittently run a piece
| of code in a managed environment. You're basically
| guaranteed that the environment is setup, and that the code
| will start up and execute. That is basically _it_.
|
| People are extremely enamoured with it, for no apparent
| reason. The only usecase I've found for myself so far is
| running small analytics BigQuery queries to look for easily
| detectable anomalies once a day and send a Slack message if
| something's wrong. This way you avoid setting up a separate
| Kubernetes job etc. Makes no sense outside of GCP.
| olah_1 wrote:
| I agree about WASM. I am sort of worried that Deno may be too
| late tbh. Why would I bother with an interpreted language at
| all when I can code in any language I want and run it anywhere
| with WASM?
| jrockway wrote:
| What is "any language" these days? I feel like WebAssembly's
| day will come when one of those is Javascript, and so far
| that hasn't happened.
|
| Go's support is pretty good (with tinygo offering a tiny
| runtime more suited to this application). Rust appears to
| support compiling directly to WebAssembly, and there are some
| smaller languages like AssemblyScript and Lua with support.
| I'm guessing plain C works fine. Then there are projects that
| compile the runtime for interpreted languages to WebAssembly,
| so you can theoretically run things like Python.
|
| Nobody is writing applications in C or AssemblyScript, so
| that leaves rust or go. If you're using one of those
| languages, though, you can just (cross-)compile a binary and
| copy it to a VM that is on some cloud provider's free tier,
| so this isn't really easing any deployment woes. It was
| already as easy with native code, so WebAssembly isn't adding
| much stuff here. (The isolation aspect was interesting in the
| days before Firecracker, but now every computer has hardware
| virtualization extensions and so you can safely run untrusted
| native code in a VM at native speeds.)
|
| Anyway, I always wanted WebAssembly for two things: 1) To
| compile my React apps to a single binary. 2) To use as a
| plugin system for third-party apps (so I don't have to
| recompile Nginx to have OpenTracing support, for example).
| The language support hasn't really enabled either, so I'm a
| little disappointed. (Disappointed isn't really fair. I've
| invested no effort in this, and I can't be disappointed that
| someone didn't make a really complicated thing for me for
| free. But you know what I mean.)
| teleclimber wrote:
| > The isolation aspect was interesting in the days before
| Firecracker...
|
| I don't think Firecracker's existence makes WASM's
| isolation uninteresting. First, I think you are looking at
| way more resources running a full VM (even a "micro" VM)
| compared to a WASM runtime. I think startup times are not
| comparable either, so if that matters you'll find WASM to
| be the way to go.
|
| Second, WASM's capability-based security model is wonderful
| for giving the untrusted code the things it needs to work
| with. With a VM, you have to stitch it together with shared
| directories, virtual eth bridges, Linux capabilities,
| maybe some cgroups, and who knows what else. (Granted you
| may need to do some of that with WASM too, but less so).
| paulgb wrote:
| WASM still needs an interface layer to interact with the
| outside world (filesystem, etc.) My money is on WASI, but
| Deno becoming the interface layer has some advantages, mainly
| that most WASM-supporting languages already have tooling
| around JavaScript ffi.
| [deleted]
| searchableguy wrote:
| Deno supports wasm. :p
| teleclimber wrote:
| The "any language" advantage of WASM is theoretical. Each
| language is at a different level of support of WASM,
| libraries aren't all caught up and at the same point for all
| languages, etc...
|
| I love the promise of WASM, but every time I look at it I get
| lost in a sea of acronyms, and my optimistic ideas of using
| language X with library Y on runtime Z are dashed because
| there is some missing piece somewhere.
|
| If anything, the "any language" thing creates a giant matrix
| of potential pitfalls for the programmer.
|
| In comparison, the combination of JS/TS, the browser API and
| a solid std lib looks pretty good for some problems.
| irq-1 wrote:
| Agreed. There is also the lack of a GC in WASM and Deno's
| use of existing web standard APIs.
| syrusakbary wrote:
| I fully agree with your take. I think JS-directed computation
| is very good towards short-term adoption (since running JS on
| the Edge is probably the most popular use case), but eventually
| the needs to run other programming languages at the Edge will
| likely eclipse the JS use case.
|
| At Wasmer [1] we have been working actively towards this
| future. Lately more companies have been also doing awesome work
| on these fronts: Lunatic, Suborbital, Cosmonic (WasmCloud) and
| Fermyon (Spin). However, each of us with a different
| take/vision on how to approach the future of Computation at the
| Edge. I'm very excited about to see what each approach will
| bring into the table.
|
| [1] https://wasmer.io/
| me_me_mu_mu wrote:
| Really, really dumb question. I've seen a lot of
| node/python/etc serverless offerings. Is there something where
| you just provide a binary and it's executed each time?
|
| For example, I write a simple single responsibility piece of
| code in Go `add_to_cart.go` and build it, deploy it, and
| somehow map it to some network request. dot slash pass args,
| and return the result?
|
| No need to have containers or runtime?
| brundolf wrote:
| It's possible you'd run into environment issues. JS or Python
| code (or a container) doesn't have to care as much what OS or
| architecture it's run on. A raw binary could pierce that
| abstraction and make the service more complicated to offer.
| kasey_junk wrote:
| AWS Lambdas work that way.
| pid-1 wrote:
| That's how every FaaS (Function as a Service) offering works.
|
| A caveat is that most non trivial applications need something
| more than running a function.
|
| You might need secrets management, ephemeral and non
| ephemeral storage, relational databases, non relational
| databases, dependency management, AAA capabilities,
| observability, queues/async, caching, custom domains...
| That's where said offerings differ.
|
| EDIT: actually most FaaS offerings take code as input, not
| binaries. I'm not sure if that was the relevant part of your
| question. If it was, then yeah I don't know of such service.
| electroly wrote:
| > a fraction of the overhead of a container
|
| I mean, only in theory or when looking at it from the right
| angle, right? Or are you only comparing against JavaScript
| (unclear)? WASM is still much slower than native code.
| Containers spend most of their time executing native code; the
| "overhead" of containers is at the boundaries and is minor
| compared to the slowdown by moving from native code to WASM. In
| the future WASM may approach native performance, but it's not
| there now. I'm 100% certain that transitioning my native-code-
| in-containers workloads to WASM would be slower, not faster.
| teleclimber wrote:
| Edge functions are typically run intermittently, with their
| runtime stopped to free up resources between runs. Therefore
| a big factor is startup and shutdown speed. Containers are
| pretty bad there. Deno is better, and WASM is unbeatable,
| especially with things like Wizer[0].
|
| [0] https://github.com/bytecodealliance/wizer
| electroly wrote:
| Makes sense when talking about edge functions, but then OP
| started talking about Kubernetes. Our Kubernetes workloads
| don't resemble that at all; there's virtually none of that
| container startup/shutdown overhead to be concerned about.
| Most weeks, no containers are started or stopped at all.
| lucacasonato wrote:
| Deno can in theory do the pre-initialization that wizer
| does for JS too. We have all of the infrastructure for it,
| we just have not gotten around to actually implementing it
| yet.
| teleclimber wrote:
| Issue 3335. I know. I'm watching it. :))
| api wrote:
| > I'm really rooting for a successful trend of Serverless
| runtimes, mainly as a weapon against rising cloud deployment
| costs.
|
| How would that work? Don't these tend to facilitate cloud lock-
| in or at least be cloud-only in the sense that they make it
| hard to operate your own metal infrastructure?
| [deleted]
| dbbk wrote:
| I would love to jump over to something like Vercel or Netlify
| Edge, but maddeningly none of these platforms give you control
| over the cache key. I have pages that are server-side rendered
| with Cache-Control headers, but because our visitors come with
| unique tracking params on the end of their URL (e.g. from
| Mailchimp or Branch), we would essentially have no cache hits.
|
| It seems the only way to have control over this is to write your
| own Cloudflare Workers. There must be a better way? I can't
| imagine this is an infrequent problem for people at scale.
| eli wrote:
| Cloudflare Transform Rules let you rewrite URLs on the fly
| https://developers.cloudflare.com/rules/transform/
| cxr wrote:
| > There must be a better way?
|
| You're experiencing friction trying to use something in a way
| that it's supposed to not be used. (I.e., click-tracking by
| junking up URLs.) You could look for an answer, or you could
| take a step back, evaluate your expectations, and then decide
| not to do what you're trying to do.
| dbbk wrote:
| Unfortunately this is an enormous business and asking them to
| stop all tracking is well outside of my remit.
| patches11 wrote:
| Why don't you just link to an API route, consume the tracking
| params, set a cookie, and redirect to a statically rendered
| page?
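|
| Sketched as an edge function (the param and cookie names are
| made up for illustration):
|
|     export default async (request: Request) => {
|       const url = new URL(request.url);
|       const campaign = url.searchParams.get("mc_cid") ?? "";  // hypothetical tracking param
|       return new Response(null, {
|         status: 302,
|         headers: {
|           // Send the visitor on to the clean, cacheable URL
|           "Location": url.pathname,
|           "Set-Cookie": `campaign=${encodeURIComponent(campaign)}; Path=/; HttpOnly`,
|         },
|       });
|     };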
| teaearlgraycold wrote:
| Time to first paint
| bobfunk wrote:
| So far Netlify Edge Functions run before the cache layer, so
| you can actually use a minimal function to rewrite the URL to
| remove all unique params, etc, and then let it pass through our
| system to a Netlify Function which runs behind our caching
| layer.
|
| For anything you can do at build time as static HTML pages, we
| already strip query parameters from cache keys.
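|
| A rough sketch of that minimal rewrite function (the list of
| params to strip is hypothetical; context.rewrite is the same
| call shown in the example elsewhere in this thread):
|
|     export default async (request: Request, context: Context) => {
|       const url = new URL(request.url);
|       // Drop tracking params so the cached response can be reused
|       for (const param of ["utm_source", "utm_medium", "utm_campaign", "mc_cid"]) {
|         url.searchParams.delete(param);
|       }
|       // Hand the cleaned URL back to the rest of the Netlify stack
|       return context.rewrite(url.pathname + url.search);
|     };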
| dbbk wrote:
| Interesting, thanks... do you have any docs on how we might
| achieve this with Next.js? Am I right in thinking we would
| have essentially a custom Edge Function first that handles
| query params, and then a second Edge Function that renders
| the Next app?
| ascorbic wrote:
| I work at Netlify on framework integrations. Next has beta
| support for running the whole app at the edge, and Netlify
| supports that. If you create your own custom edge functions
| they will run first, so you can do just that. You can also
| run Next "normally" (i.e. in the node-based Netlify
| Functions) and run your own edge functions in front of
| that. In those you can modify the Request in any way you'd
| like, before passing it on to the origin.
| dbbk wrote:
| Yeah I'm very intrigued by running the whole app on the
| edge (in their Edge Runtime).
|
| This sounds pretty promising. I'll take a dive and see if
| I can get it working, thanks for the tip!
| ajdegol wrote:
| Great stuff! Well done!!
| AaronO wrote:
| Aaron from Deno here, happy to answer any questions you may
| have!
| simonw wrote:
| Question about
| https://edge-functions-examples.netlify.app/example/rewrite
|
|     export default async (request: Request, context: Context) => {
|       return context.rewrite("/something-to-serve-with-a-rewrite");
|     };
|
| I'm surprised that the function is async but context.rewrite()
| doesn't use an await. Is that because the rewrite is handed
| back off to another level of the Netlify stack to process?
| lucacasonato wrote:
| Actually `context.rewrite` returns a `Promise<Response>`. The
| `async` isn't necessary here, but it also doesn't
| particularly hurt. You can return a `Promise` from an async
| function no problem.
| seniorsassycat wrote:
| Promises are flat, so if an async function or promise callback
| returns a promise, the result is just a Promise<T>, not a
| Promise<Promise<T>>.
|
| Using async for functions that do not use await is still a
| good idea because thrown errors are converted to rejected
| promises.
|
| `return await` can be useful because it's a signal that the
| value is async, causes the current function to be included in
| the async stack trace, and completes local try/catch/finally
| blocks when the promise resolves
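|
| A quick illustration of the flattening, using the
| context.rewrite signature from the example above:
|
|     // Assume `context` is the edge-function Context from above,
|     // with rewrite(path: string): Promise<Response>.
|     declare const context: { rewrite(path: string): Promise<Response> };
|
|     // Both have type () => Promise<Response>; the async wrapper does
|     // not produce Promise<Promise<Response>>.
|     const direct = () => context.rewrite("/some-path");
|     const wrapped = async () => context.rewrite("/some-path");
|
|     // With `return await`, a rejected rewrite is caught locally and
|     // this frame shows up in the async stack trace:
|     const awaited = async () => {
|       try {
|         return await context.rewrite("/some-path");
|       } catch {
|         return new Response("rewrite failed", { status: 500 });
|       }
|     };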
| glenjamin wrote:
| Since it's being returned it doesn't really matter whether
| `.rewrite()` is returning a promise or not. `return await x`
| is mostly equivalent to `return x` within an async function.
| robertlagrant wrote:
| Thanks! Any useful pros and cons vs Cloudflare Workers?
| hobo_mark wrote:
| Workers can have persistent storage attached to them (as a KV
| store); I can't see whether this has anything similar.
| bobfunk wrote:
| One of the big reasons for going with Deno is that it's an
| open runtime closely based on web standards. You can download
| the open source Deno CLI, and all code written for our edge
| layer will run exactly the same there.
|
| As more and more front-end frameworks start leaning in on
| running part of their code at the edge, we felt it was
| important to champion an open, portable runtime for this
| layer vs a proprietary runtime tied to a specific platform.
| irq-1 wrote:
| Cloudflare uses v8 and the client is open:
| https://github.com/cloudflare/wrangler
| tppiotrowski wrote:
| Looks like it comes with Typescript support
| forty wrote:
| Anything running JS comes with some TS support, you just
| have to transpile it before releasing :) I'm not sure why
| shipping the transpiler on the production server rather
| than keeping it in your CI is a good idea, but I think
| that's what Deno is doing.
| tppiotrowski wrote:
| True, but one feature I enjoy about Cloudflare workers is
| that I just edit them in the browser, even on devices
| without nodejs installed.
| paulgb wrote:
| > I'm not sure why shipping the transpiler on the
| production server rather than keeping it in your CI is a
| good idea, but I think that's what Deno is doing.
|
| IMHO, the decoupling of build step and runtime step in
| JavaScript was a terrible mistake. I've wasted hours just
| trying to find tsconfig settings that are compatible with
| the other parts I'm using. Shipping a transpiler with a
| known-good configuration alongside the runtime forces
| everyone to write their packages in a way that are
| compatible with that configuration, instead of creating a
| wild west.
|
| The current state of modules and npm reminds me a bit of
| the bad old "php.ini" days, where you would have to make
| sure you enabled the language features enabled by the
| code you wanted to import. What a mess.
| forty wrote:
| I understand that it might be a problem for browser
| target, but nodejs is pretty easy to target (at least I
| never had any issue).
|
| Also speaking of wild west, Deno did not even manage to
| have their TS be the same as everyone else, as apparently
| they do import with .ts file extension, while everyone
| else is using .js. I feel like this would be creating
| more mess than fixing anything...
| rektide wrote:
| How many Deno instances might an edge server run? Does each
| tenant have an instance or is there multi-tenancy? What
| interesting tweaks have you made making a cloudified offering
| of Deno tailored for http serving?
| AaronO wrote:
| We're building a highly multi-tenant "isolate cloud" (think
| VMs => containers => isolates, as compute primitives).
|
| The isolate hypervisor at the core of our cloud platform is
| built on parts of Deno CLI (since it has a modular design),
| but each isolate isn't an instance of Deno CLI running in
| some kind of container.
|
| Isolate clouds/hypervisors are less generic and thus flexible
| than containers, but that specialization allows novel
| integration and high density/efficiency.
| ignoramous wrote:
| Is Netlify running Deno on their edge and not on _Deno.com
| Deploy_ 's? Is this also what Slack, Vercel (?), and Supabase
| do?
| lucacasonato wrote:
| Netlify and Supabase use Deno's infrastructure for code
| execution (https://deno.com/deploy/subhosting). Vercel hosts
| their edge functions on Cloudflare (nothing to do with Deno).
| Slack's Deno runtime is hosted on AWS.
| Tajnymag wrote:
| Ok, I stand corrected.
| zja wrote:
| Glad to see the Deno Company is getting a piece of the pie!
| Funding open source projects is tricky, but it seems like
| you're figuring it out.
| ahmed_ds wrote:
| This is really great to see an open source project coming
| up with a viable business plan to support the project
| development.
| teleclimber wrote:
| Are you willing to talk a bit about how Deno Deploy works
| internally? I think you have an internal build of Deno that
| can run multiple isolates (unlike the CLI, which basically
| runs one). How do you limit the blast radius in case of
| a vuln in Deno?
|
| Kenton Varda did a pretty great writeup on CF worker
| security [0]. Would love to see Deno Deploy do something
| similar.
|
| [0] https://blog.cloudflare.com/mitigating-spectre-and-
| other-sec...
| lucacasonato wrote:
| We probably will eventually. A talk like this takes a
| _lot_ of time to prepare though, so it's not on the top
| of our priority list. But it will happen eventually.
|
| The TLDR is that Deno Deploy works pretty similarly to
| CFW in that it can run many isolates very tightly packed
| on a single machine. The isolation strategy differs
| slightly between CFW and Deploy, but both systems make
| extensive use of "defense in depth" strategies where you
| minimize the blast radius by stacking two or more
| defenses against the same issue on top of each other.
| That makes it _much_ more difficult to escape any
| isolation - instead of breaking out of one sandbox, you
| might have to break out of two or three layers of
| isolation.
|
| These levels of isolation could happen at different
| layers. For example network restrictions could be
| initially restricted by an in-process permission check,
| then additionally a network namespace, and finally a
| routing policy on the network that the machine is
| connected to. Imagine this, but not just for network, but
| also for compute, storage, etc.
| wizwit999 wrote:
| big-time red flag that they're using deno's infra... wouldn't
| trust that
| undefined_void wrote:
| care to elaborate?
| carnitine wrote:
| Why?
| Tajnymag wrote:
| Deno is an offline runtime. Similar to nodejs. Not an
| infrastructure.
| wizwit999 wrote:
| https://news.ycombinator.com/item?id=31085357
| kitd wrote:
| That's discussing their subhosting offering, which is
| unrelated to the subject of this thread, the actual
| runtime.
| wmf wrote:
| It sounds like Netlify is essentially reselling a third-party
| service here. Isn't operating infrastructure Netlify's job? Why
| outsource this? Can requests end up taking circuitous paths
| where Netlify and Deno's infra don't line up?
| manquer wrote:
| Netlify also uses AWS and Rackspace. They are in the
| business of selling PaaS on top of IaaS by adding value to
| developer workflows.
|
| They could host their own infra at large enough scale when
| that makes sense, the same way AWS decided after many years
| to make their own chips (graviton), but that is not their
| core identity like AWS is not a chip manufacturer.
| wmf wrote:
| It looks like Netlify is essentially reselling Deno Deploy
| as is, not building a higher-level service on top of it.
| And latency matters in CDNs.
| manquer wrote:
| Just integrating into their own workflows and other apps
| could be sufficient value, only time will tell.
| ksajadi wrote:
| Netlify for me is a prime example of a great company gone wrong
| by raising too much VC money. The basic product of Netlify is a
| great one: build and host static sites without the need to mess
| with any of the tech stack. For us developer folk, this should be
| easy: run the build command of any static site generator and
| stick the results into an S3 bucket. And yet, something as simple
| as this became so popular with even developer companies (see
| Hashicorp's quotes on Netlify).
|
| This could have been a great story but then tons and tons of VC
| money came in and now you'd have to think of ways to make the
| valuation worth it and make the product sticky: so now we have
| edge Deno powered functions, lambda-esq applications, form
| embedded HTML and so much other features that are used by the
| long tail of their customer base while they changed their price
| to charge by git committees and have daily short downtimes of 1
| to 5 mins for the past month (monitored by external services, as
| they wouldn't reflect that in their status page).
|
| Soon, they'll sell the company to some corp like Akamai or
| similar "enterprise" outfit leaving us high and dry.
|
| There is a lot of money in building businesses that do boring
| stuff that just makes people's lives easier. But when you take VC
| money, you'd need to build a moat to fend off cloud providers
| from the bottom, capture the value for the top from developers
| and everything in between.
| madeofpalk wrote:
| I think this is a natural and fine extension of the Netlify
| platform. They've had various "serverless functions" for a few
| years that's mostly been out of the way if you don't need it.
|
| It fits within their goal of a 'heroku for frontend websites',
| for easily deploying sites.
| throwthere wrote:
| I guess Netlify still offers the basic static site hosting,
| which can be anything from drag-drop to easy to set up
| automated github deployments. I mean, it's not like Netlify
| offers a worse static hosting service post-funding, right? With
| VC funding they've just built out more features. Not to mention
| I think they've always aimed to build out the "JAM" stack and
| support as many frameworks as possible.
___________________________________________________________________
(page generated 2022-04-19 23:00 UTC)