[HN Gopher] Vercel Serverless Functions vs. Cloudflare Workers
___________________________________________________________________
Vercel Serverless Functions vs. Cloudflare Workers
Author : alexey2020
Score : 29 points
Date : 2021-03-25 13:39 UTC (9 hours ago)
(HTM) web link (moiva.io)
(TXT) w3m dump (moiva.io)
| brendanmc6 wrote:
| Wow, this was extremely informative for me! It cleared up a
| caching concern I had. Thanks for sharing.
|
| I've had good success using the free tier of Vercel functions to
| handle the low-traffic storefront and user accounts for
| offsetra.com - just wrapper functions around Stripe and
| Firestore. It's a godsend for independent, unskilled, time-
| constrained front-end devs like me!
| alexey2020 wrote:
| Thanks for the feedback! Caching... it took me a while to get my
| head around it. With Vercel it works more or less the way I
| imagined. It surprised me that Cloudflare takes a different
| approach, but once I got it, it started making sense and I like
| it :)
|
| Good luck with your project!
| andrewbarba wrote:
| One important piece missing from this article is that on Vercel
| you do not get global Serverless Functions on any plan except
| the Enterprise plans. By default you pick one preferred region
| for your Serverless Functions, and that region is always used.
| In practice, assuming you have a somewhat decent caching
| strategy, this doesn't really matter as far as latency is
| concerned. Where it could matter is when that AWS region has an
| outage and you can't fall back to another. We deploy all our
| functions to at least two regions, and Vercel does handle
| region failover in that case.
|
| Disclaimer: I'm a Vercel enterprise customer
| WORMS_EAT_WORMS wrote:
| Great post, enjoyed your writing style and drawings a ton!
|
| One of the bigger things I think Workers have going for them
| versus others is their ability to bind WASM modules. Hyper-
| efficient, basically native-speed computation at the edge is a
| really cool concept, especially since you can talk to it with a
| dead-simple JavaScript API.
|
| For example, Cloudflare charges a good amount of money for image
| resizing through their CDN, however they implement it. At the
| same time, they also have a proof-of-concept Worker that uses
| WebAssembly to do it basically for free! [1]
|
| Another cool demo on HN the other day embedded SQLite with WASM
| for quick transactions at the edge. [2]
|
| Completely mind-blowing... Still, there are very few demos in
| the wild. The learning curve is a beast, and hopefully it
| improves as Rust and WASM become more mainstream for web
| developers.
|
| [1] https://github.com/cloudflare/cloudflare-workers-wasm-demo
|
| [2] https://github.com/lspgn/edge-sql
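| The "dead-simple JavaScript API" is just the standard
| WebAssembly interface. A minimal sketch (the module bytes below
| are a hand-assembled add function, not taken from Cloudflare's
| demos; in a real Worker you would bind a compiled .wasm file at
| deploy time instead of inlining bytes):

```javascript
// Hand-assembled WASM module exporting add(a, b) -> a + b (i32).
// Inlining the bytes keeps this sketch self-contained; a Worker
// would receive the compiled module as a binding instead.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// The same two calls work in Node, browsers, and Workers.
const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

| Calling into the module is plain JavaScript - no FFI ceremony,
| which is what makes the Workers + WASM combination attractive.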
| siquick wrote:
| I had been using Vercel for a Next.js SSR deployment up until
| this week, when I moved it to a basic AWS Lightsail box with no
| real NGINX optimisations.
|
| The Lightsail server is in Frankfurt and I am in Sydney, yet the
| Lightsail box gets a higher PageSpeed score than the Vercel
| deployment, and from my own anecdotal usage the page load is
| noticeably faster. I had the Vercel region set to Paris (there
| is no Frankfurt region yet).
|
| I loved the simplicity of Vercel, especially the per-branch
| deployments (which I'm still using on the free tier), but it was
| surprising that, for all the serverless boasts, it's not
| actually any faster than a basic server.
| carmen_sandiego wrote:
| It's a little more complex than that. Naturally an 'always
| running' server is faster when you're not getting a cache hit
| or you're running into a Lambda cold start. But for stuff
| served from CDN cache it won't make any difference.
| Vercel/nextjs are geared towards encouraging you to make
| everything static so that it does get served that way.
|
| If you need to generate every part of your page to be user-
| specific then I would say that's a different use case and
| nextjs isn't necessarily the right tool.
|
| That said, you can actually do some pretty dynamic pages with
| it. You should try out what they call 'Incremental Static
| Regeneration'. It's basically the SWR pattern, but for server-
| side rendering.
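| A minimal sketch of Incremental Static Regeneration in Next.js.
| The page path, data, and 60-second window are invented for
| illustration; in a real app both functions live in a file under
| pages/ and are `export`ed - they are plain definitions here so
| the sketch stands alone:

```javascript
// Sketch of a Next.js page using Incremental Static Regeneration.
// In a real app this is e.g. pages/products/[id].js with
// `export async function getStaticProps` and a default export.

async function getStaticProps(context) {
  // Pretend fetch; a real page would hit a database or API here.
  const product = { id: context.params.id, name: "Demo product" };
  return {
    props: { product },
    // Serve the cached static page immediately, but regenerate it
    // in the background at most once every 60 seconds - the SWR
    // ("stale-while-revalidate") pattern applied to rendering.
    revalidate: 60,
  };
}

function ProductPage({ product }) {
  // A real page returns JSX; a string stands in for it here.
  return `Product: ${product.name}`;
}
```

| Users always get a fast static response; the cost of rendering
| is paid in the background, not on the request path.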
| chmod775 wrote:
| > Vercel/nextjs are geared towards encouraging you to make
| everything static so that it does get served that way.
|
| That would mean they have no reason to exist. If they're
| slower than a regular server, and only as fast as a regular
| CDN for static pages, they're beaten by the old server + CDN
| combination.
| jariel wrote:
| Can someone knowledgeable please explain where these workers are
| useful?
|
| Serverless components within a main infrastructure make sense -
| it's an easier way to deploy.
|
| But these 'edge' functions... what is the advantage of saving a
| few ms on a transaction?
|
| I understand that we may want standard content pushed out to the
| edge, but in what situation is it really worth all the added
| complexity and risk of pushing functions out to the edge, to
| save a few ms?
| zackbloom wrote:
| It's often not actually more complex - it's simpler. With
| Cloudflare Workers, for example, you don't think about regions,
| availability zones, provisioning resources, or cold starts. You
| just write code, and it can scale from one request per second
| to thousands without any thought or work on your part, partially
| because of how it's designed and partially because it's spread
| across so many locations and machines.
| bentaber wrote:
| One use case is validating auth tokens at the edge so you can
| edge cache API responses that require auth.
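| A rough sketch of that pattern. The token check is deliberately
| simplified to an expiry claim (a real Worker would verify the
| signature, e.g. with Web Crypto, before trusting any claims),
| and the commented handler wiring uses the Workers cache API;
| `decodeAndVerify` is a made-up helper name:

```javascript
// Hypothetical helper: is a decoded token payload still valid?
// Only the expiry claim is checked here; signature verification
// is assumed to have happened before this point.
function tokenStillValid(payload, nowSeconds) {
  return typeof payload.exp === "number" && payload.exp > nowSeconds;
}

// In a Worker, the fetch handler would gate the edge cache on it:
//
//   addEventListener("fetch", (event) => {
//     event.respondWith(handle(event));
//   });
//
//   async function handle(event) {
//     const payload = decodeAndVerify(event.request); // hypothetical
//     if (!payload || !tokenStillValid(payload, Date.now() / 1000)) {
//       return new Response("unauthorized", { status: 401 });
//     }
//     const cache = caches.default;
//     const cached = await cache.match(event.request.url);
//     if (cached) return cached;                // authenticated cache hit
//     const fresh = await fetch(event.request); // miss: go to origin
//     event.waitUntil(cache.put(event.request.url, fresh.clone()));
//     return fresh;
//   }

console.log(tokenStillValid({ exp: 2000000000 }, Date.now() / 1000));
```

| The point is that the auth decision happens at the edge, so
| cacheable responses never have to travel to the origin just to
| be gated.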
| alexey2020 wrote:
| > in what situation is it really worth all the added complexity
| of risk of pushing out functions to the edge
|
| If you are talking about the developer's point of view, there is
| no additional complexity. All of it is handled by the underlying
| platform.
|
| > what is the advantage of saving a few ms on a transaction?
|
| One example: if a transaction consists of a few separate
| sequential requests, the milliseconds add up and might affect
| the user experience. An app might also need to issue lots of
| requests on page load, and given that browsers limit parallel
| requests (typically 6 per domain), the advantage can be
| noticeable.
|
| Having said that, I tend to agree that many use cases are not
| sensitive to a few ms of advantage.
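| The sequential case is easy to put numbers on (the latencies
| below are invented for illustration):

```javascript
// Invented round-trip latencies: a distant origin region vs. a
// nearby edge PoP. With N sequential, dependent requests, the
// per-request difference multiplies rather than amortizing.
const originRtt = 120; // ms, e.g. Sydney -> a European region
const edgeRtt = 15;    // ms, nearest edge location
const sequentialRequests = 4;

const originTotal = sequentialRequests * originRtt; // 480 ms
const edgeTotal = sequentialRequests * edgeRtt;     // 60 ms
console.log(`origin: ${originTotal} ms, edge: ${edgeTotal} ms`);
```

| Four chained calls turn a ~100 ms difference per request into
| nearly half a second of perceptible delay.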
| [deleted]
___________________________________________________________________
(page generated 2021-03-25 23:01 UTC)