[HN Gopher] Workerd: Open-source Cloudflare workers runtime
___________________________________________________________________
Workerd: Open-source Cloudflare workers runtime
Author : kentonv
Score : 515 points
Date : 2022-09-27 13:01 UTC (9 hours ago)
(HTM) web link (blog.cloudflare.com)
(TXT) w3m dump (blog.cloudflare.com)
| thdxr wrote:
| One thing I've been curious about: since workerd is not a
| secure sandbox, what does Cloudflare use internally to make it
| secure?
|
| AWS uses/maintains Firecracker, which meets the criteria to be
| considered secure but has a ~300ms start. I've been confused
| about how Cloudflare can sandbox things safely without the same
| cost.
| sixo wrote:
| The post links to this: https://blog.cloudflare.com/mitigating-
| spectre-and-other-sec...
|
| so the answer is, a lot of things:
|
| * V8 isolates
|
| * private processes for debugging/inspection
|
| * "cordoned" V8 runtimes organized by level of trust (guards
| against V8 zero-days, for one)
|
| * Linux namespacing as an outer sandbox layer, blocking fs and
| network access
|
| * explicitly allowed capability-based security
|
| ...etc.
| 0xbkt wrote:
| I am interested in knowing more about possible deployment options
| that workerd is going to offer. Any plans to cluster workerd
| nodes up to propagate code changes through a central orchestrator
| API? Or can we expect to have something like Krustlet on K8s but
| for workerd instead of wasmtime?
| kentonv wrote:
| Good question. I don't think we have a clear answer yet.
| Ideally I think we want to be agnostic to these things as much
| as possible, so as to fit into whatever orchestration systems
| people are already using. At the same time, I personally have
| very little experience with k8s so it's hard for me to say what
| will work well there.
|
| Abstractly speaking, I expect we need to build some workflow
| tooling around the nanoservices concept, since somehow all your
| teams' nanoservices eventually need to land on one machine
| where one workerd instance loads them all. But should we do
| that by updating whole container images, or distributing
| nanoservice updates to existing containers, or...?
|
| This would be a great discussion topic for the "discussions"
| tab in GitHub. :)
| gfodor wrote:
| This is great. The Cloudflare model for Workers really deserves
| to become a de facto standard that can be pulled off the shelf
| for open source serverless functions. Really excited they're
| opening this up this way. It seems like a great positive sum
| move.
| gradys wrote:
| What do you (or others) see as its advantages relative to other
| similar offerings like those of AWS, GCP, Deno, Netlify, etc.?
| gfodor wrote:
| The main advantage to me is the dedication to using
| standardized browser APIs. They have done a really good job
| of applying the Service Worker API to the problem of server
| side functions, and having the whole system be "MDN
| compatible" just makes everything more straightforward and
| leads to minimal surprises.
| dirheist wrote:
| Cloudflare offers web-standard JavaScript APIs out of the
| box and is very fast as is.
| ignoramous wrote:
| > _The Cloudflare model for Workers really deserves to become a
| de facto standard_
|
| Our small tech shop has been using Workers since Jan 2020.
| Leaving behind the monstrous and intimidating world of AWS, I
| simply couldn't contain my excitement, even back then. In the
| words of Eminem, _Ease over these beats and be so breezy;
| Jesus, how can shit be so easy?!_
|
| Today, the entire platform around Workers is nothing short of
| astonishing. With Cloudflare's clout, Kenton and his team have
| utterly transformed serverless computing, staying true to the
| lessons learnt from running Sandstorm [0] way back when [1]!
|
| It is almost comical (or diabolical, depending) that none of
| the Big3 have bothered competing with them. It is only the
| upstarts like Bun, Deno, SecondState, Kalix (Lightbend) and
| others that are keeping Cloudflare honest.
|
| [0] https://news.ycombinator.com/item?id=20980278
|
| [1] https://news.ycombinator.com/item?id=15364978
| meah0 wrote:
| I'm glad they opened this up, but I really hope "javascript
| first, every runtime should look like v8" doesn't become the
| standard here.
| vlovich123 wrote:
| It's JavaScript-first, but the entire V8 dependency is hidden
| behind an abstraction layer. It could conceivably be ported to
| other runtimes like SpiderMonkey or JavaScriptCore with some
| work.
| gfodor wrote:
| I didn't mean to propose there should be a single standard.
| But for Javascript oriented FaaS, you really will have a hard
| time finding a better paradigm than the one they've settled
| on for Workers imo.
| cheriot wrote:
| Agreed. Hopefully the WASM ecosystem continues developing and
| takes over.
| madeofpalk wrote:
| What part of the Cloudflare runtime looks v8 specific, rather
| than just (web) javascript specific?
| Existenceblinks wrote:
| > workerd is not just another way to run JavaScript and Wasm. Our
| runtime is uniquely designed in a number of ways.
|
| How many non-JavaScript folks use this? What's better than using
| fly.io?
| drexlspivey wrote:
| I don't know any JavaScript, but I use Workers for Discord
| bots written in Rust (and compiled to Wasm).
| thegagne wrote:
| Hey congrats on the release! This is a big step and really helps
| build some trust in the platform.
| dontlaugh wrote:
| Looks like a significant technical achievement.
|
| However, I don't entirely get it. If every machine runs every
| service with library-like overhead for calls across them, how is
| that different from a monolith that links in libraries? It just
| seems more complex for no benefit.
| kentonv wrote:
| Because each service runs in its own isolate, it can be built
| and deployed independently of the others. So if some other
| team's code is broken at HEAD, that doesn't stop your team from
| deploying. Also you don't have to agree with other teams about
| exactly what versions of dependencies you use, you can monkey-
| patch APIs without breaking anyone else, etc. So, all the usual
| development workflow benefits of microservices, without the
| network overhead.
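|
| To make that concrete, here's a minimal sketch (the binding and
| service names are hypothetical) of one Worker calling another
| through a service binding -- it looks like an HTTP call, but it
| resolves to a direct in-process call into the other isolate:
|
|     // Hypothetical "gateway" Worker with a service binding
|     // named AUTH pointing at another team's Worker.
|     export default {
|       async fetch(request, env) {
|         // Looks like a network request, but never leaves the
|         // process: it's dispatched straight into AUTH's isolate.
|         const auth = await env.AUTH.fetch("https://auth/check", {
|           headers: { cookie: request.headers.get("cookie") ?? "" },
|         });
|         if (!auth.ok) {
|           return new Response("unauthorized", { status: 403 });
|         }
|         return new Response("hello from the gateway worker");
|       },
|     };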
| dontlaugh wrote:
| Sure, I see.
|
| I guess it's not a problem I've encountered so far. Perhaps I
| just haven't worked in a company big enough? Only on the
| scale of hundreds so far.
|
| Branching, agreeing on interfaces and tests have been
| sufficient to allow linking, in my experience.
| atonse wrote:
| We are curious about this set of solutions for allowing
| clients to run their own code on our system.
|
| For example, a client wants to write a workflow but has a
| custom step that integrates with their system only. They
| can write the code and run it as part of our workflow without
| us worrying about them writing something nefarious and
| breaking out of the sandbox.
|
| That said, the workerd github says that this can't (yet?)
| be relied on to be a security sandbox.
| pclmulqdq wrote:
| LD_PRELOAD is calling. There are advantages and disadvantages
| to this approach: shipping one single binary means that you
| don't have to worry about testing compatibility between
| versions of your microservices/libraries. We went through
| this with dynamic/static linking a long time ago. Static
| linking seems to have generally won.
| [deleted]
| jmull wrote:
| An isolate is a runtime thing, not a deployment thing.
|
| A regular javascript module can be deployed independently.
| And have its own version of dependencies. I'm just not seeing
| what this really adds over existing widely available, well
| understood options.
|
| I guess protection from other teams' uncontrolled monkey-
| patching is something (I don't think there's any legitimate
| reason to monkey-patch something yourself in normal production
| -- why wouldn't you patch or replace the
| problematic thing?), but there might be other ways to deal
| with that short of a new runtime environment.
|
| I'm a little worried about truly independent updating...
| imagine starting a request with one version of the auth module
| but finishing it with another. Not sure people really want to
| have to account for that. (Maybe that's accounted for, so
| maybe an unwarranted worry.)
| pier25 wrote:
| This is awesome and probably should have been a priority from day
| one. I'm sure it will boost the Workers ecosystem.
|
| Anyone else think the name is kinda weird? After reading the
| article it makes sense that it's "worker d" but I first read it
| as "work erd" or a single word like "worked" but with a rogue R
| in the middle.
| kentonv wrote:
| This comes from unix tradition, where you have servers like
| httpd, smtpd, sshd, systemd, etc.
|
| Honestly, it's a boring name. We had a big debate over naming
| with a lot of interesting ideas, but ultimately every idea was
| hated by someone. No one loves workerd, but no one hates it, so
| here we are! Oh well.
| pier25 wrote:
| Thanks that makes more sense!
|
| Obviously I'm not a Unix guy and looking at the downvotes on
| my previous comment, people seem bothered by that :)
| arez wrote:
| What is the endgame here? Do I run all my backend stuff at some
| point as workers that are closer to the customer, or do I mostly
| use it for server-side rendering?
| asim wrote:
| Kudos to Kenton who keeps delivering great stuff on Workers.
| kentonv wrote:
| Thank you! But for the record there are at least 56 people who
| contributed to this code and they all deserve credit. :)
|
| https://github.com/cloudflare/workerd/commit/a265d5c22bf2a6b...
| BonoboIO wrote:
| Workers are a really nice, fast and awesome way to bend legacy
| APIs to new interfaces.
|
| I fear that every worker could end up as spaghetti code, and
| nobody will remember why they used this or that, or where it
| was saved.
|
| You could follow the path of the request, but I can see some
| developers going crazy with workers.
|
| I use workers too, I love them.
| rubenfiszel wrote:
| This is great, thank you for open-sourcing. One limitation I
| see with this model, which is clearly stated in the post, is
| that it is JavaScript/Wasm and HTTP servers only. The deployable
| workers model is a great one, and I had hoped to provide an
| adapter for the open-source windmill [1] project by transforming
| the normal scripts that are our units of execution into HTTP
| servers runnable in workerd.
|
| We support TypeScript (Deno), Go and Python. Short of those
| languages becoming 100% transpilable to Wasm, we will have to
| fall back to using Firecracker, or keep our current system,
| implemented from scratch using nsjail for sandboxing and a fork
| for each process.
|
| [1]: https://github.com/windmill-labs/windmill
| kentonv wrote:
| Regarding the HTTP limitation, we are working towards adding
| raw TCP and other protocols soon. (I know we've been saying
| this for a while... too much to do, too little time, and this
| one got reprioritized down the list a bit, but is getting close
| to the top again.)
|
| But as far as JavaScript/Wasm, that's a pretty inherent design
| feature to the Workers architecture. Cloudflare will probably
| provide containers eventually but that'll likely present itself
| as more of an add-on than a core part of the platform.
|
| The problem is, containers are just too heavy to support the
| nanoservices model as described in the blog post. JavaScript
| and Wasm are really the only viable options we have for
| something lighter-weight that meets our isolation needs. Other
| language VMs are generally neither designed nor battle-tested
| for this sort of fine-grained isolation use case.
| the_mitsuhiko wrote:
| That's probably the first codebase I have seen in years where
| someone picked .c++ as file extension.
|
| (In my opinion objectively the worst possible choice because +
| requires URL encoding ;))
| andrewnyr wrote:
| people on this website will complain about the stupidest shit
| j16sdiz wrote:
| C++ can be .C, .cc, .cxx, or .c++
| Maksadbek wrote:
| How about .cpp ?
| ceejayoz wrote:
| Oh, there's worse; you can use emoji in filenames.
| [deleted]
| staticassertion wrote:
| Anything other than null and `\\` are valid on linux iirc
| yencabulator wrote:
| You mean slash, not backslash.
| kentonv wrote:
| Haha, I started doing this with Cap'n Proto 10 years ago. :)
|
| I periodically get crap for it but so far it hasn't really
| broken anything.
|
| The biggest annoyance is exactly what you point out -- the
| GitHub URLs look ugly because they %-encode the plus signs.
| Though it turns out you can manually un-encode them and it
| works fine.
| Thaxll wrote:
| Why would you use .c++ instead of the standard .cpp?
| kentonv wrote:
| I got tired of arguing about .cpp vs. .cc vs. .cxx vs. .C.
|
| So I used the actual name of the language, and it worked
| fine. :)
| breakingcups wrote:
| https://xkcd.com/927/
| teddyh wrote:
| It breaks Make, which does not have a default rule for
| "*.c++" files, but does have rules both for "*.cpp" files and
| the older naming custom of "*.C" files.
| jchw wrote:
| In modern GNU make it is generally advised to disable
| default rules anyways. I know some people try to do
| portable Make, but I am pretty over that idea myself, so if
| I do use Make, I'm using GNU extensions and GNUmakefile.
| teddyh wrote:
| GNU Make does not have a rule for "*.c++" files, either.
|
| Do you have a reference to the advice to disable default
| rules? It sounds counterproductive to me.
| jchw wrote:
| There's lots of different opinions on how to use Make,
| but here's one. https://tech.davis-hansson.com/p/make/
| teddyh wrote:
| Not very well received here on HN a month ago:
| https://news.ycombinator.com/item?id=32541016
| misiek08 wrote:
| "We will open source it in near future" Till now this was the
| biggest lie from many companies. I love Cloudflare and this is
| why!
|
| Using Cap'n Proto instead of some bloat <3
| merb wrote:
| It's sad that they didn't go the same way as Fastly, which
| actually uses a Wasm runtime even for JavaScript by compiling
| SpiderMonkey to Wasm: https://github.com/fastly/js-compute-
| runtime -- I think that's way more clever than having a mixed
| runtime. And they use Wizer to make it more performant.
| azakai wrote:
| The Fastly approach has some advantages for wasm, but
| Cloudflare have mentioned that the vast majority of their users
| want JavaScript and TypeScript. Cloudflare's approach is more
| optimized for that: running a JS VM directly will take a lot
| less overhead than one compiled to wasm atm (since JS VMs
| depend heavily on JITing, and compiled JS VMs will have a much
| larger memory footprint due to the bundled compiled runtime).
|
| It's possible that Cloudflare and Fastly users are different
| enough that different approaches make sense.
| vlovich123 wrote:
| It's a clever technique, no doubt. In practice, for JS-centric
| workloads (which we've found are the bulk of demand), using V8
| is still much, much more efficient, has faster performance than
| Wizer's approach, and is easier to scale out (the scale that
| Workers runs at can boggle the mind).
|
| We are trying to figure out how to move more of the API parts
| of the runtime to something that isn't C++ without significantly
| impacting memory / CPU negatively (safety, correctness, speed
| of development, etc.). It's an interesting problem to try to
| solve at Cloudflare scale.
| merb wrote:
| Thanks for the info. Well, yeah, Wizer still does not make it
| as fast as C++ V8.
| otar wrote:
| Workers are so damn good and easy...
|
| I am originally from the PHP/Laravel world, recently had to
| quickly develop a very simple (5 endpoints) REST API and decided
| to give CF workers & KV storage a try. In an hour the API was
| running in production...
|
| CF is really, really good at many things.
| mirekrusin wrote:
| Can you dynamically push/deploy things to the running daemon,
| or do you always have to explicitly specify it up front, before
| booting, in the Cap'n Proto config?
| kentonv wrote:
| At present you have to reload the process after changes, but
| the underlying codebase obviously supports dynamic loading so
| this may change. Today is about getting the code out there, the
| next step is figuring out exactly what workflows people want to
| use this with. Please feel free to post under the discussions
| tab of the GitHub repo if you have ideas!
| mwcampbell wrote:
| I've been watching Workers with interest basically since the
| beginning. Now they just need to bring WebSockets support out of
| beta. Or is the long-term plan to replace raw WebSockets servers
| with something higher-level like the new Cloudflare Queues?
| kentonv wrote:
| Hmm, I believe WebSocket support came out of beta at the same
| time as Durable Objects, about a year ago. It's entirely
| possible we forgot to update a doc, though.
|
| We do have some plans around making WebSockets more efficient
| by allowing the application to shut down while keeping idle
| WebSockets open, and start back up on-demand when a message
| arrives.
| mwcampbell wrote:
| See the top of this doc:
| https://developers.cloudflare.com/workers/learning/using-
| web...
| kentonv wrote:
| Doh. Yeah that's way out-of-date, I'll get it fixed.
|
| FWIW contrary to the note on that page, WebSocket pricing
| is documented along with Durable Objects here:
|
| https://developers.cloudflare.com/workers/platform/pricing/
| #...
| DenseComet wrote:
| > A WebSocket being connected to the Durable Object
| counts as the Object being active.
|
| > If 100 Durable Objects each had 100 WebSocket
| connections established to each of them which sent
| approximately one message a minute for a month, the
| estimated cost in a month would be, if the messages
| overlapped so that the Objects were actually active for
| half the month:
|
| I remember looking at that page before and finding these
| two lines confusing. Based on the first line, it seems
| that the objects would be considered active for the
| entire month, whereas the second one raises the
| possibility that a websocket can be connected to a
| durable object without it being considered active.
| yencabulator wrote:
| Yeah that seems to be in conflict with this earlier
| passage:
|
| > A WebSocket being connected to the Durable Object
| counts as the Object being active.
| mwcampbell wrote:
| Does the WebSockets protocol now have some kind of built-in
| keepalive/ping functionality? Something like that is
| practically necessary for any long-running idle connection on
| top of TCP. Without built-in keepalive, the application would
| have to stay running just to handle pings, right?
| yencabulator wrote:
| WebSockets has a protocol-level ping/pong interaction. If
| one wrote a WebSocket proxy, one could respond to pings on
| that level, without proxying them to a backend.
| kentonv wrote:
| While the WebSocket protocol indeed has built-in
| ping/pong, it's somewhat difficult to rely upon. Web
| browsers do not provide any API to send pings from the
| client side. As a result a lot of applications end up
| implementing application-level pings.
|
| In order to support efficient WebSocket idling, we
| probably need to have some ability to respond to
| application-level pings without actually executing
| JavaScript. TBH I'm not exactly sure how this will work
| yet.
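|
| For what it's worth, a minimal sketch of what such an
| application-level ping looks like in a Worker today (using the
| WebSocketPair API; the "ping"/"pong" message convention is just
| an assumption) -- this is the handler we'd rather not wake up
| just to answer a keepalive:
|
|     addEventListener("fetch", (event) => {
|       const { 0: client, 1: server } = new WebSocketPair();
|       server.accept();
|       server.addEventListener("message", (msg) => {
|         // Browsers can't send protocol-level pings, so the
|         // client sends a "ping" text frame instead.
|         if (msg.data === "ping") {
|           server.send("pong");
|           return;
|         }
|         // ... real message handling here ...
|       });
|       event.respondWith(
|         new Response(null, { status: 101, webSocket: client })
|       );
|     });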
| yencabulator wrote:
| You could make an incoming WebSocket message behave much
| like an incoming HTTP request: The DO worker starts
| running, reads the message, runs any `event.waitUntil`
| work to completion, and goes to sleep. Let your runtime
| unload DO workers while holding on to their WebSocket
| connections.
|
| That would mean maintaining "chatroom" state in memory
| wouldn't be enough, the DO worker would have to always
| write it to the durable store, to be safe to unload.
|
| You'd have some sort of a "WebSocket connection ID" that
| could be used as key for storing state, and when the next
| message comes in, the worker would use that key to fetch
| state.
|
| EDIT: And the ability to "rejuvenate" other WebSocket
| connections from their IDs, so you can e.g. resend a chat
| room message to all subscribers.
| kettlecorn wrote:
| > We do have some plans around making WebSockets more
| efficient by allowing the application to shut down while
| keeping idle WebSockets open, and start back up on-demand
| when a message arrives.
|
| Please make noise when that rolls out! I was literally
| wishing for that yesterday.
|
| I want to use Cloudflare Workers to quickly connect p2p
| WebRTC clients, but when a new peer joins there need to be
| open WebSockets to send connection-setup messages to each
| peer in the room. But with how that works / is billed today
| it really doesn't make sense to have a bunch of idling
| WebSockets.
|
| An alternative is having each client poll the worker
| continuously to check for new messages, but that introduces a
| bunch of latency during the WebRTC connection negotiation
| process.
| jedschmidt wrote:
| Instead of adding a non-standard API to opt-out of existing
| networking functionality, it would be great to see a lower-
| level networking primitive to unlock these kinds of use
| cases. I wrote more (with a proof-of-concept) here:
| https://readable.writable.stream
| jkarneges wrote:
| Fastly has something like this:
| https://docs.fanout.io/docs/http-streaming
| syrusakbary wrote:
| This is great. They mentioned a few months ago their intention
| to open-source it [1] and now it's finally a reality, so props
| to Kenton and the Workers team on achieving it!
|
| I've dug a bit into the OSS repo [2], and here are some of my
| findings:
|
| 1. They are using a forked version of V8 with a very small/clean
| set of changes (the diff with the changes is really minimal. Not
| sure if this makes sense, but it might be interesting to see if
| the changes could be added upstream)
|
| 2. It's coded mainly in C++, with Bazel for the builds and a mix
| of Cap'n Proto and Protocol Buffers for schemas
|
| 3. They worked deeply on the developer experience, so as a
| developer you can start using `workerd` with pre-built binaries
| via NPM/NPX (`npx workerd`). There are still a few rough edges,
| but things are looking promising [3].
|
| 4. I couldn't find the implementation of the KV store anywhere.
| Is it possible to use it? How will it work?
|
| Open-sourcing the engine, combined with their recent
| announcement of a $1.25B investment fund for startups built on
| Workers [4], will I think propel the Wasm ecosystem a lot...
| which I'm quite excited about!
|
| [1] https://twitter.com/KentonVarda/status/1523666343412654081
|
| [2] https://github.com/cloudflare/workerd
|
| [3] https://www.npmjs.com/package/workerd
|
| [4] https://blog.cloudflare.com/workers-launchpad/
| chrismorgan wrote:
| (Meta: please don't use indentation/preformatted text for
| lists; just make each list item a paragraph of its own. Even
| now that they wrap, it's still very disruptive, harder to read,
| and feels more like a citation of some form; in this instance
| it took me a moment to decide this was _your_ writing rather
| than something you were quoting.)
| kentonv wrote:
| Thanks! Some notes on your notes:
|
| 1. We float a couple patches on V8 but we keep it very up-to-
| date. Generally by the time a version of V8 hits Chrome Stable,
| we're already using it. I would indeed like to upstream the
| patches, which should be easier now that everything is open.
|
| 2. FWIW the one protobuf file in the repo is from Jaeger, a
| third-party tracing framework. We actually aim to factor that
| out into something more generalized, which may eliminate the
| protobuf dependency. Workerd is very much based on Cap'n Proto
| for most things (and the Workers team does most of the
| maintenance of Cap'n Proto itself these days).
|
| 3. Yep, one of our main goals was to bake this into our
| `wrangler` CLI tooling to make local testing easier. I'd also
| like to ship distro packages once things are a bit more stable.
|
| 4. The code includes the ability to configure KV bindings, so
| you can run code based on Workers KV. But, the binding is a
| thin API wrapper over an HTTP GET/PUT protocol. You can in
| principle put whatever you want behind that, and we plan to
| provide an implementation that stores to disk soon. The actual
| Workers KV implementation wouldn't really be that interesting
| to release as it mostly just pushes data over to some commodity
| storage back-ends in a way that's specific to our network.
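|
| To make the shape of that wrapper concrete, here's a minimal
| sketch of what Worker code sees through a KV binding (the
| binding name is hypothetical) -- just async get/put calls,
| which workerd turns into HTTP requests to whatever backend you
| configure behind the binding:
|
|     export default {
|       async fetch(request, env) {
|         // env.MY_KV is a KV binding; in workerd it's a thin
|         // wrapper that issues HTTP GET/PUT to the backend.
|         const prev = await env.MY_KV.get("last-visitor");
|         await env.MY_KV.put(
|           "last-visitor",
|           request.headers.get("cf-connecting-ip") ?? "unknown"
|         );
|         return new Response(`previous visitor: ${prev ?? "none"}`);
|       },
|     };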
| ignoramous wrote:
| > _...(and the Workers team does most of the maintenance of
| Cap'n Proto itself these days)._
|
| A team on _workerd_? I'm sure there is a lot of work
| involved in running the massive Workers platform, but
| curiously enough, the _workerd_ git repo itself has
| relatively few commits from other engs... just _where_ is the
| team? ;)
| kentonv wrote:
| I had to squash history on the first commit for a bunch of
| reasons, but there are 55 co-authors listed. :)
|
| https://github.com/cloudflare/workerd/commit/a265d5c22bf2a6
| b...
| kentonv wrote:
| Also on point 4: The next version of Miniflare -- our local
| testing environment -- is based on workerd and will include
| implementations of those HTTP APIs behind KV and R2 for
| testing purposes, which the Wrangler CLI tool will hook up
| automatically. (In fact maybe it already does, that team has
| been working so fast lately that I haven't been able to keep
| up...)
| gregbrimble wrote:
| `npx wrangler dev --experimental-local`!
| sandGorgon wrote:
| Sorry - Cloudflare is running this in production? This is Wasm?
|
| This basically becomes the first large-scale deployment of Wasm
| in the world, right?
|
| Can someone talk about the engineering choices here? Why this?
| Why not Rust/Java/C++... any of that? A people perspective.
| 0x457 wrote:
| Let me bring you back to 2017:
| https://blog.cloudflare.com/introducing-cloudflare-workers/
|
| I'm not sure what to explain here? It's a runtime based on V8
| to run JS and WASM code. Used by Cloudflare to run on the edge.
| Why not just Rust/Java/C++? Well, good luck running those in a
| multi-tenant environment.
| [deleted]
| styren wrote:
| What exactly are the obstacles prohibiting multiple tenants
| from running rust/java/c++ on the edge?
| kentonv wrote:
| You have to use heavier isolation mechanisms with those
| languages -- processes, containers, virtual machines. These
| are too heavy to support thousands of tenants per machine.
| Edge locations are typically smaller clusters, and you
| typically want all your customers to present in every point
| of presence, so you need to pack them tighter. V8 isolates
| are light enough for this to work and be cost-effective.
| 0x457 wrote:
| Many reasons.
|
| First, all 3 allow far too much: uncontrolled file system
| access, uncontrolled network access.
|
| Second, customers' code cannot be trusted. Running unknown
| code on your server leaves you with a huge attack surface.
|
| Third, isolation to deal with first two is expensive:
| resources and cold-start.
|
| Third+, the point of running code on the edge is for it to be
| quick. No one will use it if your cold-start time is higher
| than the network round-trip, and you're not going to get paid
| a lot if you have to keep customer code running even when it's
| not in use.
|
| Fourth, running WASM allows unified runtime without any
| dependencies:
|
| - Java: which JVM do you target? What language version do
| you target? How often do you upgrade? Do you run one JVM per
| tenant? How do you deal with cold-start times?
|
| - Rust/C++: which architecture do your servers use? What if
| you want to switch to ARM? What if it's mixed? Which version
| of ARM is it? Can I use AVX-512?
|
| - WASM: WASM is WASM. You don't care if customer compiled
| rust or C++ or Go to get that wasm.
| jtwebman wrote:
| The event nature seems like a light version of Erlang or Elixir
| without the power of those. Erlang and Elixir can run millions
| of lightweight processes, though, vs. thousands here. Still so
| many questions, though, like: what is deployment like? Build
| something using just workers so we can see the power of it.
| kentonv wrote:
| You might be interested in Durable Objects, which is
| essentially an actor model within Workers.
|
| https://blog.cloudflare.com/introducing-workers-durable-obje...
|
| Durable Objects aren't exactly the same as Erlang actors (which
| is why we don't usually call them "actors" -- we found it led
| to some confusion), but at some level it's a similar concept.
|
| You can easily have millions of Durable Objects in a Workers-
| based application.
|
| Currently workerd has partial support for Durable Objects --
| they work but data is stored in-memory only (so... not actually
| "durable"). The storage we use in Cloudflare Workers couldn't
| easily be decoupled from our network, but we'll be adding an
| alternative storage implementation to workerd soon.
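|
| A minimal sketch of the shape (class and binding names are
| hypothetical): each named object is a single-threaded instance
| addressed by ID, which is what gives it the actor-like feel:
|
|     // Durable Object class, exported from the Worker module.
|     export class Room {
|       constructor(state, env) {
|         this.state = state; // per-object transactional storage
|       }
|       async fetch(request) {
|         // All requests for a given room ID are serialized onto
|         // this one instance, wherever it lives.
|         let count = (await this.state.storage.get("count")) ?? 0;
|         await this.state.storage.put("count", ++count);
|         return new Response(`messages so far: ${count}`);
|       }
|     }
|
|     // Ordinary Worker that routes each room name to its object.
|     export default {
|       async fetch(request, env) {
|         const name =
|           new URL(request.url).searchParams.get("room") ?? "lobby";
|         const stub = env.ROOMS.get(env.ROOMS.idFromName(name));
|         return stub.fetch(request);
|       },
|     };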
| munhitsu wrote:
| superdug wrote:
| What does this mean for miniflare?
|
| Currently there is no way to test for Cloudflare-esque failures.
| For instance, what does my worker do when the KV lookup fails or
| another Cloudflare-infrastructure-related promise isn't
| fulfilled? I think actually running the true worker runtime in a
| testing environment is key.
| pbowyer wrote:
| https://news.ycombinator.com/item?id=32997774
| rishsriv wrote:
| This is awesome! Had been meaning to open-source parts of my
| startup [1] that is built almost entirely on Workers right now.
| Glad we can do that now without making massive changes to the
| code-base.
|
| Amazed by what OP and the Workers team have done over the years.
| Took a while for us to get used to the Workers paradigm. But once
| we did, feature velocity has been great.
|
| Last wish list item: a Postgres service on Workers (D2?) becoming
| available in the not too distant future
|
| [1] https://datanarratives.com/
| icelancer wrote:
| I went to use your product and connect Google Analytics to give
| it a shot, but you haven't verified the app w/ Google, so you
| get a big warning. I closed out of the browser. Might want to
| look into that.
| rishsriv wrote:
| Yikes, yes. Thank you for the heads up!
|
| We've mostly been using the GA plugin for use with beta
| users. Had completely forgotten about the need to verify the
| app with Google. Will do that this week.
| ignoramous wrote:
| > _a Postgres service on Workers (D2?)_
|
| Not sure where Cloudflare's with _relational database
| connectors_ [0], but the next best thing is MySQL via Workers:
| https://news.ycombinator.com/item?id=32511577
|
| [0] https://archive.is/Q18Gs
| rishsriv wrote:
| They do have relational DB connectors [1], which have been
| working great for us. But maintaining a centralised DB is
| still a bit of a pain and latency can be high, depending on
| where users are accessing them from. Having a managed,
| distributed SQL service will be easier to manage and will
| likely have much lower latency. Their SQLite DB, D1 [2] looks
| interesting. But it would be awesome to have Postgres' more
| complete feature set as a managed, low-latency service.
|
| [1] https://blog.cloudflare.com/relational-database-connectors/
|
| [2] https://blog.cloudflare.com/whats-new-with-d1/
| aiansiti wrote:
| Is this just to cover their tracks so startups don't think
| they'll get locked in?
| resonious wrote:
| Even if the answer is yes, I still think it's a good thing.
| bastawhiz wrote:
| That idea makes the implicit assertion that the only thing
| valuable about the code is that someone else might get it
| running in production, and not that they actually published
| their production runtime for the world to inspect, debug, and
| offer improvements to.
| williamstein wrote:
| It is also to provide a more accurate testing environment when
| doing development before deploying.
| superdug wrote:
| Absolutely key point here. Miniflare doesn't let you check for
| conditions in which Cloudflare might not fulfill a promise;
| those can't actually be tested for, as the infrastructure is
| currently just represented as an artifact in Miniflare.
| pier25 wrote:
| Maybe, but I think it's probably more related to creating a
| larger Workers ecosystem.
|
| Right now, you can't use most NPM modules on CF Workers since
| this is a custom runtime which is also pretty barebones
| (probably by design).
|
| Deno in comparison offers a rich standard library which solves
| plenty, but they are also working on a compat layer for Node
| modules. And the Deno ecosystem is also growing, probably
| because you can run it anywhere.
| thejosh wrote:
| Lol, considering how many just use AWS or Firestore or a
| myriad of other vendor lock-in solutions for everything, sadly
| no one really cares about lock-in anymore.
| fsociety wrote:
| No, the author has created and maintained many open source
| projects. I imagine they advocated heavily for this.
| kentonv wrote:
| Thanks. Indeed, I always expected to release this code
| eventually and I'm very happy that it's finally happened.
| Also very happy we could do it under an actual open source
| license.
|
| We've always designed Workers around open standard APIs, so
| even without this, Workers-based code should not be too hard
| to port to competing runtimes. But, this makes it much easier
| to move your code anywhere, with bug-for-bug compatibility.
|
| You can't build a successful developer ecosystem that has
| only one provider. We know that. We'd rather people choose
| Cloudflare for our ability to instantly and effortlessly
| deploy your application to hundreds of points of presence
| around the world, at lower cost than anyone else. :)
| gfodor wrote:
| Thank you for releasing this! Had a question: if I have
| worker code that uses other Cloudflare storage APIs like KV
| or R2, what would you recommend as far as shimming those
| calls so this code could run via workerd (but, for example,
| just use basic process-local in-memory storage for now.) Is
| that doable? For my use case durability isn't that
| important and I'd like to offer people the ability to self
| host my code via workerd.
| kentonv wrote:
| workerd includes implementations of the KV and R2 binding
| APIs, which convert all your calls into HTTP requests
| under the hood. You can point that at another Worker or
| to a separate HTTP service to implement the storage
| behind them. We plan to provide some example
| implementations in the future that store to disk or a
| database.
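|
| As a very rough sketch of the idea (the exact request shape
| the binding emits is glossed over here -- assume the key is
| simply the URL path), the backing service can be as small as a
| Worker keeping values in process-local memory:
|
|     const store = new Map(); // values vanish on restart
|
|     export default {
|       async fetch(request) {
|         const key = decodeURIComponent(
|           new URL(request.url).pathname.slice(1)
|         );
|         if (request.method === "PUT") {
|           store.set(key, await request.text());
|           return new Response(null, { status: 204 });
|         }
|         if (request.method === "GET") {
|           return store.has(key)
|             ? new Response(store.get(key))
|             : new Response("not found", { status: 404 });
|         }
|         return new Response("method not allowed", { status: 405 });
|       },
|     };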
| gfodor wrote:
| amazing, thank you!
| lijogdfljk wrote:
| re: self hosting,
|
| I'm curious about the use cases for self hosting. I.e., I have
| a heavy interest in self hosting, and I also find workerd
| very interesting, though I'm struggling to think of
| reasons why I'd build my apps around workerd instead of
| just a process.
|
| If I were dedicated to WASM, perhaps workerd might be a
| better solution than wasmer; unsure.
| gfodor wrote:
| The use case I'm referring to is where someone (in this
| case, me) publishes code intended to be run on Cloudflare
| Workers _or_ self hosted with workerd.
|
| Reasons you'd write first party code targeting workerd
| would be a different question.
| ignoramous wrote:
| Thanks for _workerd_! I saw a tweet from you mentioning that
| _workerd_ could be used as a reverse proxy (a la nginx),
| but the GitHub readme is scant on the details and ends with
| _(TODO: Fully explain how to get systemd to recognize these
| files and start the service.)_.
|
| ---
|
| _Cloudflare... uses... workerd, but adds many additional
| layers of security on top to harden against such bugs.
| However, these measures are closely tied to our particular
| environment... and cannot be packaged up in a reusable
| way._
|
| _Durable Objects are currently supported only in a mode
| that uses in-memory storage -- i.e., not actually
| "durable"... Cloudflare's internal implementation of this
| is heavily tied to the specifics of Cloudflare's network,
| so a new implementation needs to be developed for public
| consumption._
|
| ...the runtime parts I was most curious about couldn't be
| open-sourced :(
| kentonv wrote:
| The config format is defined and documented here:
|
| https://github.com/cloudflare/workerd/blob/main/src/worke
| rd/...
|
| We definitely need to provide more higher-level
| documentation around this, which we're working on, but if
| you're patient enough to read the schemas then everything
| is there. :)
|
| To act as a proxy, you would define an inbound socket
| that points to a service of type `ExternalServer`. There
| are config features that let you specify that you want to
| use proxy protocol (where the HTTP request line is a full
| URL) vs. host protocol (where it's just a path, and
| there's a separate Host header), corresponding to forward
| and reverse proxies.
|
| Next you'll probably want to add some Worker logic
| between the inbound and outbound. For this you'd define a
| second service, of type `Worker`, which defines a binding
| pointing to the `ExternalServer` service. The Worker can
| thus send requests to the ExternalServer. Then you have
| your incoming socket deliver requests to the Worker.
| Voila, Workers-based programmable proxy.
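|
| As a sketch of just the Worker-logic piece (the binding name
| is hypothetical), the proxy Worker is an ordinary fetch handler
| that tweaks the request and forwards it through its binding to
| the ExternalServer service:
|
|     export default {
|       async fetch(request, env) {
|         // env.upstream is the binding pointing at the
|         // ExternalServer service defined in the config.
|         const proxied = new Request(request);
|         proxied.headers.set("x-proxied-by", "workerd");
|         return env.upstream.fetch(proxied);
|       },
|     };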
|
| Again, I know we definitely need to write up a lot more
| high-level documentation and tutorials on all this. That
| will come soon...
___________________________________________________________________
(page generated 2022-09-27 23:00 UTC)