[HN Gopher] Show HN: OpenWorkers - Self-hosted Cloudflare worker...
       ___________________________________________________________________
        
       Show HN: OpenWorkers - Self-hosted Cloudflare workers in Rust
        
       I've been working on this for some time now, starting with vm2,
       then deno-core for 2 years, and recently rewrote it on rusty_v8
       with Claude's help.  OpenWorkers lets you run untrusted JS in V8
       isolates on your own infrastructure. Same DX as Cloudflare Workers,
       no vendor lock-in.  What works today: fetch, KV, Postgres bindings,
       S3/R2, cron scheduling, crypto.subtle.  Self-hosting is a single
       docker-compose file + Postgres.  Would love feedback on the
       architecture and what feature you'd want next.
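
For readers who haven't used the Workers model: a worker is a plain JS module whose default export provides a fetch handler mapping Request to Response. The sketch below is a generic Workers-style handler, not code from OpenWorkers itself; in a real deploy the `worker` object would be the module's default export, and `env` would carry bindings such as `env.KV`.

```javascript
// Generic Workers-style module: one async fetch handler that maps a
// Request to a Response. `env` is where the runtime injects bindings.
const worker = {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname === "/hello") {
      // Respond with JSON, as a typical API worker would.
      return new Response(JSON.stringify({ ok: true }), {
        headers: { "content-type": "application/json" },
      });
    }
    return new Response("not found", { status: 404 });
  },
};
```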
        
       Author : max_lt
       Score  : 334 points
       Date   : 2026-01-01 15:09 UTC (7 hours ago)
        
 (HTM) web link (openworkers.com)
 (TXT) w3m dump (openworkers.com)
        
       | indigodaddy wrote:
       | Perhaps it might be helpful to some to also lay out the things
       | that don't work today (or eg roadmap of what's being worked on
       | that doesn't currently work?). Anyway, looks very cool!
        
         | max_lt wrote:
         | Good idea! Main things not yet implemented: Durable Objects,
         | WebSockets, HTMLRewriter, and cache API. Next priority is
         | execution recording/replay for debugging. I'll add a roadmap
         | section to the docs.
        
       | vmg12 wrote:
       | Does this actually use the cloudflare worker runtime or is this
       | just a way to run code in v8 isolates?
        
         | max_lt wrote:
         | It's a custom V8 runtime built with rusty_v8, not the actual
         | Cloudflare runtime (github.com/openworkers/openworkers-
         | runtime-v8). The goal is API compatibility - same Worker syntax
         | (fetch handler, Request/Response, etc.) so you can migrate code
         | easily. Under the hood it's completely independent.
        
       | simonw wrote:
       | The problem with sandboxing solutions is that they have to
        | provide _very_ solid guarantees that code can't escape the
       | sandbox, which is really difficult to do.
       | 
       | Any time I'm evaluating a sandbox that's what I want to see:
       | evidence that it's been robustly tested against all manner of
       | potential attacks, accompanied by detailed documentation to help
       | me understand how it protects against them.
       | 
       | This level of documentation is rare! I'm not sure I can point to
       | an example that feels good to me.
       | 
       | So the next thing I look for is evidence that the solution is
       | being used in production by a company large enough to have a
       | dedicated security team maintaining it, and with real money on
       | the line for if the system breaks.
        
         | vlovich123 wrote:
         | Since it's self hosted the sandboxing aspect at the
         | language/runtime level probably matters just a little bit less.
        
         | samwillis wrote:
          | Yes, exactly. The other reason the Cloudflare Workers
          | runtime is secure is that they are incredibly active at
          | keeping it patched and up to date with V8 main. It's often
          | ahead of Chrome in adopting V8 releases.
        
           | oldmanhorton wrote:
           | I didn't know this, but there are also security downsides to
           | being ahead of chrome -- namely, all chrome releases take
           | dependencies on "known good" v8 release versions which have
           | at least passed normal tests and minimal fuzzing, but also v8
           | releases go through much more public review and fuzzing by
           | the time they reach chrome stable channel. I expect if you
           | want to be as secure as possible, you'd want to stay aligned
           | with "whatever v8 is in chrome stable."
        
             | kentonv wrote:
             | Cloudflare Workers often rolls out _V8 security patches_ to
              | production before Chrome itself does. That's different
             | from beta vs. stable channel. When there is a security
             | patch, generally all branches receive the patch at about
             | the same time.
             | 
             | As for beta vs. stable, Cloudflare Workers is generally
             | somewhere in between. Every 6 weeks, Chrome and V8's dev
             | branch is promoted to beta, beta branch to stable, and
             | stable becomes obsolete. Somewhere during the six weeks
              | between versions, Cloudflare Workers moves from stable to
             | beta. This has to happen before the stable version becomes
             | obsolete, otherwise Workers would stop receiving security
             | updates. Generally there is some work involved in doing the
             | upgrade, so it's not good to leave it to the last moment.
             | Typically Workers will update from stable to beta somewhere
             | mid-to-late in the cycle, and then that beta version
             | subsequently becomes stable shortly thereafter.
             | 
             | (I'm the lead engineer for Cloudflare Workers.)
        
         | ForHackernews wrote:
         | Not if you're self-hosting and running your own trusted code,
         | you don't. I care about resource isolation, not security
         | isolation, between my own services.
        
           | twosdai wrote:
           | Completely agree. There are some apps that unfortunately need
            | to care about some level of security isolation, but with
            | OpenWorkers they could just put those specific workers on
           | their own isolated instance.
        
         | max_lt wrote:
         | Fair point. The V8 isolate provides memory isolation, and we
         | enforce CPU limits (100ms) and memory caps (128MB). Workers run
         | in separate isolates, not separate processes, so it's similar
         | to Cloudflare's model. That said, for truly untrusted third-
         | party code, I'd recommend running the whole thing in a
         | container/VM as an extra layer. The sandboxing is more about
         | resource isolation than security-grade multi-tenancy.
        
           | gpm wrote:
           | I think you should consider adjusting the marketing to
           | reflect this. "untrusted JavaScript" -> "JavaScript", "Secure
           | sandboxing with CPU (100ms) and memory (128MB) limits per
           | worker" -> "Sandboxing with CPU (100ms) and memory (128MB)
           | limits per worker", overhauling
           | https://openworkers.com/docs/architecture/security.
           | 
           | Over promising on security hurts the credibility of the
           | entire project - and the main use case for this project is
           | probably executing trusted code in a self hosted environment
           | not "execut[ing] untrusted code in a multi-tenant
           | environment".
        
             | max_lt wrote:
             | Great point, thanks. Just updated the site - removed
             | "untrusted" and "secure", added a note clarifying the
              | threat model.
        
         | imcritic wrote:
          | I don't think what you want is even possible. What would
          | such guarantees even look like? "Hello, we are a serious
          | cybersec firm and we have evaluated the code and it's pretty
          | sound, trust us!"?
          | 
          | "Hello, we are a serious cybersec firm and we have evaluated
          | the code and here are our tests, with results that prove we
          | didn't find anything; the code is sound. Have we been
          | thorough? We have, trust us!"
        
           | gpm wrote:
           | In terms of a one off product without active support - the
           | only thing I can really imagine is a significant use of
           | formal methods to prove correctness of the entire runtime.
           | Which is of course entirely impractical given the state of
           | the technology today.
           | 
           | Realistically security these days is an ongoing process, not
            | a one off, compare to cloudflare's security page:
            | https://developers.cloudflare.com/workers/reference/security...
            | (to be clear when I use the pronoun "we" I'm paraphrasing
            | and not personally employed by cloudflare/part of this at
            | all)
           | 
            | - Implicit/from other pieces of marketing: We're a
            | reputable company, and these other big reputable companies
            | who care about security and are juicy targets for attack
            | use this product.
           | 
           | - We update V8 within 24 hours of a security update, compared
           | to weeks for the big juicy target of Google Chrome.
           | 
           | - We use various additional sandboxing techniques on top of
           | V8, including the complete lack of high precision timers, and
           | various OS level sandboxing techniques.
           | 
            | - We detect code doing strange things and move it out of
            | the multi-tenant environment into an isolated one just in
            | case.
            | 
            | - We detect code using APIs that increase the surface area
            | (like debuggers) and move it out of the multi-tenant
            | environment into an isolated one just in case.
           | 
           | - We will keep investing in security going forwards.
           | 
           | Running secure multi-tenant environments is not an easy
           | problem. It seems unlikely that it's possible for a typical
           | open source project (typical in terms of limited staffing,
           | usually including a complete lack of on-call staff) to
           | release software to do so today.
        
             | max_lt wrote:
             | Agreed. Cloudflare has dedicated security teams, 24h V8
             | patches, and years of hardening - I can't compete with
             | that. The realistic use case for OpenWorkers is running
             | your own code on your own infra, not multi-tenant SaaS. I
             | will update the docs to reflect this.
        
           | simonw wrote:
           | That's the problem! It's really hard to find trustworthy
           | sandboxing solutions, I've been looking for a long time. It's
           | kind of my white whale.
        
             | indigodaddy wrote:
             | I imagine you messed about with Sandstorm back in the day?
        
           | d4mi3n wrote:
           | Other response address how you could go about this, but I'd
           | just like to note that you touch on the core problem of
           | security as a domain: At the end of the day, it's a problem
           | of figuring out who to trust, how much to trust them, and
           | when those assessments need to change.
           | 
           | To use your example: Any cybersecurity firm or practitioner
           | worth their salt should be *very* explicit about the scope of
           | their assessment.
           | 
           | - That scope should exhaustively detail what was and wasn't
           | tested.
           | 
           | - There should be proof of the work product, and an
           | intelligible summary of why, how, and when an assessment was
           | done.
           | 
           | - They should give you what you need to have confidence in
            | *your understanding of* your security posture as well as
           | evidence that you *have* a security posture you can prove
           | with facts and data.
           | 
           | Anybody who tells you not to worry and take their word for
           | something should be viewed with extreme skepticism. It is a
           | completely unacceptable frame of mind when you're legally and
           | ethically responsible for things you're stewarding for other
           | people.
        
         | ZiiS wrote:
          | I think this is sandboxed so your debugging doesn't need to
          | consider interactions, not sandboxed so you can run
          | untrusted code.
        
         | m11a wrote:
         | I agree, and as much as I think AI helps productivity, for a
         | high security solution,
         | 
         | > Recently, with Claude's help, I rewrote everything on top of
         | rusty_v8 directly.
         | 
         | worries me
        
           | CuriouslyC wrote:
           | Have you tried Opus 4.5?
        
         | andrewaylett wrote:
         | Cloudflare needs to worry about their sandbox, because they are
         | running your code and you might be malicious. You have less
         | reason to worry: if you want to do something malicious to the
         | box your worker code is running on, you already have access
         | (because you're self-hosting) and don't need a sandbox escape.
        
       | kachapopopow wrote:
        | I upvote anything that reduces reliance on vendor lock-in.
        | Hopefully cloud services see a mass exodus so they have to
       | have reasonable pricing that actually reflects their costs
       | instead of charging more than free for basic services like NAT.
       | 
       | Cloud services are actually really nice and convenient if you
       | were to ignore the eye watering cost versus DIY.
        
         | geek_at wrote:
          | I'm worried that increasing RAM prices will drive more
          | people away from local hosting and toward cloud services:
          | if the big companies are buying up all the resources, it
          | might not be feasible to self-host in a few years.
        
           | kachapopopow wrote:
           | the pricing is so insane it will always be cheaper to self
           | host by 100x, that's how bad it is.
        
             | Imustaskforhelp wrote:
             | Wait what? can you show me some sources to back this up? I
             | assume you are exaggerating but still, what would be the
             | definition of cheap is interesting to know.
             | 
              | I don't think, after RAM prices spiked 4-5x, that it's
              | gonna be cheaper to self-host by 100x. Hetzner's and
              | OVH's cloud offerings are cheap.
             | 
             | Plus you have to put a lot of money and then still pay for
             | something like colocation if you are competing with them
             | 
              | Even if you aren't, I think the models are different:
              | cloud is a monthly subscription, whereas hardware you
              | have to purchase up front.
             | 
             | It would be interesting tho to compare hardware-as-a-
             | service or similar as well but I don't know if I see them
             | for individual stuff.
        
               | andruby wrote:
                | 100x is probably hyperbole. 37signals saved between 50
               | and 66% in hosting costs when moving from cloud to self
               | hosted.
               | 
               | https://basecamp.com/cloud-exit
        
               | victorbjorklund wrote:
               | But they have scale. A small company will save less
               | because it's not that much more work to handle say a 100
               | node kubernetes cluster vs a 10 node kubernetes cluster.
        
               | shimman wrote:
               | Self hosting nowadays is way way way way easier than
               | you're thinking. I'm involved working with various
               | political campaigns and the first thing I help every team
               | do is provision a 10 year old laptop, flash linux, and
               | setup a DDNS. A $100 investment is more than enough for a
               | campaign of 10-20ish dedicated workers that will only be
               | hitting this system one/two users at a time. If I can
                | teach a random 70 year old retiree or 16 year old how
               | to type a dozen different commands, I'm sure a paid
               | professional can learn too.
               | 
               | People need to realize that when you selfhost you can
               | choose to follow physical business constraints. If no one
               | is in the office to turn on a computer, you're good. Also
               | consumer hardware is so powerful (even 10 year old
               | hardware) that can easily handle 100k monthly active
               | users, which is barely 3k daily users, and I doubt most
               | SMBs actually need to handle anything beyond 500
               | concurrent users hardware wise. So if that's the choice
               | it comes down to writing better and more performant
               | software, which is what is lacking nowadays.
               | 
               | People don't realize how good modern tooling + hardware
               | has come. You can get by with very little if you want.
               | 
                | I'd bet a year's salary that a good 40% of AWS
                | customers could probably be fine with a single self-
                | hosted server using basic plug-and-play FOSS software
                | on consumer hardware.
               | 
                | People in our industry have been selling massive lies
                | about the need for scalability; the number of companies
                | that require such scalability is quite small in
                | reality. You don't need a rocket ship to walk 2 blocks,
                | and it often feels like that's what our industry
                | sells.
               | 
               | If self hosting is "too scary" for your business, you can
               | buy a $10 VPS but after one single year you can probably
               | find decade old hardware that is faster than what you pay
               | for.
        
               | victorbjorklund wrote:
               | Yea, but admit that I am right that it is not that much
               | harder to manage 100 nodes vs 10 nodes. (At least you can
               | agree you don't need 10x more staff to manage 100 nodes
               | instead of 10)
               | 
                | That's the key. Whether you need one person or 3
                | doesn't matter. The point is that the salaries are
                | fixed costs.
        
               | mystifyingpoi wrote:
               | You are right, but it's a feature of Kubernetes actually.
               | If you treat nodes as cattle, then it doesn't matter if
               | there is 10 or 100 or 1000, as long as the apiserver can
               | survive the load and upgrades don't take too long (though
               | upgrades/maintenance can be done slowly for even days
               | without any problems).
               | 
               | But all the stateful crap (like databases) gets trickier
               | and harder the more machines you have.
        
               | shimman wrote:
               | Ah sorry, I completely misread. You are right, and to add
               | another dimension even when you choose to go to the cloud
                | you still have to hire nearly the same number of personnel
               | to deal with those tools. I've never worked at a software
               | company that didn't have devs specifically to deal with
               | cloud issues and integrations.
        
               | oldandboring wrote:
               | I'm in your camp but I go for the cheap VPS. Lightsail
               | and DigitalOcean are amazing -- for $10/mo or less you
               | get a cheap little box that's essentially everything you
               | describe, but with all the peace of mind that comes from
               | not worrying about physical security, physical backups,
               | dynamic IPs/DDNS, and running out of storage space.
               | You're right that almost nobody needs most of the stuff
               | that AWS/GCP/Azure can do, but some things in the cloud
               | are worth paying for.
        
               | Imustaskforhelp wrote:
               | Yea absolutely this. This is what I was saying, so like
               | having a vps for starting out definitely makes sense.
                | Like, I think building your own cloud starts making
                | sense around the $500-1000/month mark.
                | 
                | I searched Hetzner and honestly, at just around the 500
                | mark (506.04) on their refurbished server auction, I
                | can get around 1024 GB of RAM, an AMD EPYC 7502, and 2
                | x 1.92 TB datacenter SSDs.
               | 
                | In this ramflation, imagine what getting that much RAM
                | would normally cost.
               | 
                | I love homelabbing too, and an old laptop can
                | sometimes be enough for basic things or even more
                | modern ones, but I did some calculations, and
                | ColoCrossing, the professional renting model, or even
                | buying new hardware in this ramflation would probably
                | not work out.
               | 
                | It's sad, but the only place it might make sense is if
                | you can get yourself a good firewall and an old laptop
                | or server. Even then I have heard it described as not
                | worth it by many, though I think it's an interesting
                | experiment.
               | 
                | Also, regarding the 1024 GB of RAM: holy... I wonder
                | how many programs need that much. I will still do
                | homelabbing and such, but y'know, I am kinda hard
                | pressed on how much we can recommend given how bad
                | ramflation is. That's why, when I saw someone
                | originally write 100x, I really wondered how much is
                | enough, at what scale, and what others think.
        
               | kachapopopow wrote:
                | A small company benefits more than anyone, since it's
                | not rocket science to learn these things - you can
                | just put on your system administrator hat once every
                | few weeks. It would not be ideal to lose that
                | employee, which is why I always suggest a couple of
                | people picking up this very useful skill.
               | 
                | But I don't know much about how it is in a real-world,
                | normal 9 to 5. I have taken up jobs from system
                | administration to reverse engineering, and even making
                | plugins and infrastructure for Minecraft. These days I
                | generally only work when people don't have any other
                | choice and need someone who is pretty good at
                | everything, so I am completely out of the loop.
        
               | Imustaskforhelp wrote:
               | Considering the fact that ramflation happened, and we
               | assume the cost of hardware to be spread between 5 years,
               | someone please run the numbers again.
               | 
               | It would be interesting to see the scale of basecamp. I
               | just saw right now that hetzner offers 1024 GB of ram for
               | around 500$
               | 
                | Um, 37signals spent around $700k I think on servers,
                | so if someone has that much money floating around,
                | perhaps.
               | 
                | Yea, I looked at their numbers and they mentioned
                | $1300/month for just hardware for 1.3 TB, so Hetzner
                | might still make more economic sense somehow.
               | 
                | I think the problem for some of these companies is
                | that they go too hard on managed services. Those are
                | good sometimes too, but there are cheaper managed
                | clouds than AWS (UpCloud, OVH, etc.), and at the end
                | of the day it's good to remember that if it bothers
                | you financially, you can migrate.
               | 
                | Honestly, do whatever you want and start however you
                | want, because these things definitely interest me
                | (which is why I am here), but I think most compute
                | providers have really raced to the bottom.
               | 
                | The problem usually comes when you are worried that
                | you might break the terms of service or something
                | similar at scale. Not that this stops being a problem
                | with colo exactly, but colo still brings more freedom.
               | 
                | If one wants freedom, they can always contact some
                | compute providers, find which can support their use
                | case best while still being economical, and then
                | choose the best of the available options.
               | 
               | Also vertical scaling is a beast.
               | 
                | I've really gone deep on learning about cloud prices
                | recently, so I want to ask: can you tell me more about
                | the servers that 37signals bought, or any other
                | company you know of? I could probably create a list of
                | when it makes sense and when it doesn't, and the best
                | options available on the market.
        
             | dijit wrote:
             | not 100x.
             | 
             | 10% is the number I ordinarily see, counting for members of
             | staff and adequate DR systems.
             | 
             | If we had paid our IT teams half of what we pay a cloud
             | provider, we would have had better internal processes.
             | 
             | Instead we starved them and the cloud providers
             | successfully weaponised extremely short term thinking
             | against us, now barely anyone has the competence to
             | actually manifest those cost benefits without serious
             | instability.
        
               | kachapopopow wrote:
                | I genuinely mean that. fly.io (as unreliable as it
                | might seem) is already around ~5x to 10x cheaper
                | depending on use case; for some services it's actually
                | <infinity> times cheaper, because it's completely free
                | when you self-host!
               | 
                | GCP pricing is absolutely wicked: they charge
                | $120/month for 4 vcores and 16 GB RAM, while you can
                | get around 23 times more performance and 192 GB RAM
                | for $350/month, with Xtbps-ish DDoS protection.
               | 
                | I have 2 2x7742 with 1 TB RAM each, 3 9950x3ds with
                | 192 GB ECC, and 2 7950x3ds, all at <$600/month.
                | Obviously the original hardware cost was in the realm
                | of $60k, but the EPYC CPUs were bought used for around
                | $1k each (not a bad deal, same with the RAM), so the
                | true cost overall is <$20k. This is entirely for
                | personal use and will most likely last me more than a
                | decade, unless there are major gains in efficiency or
                | power costs continue to grow due to AI demand. This
                | also includes 100 TB+ of HDD storage and 40 TB of NVMe
                | storage, all connected with a 100 gbps switch pair for
                | redundancy, at a cheap cheap price of $500 per switch.
               | 
               | I guess I owe some links: (Ignore minecraft focused
               | branding)
               | 
               | https://pufferfish.host/ (also offers colocation)
               | 
               | telegram: @Erikb_9gigsofram direct colocation at
               | datacenter (no middlemen / sales) + good low cost bundle
               | deal
               | 
               | anti-ddos: https://cosmicguard.com/ (might still offer
               | colocation?)
               | 
               | anti-ddos: https://tcpshield.com/
        
         | rozenmd wrote:
         | Probably worth pointing out that the Cloudflare Workers runtime
         | is already open source: https://github.com/cloudflare/workerd
        
           | max_lt wrote:
           | True, workerd is open source. But the bindings (KV, R2, D1,
           | Queues, etc.) aren't - they're Cloudflare's proprietary
           | services. OpenWorkers includes open source bindings you can
           | self-host.
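
The KV surface that worker code programs against is small, which is what makes self-hostable bindings feasible. Purely as an illustration (backed by an in-memory Map here, whereas a real binding would sit on Postgres or another store), the Workers-style async get/put/delete shape looks like this:

```javascript
// In-memory stand-in for a Workers-style KV namespace. Illustrative
// only; it mirrors the async get/put/delete interface worker code sees,
// not any real OpenWorkers implementation.
class MemoryKV {
  constructor() {
    this.store = new Map();
  }
  async get(key) {
    // Workers-style KV returns null (not undefined) for missing keys.
    return this.store.has(key) ? this.store.get(key) : null;
  }
  async put(key, value) {
    this.store.set(key, String(value));
  }
  async delete(key) {
    this.store.delete(key);
  }
}
```

Worker code written against an `env.KV` object of this shape is what ports between runtimes.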
        
             | buremba wrote:
             | I tried to run it locally some time ago, but it's buggy as
                | hell when self-hosted. It's not even worth trying out
                | given that CF itself doesn't recommend it.
        
               | ketanhwr wrote:
               | I'm curious what bugs you encountered. workerd does power
               | the local runtime when you test CF workers in dev via
                | wrangler, so we don't really expect/want it to be buggy.
        
         | re-thc wrote:
         | > so they have to have reasonable pricing that actually
         | reflects their costs instead of charging more than free for
         | basic services like NAT
         | 
         | How is the cost of NAT free?
         | 
         | > Cloud services are actually really nice and convenient if you
         | were to ignore the eye watering cost versus DIY.
         | 
         | I don't doubt clouds are expensive, but in many countries it'd
         | cost more to DIY for a proper business. Running a service isn't
         | just running the install command. Having a team to maintain and
         | monitor services is already expensive.
        
           | otterley wrote:
           | They said "charging more than free" - i.e., more than $0,
           | i.e., they're not free. It was awkwardly worded.
        
             | re-thc wrote:
              | They said "instead of charging more than free", which
              | means it should be free.
             | 
             | Please read it again.
        
               | otterley wrote:
               | I think we're in violent agreement, but you were
               | ambiguous about what "cost" meant. It seems you meant
               | "cost of _providing_ NAT" but I interpreted it as "cost
               | to the customer."
               | 
               | > Please read it again.
               | 
               | There's no need to be rude.
        
           | kachapopopow wrote:
            | Salesforce had their hosting bill jump orders of magnitude
            | after ditching their colocation; it did not save anything,
            | and colocation staff were replaced with AWS engineers.
           | 
            | NAT is free to provide because the infrastructure for it
            | is already there and nothing ever maxes out a switch
            | cluster (most switches sit at ~1% usage since they're
            | overspecced $1,000,000 switches), so the only real cost is
            | host CPU time managing interrupts - which is negligible,
            | since network cards offload this.
           | 
            | Sure, you could argue that regional NAT might need to be
            | priced, but these companies have so much fiber between
            | their datacenters that all NAT usage is probably a
            | rounding error.
        
       | kristianpaul wrote:
       | Interesting option to consider next to openfaas
        
       | st3fan wrote:
       | This is very nice! Do you plan to hook this up to GitHub, so that
       | a push of worker code (and maybe a yaml describing the
       | environment & resources) will result in a redeploy?
        
         | max_lt wrote:
         | Not yet, but it's one of the next big features. I'm currently
         | working on the CLI (WIP), and GitHub integration with auto-
         | deploy on push will come after that. A yaml config for
         | bindings/cron is definitely on the roadmap too.
        
           | max_lt wrote:
           | I'm also working on execution recording/replay - the idea is
           | to capture a deterministic trace of a request, so you can
           | push it as a GitHub issue and replay it locally (or let an AI
           | debug it).
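A hedged sketch of the record/replay idea described above (not OpenWorkers' actual implementation): wrap the worker's outbound fetch so that record mode logs every response into a trace, and replay mode serves responses back from that trace in order, making the request deterministic. `realFetch` is a stand-in for whatever fetch the runtime provides.

```javascript
// Record mode: every outbound response is captured into a trace array.
function makeRecordingFetch(realFetch, trace = []) {
  return {
    trace,
    async fetch(url, init) {
      const res = await realFetch(url, init);
      const body = await res.text(); // consume once, keep a copy
      trace.push({ url, status: res.status, body });
      return new Response(body, { status: res.status });
    },
  };
}

// Replay mode: serve responses back from the trace, in recorded order.
function makeReplayFetch(trace) {
  let i = 0;
  return async function fetch(url) {
    const entry = trace[i++];
    if (!entry || entry.url !== url) {
      throw new Error(`trace mismatch at ${url}`);
    }
    return new Response(entry.body, { status: entry.status });
  };
}
```

The trace (plus the original Request) is then enough to re-run the handler locally with identical inputs, which is what makes "attach it to a GitHub issue" plausible.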
        
       | strangescript wrote:
       | Cool project, but I never found the cloudflare DX desirable
       | compared to self hosted alternatives. A plain old node server in
       | a docker container was much easier to manage, use and is
       | scalable. Cloudflare's system was just a hoop that you needed to
       | jump through to get to the other nice to haves in their cloud.
        
         | skybrian wrote:
         | Would it be useful for testing apps that you're going to deploy
         | on Cloudflare anyway?
        
       | mohsen1 wrote:
       | This is super nice! Thank you for working on this!
       | 
       | Recently really enjoying CloudFlare Workflows (used it in
       | https://mafia-arena.com) and would be nice to build Workflows on
       | top of this too.
        
         | max_lt wrote:
         | Thanks! Workflows is definitely interesting - it's basically
         | durable execution with steps and retries. It's on the radar,
         | probably after the CLI and GitHub integration.
        
       | kachapopopow wrote:
       | Could you add a kubernetes deployment quick-start? Just a simple
       | deployment.yaml is enough.
        
       | dangoodmanUT wrote:
       | This is similar to what rivet (1) does, perhaps focusing more on
       | stateless than rivet does
       | 
       | (1) https://www.rivet.dev/docs/actors/
        
       | tbrockman wrote:
       | Cool project, great work!
       | 
       | Forgive the uninformed questions, but given that `workerd`
       | (https://github.com/cloudflare/workerd) is "open-source" (in
       | terms of the runtime itself, less so the deployment model), is
       | the main distinction here that OpenWorkers provides a complete
       | environment? Any notable differences between the respective
       | runtimes themselves? Is the intention to ever provide a managed
       | offering for scalability/enterprise features, or primarily focus
       | on enabling self-hosting for DIYers?
        
         | max_lt wrote:
         | Thanks! Main differences: 1. Complete stack: workerd is just
         | the runtime. OpenWorkers includes the full platform -
         | dashboard, API, scheduler, logs, and self-hostable bindings
         | (KV, S3/R2, Postgres). 2. Runtime: workerd uses Cloudflare's
         | C++ codebase, OpenWorkers is Rust + rusty_v8. Simpler, easier
         | to hack on. 3. Managed offering: Yes, there's already one at
         | dash.openworkers.com - free tier available. But self-hosting is
         | a first-class citizen.
        
       | buremba wrote:
        | I wonder why V8 is considered superior to WASM for
        | sandboxing.
        
         | skybrian wrote:
         | On V8, you can run both JavaScript and WASM.
        
           | buremba wrote:
            | Theoretically yes, but neither CF Workers nor this project
            | supports it. Indeed, no cloud provider offers first-party
            | WASM support yet.
        
             | otterley wrote:
             | The problem is that there's not much of a market
             | opportunity yet. Customers aren't voting for WASM with
             | their wallets like they are mainstream language runtimes.
        
             | justincormack wrote:
             | Workers does support wasm
             | https://developers.cloudflare.com/workers/runtime-
             | apis/webas...
        
               | buremba wrote:
               | Maybe it's better now but I wouldn't call this first-
               | class support, as you rely on the JS runtime to
               | initialize WASM.
               | 
               | The last time I tried it, the cold start was over 10
               | seconds, making it unusable for any practical use case.
               | Maybe the tech is not there but given that WASM
               | guarantees the sandboxing already and supports multiple
               | languages, I was hoping we would have providers investing
               | in it.
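For reference, "relying on the JS runtime to initialize WASM" looks roughly like this: a minimal hand-assembled module exporting `add(a, b)`, compiled and instantiated from JavaScript. (In Cloudflare Workers you would normally import a `.wasm` module binding instead; raw bytes are used here only to keep the sketch self-contained and runnable.)

```javascript
// Minimal WebAssembly module exporting add(a, b) -> a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// The JS runtime does the compile + instantiate step:
const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(exports.add(2, 3)); // 5
```

The WASM itself is sandboxed either way; the complaint above is that the entry point, module loading, and all I/O still go through JavaScript rather than WASM being a first-class deployment target.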
        
         | m11a wrote:
          | Is WASM's story for side effects solved yet? E.g. network
          | calls seem too complicated (https://github.com/vasilev/HTTP-request-
         | from-inside-WASM etc)
        
       | byyll wrote:
       | Isn't the whole point of Cloudflare's Workers to pay per
       | function? If it is self-hosted, you must dedicate hardware in
       | advance, even if it's rented in the cloud.
        
         | shimman wrote:
          | Many companies still run self-hosted servers in data centers
          | and need to run software on top of them. Not every company
          | needs to pay someone else to do things they are capable of
          | doing themselves.
          | 
          | Having options that mimic paid services is a good thing and
          | helps adoption.
        
       | j1elo wrote:
       | To the author: The ASCII-art Architecture diagram is very broken,
       | at least on my Pixel phone with Firefox.
       | 
        | These kinds of text-based diagrams are appealing to us
        | techies, but I've learned they are less practical in the end.
        | My suggestion: use an image, and treat the text-based version
        | as the "source code" you keep, while what gets published is
        | the output of "compiling" it into something guaranteed to
        | render correctly _without mistake_ (which is where ASCII art
        | tends to fail).
        
         | vishnugupta wrote:
         | Rendered perfectly on my iPhone 11 Safari.
        
           | simlevesque wrote:
           | That's why we need to test websites on multiple browsers.
        
         | max_lt wrote:
         | Thanks for the heads up! Fixed - added a simplified ASCII
         | version for mobile.
        
           | j1elo wrote:
           | Thanks! Now I can make more sense of it! Very cool project by
           | the way, thanks for posting it
        
       | bob1029 wrote:
       | > It brings the power of edge computing to your own
       | infrastructure.
       | 
       | I like the idea of self-hosting, but it seems fairly strongly
       | opposed to the concept of edge computing. The edge is only made
       | possible by big ass vendors like Cloudflare. Your own
       | infrastructure is very unlikely to have 300+ points of presence
       | on the global web. You can replicate this with a heterogeneous
       | fleet of smaller and more "ethical" vendors, but also with a lot
       | more effort and downside risk.
        
         | patmorgan23 wrote:
         | But do you need 300 pops to benefit from the edge model? Or
         | would 10 pops in your primary territory be enough.
        
           | trevor-e wrote:
           | I agree, latency is very important and 300 pops is great, but
           | seems more for marketing and would see diminishing returns
           | for the majority of applications.
        
           | st3fan wrote:
           | many apps are fine on a single server
        
           | andrewaylett wrote:
           | Honestly, for _my own_ stuff I only need one PoP to be close
            | to my users. And I've avoided using Cloudflare because
           | they're too far away.
           | 
           | More seriously, I think there's a distinction between "edge-
            | style" and actual _edge_ that's important here. Most of the
           | services I've been involved in wouldn't benefit from any kind
           | of edge placement: that's not the lowest hanging fruit for
           | performance improvements. But that doesn't mean that the
           | "workers" model wouldn't fit, and indeed I _suspect_ that
           | using a workers model would help folk architect their stuff
           | in a form that is not only more performant, but also more
           | amenable to edge placement.
        
           | nrhrjrjrjtntbt wrote:
            | For most applications 1 location is probably good enough.
            | I assume HN is single-location and I am a long way from CA
            | but have no speed issues.
            | 
            | Caveat for high-scale sites and game servers. Maybe for
            | image-heavy sites too (but self-hosting then adding a CDN
            | seems like a low lock-in and low-cost option).
        
             | locknitpicker wrote:
             | > For most applications 1 location is probably good enough.
             | 
             | If your usecase doesn't require redundancy or high-
             | availability, why would you be using something like
             | Cloudflare to start with?
        
               | robertcope wrote:
               | Security. I host personal sites on Linodes and other
               | external servers. There are no inbound ports open to the
               | world. Everything is accessed via Cloudflare Tunnels and
               | locked down via their Zero Trust services. I find this
               | useful and good, as I don't really want to have to
               | develop my personal services to the point where I'd
               | consider them hardened for public internet access.
        
               | RandomDistort wrote:
               | When you have a simple tool you have written for
               | yourself, that you need to be reliable and accessible but
               | also that you don't use frequently enough that it's worth
               | the bother of running on your own server with all of that
               | setup and ongoing maintenance.
        
               | gpm wrote:
               | Free bandwidth. (Also the very good sibling-answer about
               | tunnels).
        
           | locknitpicker wrote:
           | > But do you need 300 pops to benefit from the edge model? Or
           | would 10 pops in your primary territory be enough.
           | 
            | I don't think the number of PoPs is the key factor. The
            | key factor is being able to route requests based on edge-
            | friendly criteria (latency, geographical proximity, etc.)
            | and automatically deploy changes in a way that the system
            | ensures consistency.
            | 
            | This sort of project does not and cannot address those
            | concerns.
            | 
            | Targeting the SDK and interface is a good hackathon
            | exercise, but unless you want to put together a toy
            | runtime to do some local testing, this sort of project
            | completely misses the whole reason this technology is used.
        
         | closingreunion wrote:
         | Is some sort of decentralised network of hosts somehow working
         | together to challenge the Cloudflare hegemony even plausible?
         | Would it be too difficult to coordinate in a safe and reliable
         | way?
        
           | geysersam wrote:
           | If you have a central database, what benefits are you getting
           | from edge compute? This is a serious question. As far as I
           | understand edge computing is good for reducing latency. If
           | you have to communicate with a non-edge database anyway, is
           | there any advantage from being on the edge?
        
             | martinald wrote:
             | Well you can cache stuff and also use read replicas. But
             | yes, you are correct. For 'write' it doesn't help as much
             | to say the least. But for some (most?) sites they are 99.9%
             | read...
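The read-path win described above can be sketched as a read-through cache: reads served at the edge skip the round trip to the central database entirely, and only misses (and writes) pay it. This is an illustrative sketch, with `dbRead` as a hypothetical stand-in for the slow origin query.

```javascript
// Read-through cache: edge-local reads when possible, origin only on miss.
function makeReadThroughCache(dbRead) {
  const cache = new Map();
  let misses = 0;
  return {
    async get(key) {
      if (cache.has(key)) return cache.get(key); // edge-local, no round trip
      misses++;
      const value = await dbRead(key); // one slow trip to the central DB
      cache.set(key, value);
      return value;
    },
    stats: () => ({ misses }),
  };
}
```

Writes still have to reach the origin and invalidate cached entries, which is exactly why write-heavy workloads benefit much less from edge placement than the 99.9%-read sites mentioned above.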
        
       | orliesaurus wrote:
       | Good to see this! Cloudflare's cool, but those locked-in things
       | (KV, D1, etc.) always made it hard to switch. Offering open-
        | source alternatives is always good, but maintaining them is on
       | the community. Even without super-secure multi-tenancy, being
       | able to run the same code on your own stuff or a small VPS
       | without changing the storage is a huge dev experience boost.
        
       | nextaccountic wrote:
       | Any reason to abandon Deno?
       | 
       | edit: if the idea was to have compatibility with cloudflare
       | workers, workers can run deno
       | https://docs.deno.com/examples/cloudflare_workers_tutorial/
        
         | max_lt wrote:
         | Deno core is great and I didn't really abandon Deno - we
         | support 5 runtimes actually, and Deno is the second most
         | advanced one (https://github.com/openworkers/openworkers-
         | runtime-deno). It broke a few weeks ago when I added the new
         | bindings system and I haven't had time to fix it yet. Focused
         | on shipping bindings fast with the V8 runtime. Will get back to
         | Deno support soon.
        
       | victorbjorklund wrote:
       | Cool. I always liked CF workers but haven't shipped anything
        | serious with it due to not wanting vendor lock-in. This is
        | perfect for knowing you've always got an escape hatch.
        
       | theknarf wrote:
       | Why would I want this over just sticking Node / Deno / Bun in a
       | Docker container?
        
         | m11a wrote:
         | Node in Docker doesn't have full isolation and 'sandbox'
          | escapes are possible. V8 is comparatively quite hardened.
        
       | abalashov wrote:
       | What if we hosted the cloud... on our own computers?
       | 
       | I see we have entered that phase in the ebb and flow of cloud vs.
       | self-hosting. I'm seeing lots of echoes of this everywhere,
       | epitomised by talks like this:
       | 
       | https://youtu.be/tWz4Eqh9USc
        
         | locknitpicker wrote:
         | > What if we hosted the cloud... on our own computers?
         | 
         | The value proposition of function-as-a-service offerings is not
         | "cloud" buzzwords, but providing an event-handling framework
         | where developers can focus on implementing event handlers that
         | are triggered by specific events.
         | 
          | FaaS frameworks are the high-level counterpart of the low-
          | level combination of message brokers + web services and
          | background tasks.
         | 
         | Once you include queues in the list of primitives, durable
         | executions are another step in that direction.
         | 
         | If you have any experience developing and maintaining web
         | services, you'll understand that API work is largely comprised
         | of writing boilerplate code, controller actions, and background
         | tasks. FaaS frameworks abstract away the boilerplate work.
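The "handlers only" shape described above can be sketched as follows: the developer's entire deliverable is the event handler; the HTTP server, routing, TLS, scaling, and process lifecycle are the platform's problem. This uses the Workers-style handler shape (runnable locally with Node 18+'s `Request`/`Response` globals).

```javascript
// In the FaaS model, this object is the whole deliverable: an event handler.
// Everything around it (server, scaling, scheduling) is platform code.
const worker = {
  async fetch(request) {
    const url = new URL(request.url);
    return new Response(JSON.stringify({ path: url.pathname }), {
      headers: { "content-type": "application/json" },
    });
  },
};

// The platform invokes the handler per event; locally you can call it directly:
worker.fetch(new Request("https://example.com/hello")).then(async (res) => {
  console.log(res.status, await res.json());
});
```

Everything a conventional web framework makes you write around this (controllers, middleware wiring, server bootstrap) is the boilerplate the parent says FaaS abstracts away.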
        
         | nine_k wrote:
         | It won't be a... cloud?
         | 
         | To me, the principal differentiator is the elasticity. I start
         | and retire instances according to my needs, and only pay for
         | the resources I've actually consumed. This is only possible on
         | a very large shared pool of resources, where spikes of use even
         | out somehow.
         | 
         | If I host everything myself, the cloud-like deployment tools
         | simplify my life, but I still pay the full price for my rented
         | / colocated server. This makes sense when my load is reasonably
         | even and predictable. This also makes sense when it's my home
         | NAS or media server anyway.
         | 
         | (It is similar to using a bus vs owning a van.)
        
       | utopiah wrote:
       | DX?
       | 
       | I'm quite ignorant on the topic (as I never saw the appeal of
       | Cloudflare workers, not due to technical problems but solely
       | because of centralization) but what does DX in "goal has always
       | been the same: run JavaScript on your own servers, with the same
       | DX as Cloudflare Workers but without vendor lock-in." mean? Looks
       | like a runtime or environment but looking at
       | https://github.com/drzo/workerd I also don't see it.
       | 
       | Anyway if the "DX" is a kind of runtime, in which actual contexts
       | is it better than the incumbents, e.g. Node, or the newer ones
       | e.g. Deno or Zig or even more broadly WASI?
        
         | lukevp wrote:
         | DX means Developer Experience, they're saying it lets you use
         | the same tooling and commands to build the workers as you would
         | if they were on CloudFlare.
        
         | locknitpicker wrote:
         | > Anyway if the "DX" is a kind of runtime, in which actual
         | contexts is it better than the incumbents, e.g. Node, or the
         | newer ones e.g. Deno or Zig or even more broadly WASI?
         | 
         | I'm not the blogger, I'm just a developer who works
         | professionally with Cloudflare Workers. To me the main value
         | proposition is avoiding vendor lock-in, and even so the logic
         | doesn't seem to be there.
         | 
         | The main value proposition of Cloudflare Workers is being able
         | to deploy workers at the edge and use them to implement edge
         | use cases. Meaning, custom cache logic, perhaps some
          | authorization work, request transformation and aggregation,
         | etc. If you remove the global edge network and cache, you do
         | not have any compelling reason to look at this.
         | 
         | It's also perplexing how the sales pitch is Rust+WASM. This
         | completely defeats the whole purpose of Cloudflare Workers. The
         | whole point of using workers is to have very fast isolates
         | handling IO-heavy workloads where they stay idling the majority
         | of the time so that the same isolate instance can handle a high
          | volume of requests. WASM is notorious for eliminating the
          | ability to yield on awaits from fetch calls, and is only
          | compelling if your argument is a lift-and-shift use case,
          | which this isn't.
        
       | TZubiri wrote:
       | https://imgflip.com/i/agah04
        
         | TZubiri wrote:
         | https://imgflip.com/i/agah2y
        
       | mmastrac wrote:
       | I did a huge chunk of work to split deno_core from deno a few
        | years back and TBH I don't blame you for moving to raw rusty_v8.
       | There was a _lot_ of legacy code in deno_core that was
       | challenging to remove because touching a lot of the code would
       | break random downstream tests in deno constantly.
        
         | max_lt wrote:
         | Thanks for that work! deno_core is a beautiful piece of work
         | and is still an option for OpenWorkers:
         | https://github.com/openworkers/openworkers-runtime-deno
         | We maintained it until we introduced bindings -- at that point,
         | we wanted more fine-grained control over the runtime internals,
         | so we moved to raw rusty_v8 to iterate faster. We'll probably
         | circle back and add the missing pieces to the deno runtime at
         | some point.
        
       | keepamovin wrote:
        | Technically and architecturally, this is excellent. It's also an
       | excellent product idea. And I'm particularly a fan of the big-
       | ass-vendor-inversion-model where instead of the big ass vendor
       | ripping off an open source project and monetizing it, you look at
       | one of their projects and you rip it off inversely and open
       | source it -- this is the way.
        
       ___________________________________________________________________
       (page generated 2026-01-01 23:00 UTC)