[HN Gopher] Using Rust in non-Rust servers to improve performance
       ___________________________________________________________________
        
       Using Rust in non-Rust servers to improve performance
        
       Author : amatheus
       Score  : 326 points
       Date   : 2024-10-25 01:19 UTC (3 days ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | bebna wrote:
       | For me a "Non-Rust Server" would be something like a PHP
        | webhoster. If I can run my own node instance, I can possibly run
       | everything I want.
        
         | bluejekyll wrote:
         | The article links to two PHP and Rust integration strategies,
         | WASM[1] or native[2].
         | 
         | [1] https://github.com/wasmerio/wasmer-php
         | 
         | [2] https://github.com/davidcole1340/ext-php-rs
        
       | eandre wrote:
       | Encore.ts is doing something similar for TypeScript backend
       | frameworks, by moving most of the request/response lifecycle into
       | Async Rust: https://encore.dev/blog/event-loops
       | 
       | Disclaimer: I'm one of the maintainers
        
         | internetter wrote:
         | What's your response to this? https://github.com/encoredev/ts-
         | benchmarks/issues/2
        
           | uncomplexity wrote:
            | not gp, but first time seeing this encore ts.
           | 
           | i've been a user of uwebsockets.js, uwebsockets is used
           | underneath by bun.
           | 
            | i hope encore does a benchmark comparing encore, uwsjs, bun,
            | and fastify.
           | 
           | express is just so damn slow.
           | 
           | https://github.com/uNetworking/uWebSockets.js
        
             | eandre wrote:
             | We've published benchmarks against most of these already,
             | see https://github.com/encoredev/ts-benchmarks
        
           | eandre wrote:
           | I've published proper instructions for benchmarking Encore.ts
           | now: https://github.com/encoredev/ts-
           | benchmarks/blob/main/README..... Thanks!
        
       | isodev wrote:
       | This is a really cool comparison, thank you for sharing!
       | 
       | Beyond performance, Rust also brings a high level of portability
        | and these examples show just how versatile a piece of code can be.
       | Even beyond the server, running this on iOS or Android is also
       | straightforward.
       | 
       | Rust is definitely a happy path.
        
         | jvanderbot wrote:
         | Rust deployment is a happy path, with few caveats. Writing is
         | sometimes less happy than it might otherwise be, but that's the
         | tradeoff.
         | 
         | My favorite thing about Rust, however, is Rust dependency
         | management. Cargo is a dream, coming from C++ land.
        
           | krick wrote:
           | Everything is a dream, when coming from C++ land. I'm still
           | incredibly salty about how packages are managed in Rust,
           | compared to golang or even PHP (composer). crates.io looks
           | fine today, because Rust is still relatively unpopular, but 1
           | common namespace for all packages encourages name squatting,
           | so in some years it will be a dumpster worse than pypi, I
           | guarantee you that. Doing that in a brand-new package manager
           | was incredibly stupid. It really came late to the market,
           | only golang's modules are newer IIRC (which are really
           | great). Yet it repeats all the same old mistakes.
        
             | Imustaskforhelp wrote:
              | In my opinion, I like golang's way better because then you
              | have to be thoughtful about your dependencies, and it also
              | prevents any drama (like the Rust Foundation cargo drama)
              | (ahem). If a language is that polarizing, it can be hard
              | to find a job in it.
              | 
              | I truly like Rust as a performance language, but I would
              | rather have real, tangible results (admittedly slow is
              | okay) than imagination within the Rust / performance land.
              | 
              | I don't want to learn Rust just to feel like I am doing
              | something "good" or "learning" when I can learn golang at
              | a way, way faster rate and do the stuff I like, which is
              | why I am learning programming in the first place.
              | 
              | Also, just because you haven't learned Rust doesn't make
              | you inferior to anybody.
              | 
              | You should learn it because you want to think differently
              | and try different things, not for performance.
              | 
              | Performance is fickle.
              | 
              | For example, I was seeing a native benchmark of Rust and
              | Zig (Rust won) and then a benchmark of Deno and Bun (Bun
              | won), even though Bun is written in Zig and Deno in Rust.
              | 
              | The reason, I suppose, is that Deno doesn't use actix, and
              | non-actix servers are rather slower than even Zig.
              | 
              | It's weird.
        
               | jvanderbot wrote:
               | There are some influential fair comparisons of compiled
               | languages, but for the most part my feeling is that
               | people are moving from an extremely high level language
               | like Python or JS, and then going to Rust to get
                | performance, when any single compiled language would be
                | fine. For 90% of them, Go would have been the right
                | choice (on backend or web-enabled systems apps); there
                | was just a hurdle to get to most other compiled
                | languages.
                | 
                | It's just that Rust is somehow more accessible to them?
                | Maybe it's that pointers and memory were just an
                | inaccessible / overburdensome transition?
        
               | bombela wrote:
               | Not sure how much it weighs on the balance in those types
               | of decisions. But Rust has safe concurrency. That's
                | probably quite a big boost to web server quality, if
                | nothing else.
        
               | jvanderbot wrote:
               | Go's concurrency is unsafe? Rust's concurrency is
               | automatically safe?
               | 
                | I am not saying you're wrong; I just don't find it any
                | better than C++ concurrent code. You just have many
               | different lock types that correspond to the borrow-
               | checker's expectations, vs C++'s primitives / lock types.
               | 
               | Channels are nicer, but that's doable easily in C++ and
               | native to Go.
        
               | umanwizard wrote:
               | > Go's concurrency is unsafe? Rust's concurrency is
               | automatically safe?
               | 
               | Yes and yes...
               | 
               | Rust statically enforces that you don't have data races,
               | i.e. it's not possible in Rust (without unsafe hacks) to
               | forget to guard access to something with a mutex. In
               | every other language this is enforced with code comments
               | and programmer memory.
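                | 
                | As a minimal std-only sketch (the counter example here
                | is made up, not from the thread): shared state has to be
                | wrapped in something like Arc<Mutex<_>> before it can
                | cross a thread boundary, so forgetting the lock simply
                | doesn't compile.
                | 
                |     use std::sync::{Arc, Mutex};
                |     use std::thread;
                |     
                |     fn main() {
                |         // Shared mutable state must be wrapped before
                |         // it can be handed to other threads.
                |         let counter = Arc::new(Mutex::new(0u64));
                |     
                |         let handles: Vec<_> = (0..4).map(|_| {
                |             let counter = Arc::clone(&counter);
                |             thread::spawn(move || {
                |                 // Access only exists through the lock
                |                 // guard; an unguarded increment cannot
                |                 // be expressed here.
                |                 *counter.lock().unwrap() += 1;
                |             })
                |         }).collect();
                |     
                |         for h in handles {
                |             h.join().unwrap();
                |         }
                |         println!("total = {}", *counter.lock().unwrap());
                |     }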
        
               | thinkharderdev wrote:
               | (Un)safe is a bit of an overloaded term but Rust's
               | concurrency model is safe in the sense that it statically
               | guarantees that you won't have data races. Trying to
               | mutate the same memory location concurrently is a
               | compile-time error. Neither C++ nor Golang prevent you
               | from doing this. Aside from that
        
               | umanwizard wrote:
               | Rust is the only mainstream language with an ergonomic
               | modern type system and features like exhaustive matching
               | on sum types (AFAIK... maybe I'm forgetting one). Yes
               | things like OCaml and Haskell exist but they are much
               | less mainstream than Rust. I think that's a big part of
               | the appeal.
               | 
               | In Go instead of having a value that can be one of two
               | different types, you have to have two values one of which
               | you set to the zero value. It feels prehistoric.
        
               | jvanderbot wrote:
               | That strikes me as an incredibly niche (and probably
               | transient) strength! But I will remember that.
        
               | umanwizard wrote:
               | It's not niche at all; it's extremely common to need
               | this. Maybe I'm not explaining it well. For example, an
               | idiomatic pattern in Go is to return two values, one of
                | which is an error:
                | 
                |     func f() (SomeType, error) {
                |         // ...
                |     }
                | 
                | In Rust you would return one value:
                | 
                |     fn f() -> anyhow::Result<SomeType> {
                |         // ...
                |     }
               | 
               | In Go (and similar languages like C) nothing enforces
               | that you actually set exactly one value, and nothing
               | enforces that you actually handle the values that are
               | returned.
               | 
               | It's even worse if you need to add a variant, because
               | then it's easy to make a mistake and not update some site
               | that consumes it.
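                | 
                | To make the last point concrete, a small hypothetical
                | sketch (the Payment enum is invented for illustration):
                | adding a variant turns every non-exhaustive match into a
                | compile error, so no consumer can silently ignore the
                | new case.
                | 
                |     enum Payment {
                |         Card { last4: String },
                |         Bank { iban: String },
                |         // Adding `Crypto { address: String }` here makes
                |         // describe() below fail to compile until it is
                |         // updated to handle the new case.
                |     }
                |     
                |     fn describe(p: &Payment) -> String {
                |         match p {
                |             Payment::Card { last4 } => {
                |                 format!("card ending in {last4}")
                |             }
                |             Payment::Bank { iban } => {
                |                 format!("transfer from {iban}")
                |             }
                |         }
                |     }
                |     
                |     fn main() {
                |         let p = Payment::Card { last4: "4242".into() };
                |         println!("{}", describe(&p));
                |     }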
        
               | timeon wrote:
               | > It's just Rust is somehow more accessible to them?
               | 
               | Going to lower level languages can be scary. What is
               | 'fighting the borrow-checker' for some, may be 'guard
               | rails' for others.
        
             | joshmarinacci wrote:
             | Progress. It doesn't have to be the best. It just has to be
             | better than C++.
        
             | guitarbill wrote:
             | I don't really understand this argument, and it isn't the
             | first time I've heard it. What problem other than name
             | squatting does it solve?
             | 
             | How does a Java style com.foo.bar or Golang style URL help
             | e.g. mitigate supply chain attacks? For Golang, if you
             | search pkg.go.dev for "jwt" there's 8 packages named that.
             | I'm not sure how they are sorted; it doesn't seem to be by
             | import count. Yes, you can see the URL directly, but
             | crates.io also shows the maintainers. Is
             | "github.com/golang-jwt/jwt/v5" "better" than
             | "golang.org/x/oauth2/jwt"? Hard to say at a glance.
             | 
             | On the flip side, there have been several instances where
             | Cargo packages were started by an individual, but later
             | moved to a team or adopted. The GitHub project may be
             | transferred, but the name stays the same. This generally
             | seems good.
             | 
             | I honestly can't quite see what the issue is, but I have
             | been wrong many a time before.
        
               | Thaxll wrote:
               | Go has more protections than Rust regarding supply chain
                | attacks.
               | 
               | https://go.dev/blog/supply-chain
        
       | bhelx wrote:
       | If you have a Java library, take a look at Chicory:
       | https://github.com/dylibso/chicory
       | 
       | It runs on any JVM and has a couple flavors of "ahead-of-time"
       | bytecode compilation.
        
         | bluejekyll wrote:
         | This is great to see. I had my own effort around this that I
         | could never quite get done.
         | 
         | I didn't notice this on the front page, what JVM versions is
         | this compatible with?
        
           | evacchi wrote:
           | Java 11+ :)
        
             | bluejekyll wrote:
             | Perfect!
        
       | xyst wrote:
       | In my opinion, the significant drop in memory footprint is truly
       | underrated (13 MB vs 1300 MB). If everybody cared about
       | optimizing for efficiency and performance, the cost of computing
       | wouldn't be so burdensome.
       | 
       | Even self-hosting on an rpi becomes viable.
        
         | leeoniya wrote:
         | fwiw, Bun/webkit is much better in mem use if your code is
         | written in a way that avoids creating new strings. it won't be
         | a 100x improvement, but 5x is attainable.
        
         | jchw wrote:
          | It's a little more nuanced than that, of course: a big reason
          | why the memory usage is so high is that Node.JS needs more
         | of it to take advantage of a large multicore machine for
         | compute-intensive tasks.
         | 
         | > Regarding the abnormally high memory usage, it's because I'm
         | running Node.js in "cluster mode", which spawns 12 processes
         | for each of the 12 CPU cores on my test machine, and each
         | process is a standalone Node.js instance which is why it takes
         | up 1300+ MB of memory even though we have a very simple server.
         | JS is single-threaded so this is what we have to do if we want
         | a Node.js server to make full use of a multi-core CPU.
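          | 
          | For contrast, a rough std-only sketch (mine, not from the
          | article) of why a Rust server avoids that per-core cost: one
          | process can accept connections on every core with plain
          | threads that share a single heap.
          | 
          |     use std::io::Write;
          |     use std::net::TcpListener;
          |     use std::thread;
          |     
          |     const RESP: &[u8] =
          |         b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok";
          |     
          |     fn main() -> std::io::Result<()> {
          |         let listener = TcpListener::bind("127.0.0.1:8080")?;
          |         let cores = thread::available_parallelism()
          |             .map(|n| n.get())
          |             .unwrap_or(4);
          |     
          |         // One thread per core, all sharing one process's
          |         // memory instead of N standalone runtimes.
          |         let handles: Vec<_> = (0..cores).map(|_| {
          |             let listener = listener.try_clone().unwrap();
          |             thread::spawn(move || {
          |                 for stream in listener.incoming().flatten() {
          |                     let mut stream = stream;
          |                     let _ = stream.write_all(RESP);
          |                 }
          |             })
          |         }).collect();
          |     
          |         for h in handles {
          |             let _ = h.join();
          |         }
          |         Ok(())
          |     }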
         | 
         | On a Raspberry Pi you would certainly not need so many workers
          | even if you did care about peak throughput; I don't think any
         | of them have >4 CPU threads. In practice I do run Node.JS and
         | JVM-based servers on Raspberry Pi (although not Node.JS
         | software that I personally have written.)
         | 
         | The bigger challenge to a decentralized Internet where everyone
         | self-hosts everything is, well, everything else. Being able to
         | manage servers is awesome. _Actually managing servers_ is less
         | glorious, though:
         | 
         | - Keeping up with the constant race of security patching.
         | 
         | - Managing hardware. Which, sometimes, fails.
         | 
         | - Setting up and testing backup solutions. Which can be
         | expensive.
         | 
         | - Observability and alerting; You probably want some monitoring
         | so that the first time you find out your drives are dying isn't
         | months after SMART would've warned you. Likewise, you probably
         | don't want to find out you have been compromised after your ISP
         | warns you about abuse months into helping carry out criminal
         | operations.
         | 
         | - Availability. If your home internet or power goes out, self-
         | hosting makes it a bigger issue than it normally would be. I
         | love the idea of a world where everyone runs their own systems
         | at home, but this is by far the worst consequence. Imagine if
         | all of your e-mails bounced while the power was out.
         | 
         | Some of these problems are actually somewhat tractable to
         | improve on but the Internet and computers in general marched on
         | in a different more centralized direction. At this point I
         | think being able to write self-hostable servers that are
         | efficient and fast is actually not the major problem with self-
         | hosting.
         | 
         | I still think people should strive to make more efficient
         | servers of course, because some of us are going to self-host
         | anyways, and Raspberry Pis run longer on battery than large
         | rack servers do. If Rust is the language people choose to do
         | that, I'm perfectly content with that. However, it's worth
         | noting that it doesn't have to be the only one. I'd be just as
         | happy with efficient servers in Zig or Go. Or
         | Node.JS/alternative JS-based runtimes, which can certainly do a
         | fine job too, especially when the compute-intensive tasks are
         | not inside of the event loop.
        
           | bombela wrote:
           | > Imagine if all of your e-mails bounced while the power was
           | out.
           | 
           | Retry for a while until the destination becomes reachable
           | again. That's how email was originally designed.
        
             | jasode wrote:
             | _> Retry for a while until the destination becomes
             | reachable again. That's how email was originally designed._
             | 
             | Sure, the SMTP email protocol states guidelines for
             | "retries" but senders don't waste resources retrying
             | forever. E.g. max of 5 days:
             | https://serverfault.com/questions/756086/whats-the-usual-
             | re-...
             | 
             | So gp's point is that if your home email server is down for
             | an extended power outage (maybe like a week from a bad
             | hurricane) ... and you miss important emails (job interview
             | appointments, bank fraud notifications, etc) ... then
             | that's one of the risks of running an email server on the
             | Raspberry Pi at home.
             | 
             | Switching to a more energy-efficient language like Rust for
             | server apps so it can run on RPi still doesn't alter the
             | risk calculation above. In other words, many users would
             | still prioritize email reliability of Gmail in the cloud
             | over the self-hosted autonomy of a RPi at home.
        
               | umanwizard wrote:
               | Another probably even bigger reason people don't self-
               | host email specifically is that practically all email
               | coming from a residential IP is spam from botnets, so
               | email providers routinely block residential IPs.
        
               | jchw wrote:
               | Yeah, exactly this. The natural disaster in North
               | Carolina is a great example of how I envision this going
               | very badly. When you self-host at home, you just can't
               | have the same kind of redundancy that data centers have.
               | 
               | I don't think it's an obstacle that's absolutely
               | insurmountable, but it feels like something where we
               | would need to organize the entire Internet around solving
               | problems like these. My personal preference would be to
               | have devices act more independently. e.g. It's possible
               | to sync your KeepassXC with SyncThing at which point any
               | node is equal and thus only if you lose _all of your
               | devices simultaneously_ (e.g. including your mobile
               | computer(s)) are you at risk of any serious trouble. (And
                | it's easy to add new devices to back things up if you are
               | are especially worried about that.) I would like it if
               | that sort of functionality could be generalized and
               | integrated into software.
               | 
               | For something like e-mail, the only way I can envision
               | this working is if any of your devices could act as a
               | destination in the event of a serious outage. I suspect
               | this would be possible to accomplish to _some_ degree
               | today, but it is probably made a lot harder by two
               | independent problems (IPv4 exhaustion /not having
               | directly routable IPs on devices, mobile devices
               | "roaming" through different IP addresses) which force you
               | to rely on some centralized infrastructure _anyways_
               | (e.g. something like Tailscale Funnels.)
               | 
               | I for one welcome whoever wants to take on the challenge
               | of making it possible to do reliable, durable self-
               | hosting of all of my services without the pain. I would
               | be an early adopter without question.
        
           | pferde wrote:
           | While I agree with pretty much all you wrote, I'd like to
           | point out that e-mail, out of all the services one could
           | conceivably self-host, is quite resilient to temporary
           | outages. You just need to have another backup mail server
           | somewhere (maybe another self-hosting friend or in a
           | datacenter), and set up your DNS MX records accordingly. The
           | incoming mail will be held there until you are back online,
           | and then forwarded to your primary mail server. Everything
            | transparent to the outside world, no mail gets lost, no errors
           | shown to any outside sender.
        
           | wtetzner wrote:
           | Reducing memory footprint is a big deal for using a VPS as
           | well. Memory is still quite expensive when using cloud
           | computing services.
        
             | jchw wrote:
             | True that. Having to carefully balance responsiveness and
             | memory usage/OOM risk when setting up PHP-FPM pools
             | definitely makes me grateful when deploying Go and Rust
             | software in production environments.
        
         | echoangle wrote:
         | If every developer cared for optimizing efficiency and
         | performance, development would become slower and more expensive
         | though. People don't write bad-performing code because it's fun
         | but because it's easier. If hardware is cheap enough, it can be
         | advantageous to quickly write slow code and get a big server
         | instead of spending days optimizing it to save $100 on servers.
         | When scaling up, the tradeoff has to be reconsidered of course.
        
           | throwaway19972 wrote:
           | Yea but we also write the same software over and over and
           | over and over again. Perhaps slower, more methodical
           | development might enable more software to be written fewer
           | times. (Does not apply to commercially licensed software or
           | services obviously, which is straight waste.)
        
             | chaxor wrote:
             | This is a decent point, but in many cases writing software
              | over again can be a great thing, even in replacing some
             | very well established software.
             | 
             | The trick is getting everyone to switch over and ensure
             | correct security and correctness for the newer software. A
             | good example may be openssh. It is very well established,
             | so many will use it - but it has had some issues over the
             | years, and due to that, it is actually _very_ difficult now
             | to know what the _correct_ way to configure it for the
             | best, modern, performant, and _secure_ operation. There are
             | hundreds of different options for it, almost all of them
             | existing for 'legacy reasons' (in other words no one should
             | ever use in any circumstance that requires any security).
             | 
             | Then along comes things like mosh or dropbear, which seem
             | like they _may_ improve security, but still basically do
              | the same thing as openssh, so it is unclear if they have
              | the same security problems and simply don't get reported
             | due to lower use, or if they aren't vulnerable.
             | 
             | While simultaneously, things like quicssh-rs rewrite the
             | idea but completely differently, such that it is likely
             | far, far more secure (and importantly simpler!), but
             | getting more eyes on it for security is still important.
             | 
             | So effectively, having things like Linux move to Rust (but
             | as the proper foundation rather than some new and untrusted
             | entity) can be great when considering any 'rewrite' of
             | software, not only for removing the cruft that we now know
             | shouldn't be used due to having better solutions (enforce
             | using only best and modern crypto or filesystems, and so
             | on), but also to remodel the software to be more simple,
             | cleaner, concise, and correct.
        
           | devmor wrote:
           | Caring about efficiency and performance doesn't have to mean
           | spending all your time on it until you've exhausted every
           | possible avenue. Sometimes using the right tools and
           | development stack is enough to make massive gains.
           | 
           | Sometimes it means spending a couple extra minutes here or
           | there to teach a junior about freeing memory on their PR.
           | 
           | No one is suggesting it has to be a zero-sum game, but it
           | would be nice to bring some care for the engineering of the
           | craft back into a field that is increasingly dominated by
           | business case demands over all.
        
             | internet101010 wrote:
             | Exactly. Nobody is saying to min-max from the start - just
             | be a bit more thoughtful and use the right tools for the
             | job in general.
        
           | marcos100 wrote:
           | We all should think about optimization and performance all
           | the time and make a conscious decision of doing or not doing
           | it given a time constraint and what level of performance we
           | want.
           | 
            | People write bad-performing code not because it's easier,
            | but because they don't know how to do it better or don't
           | care.
           | 
           | Repeating things like "premature optimization is the root of
           | all evil" and "it's cheaper to get a bigger machine than dev
           | time" are bad because people stop caring about it and stop
           | doing it and, if we don't do it, it's always going to be a
           | hard and time-consuming task.
        
             | toolz wrote:
             | Strongly disagree with this sentiment. Our jobs are
             | typically to write software in a way that minimizes risk
             | and best ensures the success of the project.
             | 
             | How many software projects have you seen fail because it
             | couldn't run fast enough or used too many resources?
             | Personally, I've never seen it. I'm sure it exists, but I
             | can't imagine it's a common occurrence. I've rewritten
             | systems because they grew and needed perf upgrades to
             | continue working, but this was always something the
             | business knew, planned for and accepted as a strategy for
             | success. The project may have been less successful if it
             | had been written with performance in mind from the
             | beginning.
             | 
             | With that in mind, I can't think of many things less
             | appropriate to keep in your mind as a first class concern
             | when building software than performance and optimization.
             | Sure, as you gain experience in your software stack you'll
             | naturally be able to optimize, but since it will possibly
             | never be the reason your projects fail and presumably your
             | job is to ensure success of some project, then it follows
             | that you should prioritize other things strongly over
             | optimization.
        
               | MobiusHorizons wrote:
               | I see it all the time, applications that would be very
               | usable and streamlined for users from a ui perspective
               | are frustrating and painful to use because every action
               | requires a multi second request. So the experience is
               | mostly reduced to staring at progress spinners.
        
               | noirscape wrote:
               | It also depends on where the code is running. To put it
                | simply: nobody cares how much RAM the server is using,
                | but they _do_ care if their clientside application isn't
               | responsive. UI being performant and responsive should
               | have priority over everything else.
        
               | timeon wrote:
                | Sure, but it seems like a race to the bottom. Faster
               | development will beat better quality in the market.
               | Especially in unregulated industry like this.
        
             | 0cf8612b2e1e wrote:
             | It is even worse for widely deployed applications. To pick
             | on some favorites, Microsoft Teams and One Drive have lousy
             | performance and burn up a ton of cpu. Both are deployed to
             | tens/hundreds of millions of consumers, squandering battery
             | life and electricity usage globally. Even a tiny
             | performance improvement could lead to a fractional
             | reduction in global energy use.
        
               | oriolid wrote:
               | I doubt that it would be good business for Microsoft
               | though. The people who use them, and the people who buy
               | them and force others to use them are two separate
               | groups, and anyone who cares even a bit about user
               | experience and has power to make the decision has already
               | switched to something different. It's also the users, not
               | Microsoft who pays for the wasted power and lost
               | productivity.
        
               | hitradostava wrote:
               | I wish they would do this. But my experience is that
               | building efficient software is hard, and is very very
               | hard the larger the team gets or the longer the product
                | exists.
               | 
                | Even Zoom used to be very efficient, but has gradually
               | got worse over time :-(
        
               | 0cf8612b2e1e wrote:
               | I would find this more compelling if we were not
               | discussing a trillion dollar company that employs tens of
               | thousands of programmers. The One Drive performance is so
               | bad I cannot imagine anyone has put any effort into
                | prioritizing efficiency. A naive first attempt was
                | packaged up and never revisited.
        
               | hitradostava wrote:
                | While that is true, it's really not easy to do without re-
               | writing from scratch and scrapping a load of features
               | which is organisationally difficult to do.
               | 
               | What large piece of software with a user interface do you
               | work with that is actually fast and stays fast? For me,
               | its probably just Chrome / Firefox. Everything else seems
               | to get slower over time.
        
             | OtomotO wrote:
             | Worse even: it's super bad for the environment
        
               | nicce wrote:
                | We have Electron and we won't get rid of it for a decade,
               | at least.
        
           | sampullman wrote:
           | I'm not so sure. I use Rust for simple web services now, when
           | I would have used Python or JS/TS before, and the development
           | speed isn't much different. The main draw is the
           | language/type system/borrow checker, and reduced
           | memory/compute usage is a nice bonus.
        
             | aaronblohowiak wrote:
             | Which framework? Do you write sync or async? I've AoC'd
             | rust and really liked it but async seems a bit much.
        
               | tayo42 wrote:
                | If he was OK with Python performance limitations, then
                | Rust without async is more than enough.
        
               | wtetzner wrote:
               | I have to agree, despite using it a lot, async is the
               | worst part of Rust.
               | 
               | If I had to do some of my projects over again, I'd
               | probably just stick with synchronous Rust and thread
               | pools.
               | 
                | The concept of async isn't that bad, but its
               | implementation in Rust feels rushed and incomplete.
               | 
               | For a language that puts so much emphasis on compile time
               | checks to avoid runtime footguns, it's way too easy to
               | clog the async runtime with blocking calls and not
               | realize it.
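                | 
                | A small sketch of that footgun, assuming the tokio crate
                | with its full feature set (the function names are made
                | up): the first one compiles without complaint but parks
                | an executor worker for two seconds; the second hands the
                | blocking work to spawn_blocking.
                | 
                |     use std::time::Duration;
                |     
                |     async fn clogs_the_runtime() {
                |         // Synchronous sleep: no compile error, but it
                |         // stalls the worker thread and every task
                |         // scheduled on it.
                |         std::thread::sleep(Duration::from_secs(2));
                |     }
                |     
                |     async fn plays_nicely() {
                |         // Blocking work goes to a dedicated pool, so
                |         // the async workers stay free.
                |         tokio::task::spawn_blocking(|| {
                |             std::thread::sleep(Duration::from_secs(2))
                |         })
                |         .await
                |         .unwrap();
                |     }
                |     
                |     #[tokio::main]
                |     async fn main() {
                |         clogs_the_runtime().await;
                |         plays_nicely().await;
                |     }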
        
               | dsff3f3f3f wrote:
               | Not the other poster but I moved from Go to Rust and the
               | main packages I use for web services are axum, askama,
               | serde and sqlx. Tokio and the futures crate are fleshed
               | out enough now that I rarely run into async issues.
        
               | sampullman wrote:
               | That's pretty much where I'm at, plus a few basic
               | packages for auth, caching, job/queue stuff. I can't
               | remember the last time I had to care about async, but it
               | does occasionally come up when dealing with things like
               | background tasks.
               | 
               | I'm not totally happy with sqlx and the logging
               | situation, but most issues that come up are the "solve
               | once and never worry about it again" type.
        
           | treyd wrote:
            | Code is usually run many more times than it is written. It's
           | usually worth spending a bit of extra time to do something
           | the right way the first time when you can avoid having to
           | rewrite it under pressure only _after_ costs have ballooned.
           | This is proven time and time again, especially in places
           | where inefficient code can be so easily identified upfront.
        
             | manquer wrote:
              | Not all code is run enough times for that trade-off to
              | always be justified.
              | 
              | It is very hard to know if your software is going to be
              | popular enough for costs to be a factor at all, and even
              | if it is, it is hard to know whether you can survive as
              | an entity long enough for the extra delay; a competitor
              | might ship an inferior but earlier product, or you may
              | run out of money.
              | 
              | You'd rather ship the quick and dirty version and see if
              | there is demand for it to be worth the cleaner effort.
              | 
              | There is no limit to that; more optimization keeps
              | becoming a good idea as you scale. At, say, Meta or
              | Google levels it makes sense to spend on building your
              | own ASICs, something we wouldn't dream of doing today.
        
           | Havoc wrote:
            | Tempted to say it's more that learning the language takes
            | longer than the writing part.
            | 
            | From my casual dabbling in Python and Rust, they feel like
            | they're in a similar ballpark. Especially if I want the
            | Python code to be similarly robust as what Rust tends to
            | produce. Edge cases in Python are much more gnarly.
        
           | jarjoura wrote:
           | Agreed. When a VC backed company is in hyper-growth, and
           | barely has resources to scale up their shaky MVP tech stack
           | so they can support 100+ million users, I doubt anyone thinks
            | it's reasonable to give the engineers 6 months to stop and
           | learn Rust just to rewrite already working systems.
           | 
           | Adding Rust into your build pipeline also takes planning and
           | very careful upfront design decisions. `cargo build` works
           | great from your command line, but you can't just throw that
           | into any pre-existing build system and expect it to just
           | work.
        
         | marcosdumay wrote:
         | It's the result of the data isolation above anything else
         | attitude of Javascript.
         | 
         | Or, in other words, it's the unavoidable result of insisting on
         | using a language created for the frontend to write everything
         | else.
         | 
         | You don't need to rewrite your code in Rust to get that saving.
         | Any other language will do.
         | 
         | (Personally, I'm surprised all the gains are so small. Looks
         | like it's a very well optimized code path.)
        
           | adastra22 wrote:
           | There is no reason data isolation should cost you 100x memory
           | usage.
        
             | marcosdumay wrote:
             | There are plenty of reasons. They are just not intrinsic to
             | the isolation, instead they come from complications rooted
              | deeply in the underlying system.
             | 
             | If you rebuild Linux from the ground up with isolation in
             | mind, you will be able to do it more efficiently. People
             | are indeed in the process of rewriting it, but it's far
              | from complete (and moving back and forth, as not every
             | Linux dev cares about it).
        
               | btilly wrote:
               | Unless you can be concrete and specific about some of
               | those reasons, you're just replacing handwaving with more
               | vigorous handwaving.
               | 
               | What is it specifically about JavaScript's implementation
               | of data isolation that, in your mind, helps cause the
               | excessive memory usage?
        
               | marcosdumay wrote:
               | Just a day or two ago, there was an article here about
               | problems implementing a kind of read-only memory
                | constraint that Javascript benefited from in other OSes.
        
               | btilly wrote:
               | I must have missed that article. Can you find it?
               | 
               | Unless you can come up with a specific reference, it
               | seems unlikely that this would explain the large memory
               | efficiency difference. By contrast it is simple and
               | straightforward to understand why keeping temporary
               | garbage until garbage collection could result in tying up
               | a lot of memory while continually running code that
               | allocates memory and lets it go out of scope. If you
               | search, you'll find lots of references to this happening
               | in a variety of languages.
        
             | chipdart wrote:
             | > There is no reason data isolation should cost you 100x
             | memory usage.
             | 
             | It really depends on what you mean by "memory usage".
             | 
             | The fundamental principle of any garbage collection system
             | is that you allocate objects in the heap at will without
             | freeing them until you really need to, and when that time
             | comes you rely on garbage collection strategies to free and
             | move objects. What this means is that processes end up
              | allocating more memory than is actually in use, just because
             | there is no need to free it. Consequently, with garbage
             | collecting languages you configure processes with a
             | specific memory budget. The larger the budget, the rarer
             | these garbage collection strategies kick in.
             | 
             | I run a service written with a garbage collected language.
             | It barely uses more than 100MB of memory to handle a couple
              | hundred requests per minute. The process takes up as much
             | as 2GB of RAM before triggering generation 0 garbage
             | collection events. These events trigger around 2 or 3 times
             | per month. A simplistic critic would argue the service is
             | wasting 10x the memory. That critic would be manifesting
             | his ignorance, because there is absolutely nothing to gain
             | by lowering the memory budget.
        
               | nicoburns wrote:
               | > That critic would be manifesting his ignorance, because
               | there is absolutely nothing to gain by lowering the
               | memory budget.
               | 
               | Given that compute is often priced proportional to
               | (maximum) memory usage, there is potentially a lot to be
               | gained: dramatically cheaper hosting costs. Of course if
                | your hosting costs are small to begin with then this
               | likely isn't worthwhile.
        
               | toast0 wrote:
               | > That critic would be manifesting his ignorance, because
               | there is absolutely nothing to gain by lowering the
               | memory budget.
               | 
               | Well, that depends on information you haven't provided.
               | Maybe your system does have an extra 900 MB of memory
               | hanging around; I've certainly seem systems where the
               | minimum provisionable memory[1] is more than what the
               | system will use for program memory + a full cache of the
               | disk. If that's the case, then yeah, there's nothing to
               | gain. In most systems though, 900 MB of free memory could
               | go towards caching more things from disk, or larger
               | network buffers, or _something_ more than absolutely
               | nothing.
               | 
               | Even with all that, lowering your memory budget might
               | mean more of your working memory fits in L1/L2/L3 cache,
               | which could be a gain, although probably pretty small,
               | since garbage isn't usually accessed. Absolutely nothing
               | is a pretty low barrier though, so I'm sure we could
               | measure something. Probably not worth the engineering
               | cost though.
               | 
               | There are also environments where you can get rather
               | cheap freeing by setting up your garbage to be easily
               | collected. PHP does a per-request garbage collection by
               | (more or less) resetting to the pre-request state after
               | the request is finished; this avoids accumulating garbage
               | across requests, without spending a lot of effort on
               | analysis. An Erlang system that spawns short lived BEAM
               | processes to handle requests can drop the process heap in
               | one fell swoop when the process dies; if you configure
               | the initial heap size so no GCs are triggered during the
               | lifetime of the process, there's very little processing
               | overhead. If something like that fits your environment
               | and model, it can keep your memory usage lower without a
               | lot of cost.
               | 
               | [1] Clouds usually have a minimum memory per vCPU; if you
               | need a lot of CPUs and not a lot of memory, too bad. I
                | don't think you can buy DDR4 DIMMs of less than 4GB, or
               | DDR5 of less than 8GB. Etc
        
           | jvanderbot wrote:
           | "Rust" really just means "Not javascript" as a recurring
           | pattern in these articles.
        
             | noirscape wrote:
             | It's also frankly kinda like comparing apples and oranges
             | as a language. JavaScript (and many of the "bad
             | performance" high level languages minus Rails; Rails is bad
             | and should be avoided for projects as much as possible
             | unless you have lots of legacy cruft) are also heavily
             | designed around rapid iteration. Rust is however very much
             | not capable of rapid iteration, the borrow checker will
             | fight you _heavily_ every step of the way to the point
             | where it demands constant refactors.
             | 
             | Basically the best place where Rust can work is one where
              | all variables, all requirements, and all edge cases are
              | known ahead of time, or cases where manual memory safety is
              | a necessity versus accepting a minor performance hit from
              | things like the garbage collector. This works well in
              | _some_ spaces (notably: systems programming, embedded, and
              | browser engines, though I wouldn't consider the latter a
              | valid target), but webserver development is probably one of
              | the last places where you would reach for Rust.
        
               | hathawsh wrote:
               | I have often thought that programmers can actually just
               | choose to make Rust easy by using a cyclic garbage
               | collector such as Samsara. [1] If cyclic GC in Rust works
               | as well as I think it can, it should be the best option
               | for the majority of high level projects that need fast
               | development with a trade-off of slightly lower
               | efficiency. I suspect we'll see a "hockey stick" adoption
               | curve once everyone figures this out.
               | 
               | [1] https://github.com/chc4/samsara
        
               | 0cf8612b2e1e wrote:
               | I am still waiting for a scripting language to be bolted
               | on top of Rust. Something that will silently Box all the
               | values so the programmer does not have to think about the
               | Rust specifics, but can still lean on all of the Rust
               | machinery and libraries. If performance/correctness
               | becomes a problem, the scripting layer could be replaced
               | piecemeal with real Rust.
        
               | dartos wrote:
               | And then we would've come full circle.
               | 
               | Beautiful
        
               | jvanderbot wrote:
               | The world is mad. After a decade of this, I give up. The
               | cycles never end.
        
               | hathawsh wrote:
               | I know. We're all just rediscovering Lisp in our own way.
               | 
                | ... And yet the fact that most of us _know_ we're
               | reinventing Lisp, and still doing it anyway, says
               | something. I guess it says that we're just trying to get
               | our jobs done.
        
               | hathawsh wrote:
               | Perhaps you mean to say that you're waiting for a new
               | scripting language to be created that's designed to be
               | "almost Rust." That could be interesting! OTOH, the
               | bindings for existing languages have matured
                | significantly:
                | 
                |     - https://pyo3.rs/
                |     - https://github.com/neon-bindings/neon
                |     - https://github.com/mre/rust-language-bindings
        
               | 0cf8612b2e1e wrote:
               | I definitely am thinking of something more Rust-forward.
               | As Rusty as possible without having to worry about
               | lifetimes, the borrow checker, whatever. Huge performance
               | hit is acceptable, so long as it remains trivial to
               | intermix the Rust+scripting code. Something that gives a
               | smooth on-ramp to push the heavy bits into pure Rust if
               | required. The Python+C strategy in a more integrated
               | package.
        
               | Already__Taken wrote:
               | You're very much describing the powershell -> .Net -> C#
                | path, so I would be curious to hear your take there.
                | There's also the mad-lad effort to support Rust in .NET:
               | https://github.com/FractalFir/rustc_codegen_clr/
        
               | worik wrote:
               | This is what async/await rust programmers need
               | 
               | They are comfortable with runtimes
        
               | sophacles wrote:
               | I found this to be untrue after I spent a little energy
               | learning to think about problems in rust.
               | 
               | In a lot of languages you're working with a hammer and
               | nail (metaphorically speaking) and when you move to a
                | different language it's just a slightly different hammer
               | and nail. Rust is a screwdriver and screw though, and
               | once I stopped trying to pound the screw in with the
               | screwdriver, but rather use the one to turn the other, it
               | was a lot easier. Greenfield projects with a lot of
               | iteration are just as fast as doing it in python
               | (although a bit more front-loaded rather than debugging),
               | working new features into existing code - same thing.
        
               | echelon wrote:
               | > Rust is however very much not capable of rapid
               | iteration, the borrow checker will fight you heavily
               | every step of the way to the point where it demands
               | constant refactors.
               | 
               | Misconception.
               | 
               | You will encounter the borrow checker almost never when
               | writing backend web code in Rust. You only encounter it
               | the first time when you're learning how to write backend
               | code in Rust. Once you've gotten used to it, you will
               | _literally never hit it_.
               | 
               | Sometimes when I write super advanced endpoints that
               | mutate global state or leverage worker threads I'll
               | encounter it. But I'm intentionally doing stuff I could
               | never do in Python or Javascript. Stuff like tabulating
               | running statistics on health check information, batching
               | up information to send to analytics services, maintaining
               | in-memory caches that talk to other workers, etc.
        
               | materielle wrote:
               | To put this another way: the Rust borrow checker attempts
               | to tie memory lifetime to stack frames.
               | 
               | This tends to work well for most crud api servers, since
               | you allocate "context, request, and response" data at the
               | start of the handler function, and deallocate at the end.
               | Most helper data can also be tied to the request
               | lifecycle. And data is mainly isolated per-request.
               | Meaning there isn't much data sharing across multiple
               | request.
               | 
               | This means that the borrow checker "just works", and you
               | probably won't even need lifetime annotations or even any
               | special instructions for the borrow checkers. It's the
               | idealized use case the borrow checker was designed for.
               | 
               | This is also the property which most GC languages like
               | Java, Go, and C# exploit with generational garbage
               | collectors. The reason it "works" in Java happens to be
               | the same reason it works in Rust.
               | 
               | If your server does need some shared in-memory data, you
               | can start by just handing out copies. If you truly need
               | something more complicated, and we are talking about less
               | than 10% of crud api servers here, then you need to know
               | a thing or two about the borrow checker.
               | 
               | I'm not saying to rewrite web servers in Rust, or even
               | advocating for it as a language. I'm just pointing out
               | that a crud api server is the idealized use case for a
               | borrow checker.
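                | 
                | A framework-free sketch of that shape (AppState and
                | handle_request are invented for illustration):
                | per-request data lives in the handler's frame and is
                | dropped on return, shared state is handed out as cheap
                | Arc clones, and no lifetime annotations are needed
                | anywhere.
                | 
                |     use std::collections::HashMap;
                |     use std::sync::Arc;
                |     
                |     // Shared, read-mostly state: cloned cheaply per
                |     // request via Arc.
                |     struct AppState {
                |         greeting: String,
                |     }
                |     
                |     // Everything allocated here is owned by this stack
                |     // frame and freed when the handler returns.
                |     fn handle_request(
                |         state: Arc<AppState>,
                |         params: HashMap<String, String>,
                |     ) -> String {
                |         let name = params
                |             .get("name")
                |             .cloned()
                |             .unwrap_or_else(|| "world".to_string());
                |         format!("{}, {}!", state.greeting, name)
                |     }
                |     
                |     fn main() {
                |         let state = Arc::new(AppState {
                |             greeting: "hello".to_string(),
                |         });
                |         let mut params = HashMap::new();
                |         params.insert("name".into(), "HN".into());
                |         println!("{}", handle_request(state, params));
                |     }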
        
               | echelon wrote:
               | Incredibly well said. This is precisely what makes it
               | work so well.
               | 
               | The language never set out to solve this problem. It
               | wasn't an intentional design goal. The language design
               | and problem space just happen to overlap more or less
               | perfectly.
               | 
               | Complete serendipity.
        
               | timeon wrote:
                | Writing server APIs and the like is not an unknown path
                | that needs rapid prototyping.
        
               | worik wrote:
               | > the borrow checker will fight you heavily every step of
               | the way to the point where it demands constant refactors.
               | 
               | No
               | 
               | Once you learn to surrender to the borrow checker it
               | becomes friend, not foe
               | 
               | You must submit
        
               | jvanderbot wrote:
               | You've described Rust's niche circumspectly: Systems,
               | which have strict requirements and are fragile
               | minefields.
               | 
               | The performance benefits of Rust were supposed to be a
               | non-penalty: "Look, you can please please use this where
               | you'd use C or C++ I promise it won't impact your
               | performance!" Performance and GC overhead was the
                | rejection du jour of every other C replacement.
               | 
               | But here we are: All friends are javascript front-enders-
               | turned-backenders and are wondering if they should pick
               | up Rust. It fits into a pattern of "new shiny" but it's
               | good, don't get me wrong, if everyone experiences
               | compiled languages and starts writing their headless code
               | in sensible languages.
               | 
               | Repeating myself, but I'm just wondering why not Go? Why
               | now?
        
               | dralley wrote:
               | >Rust is however very much not capable of rapid
               | iteration, the borrow checker will fight you heavily
               | every step of the way to the point where it demands
               | constant refactors.
               | 
               | If you have sufficient experience, that's not really the
               | case. Certainly compared to "comparable" languages like
               | C++ where that time fighting the borrow checker might
               | instead have been spent chasing random crashes.
        
               | jvanderbot wrote:
               | I have written both professionally for long enough to say
                | that there's no real comparable advantage to either. You
                | trade fighting one complexity for another when
               | refactoring or iterating.
        
             | IshKebab wrote:
             | Not exactly. It wouldn't help if you moved your JavaScript
             | to Python or Ruby or PHP... and anyway it's not really
             | feasible from an FFI perspective to move it to anything
             | other than Rust or C/C++ or maybe Zig. There's no good
             | reason to pick C/C++ over Rust in most of these cases...
             | 
             | So "Rust" means "Not JavaScript, and also a bunch of other
             | constraints that mean that Rust is pretty much the only
             | sensible choice."
        
               | marcosdumay wrote:
               | > It wouldn't help if you moved your JavaScript to Python
               | or Ruby or PHP...
               | 
               | Hum, no. The point is exactly that it would help a great
               | deal if you moved to Python or Ruby or PHP.
               | 
               | Of course, Rust will give you even better memory
               | efficiency. But Javascript is a particularly bad option
               | there, and almost anything else would be an improvement.
               | ("Almost", because if you push it enough and move to
               | something like MathLab, you'll get worse results.)
        
               | chrisldgk wrote:
                | This seems a bit unfair to JavaScript. A lot of
                | optimizations have been made to the language and its
                | runtimes that have made it a more than viable choice for
                | server-side applications over the years. The JavaScript
                | that started as a web-browser client-side language is
                | very different from the ECMAScript that we have today.
                | Depending on its usage it can also be one of the fastest,
                | only regularly eclipsed by Rust[1]. So no, JavaScript
                | really isn't a bad option for server-side applications at
                | all.
               | 
               | [1] https://www.techempower.com/benchmarks/#hw=ph&test=co
               | mposite...
        
               | jerf wrote:
               | If moving from JS to CPython would help, it might help
               | memory consumption, because JITs generally trade speed
               | for increased memory. But then you'd get slower
               | execution, because CPython is slower than the JS engines
               | we tend to use. PyPy might generally track JS on
               | performance (big, BIG "it depends" because the speed
               | profile of JITs are _crazy_ complicated, one of my least
               | favorite things about them) but then you 're back to
               | trading memory for speed, so it's probably net-net a
               | sideways move.
               | 
               | Also, I don't know what Node is doing exactly, but if you
               | take a lot of these dynamic languages and just fork them
               | into multiple processes, which they still largely need to
               | do to effectively use all the CPUs, you will generally
               | see high per-process memory consumption just like Node.
                | Any memory page that has a reference counter in it that
                | is used by your code ends up copied in practice, despite
                | Copy-On-Write, by every process in the steady state,
                | because all you need to do to end up copying the page is
                | _look_ at any one reference it happens to contain in such
                | a language. At least in my experience memory sharing
                | gains were always minimal to effectively zero in such
                | cases.
        
               | acdha wrote:
               | > But then you'd get slower execution, because CPython is
               | slower than the JS engines we tend to use
               | 
                | I have not found this to be generally true. It depends
                | heavily on whether your code is limited by pure high-
                | level-language code[1], and culture makes comparisons
                | harder if you're not just switching languages but also
                | abstraction models and a big stack of optimizations. In
                | theory Java beats Python, but in practice I've seen
                | multiple times where a Java program was replaced by
                | Python and saw whole-number-multiple improvements in
                | performance and reductions in memory consumption, because
                | what was really happening was that a bunch of super
                | complicated, optimization-resistant Java framework code
                | was being replaced with much simpler code which was
                | easier to optimize. Node is closer to that side of Java
                | culturally, I think in both cases because people reacted
                | to the limited language functionality by building tons of
                | abstractions which are still there even after the
                | languages improved, so even though it's possible to do
                | much better a lot of programmers are still pushing around
                | a lot of code with 2000s-era workarounds buried in the
                | middle.
               | 
               | 1. I'm thinking of someone I saw spend months trying to
               | beat Python in Go and eking out a 10% edge because the
               | bulk of the work devolved to stdlib C code.
        
               | jerf wrote:
               | I cite CPython specifically as CPython both to indicate
               | that I mean that specific interpreter, and that I mean
               | _Python_ code, not Python driving other languages.
               | 
               | While I fully believe that a Python program with a
               | superior O()-complexity class can beat Java (or, indeed,
               | any language), and that a simpler Python program can
               | hypothetically beat a Java program that is just too
               | complicated, it would also be the case that taking that
               | faster Python program and then porting _that_ into Java
               | would then see order of magnitude+ speed increases.
                | Python is _slow_. When comparing languages I generally
                | add the caveat "with some non-zero and comparable amount
                | of time dedicated to optimization" to try to build a
                | reasonable comparison, because most programs that have
                | had no effort done on optimization at all will have their
                | performance dominated by _something_ stupid that the
                | programmer didn't even realize they wrote.
               | 
               | The speed increases aren't relevant if the old Java was
               | "too slow" and the new Python is "fast enough". Every
                | program I've ever written _could_ be made faster... but
                | they're all fast enough now.
               | 
               | Pure Python with some non-trivial optimization effort can
               | not beat a Java program with some non-trivial
               | optimization effort, and that's before the Java code
               | starts using multiple CPUs, if the problem is amenable to
               | that.
               | 
               | This is not cheerleading, dumping on Python, or promoting
               | Java, as if anything my personal biases are in fact the
               | other way (tbh I don't particularly like either at this
               | point but I'd much rather be using Python). This is just
               | engineering stuff that good engineers should know:
               | https://jerf.org/iri/post/2024/not_about_python/
        
               | acdha wrote:
               | I'm not saying that Java or even V8 shouldn't be able to
               | beat Python but rather that in many cases the
               | optimizations needed to beat it are to a first
               | approximation saying "stop using Spring/NextJS/etc." and
               | never happen. The gap between potential and actual speed
               | has been quite frustrating to see expanding over the
               | years.
        
               | kelnos wrote:
               | It depends, of course, on what you're doing. Re-using the
               | toy web API in the article, I expect Python would be
               | significantly faster. The QR code library you'd end up
                | using in Python is probably written in C, and the web-
                | serving portion should have performance characteristics
                | comparable to what you'd get with nodejs.
               | 
               | My guess is that if you were to rewrite this same app in
               | straight Python (no Rust at all), it would probably
               | already give you "Tier 3" performance.
               | 
               | But sure, I bet there are a bunch of use cases where
               | nodejs would be faster than Python.
        
               | chipdart wrote:
               | > There's no good reason to pick C/C++ over Rust in most
               | of these cases...
               | 
                | What leads you to believe that?
        
               | jrpelkonen wrote:
               | I'm not a big believer in absolutes like that, but unless
               | a person is already proficient in C or C++, or there's an
               | existing C++ library, etc., I find it hard to justify
               | using those over Rust. Rust has great tooling, good cross
               | compilation support, good quality standard library and
               | very good 3rd party ecosystem.
               | 
                | Also, it has so few footguns compared to C or C++ that
                | even modestly experienced developers can safely use it.
        
               | acdha wrote:
               | The constant stream of CVEs caused by even experts
               | failing to use those languages correctly on the one side,
               | and the much better developer experience on the other.
               | C++ isn't horrible but it's harder to use, harder to find
               | good developers, and there are relatively few cases where
               | there's something easier to do in C++ than Rust which
               | would warrant picking it. In most cases, it'll be both
               | faster and safer if you use a modern language with good
               | tooling instead and take advantage of the easy C bindings
               | if there's a particular library you need.
        
               | IshKebab wrote:
               | Because except in rare cases Rust can do everything C++
               | can do with basically the same performance profile, but
               | it does it with modern tooling and without the security,
               | reliability and productivity issues associated with C++'s
               | pervasive Undefined Behaviour.
               | 
               | There are some cases where C++ makes sense:
               | 
               | * You have a large existing C++ codebase you need to talk
               | to via a large API surface (C++/Rust FFI is not great)
               | 
                | * You have a C++ library that's core to your project and
                | doesn't have a good Rust alternative (e.g. Qt)
               | 
               | * You don't like learning (and are therefore in
               | _completely_ the wrong industry!)
        
               | crabmusket wrote:
               | A host of a prominent C++ podcast expressed more or less
               | this sentiment recently (on an ep within the last year).
               | He was being a _little_ bit  "devil's advocate", and not
               | suggesting stopping working with C++ altogether. But he
               | could see most use cases of C++ being well satisfied by
               | Rust, and with more ergonomic features like Cargo making
               | the overall experience less of a chore.
        
           | nh2 wrote:
           | It's important to be aware that often it isn't the
           | programming language that has the biggest effect on memory
           | usage, but simply settings of the memory allocator and OS
           | behaviour.
           | 
           | This also means that you cannot "simply measure memory usage"
           | (e.g. using `time` or `htop`) without already having a
           | relatively deep understanding of the underlying mechanisms.
           | 
           | Most importantly:
           | 
           | libc / malloc implementation:
           | 
           | glibc by default has heavy memory fragmentation, especially
           | in multi-threaded programs. It means it will not return
           | `malloc()`ed memory back to the OS when the application
           | `free()`s it, keeping it instead for the next allocation,
           | because that's faster. Its default settings will e.g. favour
           | 10x increased RESident memory usage for 2% speed gain. Some
            | of this can be turned off in glibc using e.g. the env var
            | `MALLOC_MMAP_THRESHOLD_=65536` -- for many applications I've
            | looked at, this instantaneously reduced RES from 7 GiB to 1
            | GiB. Some other issues cannot be addressed, because the
            | corresponding glibc tunables are bugged [1]. For jemalloc
           | `MALLOC_CONF=dirty_decay_ms:0,muzzy_decay_ms:0` helps to
           | return memory to the OS immediately.
           | 
           | Linux:
           | 
           | Memory is generally allocated from the OS using `mmap()`, and
           | returned using `munmap()`. But that can be a bit slow. So
           | some applications and programming language runtimes use
           | instead `madvise(MADV_FREE)`; this effectively returns the
           | memory to the OS, but the OS does not actually do costly
           | mapping table changes unless it's under memory pressure. As a
           | result, one observes hugely increased memory usage in `time`
           | or `htop`. [2]
           | 
            | The above means that people are completely unaware of what
            | actually eats their memory and what the actual resource usage
            | is, easily "measuring wrong" by a factor of 10x.
           | 
           | For example, I've seen people switch between Haskell and Go
           | (both directions) because they thought the other one used
           | less memory. It actually was just the glibc/Linux flags that
           | made the actual difference. Nobody made the effort to really
           | understand what's going on.
           | 
           | Same thing for C++. You think without GC you have tight
           | memory control, but in fact your memory is often not returned
           | to the OS when the destructor is called, for the above
           | reason.
           | 
           | This also means that the numbers for Rust or JS may easily be
           | wrong (in either direction, or both).
           | 
           | So it's quite important to measure memory usage also with the
           | tools above malloc(), otherwise you may just measure the
           | wrong thing.
           | 
           | [1]: https://sourceware.org/bugzilla/show_bug.cgi?id=14827
           | 
           | [2]: https://downloads.haskell.org/ghc/latest/docs/users_guid
           | e/ru...
        
           | btilly wrote:
           | Your claim makes zero sense to me. Particularly when I've
           | personally seen similar behavior out of other languages, like
           | Java.
           | 
           | As I said in another comment, the most likely cause is that
           | temporary garbage is not collected immediately in JavaScript,
           | while garbage is collected immediately in Rust. See
           | https://doc.rust-lang.org/nomicon/ownership.html for the key
           | idea behind how Rust manages this.
           | 
           | If you truly believe that it is somehow due to data
           | isolation, then I would appreciate a reference to where
           | JavaScript's design causes it to behave differently.
        
           | chipdart wrote:
           | > Or, in other words, it's the unavoidable result of
           | insisting on using a language created for the frontend to
           | write everything else.
           | 
           | I don't think this is an educated take.
           | 
            | The whole selling point of JavaScript in the backend has
            | nothing to do with "frontend" things. The primary selling
            | point is what made Node.js take over half the world: its
            | async architecture.
            | 
            | And by the way, benchmarks such as the TechEmpower Web
            | Framework Benchmarks still feature JavaScript frameworks that
            | outperform Rust frameworks. How do you explain that?
        
             | nicce wrote:
              | > The primary selling point is what made Node.js take over
              | half the world: its async architecture.
             | 
             | It is the availability of the developers who know the
             | language (JavaScript) (aka cheaper available workforce).
        
             | runevault wrote:
             | Rust has had async for a while (though it can be painful,
             | but I think request/response systems like APIs should not
             | run into a lot of the major footguns).
             | 
              | C# has excellent async for ASP.NET and has for a long time.
             | I haven't touched Java in ages so cannot comment on the JVM
             | ecosystem's async support. So there are other excellent
             | options for async backends that don't have the drawbacks of
             | javascript.
        
           | smolder wrote:
           | I rewrote the same web API in Javascript, Rust, C#, and Java
           | as a "bench project" at work one time. The Rust version had
           | smallest memory footprint _by far_ as well as the best
           | performance. So, no,  "any other language" [than JS] is not
           | all the same.
        
             | manquer wrote:
              | They are not saying every language will have the same level
              | of improvement as Rust; they are saying most of the
              | improvement is available in most languages.
              | 
              | Perhaps you go from 1300 MB to 20 MB with C# or Java or Go,
              | and to 13 MB with Rust. The point is that Rust's design is
              | not the reason for the bulk of the reduction.
        
               | acdha wrote:
               | Sure, but until people actually have real data that's
               | just supposition. If a Java rewrite went from 1300MB to,
               | say, 500MB they'd have a valid point and optimizing for
               | RAM consumption is severely contrary to mainstream Java
               | culture.
        
             | jeroenhd wrote:
             | C# and Java are closer but not really on the level of Rust
             | when it comes to performance. A better comparison would be
             | with C++ or a similarly low-level language.
             | 
             | In my experience, languages like Ruby and Python are slower
             | than languages like Javascript, which are slower than
             | languages like C#/Java, which are slower than languages
             | like C++/Rust, which are slower than languages like C and
             | Fortran. Assembly isn't always the fastest approach these
             | days, but well-placed assembly can blow C out of the water
             | too.
             | 
             | The ease of use and maintainability scale in reverse in my
             | experience, though. I wouldn't want to maintain the
             | equivalent of a quick and dirty RoR server reimplemented in
             | C or assembly, especially after it's grown organically for
             | a few years. Writing Rust can be very annoying when you
             | can't take the normal programming shortcuts because of
             | lifetimes or the borrow checker, in a way that JIT'ed
             | languages allow.
             | 
             | Everything is a scale and faster does not necessarily mean
             | better if the code becomes unreadable.
        
               | Klonoar wrote:
               | I have written and worked on more than my fair share of
               | Rust web servers, and the code is more than readable.
               | This typically isn't the kind of Rust where you're
               | managing lifetimes and type annotations so heavily.
        
               | jandrewrogers wrote:
               | C and Fortran are not faster than C++, and haven't been
               | for a long time. I've used all three languages in high-
               | performance contexts. In practice, C++ currently produces
               | the fastest code of high-level languages.
        
               | smolder wrote:
               | My goal with the project was to compare higher
               | performance _memory safe_ languages to Javascript in
               | terms of memory footprint, throughput, latency, as well
                | as the difficulty of implementation. Rust was,
                | _relatively_ speaking, slightly more difficult:
                | concurrently manipulated data needed to be explicitly
                | wrapped in a mutex, and transforming arbitrary JSON
                | structures (which was what one of the endpoints did) was
                | slightly more complex than in the others. But, overall,
                | even the endpoints that I thought might be tricky in Rust
                | weren't really what I'd call difficult to implement, and
                | it wasn't difficult to read either. It seemed worth the
               | trade-off to me and I regret not having more
               | opportunities to work with it professionally in the time
               | since.
        
               | kelnos wrote:
               | > _A better comparison would be with C++ or a similarly
               | low-level language._
               | 
               | Right, but then I'd have to write C++. Shallow dismissal
               | aside (I _really_ do not enjoy writing C++), the bigger
               | issue is safety: I am almost certain to write several
               | exploitable bugs in a language like C++ were I to use it
               | to build an internet-facing web app. The likelihood of
               | that happening with Rust, Java, C#, or any other memory-
               | safe language is _much_ lower. Sure, logic errors can
                | result in security issues too, and no language can save
                | you from those, but that's in part the point: when it
                | comes to the possibility of logic errors, we're in "all
               | things being equal" territory. When it comes to memory
               | safety, we very much are not.
               | 
               | So that pretty much leaves me with Rust, if I've decided
               | that the memory footprint or performance of Java or C#
               | isn't sufficient for my needs. (Or something like Go, but
               | I personally do not enjoy writing Go, so I wouldn't
               | choose it.)
               | 
               | > _Everything is a scale and faster does not necessarily
               | mean better if the code becomes unreadable._
               | 
               | True, but unreadable-over-time has not been my experience
               | with Rust. You can write some very plain-vanilla,
               | not-"cleverly"-optimized code in Rust, and still have
               | great performance characteristics. If I ever have to drop
               | into 'unsafe' in a Rust code base for something like a
               | web app, most likely I'm doing it wrong.
        
               | pdimitar wrote:
                | > _when it comes to the possibility of logic errors, we're
                | in "all things being equal" territory. When it comes to
                | memory safety, we very much are not._
               | 
               | Very well summed. I'll remember this exact quote. Thank
               | you.
        
               | tialaramex wrote:
               | I'd even argue that idiomatic Rust is less prone to those
               | "logic errors" than C++ and the language design gives you
               | fewer chances to trip over yourself.
               | 
                | Even for the basics: nobody calls Rust's
                | [T]::sort_unstable without knowing it is an unstable
                | sort. Even if you've no idea what "stability" means in
                | this context, you are cued to go find out. But in C++ that
                | is just called "sort". Hope you don't mind that it's
                | unstable...
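                | For instance (a trivial sketch using only std):
                | 
                |     fn main() {
                |         let mut v = vec![3, 1, 2, 2];
                |         // Stable: equal elements keep their relative order.
                |         v.sort();
                |         // Usually faster; equal elements may be reordered.
                |         v.sort_unstable();
                |         assert_eq!(v, [1, 2, 2, 3]);
                |     }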
        
               | neonsunset wrote:
               | C# and Java are languages with _very_ different
               | performance ceilings and techniques available for memory
               | management.
        
               | pdimitar wrote:
               | > _A better comparison would be with C++ or a similarly
               | low-level language._
               | 
                | You probably want the apples-to-apples comparison, but
                | this looks like an artificially limiting one; people
                | are shilling, ahem, sorry, advocating for their languages
                | in most areas, especially web / API servers. If somebody
               | is making grandiose claims about their pet language then
               | it's very fair to slap them with C++ or Rust or anything
               | else that's actually mega ultra fast.
               | 
               | So there's no "better" comparison here. It's a fair game
               | to compare everything to everything if people use all
               | languages for the same kinds of tasks. And they do.
        
             | materielle wrote:
             | I'm curious how Go stacks up against C# and Java these
             | days.
             | 
             | "Less languages features, but a better compiler" was
             | originally the aspirational selling point of Go.
             | 
             | And even though there were some hiccups, at least 10 years
             | ago, I remember that mainly being true for typical web
             | servers. Go programs did tend to use less memory, have less
             | GC pauses (in the context of a normal api web server), and
             | faster startup time.
             | 
             | But I know Java has put a ton of work in to catch up to Go.
             | So I wonder if that's still true today?
        
               | dartos wrote:
               | One of the big draws of go is ease of deployment. A
               | single self contained binary is easy to package and ship,
               | especially with containers.
               | 
               | I don't think Java has any edge when it comes to
               | deployment.
        
               | jerven wrote:
               | Java AOT has come a long way, and is not so rare as it
               | used to be. Native binaries with GraalVM AOT are becoming
               | more a common way to ship CLI tools written in JVM
               | languages.
        
               | neonsunset wrote:
               | Native image continues to be relegated to a "niche"
               | scenario with very few accommodations from the wider Java
               | ecosystem.
               | 
                | This contrasts significantly with the effort behind, and
                | adoption of, NativeAOT in .NET. That said, besides CLI
                | tools, the scenarios where it shines (like GUI
                | applications) aren't ones Go is capable of addressing
                | properly in the first place.
        
               | neonsunset wrote:
               | Go compiler is by far the weakest among those three. GC
               | pause time is a little lie that leaves the allocation
               | throttling, pause frequency and write barrier cost out of
               | the picture. Go works quite well within its intended
               | happy path but regresses massively under heavier
               | allocation traffic in a way that just doesn't happen in
               | .NET or OpenJDK GC implementations.
        
               | materielle wrote:
               | That's why I specifically qualified my comment "within
               | the context of a typical crud api server".
               | 
               | I remember this being true 10 years ago. Java web servers
               | I maintained had a huge problem with tail latency. Maybe
               | if you were working on a 1 qps service it didn't matter.
               | But for those of us working on high qps systems, this was
               | a huge problem.
               | 
               | But like I said, I know the Java people have put a ton of
               | work in to try to close the gap with Go. So maybe this
               | isn't true anymore.
        
               | neonsunset wrote:
                | A typical CRUD API server is going to do quite a few
                | allocations, maybe use the "default" (underwhelming) gRPC
                | implementation to call third parties and query a DB (not
                | to mention the way worse state of ORMs in Go). It's an old
               | topic.
               | 
                | Go _tends to_ perform better at "leaner" microservices,
                | but if you are judging this only by comparing it to the
                | state of Java many years ago, ignoring _numerous_
                | alternative stacks, it's going to be a completely
                | unproductive way to look at the situation. Let's not move
               | the goalposts.
        
         | beached_whale wrote:
          | I'm OK if it isn't popular. It will keep compute costs lower
          | for those using it, as the norm is excessive usage.
        
         | btilly wrote:
         | That's because you're churning temporary memory. JS can't free
         | it until garbage collection runs. Rust is able to do a lifetime
         | analysis, and knows it can free it immediately.
         | 
          | The same will happen in any function where you're calling
          | functions over and over again that create transient data which
          | later gets discarded.
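          | A minimal sketch (hypothetical, not from the article) of what
          | "freed immediately" looks like in Rust:
          | 
          |     fn handle_request(input: &str) -> usize {
          |         // `buf` is transient data owned by this function.
          |         let buf: Vec<u8> = input.bytes().collect();
          |         buf.len()
          |         // `buf` is freed deterministically when it goes out of
          |         // scope here, instead of waiting for a GC pass.
          |     }
          | 
          |     fn main() {
          |         println!("{}", handle_request("hello"));
          |     }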
        
         | throwitaway1123 wrote:
         | There are flags you can set to tune memory usage (notably V8's
         | --max-old-space-size for Node and the --smol flag for Bun). And
         | of course in advanced scenarios you can avoid holding strong
         | references to objects with weak maps, weak sets, and weak refs.
        
         | palata wrote:
         | > If everybody cared about optimizing for efficiency and
         | performance
         | 
         | The problem is that most developers are not capable of
         | optimizing for efficiency and performance.
         | 
          | Having more powerful hardware has allowed us to make software
          | frameworks/libraries that make programming a lot more
          | accessible, while at the same time lowering the quality of said
          | software.
         | 
         | Doesn't mean that all software is bad. Most software is bad,
         | that's all.
        
       | dyzdyz010 wrote:
       | Make Rustler great again!
        
       | jchw wrote:
       | Haha, I was flabbergasted to see the results of the subprocess
       | approach, incredible. I'm guessing the memory usage being lower
       | for that approach (versus later ones) is because a lot of the
       | heavy lifting is being done in the subprocess which then gets
       | entirely freed once the request is over. Neat.
       | 
       | I have a couple of things I'm wondering about though:
       | 
        | - Node.js is pretty good at IO-bound workloads, but I wonder if
        | this holds up as well when comparing against e.g. Go or PHP. I
        | have run into embarrassing situations where my RiiR adventure
        | ended with less performance than even PHP, which makes some
        | sense: PHP has tons of relatively fast C modules for doing heavy
        | lifting like image processing, so it's not quite so clear-cut.
       | 
       | - The "caveman" approach is a nice one just to show off that it
       | still works, but it obviously has a lot of overhead just because
       | of all of the forking and whatnot. You can do a lot better by not
       | spawning a new process each time. Even a rudimentary approach
       | like having requests and responses stream synchronously and
       | spawning N workers would probably work pretty well. For
       | computationally expensive stuff, this might be a worthwhile
       | approach because it is so relatively simple compared to
       | approaches that reach for native code binding.
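        | A rough sketch of such a long-lived worker on the Rust side
        | (hypothetical, not the article's code): the host spawns it once
        | and streams one request per line to stdin, reading one response
        | per line from stdout.
        | 
        |     use std::io::{self, BufRead, Write};
        | 
        |     fn main() {
        |         let stdin = io::stdin();
        |         let mut stdout = io::stdout().lock();
        |         for line in stdin.lock().lines() {
        |             let request = line.expect("read request");
        |             // Placeholder for real work (e.g. rendering a QR code).
        |             let response = request.to_uppercase();
        |             writeln!(stdout, "{response}").expect("write response");
        |         }
        |     }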
        
         | tln wrote:
         | The native code binding was impressively simple!
         | 
         | 7 lines of rust, 1 small JS change. It looks like napi-rs
         | supports Buffer so that JS change could be easily eliminated
         | too.
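          | Roughly the shape of such an export with napi-rs (a generic
          | sketch assuming the napi and napi-derive crates, not the
          | article's exact code):
          | 
          |     use napi_derive::napi;
          | 
          |     // Built as a cdylib and loaded from Node like a regular
          |     // module; napi-rs generates the N-API glue.
          |     #[napi]
          |     pub fn plus_one(input: u32) -> u32 {
          |         input + 1
          |     }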
        
         | tialaramex wrote:
         | Caveman approach has several nice features - I think I'd be
         | tempted even if it didn't have better performance.
        
       | echelon wrote:
       | Rust is simply amazing to do web backend development in. It's the
       | biggest secret in the world right now. It's why people are
       | writing so many different web frameworks and utilities - it's
       | popular, practical, and growing fast.
       | 
       | Writing Rust for web (Actix, Axum) is no different than writing
       | Go, Jetty, Flask, etc. in terms of developer productivity. It's
       | super easy to write server code in Rust.
       | 
        | Compared to writing Python HTTP backends, the Rust code is so
        | much more defect-free.
       | 
       | I've absorbed 10,000+ qps on a couple of cheap tiny VPS
       | instances. My server bill is practically non-existent and I'm
       | serving up crazy volumes without effort.
        
         | manfre wrote:
         | > I've absorbed 10,000+ qps on a couple of cheap tiny VPS
         | instances.
         | 
         | This metric doesn't convey any meaningful information.
         | Performance metrics need context of the type of work completed
         | and server resources used.
        
         | boredumb wrote:
          | I've been experimenting with using Tide, sqlx and askama, and
          | after getting comfortable, it's even more ergonomic for me than
          | using Go and its template/SQL libraries. Having compile-time
          | checks on SQL and templates is in and of itself a reason to
          | migrate. I think people have a lot of issues with the lifetime
          | scoping, but for most applications it simply isn't something
          | you are explicitly dealing with every day in the way that Rust
          | is often portrayed/feared (and once you fully wrap your head
          | around what it's doing, it's as simple as most other language
          | features).
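          | For anyone who hasn't seen it, the compile-time SQL checking
          | looks roughly like this (a sketch assuming sqlx with the
          | postgres and tokio features, a `users` table with a bigint id,
          | and a DATABASE_URL available at build time for the query!
          | macro):
          | 
          |     use sqlx::postgres::PgPoolOptions;
          | 
          |     #[tokio::main]
          |     async fn main() -> Result<(), sqlx::Error> {
          |         let url = std::env::var("DATABASE_URL").expect("set url");
          |         let pool = PgPoolOptions::new().connect(&url).await?;
          | 
          |         // Column names and types are checked against the real
          |         // schema at compile time; a typo here is a build error.
          |         let row =
          |             sqlx::query!("SELECT id, name FROM users WHERE id = $1", 1_i64)
          |                 .fetch_one(&pool)
          |                 .await?;
          | 
          |         println!("{:?} -> {:?}", row.id, row.name);
          |         Ok(())
          |     }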
        
         | JamesSwift wrote:
         | > Writing Rust for web (Actix, Axum) is no different than
         | writing Go, Jetty, Flask, etc. in terms of developer
         | productivity. It's super easy to write server code in Rust.
         | 
          | I would definitely disagree with this after building a
          | microservice (a URL shortener) in Rust. Rust requires you to
          | rethink your design in unique ways, so that you generally can't
          | do things in the 'dumbest way possible' as your v1. I found
          | myself really having to rework my design-brain to fit Rust's
          | model to please the compiler.
         | 
         | Maybe once that relearning has occurred you can move faster,
         | but it definitely took a lot longer to write an extremely
         | simple service than I would have liked. And scaling that to a
         | full api application would likely be even slower.
         | 
          | Caveat that this was years ago, right when Actix 2 was coming
          | out I believe, so the framework was in a high state of flux in
          | addition to my needing to get my head around Rust itself.
        
           | collinvandyck76 wrote:
           | > Maybe once that relearning has occurred you can move faster
           | 
            | This has been my experience. I have about a year of Rust
            | experience under my belt, working with an existing codebase
            | (~50K LOC). Halfway through this stretch I started writing
            | the toy/throwaway programs I normally write in Rust instead
            | of Go. Hard to say when it clicked, maybe about 7-8 months
            | in, but at some point I stopped struggling with the structure
            | of the program and the fights with the borrow checker, to the
            | point where I don't really have to think about it much
            | anymore.
        
             | guitarbill wrote:
             | I have a similar experience. Was drawn to Rust not because
             | of performance or safety (although it's a big bonus), but
             | because of the tooling and type system. Eventually, it does
             | get easier. I do think that's a poor argument, kind of like
             | a TV show that gets better in season 2. But I can't
             | discount that it's been much nicer to maintain these tools
             | compared to Python. Dependency version updates are much
             | less scary due to actual type checking.
        
         | nesarkvechnep wrote:
         | It will probably never replace Elixir as my favourite web
         | technology. For writing daemons though, it's already my
         | favourite.
        
         | kstrauser wrote:
         | I've written Python APIs since about 2001 or so. A few weeks
         | ago I used Actix to write a small API server. If you squint and
         | don't see the braces, it looks an awful lot like a Flask app.
         | 
         | I had fun writing it, learned some new stuff along the way, and
         | ended up with an API that could serve 80K RPS (according to the
         | venerable ab command) on my laptop with almost no optimization
         | effort. I will absolutely reach for Rust+Actix again for my
         | next project.
         | 
         | (And I found, fixed, and PR'd a bug in a popular rate limiter,
         | so I got to play in the broader Rust ecosystem along the way.
         | It was a fun project!)
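          | For a flavor of that Flask-ish feel, a minimal Actix-web
          | handler looks something like this (a generic sketch, not the
          | API described above):
          | 
          |     use actix_web::{get, web, App, HttpServer, Responder};
          | 
          |     #[get("/hello/{name}")]
          |     async fn hello(name: web::Path<String>) -> impl Responder {
          |         format!("Hello, {}!", name.into_inner())
          |     }
          | 
          |     #[actix_web::main]
          |     async fn main() -> std::io::Result<()> {
          |         HttpServer::new(|| App::new().service(hello))
          |             .bind(("127.0.0.1", 8080))?
          |             .run()
          |             .await
          |     }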
        
         | adamrezich wrote:
         | Disclaimer: I haven't ever written any serious Rust code, and
         | the last time I even tried to use the language was years ago
         | now.
         | 
         | What is it about Rust that makes it so appealing to people to
         | use for web backend development? From what I can tell, one of
         | the selling points of Rust is its borrow checker/lifetime
         | management system. But if you're making a web backend, then you
         | really only need to care about two lifetimes: the lifetime of
         | the program, and the lifetime of a given request/response. If
         | you want to write a web backend in C, then it's not too
         | difficult to set up a simple system that makes a temporary
         | memory arena for each request/response, and, once the response
         | is sent, marks this memory for reuse (and probably zeroes it,
         | for maximum security), instead of freeing it.
         | 
         | Again, I don't really have any experience with Rust whatsoever,
         | but how does the borrow checker/lifetime system help you with
         | this? It seems to me (as a naive, outside observer) that these
         | language features would get in the way more than they would
         | help.
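          | (The arena idea I'm describing is roughly this, sketched in
          | Rust rather than C just for consistency with the thread; a
          | real server would use a proper arena allocator.)
          | 
          |     struct RequestArena {
          |         buf: Vec<u8>,
          |     }
          | 
          |     impl RequestArena {
          |         fn new(capacity: usize) -> Self {
          |             Self { buf: Vec::with_capacity(capacity) }
          |         }
          | 
          |         // Scratch space for the current request/response.
          |         fn scratch(&mut self) -> &mut Vec<u8> {
          |             &mut self.buf
          |         }
          | 
          |         // Once the response is sent: zero and reuse, don't free.
          |         fn reset(&mut self) {
          |             self.buf.iter_mut().for_each(|b| *b = 0);
          |             self.buf.clear();
          |         }
          |     }
          | 
          |     fn main() {
          |         let mut arena = RequestArena::new(64 * 1024);
          |         for request in ["GET /a", "GET /bb"] {
          |             arena.scratch().extend_from_slice(request.as_bytes());
          |             // ... build and send the response from the scratch ...
          |             arena.reset();
          |         }
          |     }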
        
       | voiper1 wrote:
       | Wow, that's an incredible writeup.
       | 
        | Super surprised that shelling out was nearly as good as any
        | other method.
        | 
        | Why is the average bytes smaller? Shouldn't it be the same size
        | file? And if not, it's a different algorithm, so not necessarily
        | better?
        
         | pixelesque wrote:
         | > Why is the average bytes smaller? Shouldn't it be the same
         | size file?
         | 
         | The content being encoded in the PNG was different
         | ("https://www.reddit.com/r/rustjerk/top/?t=all" for the first,
         | "https://youtu.be/cE0wfjsybIQ?t=74" for the second example -
         | not sure whether the benchmark used different things?), so I'd
          | expect the PNG buffer pixels to be different between those two
          | images and thus the compressed image size to be a _bit_
          | different, even if the compression levels of DEFLATE within the
          | PNG were the same.
        
         | xnorswap wrote:
         | That struck me as odd too.
         | 
         | It may be just additional HTTP headers added to the response,
         | but then it's hardly fair to use that as a point of comparison
         | and treat smaller as "better".
        
           | loeg wrote:
           | I think your guess is spot on. The QRcode images themselves
           | are 594 and 577 bytes. The vast majority of the difference
           | must be coming from other factors (HTTP headers).
           | 
           | https://news.ycombinator.com/item?id=41973396
        
             | pretzelhammer wrote:
             | Author here. The benchmarking tool I used for measuring
             | response size was vegeta, which ignores HTTP headers in its
             | measurements. I believe the difference in size is indeed in
             | the QR code images themselves.
        
         | jyap wrote:
         | The article says:
         | 
         | Average response size also halved from 1506 bytes to 778 bytes,
         | the compression algo in the Rust library must be better than
         | the one in the JS library
        
         | loeg wrote:
         | I believe the difference is that the JS version specifies
         | compression strategy 3 (Z_RLE)[0][1], whereas the Rust crate is
         | using the default compression strategy[2]. Both otherwise use
         | the same underlying compression library (deflate aka zlib) and
         | the same compression level (9).
         | 
         | [0]: https://github.com/pretzelhammer/using-rust-in-non-rust-
         | serv...
         | 
         | [1]:
         | https://zlib.net/manual.html#Advanced:~:text=The%20strategy%...
         | 
         | [2]: https://github.com/rust-
         | lang/flate2-rs/blob/1a28821dc116dac1...
         | 
         | Edit: Nevermind. If you look at the actual generated files,
         | they're 594 and 577 bytes respectively. This is mostly HTTP
         | headers.
         | 
         | [3]: https://github.com/pretzelhammer/rust-
         | blog/blob/master/asset...
         | 
         | [4]: https://github.com/pretzelhammer/rust-
         | blog/blob/master/asset...
        
           | pretzelhammer wrote:
           | Author here. I believe I generated both of those images using
           | the Rust lib, they shouldn't be used for comparing the
           | compression performance of the JS lib vs the Rust lib.
        
             | loeg wrote:
             | Interesting, but neither lines up with the size from the
             | benchmarking? You would expect the Rust one to match?
        
               | pretzelhammer wrote:
               | Here's the list of my benchmark targets:
               | https://github.com/pretzelhammer/using-rust-in-non-rust-
               | serv...
               | 
               | Vegeta, the tool I used for benchmarking, iterates
               | through all those targets round-robin style while
               | attacking the server and then averages the results when
               | reporting the average response size in bytes (and it only
               | measures the size of the response body, it doesn't
               | include other things like headers).
               | 
               | Even using the same library and same compression
               | algorithm not all 200px by 200px QR code PNGs will
               | compress to the same size. How well they can be
               | compressed depends a lot on the encoded piece of text as
               | that determines the visual complexity of the generated QR
               | code.
        
               | loeg wrote:
               | I see. I misread the article as implying that only the
               | specified URLs were being benchmarked.
        
       | bdahz wrote:
       | I'm curious what if we replace Rust with C/C++ in those tiers.
       | Would the results be even better or worse than Rust?
        
         | Imustaskforhelp wrote:
          | Also maybe worth checking out Bun's FFI; I have heard they
          | recently added their own compiler.
        
         | znpy wrote:
         | It should be pretty much the same.
         | 
          | The article is mostly about exemplifying the various levels of
          | optimisation you can get by moving "hot code paths" to native
          | code (irrespective of whether you write that code in
          | Rust/C++/C).
          | 
          | Worth noting that if you're optimising for memory usage, Rust
          | (or some other native code) might not help you very much unless
          | you throw away your whole codebase, which might not always be
          | feasible.
        
       | lsofzz wrote:
       | <3
        
       | Dowwie wrote:
       | Beware the risks of using NIFs with Elixir. They run in the same
       | memory space as the BEAM and can crash not just the process but
       | the entire BEAM. Granted, well-written, safe Rust could lower the
       | chances of this happening, but you need to consider the risk.
        
         | mijoharas wrote:
         | I believe that by using rustler[0] to build the bindings that
         | shouldn't be possible. (at the very least that's stated in the
         | readme.)
         | 
         | > Safety : The code you write in a Rust NIF should never be
         | able to crash the BEAM.
         | 
          | I tried to find some documentation stating how it works but
          | couldn't. I think they use a dirty scheduler and catch panics
          | at the boundaries or something? I wasn't able to find a clear
          | reference.
         | 
         | [0] https://github.com/rusterlium/rustler
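          | For reference, the general pattern for keeping a panic from
          | crossing an FFI boundary looks roughly like this (a sketch of
          | the idea only, not necessarily how rustler implements it):
          | 
          |     use std::panic;
          | 
          |     #[no_mangle]
          |     pub extern "C" fn do_work(x: i32) -> i32 {
          |         let result = panic::catch_unwind(|| {
          |             // The real NIF body would go here.
          |             if x < 0 {
          |                 panic!("negative input");
          |             }
          |             x * 2
          |         });
          |         // Map a panic to an error value instead of unwinding
          |         // into the host VM and taking it down.
          |         result.unwrap_or(-1)
          |     }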
        
       | pjmlp wrote:
       | And so what we were doing with Apache, mod_<pick your lang> and C
       | back in 2000, is new again.
       | 
       | At least with Rust it is safer.
        
       | ports543u wrote:
       | While I agree the enhancement is significant, the title of this
       | post makes it seem more like an advertisement for Rust than an
       | optimization article. If you rewrite js code into a native
       | language, be it Rust or C, of course it's gonna be faster and use
       | less resources.
        
         | baq wrote:
         | 'of course' is not really that obvious except for
         | microbenchmarks like this one.
        
           | ports543u wrote:
            | I think it is pretty obvious. Native languages are expected
            | to be faster than interpreted, JITted, or automatic-memory-
            | management languages in 99.9% of cases, since in those
            | languages the programmer has far less control over the
            | operations the processor is doing or the memory it is copying
            | or using.
        
         | mplanchard wrote:
         | Is there an equivalently easy way to expose a native interface
         | from C to JS as the example in the post? Relatedly, is it as
         | easy to generate a QR code in C as it is in Rust (11 LoC)?
        
           | ports543u wrote:
           | > Is there an equivalently easy way to expose a native
           | interface from C to JS as the example in the post?
           | 
           | Yes, for most languages. For example, in Zig
           | (https://ziglang.org/documentation/master/#WebAssembly) or in
           | C (https://developer.mozilla.org/en-
           | US/docs/WebAssembly/C_to_Wa...)
           | 
           | > Relatedly, is it as easy to generate a QR code in C as it
           | is in Rust (11 LoC)?
           | 
           | Yes, there are plenty of easy to use QR-code libraries
           | available, for pretty much every relevant language. Buffer
           | in, buffer out.
        
           | AndrewDucker wrote:
           | It's that simple in Rust because it's using a library. C also
           | has libraries for generating QR codes:
           | https://github.com/ricmoo/QRCode
           | 
           | (Obviously there are other advantages to Rust)
        
             | mplanchard wrote:
             | nice, thanks for the link!
        
       | djoldman wrote:
       | Not trying to be snarky, but for this example, if we can compile
       | to wasm, why not have the client compute this locally?
       | 
       | This would entail zero network hops, probably 100,000+ QRs per
       | second.
       | 
       | IF it is 100,000+ QRs per second, isn't most of the thing we're
       | measuring here dominated by network calls?
        
         | munificent wrote:
         | It's a synthetic example to conjure up something CPU bound on
         | the server.
        
         | jeroenhd wrote:
         | WASM blobs for programs like these can easily turn into
         | megabytes of difficult to compress binary blobs once transitive
         | dependencies start getting pulled in. That can mean seconds of
         | extra load time to generate an image that can be represented by
         | maybe a kilobyte in size.
         | 
         | Not a bad idea for an internal office network where every
         | computer is hooked up with a gigabit or better, but not great
         | for cloud hosted web applications.
        
         | nemetroid wrote:
         | The fastest code in the article has an average latency of 14
         | ms, benchmarking against localhost. On my computer, "ping
         | localhost" has an average latency of 20 us. I don't have a lot
         | of experience writing network services, but those numbers sound
         | CPU bound to me.
        
       | demarq wrote:
        | I didn't realize calling out to the CLI was that fast.
        
       | jinnko wrote:
       | I'm curious how many cores the server the tests ran on had, and
       | what the performance would be of handling the requests in native
       | node with worker threads[1]? I suspect there's an aspect of being
       | tied to a single main thread that explains the difference at
       | least between tier 0 and 1.
       | 
       | 1: https://nodejs.org/api/worker_threads.html
        
         | pretzelhammer wrote:
         | As the article mentions, the test server had 12 cores. The
         | Node.js server ran in "cluster mode" so that all 12 cores were
         | utilized during benchmarking. You can see the implementation
         | here (just ~20 lines of JS):
         | https://github.com/pretzelhammer/using-rust-in-non-rust-serv...
        
         | tialaramex wrote:
         | Doesn't "the 12 CPU cores on my test machine" answer your
         | question ?
        
       | Already__Taken wrote:
        | Shelling out to a CLI is quite an interesting path, because often
        | that functionality could usefully be handed out as a separate
        | utility to power users or for non-automation tasks. Rust makes
        | cross-platform distribution easy.
        
       ___________________________________________________________________
       (page generated 2024-10-28 23:00 UTC)