[HN Gopher] When Rust hurts
       ___________________________________________________________________
        
       When Rust hurts
        
       Author : nequo
       Score  : 250 points
       Date   : 2023-02-15 13:18 UTC (9 hours ago)
        
 (HTM) web link (mmapped.blog)
 (TXT) w3m dump (mmapped.blog)
        
       | dcow wrote:
       | I really resonate with the async complaints in the article. Rust
       | concurrency is _not_ fearless. I fear writing async code every
       | time I have to do it. I have to think so much harder, which
       | ultimately means more opportunity for errors.
       | 
       | Even though Rust does slightly skirt the colored function issue,
       | my code still bifurcates because truly correct async code needs
       | to use async variants of the standard lib for file access.
       | Knowing when something you call blocks, or grabs a lock, etc. is
       | something which the compiler doesn't help you with but should.
       | And then how the hell are you supposed to use any code sanely
       | when the entire community is using different async runtimes? Why
       | can't the runtime be part of the core language with hooks into
       | parts that might need to be customized in hyper special cases?
       | 
       | It really feels like we lost the steam behind async Rust and
       | we've got this partially complete thing that's neither high level
       | enough to be convenient nor low level enough to be fearlessly
       | correct. Async traits still aren't a thing and it's been _years_.
        
         | brightball wrote:
         | One thing that drew me to Rust was actually Elixir/Erlang
         | calling out to it for certain specialized needs. Within
         | Elixir/Erlang you get best of breed concurrency but exiting the
         | BEAM to run other code is unsafe. Calling out to Rust, however,
         | comes with great safety guarantees.
         | 
         | Managing concurrency outside of Rust and then calling Rust for
         | the more focused and specialized work is a good combination
         | IMO.
         | 
         | https://github.com/rusterlium/rustler
        
           | rapnie wrote:
           | New to Rust, but found this project via Elixir forum that
           | looks intriguing: https://lunatic.solutions
        
             | codethief wrote:
             | This does look intriguing! That being said:
             | 
             | > Lunatic's design is all about super lightweight
             | processes. Processes are fast to create, have a small
             | memory footprint and a low scheduling overhead. They are
             | designed for massive concurrency.
             | 
             | > Lunatic processes are completely isolated, with a per-
             | process stack and heap. If a process crashes, it won't
             | affect others in the runtime.
             | 
             | I don't understand. Is each Lunatic process an OS process
             | or not? If it is, it sounds anything but lightweight. If it
             | is not, it is not "completely isolated", at least not in
             | the usual sense of the word.
        
               | dcow wrote:
               | > If it is not, it is not "completely isolated", at least
               | not in the usual sense of the word.
               | 
               | Well that's where lunatic comes in. Lunatic provides the
               | isolation and abstraction so you can run various wasm
               | binaries side by side. If one of those binaries
               | segfaults, lunatic presumably catches the interrupt and
               | kills the corresponding wasm instance that caused it
               | rather than just giving up and crashing.
        
         | puffoflogic wrote:
         | Rust's "fearless concurrency" always referred to traditional
         | thread-based concurrency, which works very well for 99% of
         | usecases. Async is something that was bolted on to the language
         | long afterwards by folks coming from the JS/Node community who
         | didn't understand threads or the value of fearless concurrency.
        
         | necubi wrote:
         | It took me about a month of writing async Rust (coming from a
         | solid sync rust background) to feel comfortable. In that first
         | month there were definitely a lot of inscrutable issues
         | (fortunately the tokio discord was very helpful).
         | 
         | But now it feels productive, and if you're totally in the async
         | world, it all works pretty well.
         | 
         | The lack of async traits is my main pain point today, but you
         | can mostly work around that with the async_trait crate.
         | Generally it's been slow progress on usability but I think it's
         | in a much better place than it was a year ago, and will clearly
         | be better in another year.
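The async_trait workaround mentioned above is, roughly, sugar for returning boxed futures from ordinary trait methods. A minimal std-only sketch of that desugaring (Fetch/Memory/NoopWake are made-up names, and this is an approximation, not the crate's exact expansion):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// An "async" trait method written by hand: it returns a boxed future
// instead of using the (unsupported) `async fn` in traits.
trait Fetch {
    fn fetch(&self, key: &str) -> Pin<Box<dyn Future<Output = String> + Send + '_>>;
}

struct Memory;

impl Fetch for Memory {
    fn fetch(&self, key: &str) -> Pin<Box<dyn Future<Output = String> + Send + '_>> {
        let key = key.to_owned();
        Box::pin(async move { format!("value-for-{key}") })
    }
}

// A do-nothing waker, just enough to hand-poll a future that never suspends.
struct NoopWake;
impl Wake for NoopWake {
    fn wake(self: Arc<Self>) {}
}

fn main() {
    let m = Memory;
    let waker = Waker::from(Arc::new(NoopWake));
    let mut cx = Context::from_waker(&waker);
    let mut fut = m.fetch("k");
    // This future never awaits anything, so a single poll resolves it.
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => println!("{v}"),
        Poll::Pending => unreachable!(),
    }
}
```

The boxing is exactly the runtime cost the crate imposes and native async traits aim to remove.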
        
         | duped wrote:
         | > And then how the hell are you supposed to use any code sanely
         | when the entire community is using different async runtimes
         | 
         | It's not that big of a deal in practice. You pick a runtime.
         | Most of the library ecosystem doesn't care about the async
         | runtime used, and that which does uses tokio.
         | 
         | > Why can't the runtime be part of the core language with hooks
         | into parts that might need to be customized in hyper special
         | cases?
         | 
         | Because Rust ships a minimal runtime.
         | 
         | > Async traits still aren't a thing and it's been years.
         | 
         | They were blocked on generic associated types which landed
         | recently, and will land in stable in the near future. For
         | everything else there's `#[async_trait]`.
         | 
         | Also, "fearless concurrency" is a lower level kind of
         | "fearless" than "how do I use an async API." It's about memory
         | safety and program soundness in the presence of concurrent
         | execution. The fact that the only fears we have are about the
         | ecosystem and APIs is a testament to how Rust eliminated the
         | _deep_ fear of concurrent execution.
         | 
         | And if we're going to take a shot at async, let's take a shot
         | at how fucking hard it is to implement a correct async executor
         | and how bad the docs are. std Rust does next to nothing to
         | abstract or make your life easier.
        
           | dcow wrote:
           | I use Rust in practice. We use async_std. Someone recently
           | added reqwest. We now have two async runtimes. It's a
           | problem.
           | 
           | "Minimal runtime" is semantics. Rust shipped async; IMO
           | that expands the definition of what is minimal for Rust to
           | include in the runtime. I'm 100% certain no async machinery
           | would need to make it into programs that don't use async,
           | so the surface area of the runtime need not change unduly
           | for anybody if Rust shipped an async runtime.
           | 
           | I'm not saying Rust hasn't made async things better. It has.
           | But it's still really messy. It doesn't give me the same
           | peace of mind that Rust does around memory safety.
           | 
           | And that's my gripe. I admit it's a rather privileged one.
           | But that's Rust's fault. Rust has shown me how nice it is to
           | fight with the compiler till I want to scream if it means my
           | program isn't going to have dangling pointers everywhere. I
           | am a lazy dev and want the same for my concurrent code. Rust
           | currently doesn't deliver on that promise, IMO. And it surely
           | could with some continued work and slight philosophy
           | adjustment.
           | 
           | It's literally incorrect to use parts of Rust's stdlib from
           | async code. You can't tell me that's a good thing. C.f. Swift
           | where the async support is thoroughly cooked into the
           | language, runtime, and stdlib. Rust can do better here.
        
             | jamincan wrote:
             | FWIW, these are items that they are working to improve with
             | things like keyword generics and defining a clearer async
             | api so that it will be possible to swap out executors in
             | the future.
        
             | estebank wrote:
             | > We now have two async runtimes. It's a problem.
             | 
             | I agree that this _is_ a problem, which is why  "providing
             | an abstraction layer for runtimes" is something that comes
             | up often as a desired project, but if you zoom out I
             | believe the choice to not have a baked in runtime was the
             | right call. It allowed schedulers to experiment with their
             | APIs and internals in a way that anything in the stdlib
             | can't. It allowed for the existence of schedulers for
             | embedded devices. You can write a keyboard firmware using
             | async/await to deal with IO events! How cool is _that_?
             | 
             | Granted, if you're working in a single "vertical", like web
             | services, this sounds like "useless gold plating", but
             | precluding use cases like that one would go counter to
             | Rust's goals. If Rust didn't care about the language
             | scaling from embedded to HPC, then you could indeed have
             | "Easy Rust" that erases a bunch of sharp edges of the
             | language.
             | 
             | > Rust can do better here.
             | 
             | The team agrees. We're working on it. It was a _major_
             | area of focus in 2021, and wg-async still exists and is
             | making progress. Some things take time to finish because
             | there's a lot to do, not enough people, and they are just
             | plain hard.
        
               | dcow wrote:
               | If Rust never wants to ship an async runtime, then fine.
               | But in place of that there needs to be a clear standard
               | async API that _everything_ uses so that as a user I can
               | just bring my own runtime as I please and not worry about
               | dependencies shipping their own runtimes. The problem
               | right now is that the api is provided by the runtime so
               | everything ends up depending on some flavor of async
               | standard functionality. As part of this async API, things
               | like async variants of functions that block (in the
               | commonly accepted definition of blocking, i.e.  "sleep or
               | do no work while waiting for an OS interrupt to wake
               | them, so locks and I/O") need to be clearly defined as
               | such so that the compiler can reassure me that if the
               | code compiles then I'm not calling any blocking functions
               | inappropriately. Part of that I'd think would also be an
               | attribute or syntax that would be attached to the
               | existing blocking stdlib functions so that calling code
               | that eventually calls them from an async context could be
               | flagged as an issue at compile time. These two things
               | would solve a lot of the pain I'm telegraphing.
               | 
               | Anyway, I don't keep up day-to-day on Rust happenings.
               | I'm just a user of the language who sometimes needs to
               | dive into the issue tracker when I run into bumps. If all
               | this stuff is being worked on then awesome, simply take
               | my critique as validation that you're working on the
               | right stuff. At the end of the day, though, my experience
               | and feedback is driven by what's currently available to
               | me, not what's in the works but hasn't shipped yet. And
               | part of my critique is that it takes quite a while for
               | Rust to move on issues that really only exist because
               | something was shipped in a partially completed state.
               | Perhaps the community could be more aggressive with the
               | notion of a version 2.0 and that would unlock changes
               | that are really difficult and annoying and time consuming
               | to make in a compatible way but would be much more
               | palatable if allowed to break a few things here and
               | there. It's great that we have nightly Rust, but perhaps
               | it's about time for a rust-next where people can start
               | staging breaking changes.
        
         | bjackman wrote:
         | There were some popular comments on HN recently saying that
         | manual threading has got much easier in Rust lately and it's
         | now easier to just not use async.
         | 
         | I have no idea about this myself, it's just a bit of public
         | opinion that felt worth noting in the back of my mind!
        
           | vore wrote:
           | Unfortunately async is the only straightforward way you can
           | do cancellation so there's still a lot of scenarios where you
           | can't just use threading easily.
           | 
           | I'm not sure why people are disagreeing with this: how would
           | you cancel UdpSocket::recv without an async runtime? If
           | you're using just the sync UdpSocket, you will need to go set
           | a read timeout and repeatedly poll and insert your own
           | cancellation points, or otherwise jury-rig your own mechanism
           | up with syscalls to select/epoll/whatever, and be careful the
           | whole time you're not leaking anything. However, if you're
           | using an async runtime, you only have to drop the future and
           | the recv is cancelled and the FD is dropped from the
           | scheduler's waitset and everyone is happy.
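The sync workaround described above can be sketched with std alone: a read timeout plus a cancellation flag checked between attempts (recv_cancellable and the timings are illustrative, not a standard API):

```rust
use std::io::ErrorKind;
use std::net::UdpSocket;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

// Hand-rolled cancellable receive: poll with a short timeout and check
// a shared flag on every iteration. This is the bookkeeping that
// dropping a future does for free under an async runtime.
fn recv_cancellable(sock: &UdpSocket, cancel: &AtomicBool) -> Option<Vec<u8>> {
    sock.set_read_timeout(Some(Duration::from_millis(10))).ok()?;
    let mut buf = [0u8; 1500];
    loop {
        if cancel.load(Ordering::Relaxed) {
            return None; // our manually inserted cancellation point
        }
        match sock.recv_from(&mut buf) {
            Ok((n, _src)) => return Some(buf[..n].to_vec()),
            // The timeout surfaces as WouldBlock (Unix) or TimedOut (Windows).
            Err(e) if matches!(e.kind(), ErrorKind::WouldBlock | ErrorKind::TimedOut) => continue,
            Err(_) => return None,
        }
    }
}

fn main() {
    let sock = UdpSocket::bind("127.0.0.1:0").unwrap();
    let cancel = Arc::new(AtomicBool::new(false));
    let flag = cancel.clone();
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(30));
        flag.store(true, Ordering::Relaxed);
    });
    assert_eq!(recv_cancellable(&sock, &cancel), None);
    println!("recv cancelled cleanly");
}
```

Note the busy polling and the per-call timeout tuning: exactly the jury-rigging the comment describes.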
        
           | wongarsu wrote:
           | Imho this has always been true. Multithreading is very mature
           | in rust, and generally a pleasure to work with (especially if
           | you lean towards channels over mutexes). Async is still a bit
           | tacked on and has a number of footguns and UX problems.
           | 
           | The main reason to use async in rust is that seemingly every
           | network library uses async. And of course servers that would
           | have to start a huge number of threads without async, though
           | from a performance standpoint many servers could honestly
           | live without it.
        
             | brundolf wrote:
             | > The main reason to use async in rust is that seemingly
             | every network library uses async
             | 
             | Isn't it easy though to use an async function as a blocking
             | function, by polling in a loop? (though maybe that still
             | requires an async runtime?)
        
               | [deleted]
        
             | sliken wrote:
             | I'm coming from go, what did you mean by "if you lean
             | towards channels"? Did you mean std::sync::mpsc? Seems like
             | a rather crippled channel if you can only have a single
             | consumer.
        
               | fnord123 wrote:
               | Crossbeam channels are mpmc.
               | 
               | https://docs.rs/crossbeam/latest/crossbeam/channel/index.
               | htm...
        
               | necubi wrote:
               | Crossbeam (https://docs.rs/crossbeam/latest/crossbeam/cha
               | nnel/index.htm...) provides a high performance and
               | ergonomic MPMC channel. As of the latest release of Rust
               | it's now the basis for the MPSC channel in the stdlib.
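To make the multi-producer part concrete: std's `Sender` is `Clone`, so many threads can send; the single `Receiver` is the only restriction. A small sketch:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Multi-producer: each worker gets its own cloned Sender.
    let (tx, rx) = mpsc::channel();
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let tx = tx.clone();
            thread::spawn(move || tx.send(i).unwrap())
        })
        .collect();
    // Drop the original sender so the channel closes once workers finish.
    drop(tx);
    for h in handles {
        h.join().unwrap();
    }
    // Single consumer: rx.iter() ends when all senders are gone.
    let mut got: Vec<i32> = rx.iter().collect();
    got.sort();
    assert_eq!(got, vec![0, 1, 2, 3]);
    println!("{got:?}");
}
```

For multiple consumers you do need crossbeam's MPMC channel (or an `Arc<Mutex<Receiver>>` workaround).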
        
             | 2h wrote:
             | > seemingly every network library uses async
             | 
             | still some options that don't require async:
             | 
             | attohttpc
             | 
             | oxhttp
             | 
             | ureq
        
             | Animats wrote:
             | _The main reason to use async in rust is that seemingly
             | every network library uses async._
             | 
             | Yes. I was calling that "async contamination" a few years
             | back when that trend started. There's a strong lobby for
             | doing web backend stuff in Rust, and they want it to work
             | like Javascript.
             | 
             | On the thread side, deadlocks can be a problem. You don't
             | get any help from the language in keeping your lock
             | hierarchy consistent. There are systems that can detect at
             | compile time that one path locks A before B, and another
             | path locks B before A. Rust needs such analysis.
             | 
              | It turns out that the built-in mutexes are not "fair".
             | Actually, they're worse than not fair. If you have two
             | thread loops that do "get item, lock, handle item, unlock",
             | one of them can be starved out completely. This happens
             | even though there's a moment when the mutex is unlocked and
             | something is waiting on it. It's a consequence of an
             | optimization in std::sync::Mutex for the fast case.
             | 
             | The "parking_lot" crate has fair mutexes, but it doesn't
             | have panic poisoning and is not unwind safe, so catching
             | failed threads to get a clean shutdown is hard. This is an
             | issue for GUI programs, because failing to catch panics
             | means the user experience of a failure is that the window
             | closes with no message.
        
               | jerf wrote:
               | "On the thread side, deadlocks can be a problem. You
               | don't get any help from the language in keeping your lock
               | hierarchy consistent."
               | 
               | The 98% solution to this problem is to dodge it entirely.
               | Communicate with things like channels, mailboxes, and
               | actors, or other higher-level primitives. This is how Go
               | programs hold together in general despite not even having
               | as much support as Rust does for this sort of analysis.
               | 
                | The rule I've given to my teams is: never take more than
                | one lock. As soon as you think you need to, move to an
                | actor-owned resource or something. The "real" rule (as
               | I'm sure you know, but for others) with multiple locks is
               | "always take them in the same deterministic order" (and
                | even _that's_ a summary, it really also ought to discuss
               | how only mutexes that can ever be taken at the same time
               | have to be considered in order, and heck for all I know
               | it gets complicated beyond that too; at this point I bug
               | out entirely) but this is, in my opinion, the true source
               | of "thread hell" as written about in the 90s. I don't
               | think it was threading that was intrinsically the
               | problem; yes, threads are more complicated than a single
                | thread, but they're manageable with some reasonable guard
               | rails. What was _insane_ was trying to do everything with
               | locks.
               | 
               | In theory, this rule breaks down at some point because of
               | some transactionality need or another; in practice, I've
               | gotten very far with it between a combination of being
               | able to move transactionality into a single actor in the
               | program or offload it to a transactionally-safe external
               | DB. For which, much thanks to the wizards who program
               | those, test those, and make them just a primitive I can
               | dip into when I need it. Life savers, those folk.
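The actor-owned-resource rule above can be sketched in a few lines of std Rust: one thread owns the state outright, everyone else talks to it over a channel, and no caller ever takes a lock (Msg and the field names are illustrative):

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Messages understood by the actor that owns the map.
enum Msg {
    Put(String, u64),
    Get(String, mpsc::Sender<Option<u64>>),
}

fn main() {
    let (tx, rx) = mpsc::channel::<Msg>();

    // The actor thread is the sole owner of `state`; there is no
    // Mutex anywhere, hence no lock ordering to get wrong.
    let actor = thread::spawn(move || {
        let mut state = HashMap::new();
        for msg in rx {
            match msg {
                Msg::Put(k, v) => {
                    state.insert(k, v);
                }
                Msg::Get(k, reply) => {
                    let _ = reply.send(state.get(&k).copied());
                }
            }
        }
    });

    tx.send(Msg::Put("hits".into(), 7)).unwrap();
    let (rtx, rrx) = mpsc::channel();
    tx.send(Msg::Get("hits".into(), rtx)).unwrap();
    assert_eq!(rrx.recv().unwrap(), Some(7));

    // Dropping the last sender closes the channel and lets the actor exit.
    drop(tx);
    actor.join().unwrap();
}
```

Transactionality falls out naturally too: each message is handled to completion before the next one starts.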
        
               | Animats wrote:
               | Yes, I might have been better off with an actor model.
               | I'm writing a high-performance metaverse client, which
               | has both the problems of a MMO game and of a web browser.
               | Must maintain the frame rate, while content you don't
               | control floods in.
        
           | edflsafoiewq wrote:
           | What's changed to make it easier?
        
             | jamincan wrote:
             | They are probably referring to the introduction of scoped
             | threads last year.
        
             | CUViper wrote:
             | Probably the addition of `thread::scope`.
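`thread::scope` (stable since Rust 1.63) is likely what made manual threading feel easier: scoped threads can borrow local data directly, because the scope guarantees they join before the borrows expire. A small sketch:

```rust
use std::thread;

fn main() {
    let words = vec!["fearless", "concurrency"];
    let mut lengths = vec![0; words.len()];

    // Each spawned thread borrows from `words` and writes into a
    // disjoint slot of `lengths` -- no Arc, no Mutex, no 'static bound.
    thread::scope(|s| {
        for (word, len) in words.iter().zip(lengths.iter_mut()) {
            s.spawn(move || *len = word.len());
        }
    }); // all threads are joined here, before the borrows end

    assert_eq!(lengths, vec![8, 11]);
    println!("{lengths:?}");
}
```

Before 1.63 the same code needed `Arc` plus either channels or a crate like crossbeam's scoped threads.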
        
         | kouteiheika wrote:
         | > And then how the hell are you supposed to use any code sanely
         | when the entire community is using different async runtimes?
         | 
         | Honestly, and this might be a hot take, but I think that's easy
         | to solve. Just pretend that tokio is in the standard library
         | and use it. And refuse to support anything else. Tokio is
         | stable, fast, well maintained and supported. There's really no
         | reason to use anything else in 98% of cases.
        
           | dcow wrote:
           | The problem is that this.. preference.. isn't made clear or
           | justified anywhere. I use `async_std` because its value prop
           | is "the stdlib but for async". I don't want some weird thing
           | called `tokio` in my codebase with extra bells and whistles I
            | don't need. async-std is a minimal, simple, fast async
            | runtime for Rust. I am aligned with their vision and love
            | using it.
           | async-std works flawlessly and only really lacks more
           | comprehensive stream support which I augment with the futures
           | crate as is pretty standard.
           | 
           | If yall want everyone to use Tokio then tell me why it's
           | better than async_std. Tell me why it's the de-facto
           | community standard. Because "popular web framework" uses it?
           | Why are all the cool kids using it? Because the others are?
           | 
           | No. Either I want some group with authority to bless a core
           | async runtime implementation that aligns with Rust's
           | philosophy and value prop and bring it under their wing so
           | that it can be made to serve the goals of the project which
           | might include being configurable enough to deploy in all of
           | the myriad scenarios and environments where people use Rust
           | today.
           | 
           | Or in lieu of that, provide the necessary abstraction layer
           | so everyone can bring their own executor like it's supposed
           | be.
           | 
           | The entire premise of Rust not providing a blessed
           | implementation is so that people could pick and choose based
           | on their preferences and use cases. That's the argument. The
           | reality is that the goal is not realized because Rust has not
           | done the work to enable everyone to simply bring their own
            | runtime and call it a day. So Rust has not delivered on the
           | core reason for not shipping a runtime in the first place.
           | 
           | The solution of "just use a de-facto blessed runtime" really
           | seems like a concession if Rust's argument for not having one
           | is to be respected.
        
             | hdevalence wrote:
              | > If yall want everyone to use Tokio then tell me why ...
              | it's the de-facto community standard.
              | 
              | > I want some group with authority to bless a core async
              | runtime implementation...
             | 
             | Hmm, if Tokio is the de-facto community standard, isn't
             | that saying that the authority of "people who write and run
             | async code for actual production systems" is blessing it as
             | a preferred option?
             | 
             | > The reality is that the goal is not realized because Rust
             | has not done the work to enable everyone to simply bring
             | their own runtime
             | 
             | Or, another possibility is that it hasn't been realized
             | because building a robust, production-ready, well-
             | maintained runtime is itself a lot of work, and Tokio is
             | the only project that has actually done it.
        
         | someusername123 wrote:
          | We actively ban Rust's async/await syntax from our codebase
          | for a few reasons, though primarily it's been an issue with
          | its lack of maturity. It feels incredibly painful to use in
          | all but the most trivial cases. However, even if it were to
          | be much more user friendly, it still would be the wrong
          | model for most of our code, since we care significantly more
          | about parallelism than about concurrency.
          | 
          | Having used async/await in production a bit, I wouldn't
          | really recommend it to anybody: the async runtimes are
          | typically large dependencies that are difficult to replace
          | with your own, understand, or debug. Code built on top of
          | them has numerous performance issues that I uncovered when
          | running it through its paces. Much as I'm a fan of Rust, a
          | fan of async/await I am not.
        
           | dcow wrote:
            | Are there any web server frameworks built on the now-
            | standardized mpsc concurrency model?
        
         | adql wrote:
          | I really feel this is just inherent to the async/await
          | model; you can only put so much lipstick on a pig.
          | 
          | Light threads + messaging (whether proper like Erlang or
          | bastardized like Go) feels like a far superior model for
          | concurrency in near-all cases, and also easier to write and
          | reason about (a series of "functions" sending messages looks
          | very similar to serial code), but I'd imagine it would be
          | very hard to make convenient in a GC-less language.
        
           | Verdex wrote:
            | > I really feel this is just inherent to the async/await
            | model; you can only put so much lipstick on a pig.
           | 
           | I've been coming to the same conclusion after working with C#
           | async/await significantly in the last few years. I suspect
           | that async/await in javascript was better than callbacks, so
           | it entered the industry zeitgeist as absolutely good as
           | opposed to relatively good.
        
         | jakear wrote:
         | > Even though Rust does slightly skirt the colored function
         | issue [...] knowing when something you call blocks, or grabs a
         | lock, etc. is something which the compiler doesn't help you
         | with but should.
         | 
         | Are not colored functions _precisely_ the compiler helping you
         | identify when the code has the potential to block? If I write a
            | block of JS in a non-async function context, I can be 100%
            | guaranteed that no other task will interfere with my
            | function's execution. If I write a block of JS in an async
            | context, I can be 100% guaranteed that the _only_ times
            | when any other task may interfere with my function's
            | execution are explicitly marked with `await`. I love this!
            | I can't imagine writing
         | serious code without it (barring some go-like messaging model),
         | and I have no idea why people hate it so.
        
           | adql wrote:
            | The point is that having to think about the color of a
            | function is a disadvantage and needless complexity.
            | 
            | > If I write a block of JS in an async context, I can be 100%
            | guaranteed that the only times when any other task may
            | interfere with my function's execution are explicitly
            | marked with `await`. I love this!
            | 
            | You only need to think about it because the runtime isn't
            | parallel. If you want ordering in execution you should be
            | using synchronization primitives anyway.
            | 
            | In Go, doing "async" via channels can start running the
            | code the instant you instantiate it, on however many cores
            | are available. Go has the problem of requiring a bunch of
            | boilerplate (as there are no macros), but with generics it
            | is now manageable and I can just say "here is a function,
            | run it on all elements of the slice in parallel" very
            | easily, and it's all pretty reasonable, like "out =
            | ParallelMapSlice(mapFunc, 8, in)".
            | 
            | All done async without thinking about async.
        
           | dcow wrote:
           | I think you misunderstand my point slightly. Here's the
           | difference: in JS everything that needs to be async is
           | actually marked async. So when you use a browser api or
           | library function you know for certain. I agree I like that
           | and want it for Rust. In Rust the standard lib is not async
            | aware so it can't be used safely from async code, period. And
           | tons of Rust code uses the stdlib and it's not super
           | practical to just not use it so you're stuck asking whether
           | something is safe to call from async code or not with zero
           | help from the compiler. So we agree not having the compiler
           | do this sucks.
           | 
           | (What I meant by partially skirting the colored function
           | issue is that you don't have to call await on the result of a
            | function if you don't want to. So you can call a blue function
           | from a red function just fine and do stuff with the result.
           | You just can't await it. This adds back lots of flexibility
           | that some async impls in other languages don't allow for.)
        
             | jakear wrote:
             | > you can call a blue function from a red function just
             | fine and do stuff with the result. You just can't await it
             | 
             | By this do you mean you can choose between waiting for a
             | value in a way that halts the entire rest of the runtime
             | while it resolves and one that will attempt to perform
             | other tasks while waiting for the resolution?
        
               | dcow wrote:
               | Essentially yes.
               | 
               | Any time you call an async function you get a Future. The
               | future does not resolve until you await it. So you can
               | call an async function from a non-async function normally
                | and do things with the future. You just can't call
                | future.await to yield until the result is ready unless
               | you're in an async context. You can ask the runtime to
               | resolve the future if you want, but that's a blocking
               | call so it's up to you to make sure it's safe to block.
        
               | jakear wrote:
               | Oh, well that's exactly the same as JS. You can await it
               | to get a value but you don't have to, you could instead
               | fire-and-forget, batch it up over time and await them all
               | later, or attach a function to run with the resolved
               | value whenever it resolves while letting the current
               | execution move along unhampered. If you don't want to
               | `await` it, your function stays "red" or whatever.
        
               | ithkuil wrote:
               | The main difference between rust futures and JS promises
               | is that rust futures don't get scheduled if you don't
                | poll them (which is what await does under the hood).
                | Promises otoh get to run immediately and can run to
                | completion even if you never touch the promise again.
        
               | tijsvd wrote:
               | You get the same with e.g. tokio::spawn, which runs the
               | future concurrently and returns something that you can
               | await and get the future's output. Or you can forget that
               | something and the future will still run to completion.
               | 
               | Directly awaiting a future gives you more control, in a
               | sense, as you can defer things until they're actually
               | needed.
        
           | mike_hearn wrote:
           | Some downsides.
           | 
           | The term blocking isn't well defined. How slow does an
           | operation have to be to be considered blocking? There are no
           | rules. Fetching google.com/robots.txt is by universal
           | agreement a blocking operation. Reading a file from the
           | filesystem is considered blocking in JS but may not be in
           | other languages, especially given that very fast disks like
           | Optane or modern NVMe Flash can be only an order of magnitude
           | or two slower than RAM. Taking a lock may or may not be
           | considered blocking. Computing pi to a trillion digits would
            | usually _not_ be considered blocking because it doesn't make
           | any async OS API calls, even though it might cause long
           | delays before it returns, but if the author chose to
           | implement it as an async iterator then it could be considered
           | blocking.
           | 
           | Worse, async vs not async is about latency requirements of
            | the _caller_, but the function _author_ can't know that. If
           | you're in a UI thread doing 60fps animation maybe even
           | reading a file would be too slow, but if you're in a script
           | then it doesn't matter at all and you'd rather have the
           | simpler code. You're calling the same function, the only
           | thing that changed is the performance requirements of the
           | user.
           | 
           | So "potential to block" is to some extent a makework problem
           | that occurs in JS because V8 isn't thread safe, and V8 isn't
           | thread safe because JS isn't, because it was designed for the
           | web which is a kind of UI toolkit. Such toolkits are almost
           | never thread safe because it's too much hassle to program a
           | thread safe renderer and would make it too slow, so that
           | browser-specific implementation constraint propagates all
           | over the JS ecosystem.
           | 
           | Final _major_ problem: some languages use async hype as an
            | excuse to not properly support threads at all, but that's
           | often the only way to fully exploit the capabilities of
           | multicore hardware. If your language only does async you'll
           | always be restricted to a single core and you have to hope
           | your use case parallelizes embarrassingly well at the top
           | level.
           | 
           | If you have threads then this whole problem just goes away.
           | That's the core argument that Java makes with Project Loom.
           | Instead of introducing colored functions everywhere with
           | poorly defined and inconsistent rules over which color to
           | use, just make threads work better. Suddenly the binary
           | async/not async distinction becomes irrelevant.
           | 
           | Still, there's a good reason most languages don't do the Loom
           | approach. It's a massive PITA to implement. The Java guys are
           | doing it because of their heavy investment in advanced VMs,
           | and because the Java ecosystem only rarely calls out to
           | native code. It'd be a lot harder in an ecosystem that had
           | less of a focus on "purity".
        
             | jakear wrote:
             | > The term blocking isn't well defined. How slow does an
             | operation have to be to be considered blocking? There are
             | no rules.
             | 
             | The rule is very simple IMO, I already described it: if
             | other execution can run while you wait for the value, it's
             | blocking. (Edit: to be specific, other _observable_
             | execution, so in your thread with write access to state you
             | can see)
             | 
             | > Worse, async vs not async is about latency requirements
             | of the caller,
             | 
             | I disagree. To me, async vs not async is about
             | interruptibility. As you already mentioned, the code that
              | computes a trillion digits of Pi is async if it's
             | interruptible, and not async if it isn't. If you're in a UI
             | thread doing 60fps animation, async vs not async isn't
             | really all that relevant (barring the slight inherent
             | overhead in calling an async function). What you need is a
             | profiler to tell you where you have a non-interruptible
             | block consuming your 16ms. The compiler can't know this a
             | priori, unless you make significant restrictions to the
             | type of code you write (no unbounded loops, for instance).
             | 
             | > Such toolkits are almost never thread safe because it's
             | too much hassle to program a thread safe renderer and would
             | make it too slow, so that browser-specific implementation
             | constraint propagates all over the JS ecosystem.
             | 
             | From here on you seem to not be aware of the fantastic
             | recent developments in WebWorkers, it is now quite easy to
             | spin up worker threads in JS, which communicate with the
             | main thread via message passing.
        
               | adql wrote:
               | > The rule is very simple IMO, I already described it: if
               | other execution can run while you wait for the value,
               | it's blocking. (Edit: to be specific, other observable
               | execution, so in your thread with write access to state
               | you can see)
               | 
                | Is gmtime() blocking? Technically it needs to ask the
                | kernel to get the time, so it has the same potential to
                | block as any other call.
               | 
               | > As you already mentioned, the code that computes a
               | trillion digits of Pi is async if its interruptible, and
               | not async if it isn't.
               | 
                | Very bad definition, as either:
                | 
                | * all code is, because the kernel can always reschedule
                | the thread to do something else,
                | 
                | * none of the compute code is, if the language runtime
                | can't interrupt it (old versions of Go had that
                | behaviour for tight loops IIRC), or
                | 
                | * all of the compute code is, if the language can
                | arbitrarily interrupt it.
               | 
               | > From here on you seem to not be aware of the fantastic
               | recent developments in WebWorkers, it is now quite easy
               | to spin up worker threads in JS, which communicate with
               | the main thread via message passing.
               | 
                | That's an entirely worse approach, as sharing any data
                | structures is impossible, and where a mutex would be
                | enough you're now shoveling tons of data around. It's a
                | workaround at _best_.
                | 
                | It's like saying "my language is multithreaded because
                | we started many processes and communicate via redis"...
        
               | jakear wrote:
               | I'd assume you missed my edit if you hadn't directly
               | quoted it :) The kernel interrupting the thread doesn't
               | meet the criteria of "other execution _in your thread_
               | with write access to _state you can see_ ".
        
             | twic wrote:
             | Given the existence of mmap and NFS, arbitrary reads from
             | memory can end up being network calls. I don't think it's
             | possible to make those async without something like kernel
             | scheduled entities / scheduler activations, which are now a
             | long-forgotten dream.
        
         | tipsytoad wrote:
         | It's coming!
         | 
         | https://blog.rust-lang.org/inside-rust/2022/11/17/async-fn-i...
         | 
         | Although I'm still not sure how they return the unsized Future
         | without the whole pin box method
        
           | SpaghettiCthulu wrote:
           | My understanding is that `async fn` in a trait is desugared
           | to `fn() -> impl Future` just like it would be anywhere else,
           | so there's no unsized types involved.
        
             | pitaj wrote:
             | It's actually desugared to a generic associated type:
              |     trait Foo {
              |         type bar_Returns<'s>: Future<Output = Result<u64, ()>>
              |         where Self: 's;
              | 
              |         fn bar<'s>(&'s self) -> Self::bar_Returns<'s>;
              |     }
        
         | MuffinFlavored wrote:
         | > I fear writing async code every time I have to do it.
         | 
         | Devil's advocate: can you give an example of something
         | complicated you ran into that wasn't "simply" solved by
         | Arc<Mutex<T>>?
        
           | dxuh wrote:
           | Probably any IO heavy workload, which usually requires async
           | if it should be performant (like in all other programming
           | languages too).
        
           | dcow wrote:
           | Everything in my example. Scratching my head about whether I
           | can call this part of my code from an async function because
           | it might do filesystem operations or make a blocking network
           | request. If I can't, jumping through hoops to spawn an async
           | task that can. Oh, now everything has to be static. Oh god
           | async block syntax. I run into this a lot working on an API
           | server written in Rust. It's part of a larger project that
           | has a CLI and lib crates. So I've run into everything I
           | mentioned from poor language support for expressing async
           | traits, to mixing and matching async runtimes and calling
           | conventions.
        
             | spoiler wrote:
             | > Scratching my head about whether I can call this part of
             | my code from an async function because it might do
             | filesystem operations or make a blocking network request.
             | If I can't, jumping through hoops to spawn an async task
             | that can.
             | 
             | If you have code that needs to wait on IO (regardless of
             | how fast or slow it could be, the point is that it's not
             | predictable), and want to call it from an async function,
             | why not make that code an async function too?
             | 
              | If it's library code, then yeah, I agree it can suck. I
              | tend to wrap those in my own API instead of inline
              | spawning. At least it localises the ugliness, and that
              | approach tends to compose well for me in practice. It
              | does require a bit more code, but my business logic and
              | plumbing code end up separated and as a result easier to
              | maintain too.
             | 
             | > Oh, now everything has to be static.
             | 
             | I will grant you that the static lifetime is a bit
             | confusing until it clicks, but the issue is often that
             | people have a wrong intuition about what a static lifetime
             | bound means because it's introduced a bit clumsily and
             | glossed over in the book.
             | 
             | I agree that async blocks feel a bit "smelly" syntax wise
             | (for lack of better word).
             | 
             | I feel like some of these are just async woes that aren't
             | trivial to solve, but I'd be surprised if the language
             | teams weren't aware of this already and working towards
             | solutions for these things. Async is very much MVP-ish in
             | Rust anyway
        
               | dcow wrote:
               | The core of the problem from my angle is that rust's
               | stdlib is not async safe. It's literally incorrect to
               | call parts of the Rust stdlib from async code and the
               | language tells you nothing about this. It doesn't even
               | say "this function blocks are you sure you want to call
               | it?". So yeah I've got a bunch of my own library code
               | (and others') that I wrote normally and then later wanted
               | to call it from async code and it's a big mess.
               | 
               | c.f. Swift's async/await impl. Swift solved a lot of
               | these woes because it ships a stdlib and runtime that
               | supports async code.
        
               | brundolf wrote:
               | > rust's stdlib is not async safe. It's literally
               | incorrect to call parts of the Rust stdlib from async
               | code and the language tells you nothing about this.
               | 
               | Can you get more specific about what goes wrong when you
               | call certain parts of the standard lib from async code?
               | This is the first I'm hearing about this, and I'm not
               | fully clear on what you mean by "not async safe"
        
               | herbstein wrote:
               | Not OP, but it won't yield to the runtime. This blocks
               | the executing thread until the operation completes.
               | That's the only sense in which the standard library isn't
               | async safe.
               | 
               | Additionally, according to a core Tokio developer, the
               | time between .await calls should be in the order of 100s
               | of microseconds at most to ensure the best throughput of
               | the runtime.
        
               | jamincan wrote:
                | Isn't that a bit different from it not being safe,
                | though? Ill-advised perhaps, but not necessarily unsafe.
        
               | dcow wrote:
               | If you make too many of these calls your program will
               | stop executing until I/O becomes available because you've
               | exhausted your execution threads. If I/O never comes,
               | which is very possible especially over a network
               | connection, then you're stalled. Same outcome as being
               | deadlocked, essentially.
               | 
               | I'd argue this is very much unsafe because it's not
               | correct async programming and can cause really difficult
               | to debug problems (which is the same reason we care about
               | memory safety--technically I can access any part of the
               | pages mapped to my process, it's just not advisable to
               | make assumptions about the contents of regions that other
               | parts of the program might be concurrently modifying).
               | 
               | The fact that people have never heard of this or don't
               | understand the nuance kinda proves the point (=. You
               | should, ideally, have to opt in to making blocking calls
               | from async contexts. Right now it's all too easy to
               | stumble into doing it and the consequences are gnarly in
               | the edge cases, just like memory corruption.
        
               | brundolf wrote:
               | Hmm. In the worst-case though, wouldn't it just be
               | equivalent to a non-async version of the program? I.e.
               | "blocking" is what every normal function already does.
               | Blocking for IO might take longer, but again, that's what
               | normal Rust programs already do. So to me it sounds like
               | "not async-optimized" rather than "not async-safe"
               | 
               | But none of that sounds anything like a deadlock, so
               | maybe I'm missing some other aspect here
               | 
               | As an aside: "safe" means a very specific thing in Rust
               | contexts, so even if this is analogous to that
               | (preventing a footgun that could have catastrophic
               | consequences), it might be best to use a different word
               | or at least disambiguate. Otherwise people will argue
               | with you about it :)
        
               | hdevalence wrote:
               | Correct, there's no actual problem here, just a potential
               | performance pitfall, but that pitfall is no worse than a
               | naive synchronous implementation would be.
               | 
               | There's no big problem here, just write code and run it.
               | If it is a really big problem it will show up in a
               | profiler (if you care enough about performance to care
               | about this issue, you are using a profiler, right...
               | right?)
        
       | Aissen wrote:
        | This is quite a nice article. Do not stop at the first example,
        | which shows something passed by value that would probably be
        | better passed by reference instead of always cloned.
       | 
       | I really like how the author used green/red backgrounds to show
       | code that doesn't compile.
        
       | gardaani wrote:
       | I have been hurt by issues similar to inc(&mut self) in the
       | "Functional abstraction" section so many times. I have been
       | considering using macros as a workaround, but so far I have just
       | written the same code multiple times. DRY isn't easy in Rust.
        
         | edflsafoiewq wrote:
         | Me too. You can make the references at the caller and pass them
         | individually to the function, like inc(&mut self.x), but that's
         | also not very nice. It's weird how this is at odds with the
         | purpose of grouping variables into a logical struct in the
         | first place.
        
       | hardwaregeek wrote:
       | This is a very nice article! And a very nicely designed blog.
       | I've run into almost all of these issues myself. Only in Rust
       | have I had to respond to someone's PR feedback with "yes, that
       | would be better but the compiler won't let me for xyz reasons".
       | 
       | I do wonder if someone could design a language with a little more
       | built in logic that would be able to know that an Arc won't be
       | moved, or that can understand String can be matched to a str, if
       | that could function as the often requested "higher level Rust"
        
         | estebank wrote:
         | > or that can understand String can be matched to a str
         | 
         | That's called "deref patterns" and it's been made to work on
         | nightly only for String for now, but 1) the implementation is
         | currently a huge hack, 2) development stalled over the holidays
          | but there are multiple parties interested in seeing this implemented
         | and stabilized and 3) the idea is to make these "just work"
         | (without any extra sigils or keywords) for specific Deref
         | impls, maybe with a new marker trait DerefPure, which might or
          | might not be available for end users to implement (similar
          | to trait Try). DerefMut is IIRC still an open question.
        
         | swsieber wrote:
         | There are a few crates that abstract the unsafe setup you need
         | for a self referential struct, but I'm not sure which or their
         | status. And they might employ "Pin" for their abstraction, I'm
         | not sure. Pin is basically a wrapper type to keep an object
         | from being moved. The interesting thing is that Pin doesn't
         | receive special compiler treatment for its implementation. It's
         | implemented in normal rust (IIRC).
        
       | tipsytoad wrote:
       | Also, leaking memory with reference cycles from shared ownership
        
         | naasking wrote:
         | There are local cycle collection algorithms, but Rust doesn't
         | expose enough metadata to write them in Rust itself. They're
         | also not cheap to run.
        
         | dcow wrote:
         | This seems to be a problem that even Swift can't solve. The
         | only languages that avoid this issue are ones with active GCs.
         | Not that I don't agree and wouldn't love to see efforts made
         | here.
        
           | swsieber wrote:
           | It'd be possible to add an active GC to Rust. There are some
           | library attempts at that:
           | 
           | - https://lib.rs/crates/gc
           | 
           | - https://github.com/withoutboats/shifgrethor
           | 
           | But they don't look particularly usable.
           | 
           | Beyond that, yeah, it sucks.
        
       | netr0ute wrote:
       | The part about the filesystem is completely ironic, because the
       | C++ version of the same API uses no pointers at all.
        
         | ynik wrote:
         | I think you misunderstood the article. That section is not
         | talking about actual pointers. Filenames are _like_ pointers,
         | opening a file is like dereferencing the filename to get the
         | actual file. Any other process could  "free your pointer" (by
         | deleting the file) at any point in time. That's true no matter
         | which language you use.
        
           | charcircuit wrote:
           | >Any other process could "free your pointer" (by deleting the
           | file) at any point in time.
           | 
           | Files are garbage collected. There is no use after free if
           | someone deletes your file because the OS sees that you are
           | still using it
        
             | [deleted]
        
             | hot_gril wrote:
             | Yeah, common confusion to be out of free space, rm some
             | huge file, and still be out of space because something has
             | the file open.
        
       | topaz0 wrote:
       | I have mainly read about rust, not worked with it, but the
        | section about:
        | 
        |     let x = compute_x();
        |     f(x.clone());
        |     g(x);
       | 
       | was confusing/surprising to me. Isn't the whole point of default-
       | immutable that you can use x without owning a copy? Forgive me if
       | this is a stupid question.
        
         | hawski wrote:
          | I'm no Rust expert, but it would be ok if you used borrows
          | there:
          | 
          |     let x = compute_x();
          |     f(&x);
          |     g(&x);
          | 
          | Otherwise you have moves there. Now it depends on the API; it
          | may want the value and not a reference (a borrow).
        
           | topaz0 wrote:
           | I see. Is it not common practice for functions to expect
           | references when they can (i.e. when they don't need to modify
           | the argument)?
        
           | [deleted]
        
       | zamalek wrote:
       | Some of these are more "When a linear type system hurts:"
       | 
       | > Common expression elimination
       | 
       | I feel as though there should be a warning when a parameter that
        | could be a reference is not. I _usually_ run into this when a
        | function that I've written is incorrectly taking ownership of a
        | parameter.
       | 
       | > Newtype abstraction
       | 
       | The Rust compiler could implicitly call `into` and `inner` (and
       | then possibly optimize those away), but this (explicit or
       | implicit) is required for linear types. What could work is:
       | `Hex(&bytes)`.
        
       | wheelerof4te wrote:
        | And this is why Python, Java, and Javascript rule the modern-
        | day programming landscape.
        | 
        | Those languages let you _just do the frickin' work_(TM).
        
       | Ciantic wrote:
       | For intro and info about Software Transactional Memory which
       | allows _real_ fearless concurrency:
       | 
       | "Beyond Locks: Software Transactional Memory" by Bartosz Milewski
       | 
       | https://bartoszmilewski.com/2010/09/11/beyond-locks-software...
       | 
       | I wonder how far it can be implemented in Rust? There are people
       | trying:
       | 
       | https://github.com/aakoshh/async-stm-rs (updated just days ago)
        
       | mjb wrote:
       | > Remember that paths are raw pointers, even in Rust. Most file
       | operations are inherently unsafe and can lead to data races (in a
       | broad sense) if you do not correctly synchronize file access.
       | 
       | This is a reality that our operating systems have chosen for us.
       | It's bad. The way Unix does filesystem stuff is both too high
       | level for great performance, and too low level for developer
       | convenience and to make correct programs easy.
       | 
       | What would it look like to go higher level? For example:
       | 
       | - Operating systems could support START TRANSACTION on filesystem
       | operations, allowing multiple operations to be performed
       | atomically and with isolation. No more having to reason carefully
       | about which posix file operations are atomic, no more having to
       | worry about temp file TOCTOU etc.
       | 
       | - fopen(..., 'r') could, by default, operate on a COW snapshot of
       | a file rather than on the raw file on the filesystems. No more
       | having to worry about races with other processes.
       | 
       | - Temp files, like database temp tables, could by default only be
       | visible to the current process (and optionally its children) or
       | even the current thread. No more having to worry about temp file
       | races and related issues.
       | 
       | That sort of thing. Maybe implemented like DBOS: https://dbos-
       | project.github.io/. Or, you know, just get rid of the whole idea
       | of processes for server-side applications. Was that really a good
       | idea to begin with?
       | 
       | > However, if the locks you use are not re-entrant (and Rust's
       | locks are not), having a single lock is enough to cause a
       | deadlock.
       | 
        | I'm a fan of Java's synchronized, and sometimes wish that Rust
        | had a higher-level, object-level synchronization primitive that
        | was safer than messing with raw mutexes (which never seems to
        | end well).
        
         | jstimpfle wrote:
         | > Temp files, like database temp tables, could by default only
         | be visible to the current process (and optionally its children)
         | or even the current thread. No more having to worry about temp
         | file races and related issues.
         | 
          | This is one area where the Unix design is much closer to good
          | than other historic approaches. The separation of file
          | identity from file paths (not sure if it is a Unix invention)
          | pretty much allows temp files to be implemented; in fact, I
          | believe O_TMPFILE in Linux allows exactly that.
         | 
         | > START_TRANSACTION
         | 
         | can be implemented on top. Transactions are quite involved, so
         | I wouldn't blame a random file system for not implementing
         | them. Use a database (yes, most of them skip the buffering and
         | synchronisation layer of the filesystem they are stored on, and
         | use e.g. O_DIRECT).
         | 
         | > fopen(..., 'r') could, by default, operate on a COW snapshot
         | 
         | Something like that would require the definition of safe states
         | in the first place (getting an immutable view is pointless if
         | the current contents aren't valid), so we're almost back at
         | transactions.
         | 
         | I think Unix file systems are fine and usable as a common
         | denominator to store blobs, like object stores do, but also
         | easily allow single writers to modify and update them. As ugly
         | and flawed and bugged as most file systems are, most of the
          | value comes from the uniform interface. If you need better
          | guarantees than that, you probably have to lose that
          | interface and move to a database.
        
         | eska wrote:
         | The filesystem is a database.
         | 
         | Databases like Postgres implement such a transaction not
         | through a lock, but by keeping multiple versions of the
         | data/file until the transactions using them closed.
         | 
         | Yet other databases operate on the stream of changes and the
         | current state of data is merely the application of all changes
         | until a certain time, allowing multiple transactions to use
         | different snapshots without blocking each other (you can parse
         | a file while somebody else edits it and you won't be
         | interrupted).
         | 
         | I read about various filesystems offering some of these
         | features, but not in IO APIs.
        
           | adql wrote:
           | Then your user wonders why they see 40GB of files but 70GB of
           | used space
           | 
           | > Yet other databases operate on the stream of changes and
           | the current state of data is merely the application of all
           | changes until a certain time, allowing multiple transactions
           | to use different snapshots without blocking each other (you
           | can parse a file while somebody else edits it and you won't
           | be interrupted).
           | 
           | Databases have features in place to merge those parallel
           | streams of changes and, at worst, abort the transaction.
           | Apps using them handle that.
           | 
           | Now try to explain to the user why their data evaporated
           | because they had it open in two apps...
        
           | eqvinox wrote:
           | > The filesystem is a database.
           | 
           | Nah. The file system is a slightly improved block storage
           | management tool that isolates your database implementation
           | from dealing directly with mass storage devices.
           | 
           | Thinking that the filesystem is viable as a database when it
           | isn't -- that's exactly the problem here :D
        
           | naasking wrote:
           | > The filesystem is a database
           | 
           | A totally shit database designed before we knew anything
           | about databases. Well past time to retire them.
        
             | wongarsu wrote:
             | They are ok databases, optimized for very different use
             | cases than normal databases. If you treat files as blobs
             | that can only be read or written atomically, then SQLite
             | will outperform your filesystem. But lots of applications
             | treat files as more than that: multiple processes appending
             | on the same log file, while another program runs the
             | equivalent of `tail -f` on the file to get the newest
             | changes; software changing small parts in the middle of
             | files that don't fit in memory, even mmapping them to treat
             | them like random access memory; using files as a poor man's
             | named pipe; single files that span multiple terabytes; etc.
        
               | naasking wrote:
               | None of those other uses are outside the scope of a real
               | database:
               | 
               | > multiple processes appending on the same log file,
               | while another program runs the equivalent of `tail -f` on
               | the file to get the newest changes
               | 
               | Not a problem with SQLite. In fact it ensures the
               | atomicity needed to avoid torn reads or writes.
               | 
               | > software changing small parts in the middle of files
               | that don't fit in memory
               | 
               | This is exactly an example of something you don't need to
               | worry about if you're using a database, it handles that
               | transparently for _any_ application, instead of every
               | application having to replicate that logic when it's
               | needed. Just do your reads and writes to your structured
               | data and the database will keep only the live set in
               | memory based on current resource constraints.
               | 
               | > using files as a poor man's named pipe
               | 
               | Even better, use a shared SQLite database instead, and
               | that even lets you share structured data.
               | 
               | > Single files that span multiple terabytes; etc.
               | 
               | SQLite as it stands supports databases up to 140 TB.
               | 
               | > even mmapping them to treat them like random access
               | memory
               | 
               | This is pretty much the only use I can think of that
               | isn't supported by SQLite out of the box. No reason it
               | can't be extended for this case if needed.
        
           | giovannibonetti wrote:
           | I remember a discussion a while ago about using SQLite as a
           | filesystem engine. I imagine that not needing a server /
           | daemon would make it more reliable for one of the first
           | things needed on boot.
           | 
           | However, I don't know the recommended way to handle
           | concurrent writes with SQLite. In the end we have a single
           | process handling all the persistence logic, which becomes
           | essentially a server just like Postgres?
        
         | eqvinox wrote:
         | Like many other things, the file system is a layer that, after
         | some passing of time, is proving a bit raw in the abstraction
         | it provides. And thus layers are created above it.
         | 
         | The same thing is happening pretty much everywhere in computer
         | engineering. Our network protocols provide higher level
         | abstractions (encryption, RPC calls, CRUD, ...). Higher level
         | graphics rendering libraries proliferate. Programming languages
         | provide additional layers and safety guarantees.
         | 
         | > What would it look like to go higher level?
         | 
         | File systems aren't "bad" and don't need to be changed or
         | replaced -- we just need to use the higher abstractions _that
         | already exist_ much more. Use a proper database when it's
         | appropriate. Or some structured object storage system, maybe
         | integrated into your programming language.
         | 
         | Ultimately, accessing the file system should more and more
         | become akin to opening a raw block device, a TCP socket without
         | an SSL layer, or drawing individual pixels. Which is to say:
         | there are absolutely good reasons to do so, but it shouldn't be
         | your default. And it should raise a flag in reviews to check if
         | it was the appropriate layer to pick to solve the problem at
         | hand.
         | 
         | (added to clarify:)
         | 
         | This all is just to say: it's more helpful to proliferate
         | existing layers above the file system, than to try to change or
         | extend the semantics of the existing FS layer. Leave it alone
         | and put it in the "low-level tools" drawer, and put other tools
         | on your bench for quick reach.
        
           | mjb wrote:
           | > This all is just to say: it's more helpful to proliferate
           | existing layers above the file system, than to try to change
           | or extend the semantics of the existing FS layer. Leave it
           | alone and put it in the "low-level tools" drawer, and put
           | other tools on your bench for quick reach.
           | 
           | Yes! But it's easier said than done when one of these things
           | is in the stdlib and the other isn't.
        
             | eqvinox wrote:
             | > one of these things is in the stdlib and the other isn't
             | 
             | Oh it's much worse than that. One of these things is what
             | the user sitting in front of their computer has a nice
             | integrated UI to view and search... if a photo editing
             | application starts storing my photos in a database that I
             | can't _easily and simply_ copy some photo out of, I'll be
             | rather annoyed. And each application having its own UI to
             | do this isn't the solution either, really.
             | 
             | [EDIT: there was some stuff here about shell extensions &
             | co. It was completely beside the point. The problem is
             | that the file system has become and unquestionably is the
             | common level of interchange for a lot of things.] ...didn't
             | Plan 9 have a very interesting persistence concept that did
             | away with the entire notion of "saving" something -- very
             | similar to editing a document in a web app nowadays, except
             | locally?
             | 
             | Either way I don't know jack shit about where this is going
             | or should go. I'm a networking person, all I can tell you
             | for sure is to use a good RPC or data distribution library
             | instead of opening a TCP socket ;).
        
               | MisterTea wrote:
               | > ...didn't Plan 9 have a very interesting persistence
               | concept ...
               | 
               | I think that was Oberon, which influenced the Plan 9
               | Acme editor.
        
               | pjmlp wrote:
               | It was, you can see it on an Oberon emulator.
               | 
               | https://schierlm.github.io/OberonEmulator/
        
           | crabbone wrote:
           | > File systems aren't "bad" and don't need to be changed or
           | replaced
           | 
           | The complaint is specifically about _UNIX_ filesystem. And
           | yeah, it's bad. It needs to be _replaced_, not wrapped in
           | another layer of duct tape.
           | 
           | The examples of layers you described come not from any kind
           | of sound design that evolved to be better. It was bad
           | design, without foresight or much deliberation at all.
           | Historically, it won because it was first to be ready, and
           | the audience was impatient. And it stayed due to network
           | effects.
           | 
           | The consistency guarantees, the ownership model, the
           | structure of the UNIX filesystem objects are conceptually
           | bad. Whatever you build on top of that will be a bad solution
           | because your foundation will be bad. The reason these things
           | aren't replaced is tradition and backwards compatibility.
           | People in the filesystem business have known for decades
           | that the conceptual model of what they are doing isn't
           | good, but the fundamental change required to do better is
           | just too big and
           | too incompatible with the application layer to ever make the
           | change happen.
        
         | naasking wrote:
         | > The way Unix does filesystem stuff is both too high level for
         | great performance, and too low level for developer convenience
         | and to make correct programs easy.
         | 
         | Indeed. Imagine if on program start, each process got its own
         | SQLite database rather than a file system. So many issues would
         | be solved. A user shell is then just another sqlite database
         | that indexes process databases so you can inspect them using a
         | standard interface.
        
         | wongarsu wrote:
         | > Operating systems could support START TRANSACTION on
         | filesystem operations, allowing multiple operations to be
         | performed atomically and with isolation
         | 
         | Windows has supported that since Windows Vista [1], and at least
         | back then it was used by Windows Update and System Restore [2].
         | But programming languages usually only expose the lowest common
         | denominator of file system operations in easy APIs, so
         | approximately nobody uses it. Also, it's deprecated by now.
         | 
         | But maybe someone here has deeper insights in what went right
         | and wrong with that implementation, because on the surface it
         | looks like it would eradicate entire classes of security bugs
         | if used correctly.
         | 
         | 1: https://learn.microsoft.com/en-
         | us/windows/win32/fileio/about...
         | 
         | 2:
         | https://web.archive.org/web/20080830093028/http://msdn.micro...
        
           | crabbone wrote:
           | I don't know the answer, but I'd suspect the following:
           | nobody in the storage business cares about what Microsoft
           | is doing besides Microsoft themselves. From a storage
           | perspective, supporting Microsoft is always a huge pain.
           | Most of those who do provide support try to limit it to
           | exposing an SMB server. I had the misfortune of trying to
           | expose an iSCSI portal to Windows Server. Luckily, the
           | company was at a relatively early stage where they could
           | decide what things they wanted to support, and after a
           | couple of months of struggle they just decided to forget
           | Windows existed.
           | 
           | So, I think, this feature, just like WinFS, and probably ReFS
           | after it, will just end up being cancelled / forgotten. The
           | kind of users who use Windows aren't sophisticated enough to
           | want advanced features from their storage, but support and
           | development of this stuff is costly and demanding in other
           | ways. If they cannot sell this as a separate product that
           | runs independently of Windows, there's no real future for it.
           | The only real future for Windows seems to be Hyper-V, but
           | hypervisors, generally, have very different requirements to
           | their storage than desktops. Simpler in some ways, but more
           | demanding in other ways.
           | 
           | So, bottom line: it probably didn't see any use because the
           | audience wasn't sophisticated enough to want that, and they
           | couldn't sell it to the audience that was sophisticated
           | enough, but was also smart enough not to buy from Microsoft.
        
           | dragonwriter wrote:
           | > Also, it's deprecated by now.
           | 
           | This is why programming languages don't rush to support hyped
           | new platform features without some combination of wide
           | support and/or a very compelling use case in their core
           | design or standard libraries.
        
           | AceJohnny2 wrote:
           | > _Also, it's deprecated by now._
           | 
           | Aaaand there it is.
           | 
           | Such paradigms need years, at least a decade, to percolate
           | into the general programming culture. Microsoft (and I
           | daresay companies in general) just doesn't have the stability
           | to shepherd such changes.
        
             | mattgreenrocks wrote:
             | It's kind of sad how programming improvements are
             | essentially gated by the median skill level of developers.
        
               | fpoling wrote:
               | I do not see how it follows. A smart developer will
               | choose a database, not a file system, if data integrity
               | is important. So it can simply be the case that the
               | transactional API, with all its limitations, was not
               | flexible enough for people to bother.
        
               | [deleted]
        
               | AceJohnny2 wrote:
               | It's not gated by median skill.
               | 
               | Rather, it's the level of effort required to standardize,
               | carry over, or reproduce a particular feature in the
               | disparate environments we use.
               | 
               | Say Windows maintained that filesystem transactional
               | feature. It'd be exposed in their C++ (?) API. What if
               | you wanted to write your program in Python? What if you
               | wanted to port it to Mac? Linux?
               | 
               | It just takes time for such platform features to
               | percolate to the various environments that could use
               | them.
        
               | bfrog wrote:
               | And you just described why Rust is such a great choice
               | for an open source project over C/C++.
        
         | jcranmer wrote:
         | Moving to a fully transactional model for filesystems sounds
         | like it will create more pain than help, especially given the
         | historical tendency of filesystem implementations to cheat on
         | semantics every opportunity they get.
         | 
         | > - fopen(..., 'r') could, by default, operate on a COW
         | snapshot of a file rather than on the raw file on the
         | filesystems. No more having to worry about races with other
         | processes.
         | 
         | You can go a few steps further. If you have some sort of
         | O_ATOMIC mode on creating a file descriptor, that operates on a
         | separate snapshot in both read and write mode (and in the case
         | of open for write, will atomically replace the existing file
         | with the newly written copy on close), that would have more
         | correct semantics than existing POSIX I/O for most
         | applications. You can't make it default at the syscall level,
         | but most language I/O abstractions could probably make it
         | default. Notably, this can be implemented in the OS kernel
         | without actually touching filesystem implementations
         | themselves, just the way the kernel arbitrates between
         | userspace and the filesystem.
         | 
         | Another interesting file mode that fixes many use cases is
         | having some sort of "atomic append" for files that handles the
         | use case of simultaneous readers and writers of log files. Set
         | up the API so that you can guarantee that multiple writers each
         | get to add their own atomic log message (without other
         | synchronization), and that readers can never see just part of a
         | log message.
         | 
         | The difficult part of the filesystem to reason about is the
         | file metadata and directory listings. I've seen suggestions in
         | the past to use file descriptors instead of strings as the
         | basis for handling directory queries in language
         | implementations. If you don't do at least that much, it's
         | basically impossible to solve TOCTOU issues, but I'm not sure
         | there's a lot you can do with COW snapshotting of directories
         | that still manages to be acceptable in performance.
         | 
         | > Temp files, like database temp tables, could by default only
         | be visible to the current process (and optionally its children)
         | or even the current thread. No more having to worry about temp
         | file races and related issues.
         | 
         | Creating unnamed temporary files is a solved problem. Named
         | temporary files are more difficult, but create-only-if-
         | doesn't-exist exists on all platforms, and most language APIs
         | have some way of getting to that (it's the "x" flag in C11's
         | fopen).
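         | The replace-on-close behavior described above can already be
         | approximated in userspace with the classic idiom of writing a
         | temp file and renaming it over the target. A minimal Rust
         | sketch (the paths and the fixed temp-file name are
         | illustrative, not a robust implementation):

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

/// Replace `path` with `data` atomically: write a sibling temp file,
/// fsync it, then rename(2) it over the target. Since POSIX rename is
/// atomic, concurrent readers observe either the old file or the new
/// one in full.
fn atomic_write(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let dir = path.parent().unwrap_or(Path::new("."));
    // Fixed temp name for brevity; real code should pick a unique one.
    let tmp = dir.join(".atomic_write.tmp");
    let mut f = File::create(&tmp)?;
    f.write_all(data)?;
    f.sync_all()?; // flush before rename, or a crash can expose an empty file
    fs::rename(&tmp, path)
}

fn main() -> std::io::Result<()> {
    let target = Path::new("/tmp/atomic_demo.txt");
    atomic_write(target, b"v1")?;
    atomic_write(target, b"v2")?;
    println!("{}", fs::read_to_string(target)?); // v2
    Ok(())
}
```

         | rename(2) makes the swap atomic, so readers see either the
         | old contents or the new, never a torn mix. It only gives
         | whole-file replacement, though, not general transactions.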
        
           | gpderetta wrote:
           | >Another interesting file mode that fixes many use cases is
           | having some sort of "atomic append" for files that handles
           | the use case of simultaneous readers and writers of log files
           | 
           | My understanding is that O_APPEND gives exactly this
           | guarantee. Readers can read partial records, but with proper
           | record marking it shouldn't be an issue.
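           | A minimal Rust sketch of that pattern (the path is
           | illustrative); note each record must go out in a single
           | write call, which is exactly the caveat discussed below:

```rust
use std::fs::OpenOptions;
use std::io::{Error, ErrorKind, Write};

/// Append one record to a shared log. With O_APPEND the kernel
/// atomically seeks to EOF and writes, so concurrent appenders don't
/// clobber each other, provided each record is a single write(2).
fn append_record(path: &str, msg: &str) -> std::io::Result<()> {
    let mut f = OpenOptions::new().create(true).append(true).open(path)?;
    let rec = format!("{}\n", msg);
    let n = f.write(rec.as_bytes())?; // deliberately one call, not write_all
    if n != rec.len() {
        // A short write would tear the record; surface it as an error.
        return Err(Error::new(ErrorKind::WriteZero, "short append"));
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    append_record("/tmp/append_demo.log", "hello")?;
    append_record("/tmp/append_demo.log", "world")?;
    Ok(())
}
```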
        
             | Filligree wrote:
             | That assumes your data can be written in a single write(2)
             | call.
             | 
             | I don't know what size record is guaranteed to be writable
             | that way, if any, but I wouldn't assume it's infinity.
             | Short writes aren't illegal. Perhaps you were pointing at
             | this, but then you need to work around the limitations of
             | the OS, and...
             | 
             | Why can't we just have transactions?
        
               | eqvinox wrote:
               | > Short writes aren't illegal.
               | 
               | They aren't, but in practice I don't think they can
               | happen in the scenarios I looked at (= BSD and Linux
               | writes to "normal" file systems backed by mass storage) -
               | unless you run out of disk space or your disk has some
               | low-level error.
               | 
               | But, again, the problem is that the API does not provide
               | this guarantee. I _think_ it's generally true in
               | practice, but... you can't rely on it.
               | 
               | (My context was a thread-capable lock-free logging
               | backend; we just call writev() with the entire log
               | message at once. Of course there's still locking to do
               | this in the kernel, but that's always there -- rather
               | deal with one lock than two.)
        
               | Filligree wrote:
               | Honestly that's the worst of all worlds: Behaviour that's
               | reliable, but not guaranteed.
               | 
               | People will definitely start to rely on it, and then you
               | can't change it later when it makes sense, which means
               | the relevant APIs can't be reworked to better treat disks
               | as the asynchronous network devices they nowadays are.
               | Even though the APIs are flexible enough (per
               | documentation) to manage that.
               | 
               | On the flip side, when you're the developer, you can't
               | rely on the behaviour without feeling dirty, and/or
               | needing separate codepaths per OS because this _isn't_
               | specified, and it's a bad idea to rely on if you aren't
               | absolutely sure you're on Linux.
               | 
               | And then there's the fact that, while write(2) on
               | O_APPEND is certainly atomic in most cases... I simply
               | can't trust that's true for arbitrarily large writes,
               | because it certainly isn't specified; it might do short
               | writes above 64k, or 64M, or something. So I'll need an
               | error-handling path of some sort anyway.
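               | That error path is the usual retry loop, essentially
               | what Rust's std::io::Write::write_all already does; a
               | sketch:

```rust
use std::io::{self, Write};

/// Drain the whole buffer, retrying on short writes and EINTR. This
/// mirrors what std::io::Write::write_all does internally.
fn write_fully<W: Write>(w: &mut W, mut buf: &[u8]) -> io::Result<()> {
    while !buf.is_empty() {
        match w.write(buf) {
            Ok(0) => return Err(io::Error::new(io::ErrorKind::WriteZero, "write returned 0")),
            Ok(n) => buf = &buf[n..], // short write: resume after what landed
            Err(e) if e.kind() == io::ErrorKind::Interrupted => continue, // EINTR
            Err(e) => return Err(e),
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let mut sink = Vec::new();
    write_fully(&mut sink, b"all or nothing")?;
    println!("{} bytes written", sink.len()); // 14 bytes written
    Ok(())
}
```

               | Note the loop restores completeness but not atomicity:
               | on an O_APPEND log, each retry is a separate write, so
               | concurrent writers' records can interleave.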
        
               | gpderetta wrote:
               | I don't think that a write on an actual file can randomly
               | return a partial write, unless an actual error has
               | occurred (out of disk space, segfault on the buffer, i/o
               | error, or an interrupt). These are all things that are
               | either under the application's control or would need to
               | be handled anyway.
        
               | benmmurphy wrote:
               | I thought it could happen if you received a signal, but
               | apparently on Linux signals won't interrupt file I/O.
               | Which brings back memories of processes getting stuck
               | doing NFS I/O.
        
         | AtNightWeCode wrote:
         | File systems should never be transactional. That is plainly
         | incorrect. You or your tech stack should solve that if
         | needed. The easiest way to see that it is incorrect is that
         | you need to handle the error anyway. Meaning, adding
         | something that can cause another error on top of an error is
         | not viable. Go got this totally right.
        
         | rowanG077 wrote:
         | > I'm a fan of Java's synchronized, and sometimes wish that
         | Rust had something higher-level object-level synchronization
         | primitive that was safer than messing with raw mutexes (which
         | never seems to end well).
         | 
         | I'm not a Java expert. But from looking it up, it seems like
         | Java's synchronized is worse in every way than Rust's Mutex
         | (and most of the time you shouldn't even be using mutexes in
         | Rust). You forget a synchronized annotation? Too bad. Whereas
         | a mutex in Rust protects access to a variable: you literally
         | can't modify it (well, modulo unsafe) unless you lock the
         | mutex.
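         | A minimal sketch of that data-owning pattern (thread and
         | iteration counts are arbitrary):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

/// The Mutex owns the data it guards: the only way to reach the inner
/// i32 is through lock(), so "forgot to synchronize" is a compile
/// error rather than a data race.
fn parallel_count(threads: usize, per_thread: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    *c.lock().unwrap() += 1; // guard unlocks when dropped
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

fn main() {
    println!("{}", parallel_count(4, 1000)); // 4000
}
```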
        
         | adql wrote:
         | Personally I think keeping it low level but predictable would
         | be a far saner approach. The app knows what it wants to do
         | with the data; the filesystem can't. Just trimming all of the
         | inconsistencies (especially the "fun" around sync and cached
         | access...) would already make it much better. Of course it's
         | probably very hard to do without fucking up existing apps ;/
         | 
         | > Operating systems could support START TRANSACTION on
         | filesystem operations, allowing multiple operations to be
         | performed atomically and with isolation. No more having to
         | reason carefully about which posix file operations are atomic,
         | no more having to worry about temp file TOCTOU etc.
         | 
         | And your performance goes absolutely to trash once you start
         | trying to make the filesystem act as a transactional database.
         | Especially if you then try to use the filesystem for a
         | database.
         | 
         | > fopen(..., 'r') could, by default, operate on a COW snapshot
         | of a file rather than on the raw file on the filesystems. No
         | more having to worry about races with other processes.
         | 
         | Another "performance goes to trash" option. Sure, now more
         | palatable with NVMes getting everywhere, but still a massive
         | footgun
         | 
         | > Temp files, like database temp tables, could by default only
         | be visible to the current process (and optionally its children)
         | or even the current thread. No more having to worry about temp
         | file races and related issues.
         | 
         | Already an option O_TMPFILE
         | 
         | > O_TMPFILE (since Linux 3.11)
         | > Create an unnamed temporary regular file. The pathname
         | > argument specifies a directory; an unnamed inode will be
         | > created in that directory's filesystem. Anything written to
         | > the resulting file will be lost when the last file
         | > descriptor is closed, unless the file is given a name. ...
         | > Specifying O_EXCL in conjunction with O_TMPFILE prevents a
         | > temporary file from being linked into the filesystem in the
         | > above manner. (Note that the meaning of O_EXCL in this case
         | > is different from the meaning of O_EXCL otherwise.)
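         | The portable cousin of O_TMPFILE is the create-then-unlink
         | idiom; a Rust sketch (the path is illustrative, and
         | create_new gives the exclusive-create semantics of C11's "x"
         | flag):

```rust
use std::fs::{self, OpenOptions};
use std::io::{Read, Seek, SeekFrom, Write};

/// Poor man's O_TMPFILE: create the file exclusively, unlink it right
/// away, and keep using the open descriptor. The inode stays alive
/// until the last descriptor closes, then the space is reclaimed.
fn scratch_roundtrip(path: &str, data: &[u8]) -> std::io::Result<Vec<u8>> {
    let mut f = OpenOptions::new()
        .read(true)
        .write(true)
        .create_new(true) // fail if the name already exists
        .open(path)?;
    fs::remove_file(path)?; // gone from the directory, but still usable
    f.write_all(data)?;
    f.seek(SeekFrom::Start(0))?;
    let mut out = Vec::new();
    f.read_to_end(&mut out)?;
    Ok(out)
}

fn main() -> std::io::Result<()> {
    let back = scratch_roundtrip("/tmp/unlinked_demo.tmp", b"scratch")?;
    println!("{} bytes round-tripped", back.len()); // 7 bytes round-tripped
    Ok(())
}
```

         | Unlike O_TMPFILE, the name does briefly exist in the
         | directory, so there is a small window where it can be seen.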
        
           | bsder wrote:
           | > > fopen(..., 'r') could, by default, operate on a COW
           | snapshot of a file rather than on the raw file on the
           | filesystems. No more having to worry about races with other
           | processes.
           | 
           | > Another "performance goes to trash" option. Sure, now more
           | palatable with NVMes getting everywhere, but still a massive
           | footgun
           | 
           | Only because the filesystems aren't built that way. ZFS could
           | handle this just fine.
           | 
           | The fact that adding SQLite into your application is often
           | _faster_ than doing raw disk accesses yourself
           | says that our abstractions are _wrong_.
        
           | crabbone wrote:
           | You are thinking about it wrong.
           | 
           | I'm not the author of the above proposal, but I'd imagine the
            | implementation to be more in the same spirit as io_uring or
            | eBPF: batch many operations together and execute them all at
            | once, without interruption, without crossing multiple times
            | between user-space and kernel-space, and without resorting
            | to implementing the same guarantees on the side of the
            | application, which, inevitably, are going to be less
            | efficient than those
           | provided by the platform.
           | 
           | You fall into the same trap as many others who benchmark
           | individual tiny features w/o seeing the larger picture where
           | these features combine to accomplish the end goals.
           | Applications want sane correctness guarantees, and if they
           | try to accomplish those goals using existing tools they will
           | have to use multiple features. Perhaps, individually, those
           | features perform well, but in a useful (from the application
           | standpoint) combination, they don't. And that combination is
           | the one you need to benchmark.
        
             | josephg wrote:
             | Yeah my expectation is that this sort of thing would
             | dramatically improve performance. Doing atomic operations
             | on filesystems right now involves an awful lot of
             | fdatasync() calls and some creative use of checksumming to
             | avoid torn writes. A better designed API wouldn't need all
             | that guff, saving me (the application developer) from a lot
             | of headaches and bugs. The kernel could almost certainly do
             | a better job of all of that work internally, with less
             | overhead than I can from user space.
             | 
             | My much more modest feature request from the kernel is a
             | write barrier. If I'm using double-buffered pages or a
             | write-ahead log, right now my code says write(); fsync();
             | write(); in order to ensure the log is written before
             | updating the data itself. I don't need the operating system
             | to flush everything immediately - I just need to ensure the
             | first write happens entirely before the second write
             | begins. Like, I need to ensure order. A write barrier
             | command is all I need here.
             | 
             | The posix filesystem apis are a sloppy mess. I have no
             | doubt that the right apis would make my life better _and_
             | improve the performance of my software.
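             | A sketch of that write(); fsync(); write(); ordering in
             | Rust (file handles and record format are illustrative);
             | the sync_data() exists only to order the two writes,
             | which is exactly what a kernel write barrier would
             | replace:

```rust
use std::fs::File;
use std::io::Write;

/// Write-ahead pattern: the log record must be durable before the
/// data file is touched, so crash recovery can replay or roll back.
/// The fsync between the writes enforces that ordering, but it also
/// forces an immediate flush we don't strictly need; that extra cost
/// is what a write barrier would remove.
fn logged_update(log: &mut File, data: &mut File, rec: &[u8]) -> std::io::Result<()> {
    log.write_all(rec)?;
    log.sync_data()?; // ordering point: log reaches disk before the data write
    data.write_all(rec)?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    let mut log = File::create("/tmp/wal_demo.log")?;
    let mut data = File::create("/tmp/wal_demo.dat")?;
    logged_update(&mut log, &mut data, b"update-1")?;
    Ok(())
}
```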
        
           | thefaux wrote:
           | > Already an option O_TMPFILE
           | 
           | This is not portable.
        
             | karatinversion wrote:
             | Well obviously, this is a discussion of the limitations of
             | posix.
             | 
             | (Of course, as we see time and again, the real portability
             | solution is for users to standardise on a single platform)
        
         | elij wrote:
         | Same issue with networking (specifically per-packet
         | heuristics); even Java hasn't got that right either.
        
         | surajrmal wrote:
         | Per application isolated storage is becoming more and more
         | popular. Docker containers, flatpaks, Android apps, etc. are
         | all providing isolated storage by default.
         | 
         | Recursive locks are a nightmare. You cannot reason about lock
         | acquisition order and have to invent new ways to make it safe.
        
         | pclmulqdq wrote:
         | The "file" abstraction sucks. However, it's so deeply ingrained
         | in everything we do that it's nearly impossible to do anything
         | about it. At the language level, if your files don't behave the
         | way Unix files do, then you will have questions from your
         | users, and things like ported databases will not work.
         | 
         | Mutexes and locks are also a really tricky API, but making them
         | re-entrant has performance and functionality costs - you
         | basically need to store the TID/PID of the thread that holds
         | the lock inside the lock.
         | 
         | I'm sure there's a crate for a mutex-wrapped datatype, trading
         | speed for ease of use, but if not, it's likely a very easy
         | crate to put together.
        
           | dietrichepp wrote:
           | I agree that it sucks but I do think that it's possible to
           | solve most of the problems.
           | 
           | On Linux, there are some new APIs you see crop up every once
           | in a while. For apps running in the data center, you can use
           | network storage with whatever API you want, instead of just
           | using files. And on the Mac, there have been a couple shifts
           | in filesystem semantics over the years, with some of the
           | functionality provided in higher-level userspace libraries.
           | 
           | Solving the problems for databases seems really hard, just
           | based on all the complicated stuff you see inside SQLite or
           | PostgreSQL, and the scary comments you occasionally come
           | across in those codebases. But a lot of what we use files for
           | is just "here is the new data for this file, please replace
           | the file with the new data", and that problem is a lot more
           | tractable. Another use case is "please append this data to
           | the log file". Solving the use cases one-by-one with
           | different approaches seems like a good way forward here.
        
       | Georgelemental wrote:
       | Nice article.
       | 
        | - The "functional abstraction" issues could some day be addressed
       | with view types:
       | https://smallcultfollowing.com/babysteps//blog/2021/11/05/vi...
       | 
       | - "Newtype abstraction" can be made nicer with third-party
       | crates, or maybe in the future with project safe transmute
       | 
       | - "Views and bundles" I run into this one all the time, it's
       | annoying
       | 
       | - "Deref patterns" could allow pattern matching to see through
       | boxes one day, but there are unresolved design issues
        
       | bornfreddy wrote:
       | I only wrote a short "helloworld.rs" in Rust, but it looks way
       | more complex than any other language I have tried. Which is a
       | shame because I really like the idea of safe(r) C. Is there
       | really no simpler way to achieve memory safety without GC? I'm
        | not talking about the borrow checker, but the whole language... In
       | other words, is the complexity of Rust a consequence of memory
       | handling or of the design decisions?
        
         | mmastrac wrote:
          | Are you doing more than this?
          | 
          |     pub fn main() {
          |         println!("Hello, world!");
          |     }
         | 
         | If you're hitting the borrow checker and you haven't found
         | yourself at that point because a profiler or benchmark
         | suggested it, you might just be trying to access too much of
         | Rust's power for what you need.
         | 
         | You can always just throw everything inside of Rc<RefCell<...>>
         | if you're using a single thread, or
         | Arc<Mutex<...>>/Arc<RwLock<...>> if you're doing multi-
          | threading. It _looks_ expensive, but what you end up with is
          | approximately the same accounting you find in higher-level
          | languages, but with the same guarantees.
        
           | mr_00ff00 wrote:
           | Isn't it kinda a catch-22 with Arc? That's considered a form
           | of garbage collection.
           | 
            | It's like, if you can use Arc then you can use garbage
            | collection, in which case why battle the borrow checker
            | instead of just using Elixir or something with better
            | guarantees and an easier time?
        
       | pornel wrote:
       | Haskell also solves data races and use-after-free (with
       | immutability and GC), so it's not surprising the author isn't
       | impressed by Rust's fearless concurrency. But debugging deadlocks
       | in threaded code is a piece of cake compared to the hell of data
       | races (attach debugger, and stack traces will show what is
       | deadlocking vs races happening randomly, often impossible to
       | reproduce in a debugger).
        
       | hkalbasi wrote:
       | Among things mentioned in the article, pattern matching of
        | strings is already available on nightly.
        
       | mmastrac wrote:
       | I think this article is missing a mention of
       | #[repr(transparent)], which is supposed to help bridge the
       | problem of orphan rules, etc. I'm not sure why there isn't a
       | guarantee that Rust will treat a wrapped type with a single field
       | in the same way as a naked type, but I suspect it's because they
       | Just Haven't Gotten To It Yet.
       | 
       | https://www.reddit.com/r/rust/comments/kbyb6z/what_is_the_pu...
       | 
       | Also, the "reference" issue described in the article later is
       | painful, but can be solved w/the Yoke or Rental crates right now.
       | I hope those make it into the stdlib, however.
        
         | tialaramex wrote:
         | > I'm not sure why there isn't a guarantee that Rust will treat
         | a wrapped type with a single field in the same way as a naked
         | type, but I suspect it's because they Just Haven't Gotten To It
         | Yet.
         | 
         | What do you mean by "treat it in the same way" ?
         | 
         | #[repr(transparent)] does promise you something with identical
         | layout and alignment. If whatever is in there is actually the
         | same kind of thing then the transmute will work, but the
         | problem is - what if it's not?
         | 
          | Take nook::BalancedI8 and std::num::NonZeroI8: these both
          | have the same representation as Rust's i8 built-in type;
          | they're one-byte signed integers. But, wait a minute:
          | Option<BalancedI8> and Option<NonZeroI8> also have the same
          | representation.
         | 
         | Now, if that single byte is the value 9, we're fine, all five
         | of these types agree that's just 9, no problem. Same for -106
         | or 35 or most other values.
         | 
         | But, for 0 it's a problem. i8, BalancedI8 and
         | Option<BalancedI8> all agree that's zero. However NonZeroI8
         | can't conceive of this value, we've induced Undefined
         | Behaviour, and Option<NonZeroI8> says this is None.
         | 
         | Similarly at -128, i8, NonZeroI8 and Option<NonZeroI8> all
         | agree that's -128, their minimum possible value. However the
         | minimum value of BalancedI8 is -127 so this representation is
          | Undefined Behaviour, and in Option<BalancedI8> that's None.
        
       | retrocat wrote:
       | > We know that moving `Arc<Db>` will not change the location of
       | the `Db` object, but there is no way to communicate this
       | information to rustc.
       | 
       | There is! `std::pin`[0] does just this (and `Arc<T>` even
       | implements `Unpin`, so the overall struct can be `Unpin` if
       | everything else is `Unpin`). The page even includes a self-
       | referential struct example, though it does use `unsafe`.
       | 
       | [0]: https://doc.rust-lang.org/std/pin/index.html
        
       | swsieber wrote:
       | I believe there is a group working on "safe transmute", which
       | would address the newtype issues... safely. Not just memory
       | safety, but invariant safe, to prevent transmutes in safe rust
       | when the newtype wrapper wants to protect its own invariants.
        
         | shirogane86x wrote:
         | This sounds really interesting (and also eerily similar to
         | haskell's Coercible and roles), is there any material on the
         | matter somewhere? I'd love to see what solution they're
         | thinking of working towards, seeing as this is a pain point
         | that I encounter somewhat frequently.
        
           | burntsushi wrote:
           | RFC PR: https://github.com/rust-lang/rfcs/pull/2981
           | 
           | RFC text: https://github.com/jswrenn/rfcs/blob/safer-
           | transmute/text/00...
           | 
            | I don't know what the current status is, but it is very
            | exciting work.
           | 
           | There are also crates in this domain that work today. The two
           | I'm aware of are https://docs.rs/zerocopy/latest/zerocopy/
           | and https://docs.rs/bytemuck/latest/bytemuck/
        
       | alcover wrote:
       | fn f<T: Into<MyType>>(t: T) -> MyType { t.into() }
       | 
       | Picturing a group of nuclear-winter survivors trying to restart a
       | coal plant. Found the control room. Inside, a working diesel
       | generator, a computer and a Rust manual.
        
         | nequo wrote:
         | The binary would most likely work correctly so they could
         | restart the coal plant, and read the Rust manual later to
         | understand the source code.
        
           | badrequest wrote:
           | And who wouldn't want to run a binary that compiles to some
           | unknown effect after a devastating apocalypse?
        
           | alcover wrote:
           | I agree my fiction is leaky, but it sounds like a nice
           | prompt.
        
         | Klonoar wrote:
         | Are you just being snarky or are you actually assuming that is
         | code you'd write?
        
           | alcover wrote:
            | Being gently snarky. The code is from the article and
            | frankly looks like an irradiated still-born.
        
             | tialaramex wrote:
             | I find Rust way more readable than the similarly powerful
              | C++.
              | 
              |     fn f<T: Into<MyType>>(t: T) -> MyType { t.into() }
             | 
              | We've got an introducer keyword 'fn'. We're defining a
              | function named f. Cool, in C++ I might have gotten a long
             | way before I found out what the function's name was.
             | Hopefully this is an example name, obviously don't really
             | just name your functions "f"
             | 
             | <T: Into<MyType>> means there's a type parameter here, our
             | f function can work with different types, T, however, it
             | also constrains the type parameter by saying it must
             | implement a Trait, named Into<MyType>. In Rust we're
             | allowed to also write these type constraints separately,
             | and if they're complicated it's bad style to put them here,
             | but this constraint was very simple.
             | 
             | (t: T) means the f function takes a single parameter, named
             | t (again, please give things better names in real life
             | people) of type T, that T is the one we just met, it's some
             | type which implements Into<MyType>
             | 
             | -> MyType means the function returns a value of MyType.
             | 
             | { t.into() } is the entire function definition.
             | 
             | This function just calls the method into() on the parameter
             | t. How does it know it can call into() ? If you're a C++
             | programmer you might assume it's just duck typed, C++ just
             | does text replacement here which is how you get those
             | horrible multi-page scrolling template errors, but nope, in
             | Rust it knew because the thing we know about t is that its
             | type is T, and the only thing we know about T is that it
             | implements Into<MyType> which is a trait with exactly this
             | into() method defined, where the return type of that into()
             | method will be MyType. We actually didn't know how to do
             | almost anything else with t _except_ call into()
        
               | alcover wrote:
               | I wasn't expecting this. Thank you very much.
               | 
               | Beyond my snark, I recognize it's no easy task to allow
               | these abstractions into a friendly syntax.
        
       | ufo wrote:
       | What's the idiomatic way to move the self.x+=1 into a separate
       | function? (From the "Function Abstraction" example)
        
         | duped wrote:
         | https://play.rust-lang.org/?version=stable&mode=debug&editio...
         | 
         | But to be honest that's bad code.
        
         | edflsafoiewq wrote:
         | Construct references to the fields of self inc needs at the
         | call site and pass those instead of all of self, eg. inc(&mut
         | self.x).
        
         | winstonewert wrote:
         | I've found success in such cases by splitting the struct in
         | question. This example is a bit contrived, but you could do
          | something like:
          | 
          |     self.counter.inc();
         | 
         | Where `self.counter` is a new struct that holds `x`. Because
         | `counter` is its own struct field, I can call mutable methods
         | on it while holding mutable references to other fields.
         | 
         | What I've found is that often the current "object" naturally
         | decomposes into different parts each of which can have their
         | own methods that only need to work with their particular subset
         | of the fields.
        
       ___________________________________________________________________
       (page generated 2023-02-15 23:02 UTC)