[HN Gopher] WinBtrfs - an open-source btrfs driver for Windows
       ___________________________________________________________________
        
       WinBtrfs - an open-source btrfs driver for Windows
        
       Author : jiripospisil
       Score  : 273 points
       Date   : 2024-04-06 21:59 UTC (1 day ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | rkagerer wrote:
       | Awesome!
       | 
       | Anyone using this long term or in production who can comment on
       | how it's been working?
       | 
        | I see TRIM is supported. Is RETRIM supported as well (or
        | whatever is needed during drive optimization to release areas
        | that didn't get TRIMmed the first time due to a full command
        | queue)?
       | 
       | Could this serve as an effective NTFS replacement with data
       | parity for those who don't like ReFS?
       | 
       | How mature is it compared to ZFS on Windows?
        
         | jiggawatts wrote:
         | ReFS with Storage Spaces already serves this purpose and is
         | integrated and fully supported.
         | 
         | From what I've heard, BTRFS has a crazy long list of defects
         | where it'll lock up or corrupt data if you so much as look at
         | it wrong.[1]
         | 
         | Using something that is unreliable at best on its native OS
         | shoehorned into Windows is madness. Fun for a lark, sure, but I
         | would never ever entrust this combination with any actual data.
         | 
         | [1] "It works for me on my two disk mirror" is an anecdote, not
         | data.
        
           | rkagerer wrote:
           | Thanks.
           | 
           | I tried ReFS when it first came out and it was terribly slow
           | (with data parity on), and Storage Spaces was obscure to set
           | up and manage. Has the landscape improved?
        
             | mgerdts wrote:
             | On WS2022 without patches I noticed that Storage Spaces was
             | only queueing one IO per NVMe device. With current patches
             | queuing is fixed and performance is much better. I think
             | this was fixed sometime in 2023. I'm pretty sure both NTFS
             | and ReFS were affected.
        
               | jiggawatts wrote:
               | Ah, that would explain the absurdly bad I/O performance I
               | was seeing in Azure VMs that had the new NVMe virtual
               | disk controllers!
               | 
               | I had spoken with some of the teams involved and they
               | were rather cagey about the root cause, but at least one
               | person mentioned that there were some fixes in the
               | pipeline for Windows Server 2022 NVMe support. I guess
               | this must have been it!
        
           | hsbauauvhabzb wrote:
           | > [1] "It works for me on my two disk mirror" is an anecdote,
           | not data.
           | 
            | While it may well be correct that the quote is in fact an
            | anecdote, the following is also an anecdote: 'From what
            | I've heard, BTRFS has a crazy long list of defects where
            | it'll lock up or corrupt data if you so much as look at it
            | wrong.'
        
             | nisa wrote:
              | Made the mistake of using btrfs for a Hadoop cluster at
              | university back in the kernel 4.x days, after reading
              | that SLES uses it and after reading an interview on LWN
              | with someone important - I think the maintainer at the
              | time - who deemed it stable. This must be 10 or 12 years
              | ago, and it was a wild ride: crashes, and manual
              | recovery on 200 machines using clusterssh to get the
              | partitions to mount again. Got out-of-disk-space errors
              | on a 16 TB RAID 1 (which is not a real RAID 1) at 5%
              | usage - lots of sweat I'd rather have avoided. Should
              | have just used ext4 in hindsight.
              | 
              | After that experience I decided not to touch it anymore.
              | I'm sure there is a name for that bias but I don't care.
              | Got burned badly. Lots of people probably had similar
              | experiences, and that's where that's coming from.
              | Reading the mailing list archives from that time might
              | also be useful to convince yourself that it was more
              | than an anecdote.
        
               | hsbauauvhabzb wrote:
                | I'm not disputing anything about the factual
                | (in)correctness of the anecdote; I'm pointing out that
                | GP is providing an anecdote while disputing anecdotes
                | that they don't agree with.
                | 
                | Provide actual data that's recent. Linux 4.x was what,
                | 10 years ago? Cars are substantially safer now than
                | they were 10/20/50 years ago, so who's to say your
                | experience with the file system wouldn't be different
                | now?
        
               | Haemm0r wrote:
                | Same could be said about cars: why ever buy a [insert
                | brand] again after you've been burned by its
                | reliability or other issues?
               | 
               | You probably just don't, as the alternatives are good and
               | plenty.
        
               | jiripospisil wrote:
               | > You probably just don't, as the alternatives are good
               | and plenty.
               | 
                | Which cannot be said about file systems on Linux that
                | support metadata+data checksums and repair, though. As
                | far as I'm aware, the only file systems which could
                | realistically be used are btrfs and ZFS (bcachefs
                | looks promising but isn't there yet). ZFS is not even
                | part of the kernel, so you have to compile it yourself
                | and hope it actually compiles against your kernel
                | despite API changes.
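
A minimal sketch of what "metadata+data checksums and repair" buy you, in Rust. This is purely illustrative - the names and the FNV-1a checksum are my own choices, not btrfs's actual on-disk format or code: every block carries a checksum, a read verifies it, and a two-copy mirror can rewrite a corrupt copy from the good one.

```rust
// Toy model (hypothetical, not btrfs): checksummed blocks on a 2-way mirror.

fn fnv1a(data: &[u8]) -> u64 {
    // FNV-1a, a simple stand-in for the filesystem's real checksum (e.g. crc32c).
    let mut h: u64 = 0xcbf29ce484222325;
    for &b in data {
        h ^= b as u64;
        h = h.wrapping_mul(0x100000001b3);
    }
    h
}

struct Block {
    data: Vec<u8>,
    csum: u64, // stored alongside the data, like a checksum tree entry
}

impl Block {
    fn new(data: &[u8]) -> Self {
        Block { data: data.to_vec(), csum: fnv1a(data) }
    }
    fn ok(&self) -> bool {
        fnv1a(&self.data) == self.csum
    }
}

/// Read from a two-copy mirror: return data from a verified copy and
/// repair (rewrite) any copy whose checksum does not match.
fn read_mirrored(copies: &mut [Block; 2]) -> Option<Vec<u8>> {
    let good = copies.iter().position(|c| c.ok())?; // None => both corrupt
    let data = copies[good].data.clone();
    for c in copies.iter_mut() {
        if !c.ok() {
            c.data = data.clone(); // "scrub": fix the bad copy from the good one
            c.csum = fnv1a(&c.data);
        }
    }
    Some(data)
}

fn main() {
    let mut mirror = [Block::new(b"hello"), Block::new(b"hello")];
    mirror[0].data[0] ^= 0xff; // simulate silent corruption on one disk
    let data = read_mirrored(&mut mirror).expect("one copy was good");
    assert_eq!(data, b"hello"); // the read still returns correct data
    assert!(mirror[0].ok());    // and the corrupt copy was repaired
}
```

The point of the sketch: a plain filesystem in the same situation would happily return the flipped bytes, which is why the checksum+repair combination is the feature being discussed.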
        
               | Haemm0r wrote:
               | True dat.. :-)
               | 
                | Just wanted to point out that it is "normal" for
                | people to avoid the thing that did not work out for
                | them in the past.
        
               | grumpyprole wrote:
               | It's unfortunately a very common anecdote over the last
               | 10 years (and a similar experience to my own). And to be
               | honest, it's a red flag with how this critical system
               | component is being developed.
        
               | yau8edq12i wrote:
               | What changed with respect to car safety compared to 2014?
               | If anything, the recent trend of putting every control in
               | touchscreen interfaces has made cars less safe.
        
             | yarg wrote:
             | Witnessing defects means that they exist; witnessing no
             | defects does not mean they don't.
        
               | hsbauauvhabzb wrote:
               | If that's the case, prove the giant Flying Spaghetti
               | Monster doesn't exist.
        
               | nailer wrote:
               | I think you've misread the parent comment. Witnessing no
               | FSM does not mean there is no FSM.
        
               | hsbauauvhabzb wrote:
               | Assuming FSM is referring to defects, witnessing no
               | defects increases confidence that no defects exist in the
               | observed state.
               | 
                | Conversely, witnessing defects does not itself prove
                | defects exist if the test cases were not scientific;
                | it increases the confidence that defects exist, but
                | there is some probability that an unrelated fault (bad
                | RAM, a kernel error, hardware failure, solar flares)
                | could have caused the issue.
                | 
                | But there's also a lot of evidence to suggest Btrfs
                | has had a lot of defects resolved in recent years, so
                | it's also important to note that as time moves
                | forward, the number of existing defects - and likely
                | the rate of introducing new ones - should decrease.
                | 
                | I should add I've had minimal skin in this game until
                | yesterday. I chose btrfs for two systems for snapshot
                | support, but that's in addition to regular backups on
                | another host, because it's silly to trust any single
                | compute node regardless of file system.
        
               | nailer wrote:
               | > Assuming FSM is referring to defects
               | 
               | No. It is referring to the Flying Spaghetti Monster, but
               | is an analogy for anything including defects. This is
               | discussion about epistemology not filesystems.
        
               | hsbauauvhabzb wrote:
               | Replace 'defects' with 'miracles' and/or 'science'
               | depending on which makes more sense.
        
               | viraptor wrote:
                | Yeah, an interesting scenario is that many people
                | compare the btrfs behaviour to "never had issues with
                | extfs", when in practice it's "extfs couldn't have
                | told me about this issue even if it existed".
        
             | unixhero wrote:
              | Nope. It works perfectly on both my striped (RAID 0)
              | and mirrored (RAID 1) arrays.
        
           | gerdesj wrote:
            | One place where ReFS is rather decent is "reflinks" -
            | where identical blocks are stored once and, in the
            | background, the rest become simply links to the one block.
            | 
            | That is rather useful in backup systems.
            | 
            | XFS also supports reflinks, amongst other things, and is
            | way older than ReFS, hence I consider it out of beta
            | (which ReFS isn't, to me).
            | 
            | I don't trust data to ReFS yet - it's a fun project that
            | will no doubt prove itself one day. For now, Windows boxes
            | run NTFS and Linux runs ext4 or XFS.
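
The block-sharing idea described above can be sketched as a toy content-indexed block store - hypothetical names, not ReFS's or any real backup tool's implementation: identical blocks are stored once and later occurrences become references to the first copy. (Strictly, a reflink is a copy-on-write clone; the pass that finds and merges identical blocks is deduplication, but the space-saving effect for backups is the same.)

```rust
// Toy block-level dedup store: duplicate payloads share one stored block.
use std::collections::HashMap;

struct BlockStore {
    blocks: Vec<Vec<u8>>,           // unique block payloads
    index: HashMap<Vec<u8>, usize>, // payload -> block id (toy: keyed by content)
}

impl BlockStore {
    fn new() -> Self {
        BlockStore { blocks: Vec::new(), index: HashMap::new() }
    }

    /// Store a block and return its id; a duplicate returns the
    /// existing id instead of consuming more space.
    fn put(&mut self, data: &[u8]) -> usize {
        if let Some(&id) = self.index.get(data) {
            return id; // already stored: this "copy" is just a reference
        }
        let id = self.blocks.len();
        self.blocks.push(data.to_vec());
        self.index.insert(data.to_vec(), id);
        id
    }
}

fn main() {
    let mut store = BlockStore::new();
    let a = store.put(b"4096 bytes of backup data");
    let b = store.put(b"4096 bytes of backup data"); // identical copy
    let c = store.put(b"different data");
    assert_eq!(a, b);                  // second copy shares the first block
    assert_ne!(a, c);
    assert_eq!(store.blocks.len(), 2); // only unique blocks consume space
}
```

A real implementation would key the index on a strong hash of the block rather than the payload itself, but the sharing semantics - and the corruption concern raised in the reply below this comment - are the same.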
        
             | Gabrys1 wrote:
              | What happens if the very important directory you copied
              | 11 times (just to be sure) ends up producing the same
              | blocks and doesn't actually get duplicated as you
              | expected? And now that one shared block gets
              | corrupted...
        
               | defrost wrote:
                | Back in the day, if I copied (geophysical air) survey
                | data 11 times and put all the copies in the same
                | walk-in fireproof safe (in the hangar), that offered
                | no real additional security in the event of a direct
                | hit by an aircraft and an explosion while the door was
                | open.
                | 
                | If you're going to make 11 copies, they _have_ to go
                | to different physical locations - different devices at
                | least, geographically different places to be sure - or
                | it's pointless.
                | 
                | In this instance, block de-duping on a single device
                | makes sense .. expecting multiple copies on the same
                | device (with or without duplicate block reuse) to
                | offer any additional safety does not.
        
           | Dylan16807 wrote:
           | ReFS only got put back into normal Windows 11 a few months
           | ago. That's a good sign for the future, but it was looking
           | bad for a long time.
           | 
           | Also if you turn on data checksums, my understanding is it
           | will delete any file that gets a corrupted sector. And you
           | can only override this behavior on a per-file basis. Unless
           | this changed very recently?
        
             | MarkSweep wrote:
             | Oh, is it no longer exiled to Windows Pro for Workstations?
             | This feature comparison chart still has this there:
             | 
             | https://www.microsoft.com/en-us/windows/business/compare-
             | win...
             | 
             | For what it's worth, regular Windows 10 & 11 Pro (and other
             | editions maybe?) have supported reading and writing ReFS
             | this whole time. It's just the option to create a new
             | volume that's been disabled.
        
               | marwis wrote:
                | It still sort of is, but you can create a Dev Drive,
                | which is based on ReFS.
        
           | jiripospisil wrote:
           | > From what I've heard, BTRFS has a crazy long list of
           | defects where it'll lock up or corrupt data if you so much as
           | look at it wrong
           | 
           | The list cannot be _crazy_ long if Synology uses it for their
           | NASes.
        
             | yjftsjthsd-h wrote:
             | > The list cannot be crazy long if Synology uses it for
             | their NASes.
             | 
             | Synology uses a hybrid BTRFS+mdadm arrangement specifically
             | to deal with reliability problems with BTRFS RAID:
             | https://kb.synology.com/en-
             | us/DSM/tutorial/What_was_the_RAID...
        
               | wolletd wrote:
               | Which is kind of the point. BTRFS only has issues with
               | RAID5/6 configurations. Using it as a filesystem for a
               | single disk or partition should be totally fine.
        
               | Dalewyn wrote:
                | Everything I've read about btrfs's RAID5/6
                | deficiencies says that it can't tolerate sudden loss
                | of power (aka the write hole problem), which I think
                | is fine so long as you are aware of it and implement
                | appropriate safety measures such as a UPS or APU.
               | 
               | And besides, if you are doing RAID you are probably
               | concerned with the system's uptime which probably means
               | you will have implemented such measures anyway.
               | 
               | Note that, yes, I'm aware most home users either aren't
               | aware (nobody RTFM) or are too lazy/cheap to buy a UPS
               | from Office Depot. So perhaps btrfs is warning people to
               | save them from themselves.
        
               | dark-star wrote:
               | A UPS will not much improve the reliability against
               | sudden power loss. At least here in Europe it is much
               | more likely that a PSU or other component fails than that
               | the power line is suddenly interrupted.
               | 
               | And lost writes are a problem that all filesystems have.
               | I recommend reading the paper "Parity Lost and Parity
               | Regained" by Krioukov at USENIX 08...
        
               | amaccuish wrote:
               | Kernel panic too...
        
               | Mister_Snuggles wrote:
               | Anecdotally, this is untrue.
               | 
               | Personally, BTRFS is the only filesystem that has ever
               | caused me any data loss or downtime. I was using a single
               | disk, so it should have been the perfect path. At some
               | point the filesystem got into a state where the system
               | would hang when mounting it read/write. I was able to
               | boot off of a USB stick and recover my files, but I was
               | unable to get the filesystem back into a state where it
               | could be mounted read/write.
               | 
               | At work, we used to run BTRFS on our VMs as that was the
               | default. Without fail, every VM would eventually get into
               | a state where a regular maintenance process would
               | completely hang the system and prevent it from doing
               | whatever task it was supposed to be doing. Systems that
               | wrote more to their BTRFS filesystems experienced this
               | sooner than ones that didn't write very much, but
               | eventually every VM succumbed to this. Eventually the
               | server team had to rebuild every VM using ext4.
               | 
               | I know that anecdotes aren't data, but my experience with
               | BTRFS will keep me from using it for anything even
               | remotely important.
        
               | grumpyprole wrote:
                | Unfortunately you got what you paid for! :) No one in
                | the Linux world appears to be seriously investing in
                | _engineering_ a robust and reliable filesystem, with
                | e.g. correctness proofs. We have only hobby projects.
        
               | Mister_Snuggles wrote:
               | At work, this all happened on a commercial Linux
               | distribution which we do pay for. As far as I recall,
               | their support was unable to resolve the issue, hence
               | rebuilding all those VMs. I'm not on the server team, so
               | I don't know many details, but I was affected by this
               | issue and it caused a lot of grief across the
               | organization.
               | 
               | So no, I don't think we got what we paid for.
        
               | grumpyprole wrote:
                | Are you sure btrfs is supported in production by your
                | commercial Linux distribution? I would be surprised if
                | that were true. Red Hat and Ubuntu do not support it.
        
               | Mister_Snuggles wrote:
               | It was at the time, it may not be now.
        
               | yjftsjthsd-h wrote:
               | Facebook literally uses it in production. There are
               | plenty of insults we can use, but hobby project is not
               | one of them.
        
               | Mister_Snuggles wrote:
               | I honestly find it weird when I hear about companies like
               | Facebook and Synology using it.
               | 
               | Facebook could easily work around failures, they've
               | surely got every part of their infrastructure easily
               | replaceable, and probably automated at some level. I'm
               | sure they wouldn't tolerate excessive filesystem
               | failures, but they definitely have the ability to deal
               | with some level of it.
               | 
               | But Synology deploys thousands of devices to a wide
               | variety of consumers in a wide variety of environments.
               | What's their secret sauce to make BTRFS reliable that my
               | work's commercial Linux distribution doesn't have? Surely
               | there's more to it than just running it on top of md.
               | 
               | Maybe in the years since I was burned by it things have
               | greatly improved. Once bitten, twice shy though - I don't
               | want to lose my data, so I'm going to stick to things
               | that haven't caused me data loss.
        
               | grumpyprole wrote:
               | Facebook presumably uses xz in production too and that is
               | a hobby project (as we all recently found out). My
               | understanding is that development of Btrfs was not
               | sponsored by any company and was entirely a "community
               | effort". It certainly would explain why it's perpetually
               | unfinished.
        
               | jiripospisil wrote:
               | I know but that's only one important part of what a file
               | system does. If the file system was otherwise totally
               | broken, they wouldn't use it.
        
             | tbyehl wrote:
             | Notably, Synology's agent-based backup software requires
             | BTRFS but will not back up BTRFS.
             | 
             | https://kb.synology.com/tr-
             | tr/DSM/help/ActiveBackup/activeba...
        
           | yjftsjthsd-h wrote:
           | Weirdly, it's _possible_ that this version could be more
           | stable /reliable/safe than the Linux version, since it's
           | apparently a wholly independent reimplementation. I suppose
           | it depends on whether BTRFS's problems stem from the
           | underlying data format or the actual code as written for the
           | Linux driver.
        
           | dark-star wrote:
           | ReFS is terrible. We have seen so many customers lose data on
           | ReFS that I started strongly advising everyone against using
           | it.
           | 
            | One example: if you (accidentally or on purpose) attach a
            | ReFS disk or LUN to a newer Windows version, it will be
            | silently upgraded to a new ReFS version without any
            | feedback (or chance to prevent it) for the user. There is
            | no way of attaching the disk to an older Windows version
            | afterwards. But that is not the real problem. The real
            | problem is that the upgrade runs as a separate
            | (user-space) process. If this process crashes, or your PC
            | crashes or reboots while it runs, your data is gone. And
            | there is no feedback on how long it still has to run
            | (we've seen multiple days on large volumes).
           | 
           | So yeah, maybe avoid ReFS for a few more years...
        
           | temac wrote:
            | Advising ReFS is a little bit insane though. I would
            | certainly not entrust my data to it either.
        
         | nyanpasu64 wrote:
          | One time I accidentally ran a Visual Studio build in a
          | btrfs git clone rather than on my main NTFS drive. By the
          | time I noticed and cancelled the build, there were two
          | folders with an identical name but different contents, and
          | I had to delete the folder name twice. I'd say the driver
          | has issues with concurrency.
        
           | Kwpolska wrote:
           | I once ran a `git clone` from WSL1 on the C: drive, and tried
           | to build a C++ project in VS. It complained that "EXAMPLE.H"
           | was not found. An "example.h" file did exist in the repo, and
           | my code asked for "example.h". Turns out WSL1 set some
           | obscure bit not known in Win32 land (but enforced by NTFS)
           | that makes the file names case-sensitive, while VS's path
           | normalisation expects a case-insensitive file system. Perhaps
           | this was related to your issue?
        
             | nyanpasu64 wrote:
              | On a separate occasion, I also got that issue (but
              | worse).
             | I once marked a NTFS folder as case-sensitive to help root
             | out all case mismatch bugs (to get a C++ project eventually
             | building on Linux), but then Visual Studio and CMake
             | started spitting out "file not found" errors _even for the
             | correct case_! I had somehow produced a  "cursed" folder
             | that could not be used for building code until I copied
             | (not moved) its contents to a regular case-insensitive NTFS
             | folder.
        
         | summermusic wrote:
         | I have run this casually on my main machine for a few years
         | now. I have a Windows partition, a Linux partition (btrfs on
         | LUKS), and a third btrfs partition where I kept my files.
         | 
         | I don't use it often, but when I do I don't even notice it.
         | It's as if Windows could just natively read btrfs all along.
         | This was without any "advanced" usage beyond simply accessing,
         | modifying, or deleting files.
        
         | Ciantic wrote:
         | Heads up, installing both WinBTRFS and OpenZFS on Windows may
         | have problems:
         | 
         | "Win OpenZFS driver and WinBtrfs driver dont play well with
         | each other"
         | 
         | https://github.com/openzfsonwindows/openzfs/issues/364
        
       | fsiefken wrote:
        | Would this make it possible to boot Windows 10 and 11 from a
        | btrfs-formatted Windows USB stick?
        
         | Cu3PO42 wrote:
          | Not on its own. You also need a different boot loader. The
         | author has an implementation called Quibble [0] that also
         | supports btrfs.
         | 
         | [0] https://github.com/maharmstone/quibble
        
         | Modified3019 wrote:
          | You can use Rufus to install 10/11 on a USB SATA/NVMe drive
          | enclosure as "Windows To Go".
          | 
          | In practice it works out pretty decently - in my experience
          | using it with Windows 10 daily for a while - with a few
          | caveats:
          | 
          | 1. You need a stable USB connection
          | 
          | 2. You need a USB drive enclosure with a controller chip
          | that is stable and doesn't overheat
         | 
          | 3. Your drive should be power-loss resistant. Unfortunately
          | there's no resource I know of that evaluates power-loss
          | handling. Some drives will have a bad time having power
          | suddenly cut. I've had good experiences with Intel
          | enterprise SATA SSDs and NVMe drives in a Dockcase with a
          | capacitor. If your drive stops showing up, a power cycle
          | might help: https://dfarq.homeip.net/fix-dead-ssd/
          | 
          | 4. Have automatic backups set up.
         | 
         | Very useful for performance testing and hardware firmware
         | updates that are windows only.
         | 
          | When switching between computers, I'll often have to boot,
          | Windows gets confused, and then I reboot. After that it
          | works.
          | 
          | However, I have no experience trying to make use of
          | WinBTRFS or the separate bootloader project, which has
          | apparently been broken for a few months.
         | 
         | Ventoy booting a windows VHD file might also be a decent option
        
           | ajolly wrote:
            | Does Windows To Go give you the same benefits as doing it
            | with VHDX files (you can snapshot and roll back state
            | easily)?
            | 
            | I've got a testing install of Windows on my main hard
            | drive that boots from VHDX for these reasons.
        
       | gertop wrote:
       | I recommend that everybody reads the README.
       | 
       | The author answers all the questions I've had - and much more!
        
       | develatio wrote:
       | How come this has "basic and advanced" (whatever that means ??)
       | RAID 5/6, while BTRFS itself doesn't?
       | (https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid5...)
        
         | dark-star wrote:
         | They call RAID0/1/10 "basic" RAID and RAID5/6 "advanced" RAID.
         | I have no idea why. Maybe because the former doesn't require
         | "advanced" parity calculations or something.
        
         | viraptor wrote:
         | That's not quite right. Linux btrfs supports raid5 in general,
         | but has known edge cases which make it not safe to use.
         | Basically it's "available, but experimental, for developers
         | only".
         | 
         | Winbtrfs only says the raid5 mode is one of the features, but
         | doesn't really address how well it works. The questions in a
         | related issue (https://github.com/maharmstone/btrfs/issues/293)
         | have been closed without real answers. I wouldn't risk raid 5/6
         | on it without getting good answers about the status / testing
         | from the developers first.
        
           | hsbauauvhabzb wrote:
            | I was under the impression that btrfs didn't support
            | RAID, but could be deployed on top of software RAID?
        
             | _flux wrote:
             | Well, that's the wrong impression.
             | 
             | Here's some Arch documentation about it, basically you just
             | create a btrfs on top of multiple devices and it works (to
             | some extent with raid5/raid6 as well):
             | https://wiki.archlinux.org/title/Btrfs#Multi-
             | device_file_sys... . Raid1 apparently works fine.
             | 
             | So if you want raid5/6, deploying on top of md is the
             | better option.
        
           | hnlmorg wrote:
           | I wouldn't risk this Windows driver on anything important
           | regardless of whether you use raid 5/6 or not.
           | 
           | I'm not taking anything away from the effort that has gone
           | into producing this. Just being realistic about the amount of
           | effort that is required to create a production ready file
           | system driver.
        
           | cesarb wrote:
           | > Linux btrfs supports raid5 in general, but has known edge
           | cases which make it not safe to use. Basically it's
           | "available, but experimental, for developers only".
           | 
            | I recall reading somewhere recently that the Linux btrfs
            | developers intend to fix these edge cases through an
            | on-disk layout change (IIRC, adding one more btree to the
            | filesystem). So unless this driver already has that
            | on-disk layout change, it's unlikely that these edge cases
            | have been addressed.
        
       | dang wrote:
       | Related:
       | 
       |  _WinBtrfs - A Windows driver for the next-generation Linux
       | filesystem Btrfs_ - https://news.ycombinator.com/item?id=15177002
       | - Sept 2017 (100 comments)
       | 
       |  _WinBtrfs v0.7_ - https://news.ycombinator.com/item?id=12794214
       | - Oct 2016 (1 comment)
        
       | BirAdam wrote:
       | It's really awesome that this was a complete reimplementation
       | with no Linux code, and it's additionally awesome that this is
       | available for both XP/2k3 and ReactOS. I will have to try it out
       | on one of my older machines :-)
        
         | userbinator wrote:
         | _and it 's additionally awesome that this is available for both
         | XP/2k3 and ReactOS_
         | 
         | ReactOS is supposed to be API-compatible with Windows, so
         | that's not too surprising.
        
           | pxc wrote:
           | It's not surprising, but it is really nice that this gives
           | ReactOS a nice, modern CoW filesystem.
        
         | jauntywundrkind wrote:
          | One of the interesting patterns happening in Rust is
          | IO-less ("sans-IO") libraries. I'm not sure where best to
          | link this phenomenon; here's an open issue for an IO-less
          | QUIC library, from 2019:
          | https://github.com/aiortc/aioquic/issues/4
          | 
          | It'd be so fracking sweet to see filesystems follow this
          | pattern. If we could re-use the file system logic, but
          | apply it to Windows or FUSE or Linux or wasm
          | linearly-addressed storage, that would allow such intensely
          | cool forms of portability/reuse & bending/hacking.
        
           | unshavedyak wrote:
           | How is this implemented in practice? Special care to keep io
           | on the outermost layers? Never thought about software in this
           | way. Seems really tough, but interesting
           | 
           | Wonder how well it scales to larger applications. I.e. is
           | there a code size where io-less becomes too difficult?
           | Perhaps performance concerns? Hmm
        
             | Arnavion wrote:
             | It's not really an "application" thing. It's meant to be a
             | design for libraries that implement protocols of some sort.
             | All the library API acts on byte buffers and leaves the
             | network socket etc stuff to the library user. So when the
             | library needs to write data to a socket, the API instead
             | returns a byte buffer to the caller, and the caller writes
             | it to the network socket. When the library needs to read
             | data from a socket, it instead expects the caller to do
             | that and then give the populated byte buffer to a library
             | function to ingest it.
             | 
             | Also, quite the opposite, it's *easier* to design a library
             | this way because it's strictly less code the library needs
             | to contain. Specifically in Rust it also has the advantage
             | that the library becomes agnostic to sync vs async I/O
             | since that's handled by the library user. Correspondingly,
             | it is slightly harder for the library user to use such a
             | library, but it's usually just a matter of writing a tiny
             | generic wrapper around the network socket type to connect
             | it to the library functions.
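
[Editor's sketch: the buffer-in/buffer-out shape described above, in a few lines of Python. The length-prefixed framing is invented for illustration; it is not the API of any real sans-io library.]

```python
# A minimal sans-io protocol object: it never touches a socket. The
# caller feeds received bytes in and drains bytes to send out, so the
# same object works over blocking sockets, asyncio, or a test harness.
class LengthPrefixedProtocol:
    def __init__(self):
        self._rx = bytearray()   # bytes received but not yet parsed
        self._tx = bytearray()   # bytes queued for the caller to send

    def send_message(self, payload: bytes) -> None:
        """Queue a message; the caller later writes data_to_send() to its socket."""
        self._tx += len(payload).to_bytes(4, "big") + payload

    def data_to_send(self) -> bytes:
        out, self._tx = bytes(self._tx), bytearray()
        return out

    def receive_data(self, data: bytes) -> list:
        """Ingest bytes read by the caller; return any complete messages."""
        self._rx += data
        messages = []
        while len(self._rx) >= 4:
            n = int.from_bytes(self._rx[:4], "big")
            if len(self._rx) < 4 + n:
                break   # wait for the caller to feed more bytes
            messages.append(bytes(self._rx[4:4 + n]))
            del self._rx[:4 + n]
        return messages

# The caller owns all I/O; here the "network" is just a local variable.
proto = LengthPrefixedProtocol()
proto.send_message(b"hello")
wire = proto.data_to_send()          # caller would sock.sendall(wire)
print(LengthPrefixedProtocol().receive_data(wire))  # -> [b'hello']
```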
        
               | mikepurvis wrote:
               | Nice from a testing standpoint too, since you can
               | trivially mock out the hardware.
               | 
               | That said, a lot of what's actually hard about IO is the
               | error/fault handling, imposing timeouts and backoffs and
               | all that jazz. At a certain point I'd wonder if
               | extracting this out to a separate interface might obscure
               | the execution flow in some of these scenarios.
        
               | rav wrote:
               | > That said, a lot of what's actually hard about IO is
               | the error/fault handling, imposing timeouts and backoffs
               | and all that jazz.
               | 
               | Application-level timeout/backoff handling is always
               | scary to me, because I don't know how to make robust
               | tests for it. I wonder if you couldn't use the same
               | I/O-less approach, and split the logic out into pure
               | functions that take the time passed/error state/... as
               | value arguments, instead of measuring the physical time
               | using OS APIs. It's probably not something for reusable
               | libraries, but it could still be a nice benefit to be
               | able to unit test in detail.
        
               | blegr wrote:
               | Split the re-triable action into one function, make a
               | wrapper function that re-tries if needed, and use a third
               | function that makes the decision to re-try and how long
               | to back off.
               | 
               | Then you can test the decision function trivially, the
               | re-try function by mocking the action and decision, and
               | the action function itself without back off interfering.
               | 
               | That's what you suggested, just saying that I did that in
               | a Python API client with the backoff library and the
               | result is pretty neat.
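
[Editor's sketch: a minimal Python version of that three-way split. The decision function is pure and takes the attempt count and error as plain values, in the spirit of the time-as-value idea above, so it tests without a clock. The policy numbers and the choice of ValueError as non-retriable are arbitrary placeholders.]

```python
# Three-way split: a pure decision function, a retry wrapper, and the
# action itself, each testable in isolation.
import time

def decide_retry(attempt, error):
    """Pure decision: return seconds to back off, or None to give up."""
    if attempt >= 3 or isinstance(error, ValueError):
        return None
    return 0.1 * (2 ** attempt)   # illustrative exponential backoff

def with_retries(action, decide=decide_retry, sleep=time.sleep):
    """Wrapper: runs the action, consulting the decision function on
    failure. Inject sleep= in tests to avoid real waiting."""
    attempt = 0
    while True:
        try:
            return action()
        except Exception as error:
            delay = decide(attempt, error)
            if delay is None:
                raise
            sleep(delay)
            attempt += 1

# The decision function tests trivially, with no clock involved:
print(decide_retry(0, OSError()))   # -> 0.1
print(decide_retry(3, OSError()))   # -> None
```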
        
               | mikepurvis wrote:
               | I love the idea of it all being totally abstract but in
               | my experience this stuff is usually tied in with
               | application level behaviours too, so you could end up
               | with a pretty messy API between the layers.
        
           | Arnavion wrote:
           | It's called sans-io in Python land, which is where I heard it
           | first.
           | 
           | https://sans-io.readthedocs.io/
           | 
           | I did it for one of my Rust projects back in 2018 https://git
           | hub.com/Arnavion/k8s-openapi/commit/9a4fbb718b119... , and
           | it's older than that in Python land.
        
             | dloss wrote:
             | The above sans-io page links to this PyCon 2016 talk:
             | 
             | Cory Benfield - Building Protocol Libraries The Right Way
             | https://youtu.be/7cC3_jGwl_U
        
           | endgame wrote:
           | Seems like a rediscovery of "pure functions" from the FP
           | world?
        
             | qrobit wrote:
             | Well, only "pure" in the sense no IO effect happens. I
             | doubt mentioned library neglects state or global variables
        
           | atq2119 wrote:
           | If the property of "io-lessness" becomes something statically
           | verifiable as part of dependency handling, it also seems
           | potentially beneficial as a guard against supply-chain
           | attacks.
        
             | GrayShade wrote:
               | A compromised IO-less file system library can still
               | synthesize malware files on a volume.
        
               | atq2119 wrote:
               | ... but only on the volume it is explicitly given access
               | to. So, if the library was IO-less (and didn't use unsafe
               | code), you could embed it in some tool, e.g. for
               | forensics, and not have to worry about it compromising
               | the security of the "host" system.
        
           | sideeffffect wrote:
           | I don't mean to be snarky in any way. I think this is
           | actually great development.
           | 
           | But isn't this just good old inversion of control, and
           | modularity with maybe some inspiration from Functional
           | Programming? Or even more generally, good software
           | architecture and engineering?
           | 
           | Anyway, I'm very happy to see this; the more code is
           | architected this way, the better for our whole industry.
        
       | westurner wrote:
       | > _See also Quibble, an experimental bootloader allowing Windows
       | to boot from Btrfs, and Ntfs2btrfs, a tool which allows in-place
       | conversion of NTFS filesystems._
       | 
       | The chocolatey package for WinBtrfs:
       | https://community.chocolatey.org/packages/winbtrfs
        
       | rustcleaner wrote:
       | Should have been ZFS. :*^(
        
         | Fnoord wrote:
         | Exists! [1]
         | 
         | ZFS seems to be the most cross-platform of the modern
         | filesystems. Although there's a Paragon driver for APFS for
         | Windows and a FOSS driver for native Linux APFS as well as one
         | for FUSE.
         | 
         | Personally, I keep track of bcachefs [2], which got merged in
         | Linux 6.7. But it won't be cross-platform.
         | 
         | [1] https://github.com/openzfsonwindows/openzfs
         | 
         | [2] https://bcachefs.org
        
       | graphe wrote:
       | What is the purpose of using this in production? I thought
       | people just ssh into Linux if they need it to just work. For my
       | own purposes I used to use an ext3 driver on Win7; it never
       | failed on me, and I've since switched to Linux.
        
         | yjftsjthsd-h wrote:
         | Some people want to access the same data volume from Linux and
         | Windows (see the person dual-booting upthread).
        
           | Zambyte wrote:
           | You can also just pass the partition to a VM and access the
           | VM storage however you want. I would trust that a lot more
           | than this, to be honest. Nothing against this project in
           | particular; I just don't find the idea of using a filesystem
           | driver on Windows to access a filesystem that Windows
           | doesn't normally support appealing. I don't really trust
           | Windows to handle that well :P
        
       | minroot wrote:
       | I tried to use it a few weeks ago on a btrfs hard drive, but I
       | couldn't make it work. Then I used WSL to access it. It worked
       | for a few runs, but then things just started to fail. It
       | wouldn't even get mounted. Then I realized I can just boot a
       | live ISO of Linux and copy/move files to the Windows drive and
       | to the btrfs drive. That's what I am doing now, using a Fedora
       | Workstation live ISO on a USB drive with Ventoy.
        
         | josteink wrote:
         | Sounds like an authentic experience. Now you can lose data to
         | btrfs on Windows too :-D
        
       | qwerty456127 wrote:
       | Why still use hardware RAID nowadays when we have BTRFS and ZFS?
        
         | lm411 wrote:
         | Performance, reliability, and BBU or CacheVault.
         | 
         | Hardware RAID is worth the money when uptime and performance
         | are important. I've seen good RAID cards keep a server running
         | where native direct SATA would have brought the server down.
        
           | Brian_K_White wrote:
           | As someone who used to use both in production for many years,
           | I do not miss the days of hardware raid. No contest. Would
           | never go back. The fancier and more expensive the worse.
           | 
           | The advantages were always theoretical and the disadvantages
           | were always real. It caused way more problems than it solved
           | or prevented, and the problems are worse because you are
           | essentially powerless to address them.
           | 
           | With software raid you have the control to address problems
           | even when you fall off the happy path, and infinite
           | flexibility wrt hardware and emergency recovery.
           | 
            | In the aggregate, hardware raid is less reliable, not
            | more, and performance is a draw at best.
        
           | ndsipa_pomu wrote:
           | The problems with "hardware" RAID are the proprietary disk
           | formats and the need to buy multiple RAID cards in case one
           | fails, as you may need to have the same vendor/version for it
           | to be a straight swap and still keep all your data. There can
           | also be issues with drivers, especially if the "hardware"
           | RAID is partly implemented by the driver. I've had issues in
           | the past with needing to put hardware RAID drivers into the
           | initramfs of Linux boxes just so that they can boot.
           | 
           | With software RAID, you can just plug the disks into other
           | servers without those kinds of problems.
        
       | mgaunard wrote:
       | I feel like btrfs has been in development since forever and is
       | getting no adoption at all.
       | 
       | When is the year of the btrfs file system coming?
        
         | 0dayz wrote:
         | It's already the default on openSUSE and Fedora, while Ubuntu,
         | Debian, Arch and Gentoo all support it, so it's not that hard
         | to adopt.
         | 
         | The issue is that btrfs still does not run rock solid on RAID
         | levels other than 0 and 1.
         | 
         | Plus its compression and performance are still far off from
         | what alternatives can provide (but this is also getting
         | better).
        
         | int_19h wrote:
         | It's the default on all current Synology NAS boxes, and has
         | been for quite a while.
        
       | poisonborz wrote:
       | Wanted to use it for a while but a glance at the github issues
       | was enough to nope out. BSODs, lockups, usage spikes, corruption.
       | I so much wish for a stable btrfs/zfs driver, I'd gladly throw my
       | credit card at it. I don't get why these things don't get more
       | traction.
        
         | cies wrote:
         | Because it's not supported by MS, and the driver makers cannot
         | read the code of Windows' kernel. So even when the problems
         | you mentioned are fixed, you will probably still not be able
         | to boot Windows from btrfs.
         | 
         | When in Rome, do as the Romans do. Maybe we should only run
         | Windows virtualized :)
        
         | beeboobaa3 wrote:
         | > I'd gladly throw my credit card at it
         | 
         | https://github.com/maharmstone/btrfs?tab=readme-ov-file#dona...
        
       | jcd000 wrote:
       | I dual boot and have been using this for a while now (the older
       | version). It does work, but some problems are to be expected.
       | While impressive, it is not production-level. For me that's fine
       | since I boot windows pretty rarely, but probably not for
       | everyone.
       | 
       | I would love to see that the new version works with fewer
       | problems.
        
       ___________________________________________________________________
       (page generated 2024-04-07 23:01 UTC)