[HN Gopher] Btrfs in Linux 6.2 brings Performance Improvements, ...
___________________________________________________________________
Btrfs in Linux 6.2 brings Performance Improvements, better RAID 5/6
Reliability
Author : pantalaimon
Score : 45 points
Date : 2022-12-12 22:06 UTC (53 minutes ago)
(HTM) web link (www.phoronix.com)
(TXT) w3m dump (www.phoronix.com)
| herpderperator wrote:
| I've been managing a raid6 ext4 array with mdadm for 10 years.
| Started with 4 x 4TB disks and kept adding, up to 11 disks now.
| It works reliably and as designed. Had a few disk failures and
| replaced them without issues. That's one of the nice things about
| mdadm vs ZFS: you can add and remove disks from the array as you
| see fit, rather than being forced to upgrade all disks if you
| want to increase the size of your array.
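|
| For reference, growing by one disk is roughly the following
| (device names and the final disk count are placeholders, and
| the ext4 resize only runs after the reshape finishes):
|
|     # add the new disk as a spare, then reshape to include it
|     mdadm --add /dev/md0 /dev/sdl
|     mdadm --grow /dev/md0 --raid-devices=12
|     # watch the reshape, then grow the filesystem on top
|     cat /proc/mdstat
|     resize2fs /dev/md0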
| dale_glass wrote:
| But is the RAID handling remotely sane yet? See
| https://arstechnica.com/gadgets/2021/09/examining-btrfs-linu...
|
| It has gems such as:
|
| * It won't boot on a degraded array by default, requiring manual
| action to mount it
|
| * It won't complain if one of the disks is stale
|
| * It won't resilver automatically if a disk is re-added to the
| array
|
| I think the first is the killer. RAID is a High Availability
| measure. Your system is not Available if it fails to boot.
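|
| (The manual action amounts to mounting with the degraded option,
| e.g. from a rescue shell - device path here is illustrative:
|
|     mount -o degraded /dev/sdb1 /mnt
|     # or, for a root filesystem, boot once with rootflags=degraded
|
| so an unattended machine just sits there until someone does it.)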
| klysm wrote:
| > * It won't boot on a degraded array by default, requiring
| manual action to mount it
|
| That by itself is a complete deal breaker.
| dekhn wrote:
| Many folks I know who manage storage don't make the boot volume
| RAID (redundant) - instead, it's some rapidly duplicatable thing
| like an NVMe flash drive containing the root filesystem, and
| there's a replacement handy. Then you can bring the system up
| and bring the full power of userspace to bear on the RAID
| repair.
| alschwalm wrote:
| I'm curious how the benchmarks are for Btrfs on 6.1, given the
| improvements that (I think) landed in it:
| https://www.phoronix.com/news/Linux-6.1-Btrfs
| nix23 wrote:
| Oh yeah, I like reliable filesystems like xfs or zfs ;)
| formerly_proven wrote:
| extraneous files zeroed
|
| Though I've used XFS a lot over the years. Mostly because the
| Debian installer gave 12yo me the choice between ext2, ext3 and
| xfs, so XFS it was because it sounded cooler.
| kcb wrote:
| Btrfs let me down one time, and for a file system one time is
| too many.
| warmwaffles wrote:
| I've been let down by ZFS once before when I wanted to add
| more drives to an existing pool.
| sliken wrote:
| Did you lose data?
| warmwaffles wrote:
| No, and I haven't lost data with BTRFS in RAID6 either.
| mberning wrote:
| Wonder if this will make it into Synology DSM 7.2. Seems unlikely
| based on the timing.
| imhoguy wrote:
| Unlikely. I am on DSM 7.1 and it is on 4.4.180+, though I know
| it is heavily patched. I have read that DSM 7.2 will land on
| 5.10.
| fetzu wrote:
| Aren't kernel versions tied to device model for Synology? My
| DS918+ returns "4.4.180+" as its kernel version. That's
| pretty... old?
|
| Do/can they backport some of the changes without changing the
| kernel version?
| walrus01 wrote:
| I wonder how this compares to just using mdadm block-device-level
| raid5 or raid6, with a normal filesystem on top.
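|
| i.e. something along the lines of (devices illustrative):
|
|     # 6-disk raid6 at the block layer, plain ext4 on top
|     mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
|     mkfs.ext4 /dev/md0
|     mount /dev/md0 /srv/storage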
| warmwaffles wrote:
| The nice thing about the flexibility of software RAID is that
| you can mix and match hard drive manufacturers and generally
| have zero issues. With hardware RAID, I've always been told to
| stick to one drive family from one manufacturer and not to mix
| and match.
| loeg wrote:
| mdadm is software raid; it's just at the block device layer,
| rather than part of the filesystem.
| warmwaffles wrote:
| Oh that's interesting. I've never used mdadm before. At
| least not knowingly.
| reisse wrote:
| The biggest problem with hardware RAID is controller
| compatibility. If the controller dies, chances are the whole
| array is dead unless you can find the exact same model.
| candiddevmike wrote:
| Maybe I'm just cynical, but I think the ship has sailed for BTRFS
| RAID 5/6. It's now part of the global mindshare that BTRFS RAID
| 5/6 == data loss; no one wants to be the guinea pig that proves
| it works.
|
| Better to direct resources towards bcachefs or ZFS IMO.
| viraptor wrote:
| Ideally we'd be working based on actual information rather than
| global mindshare. There will be guinea pigs to test it. There
| are already quite a few people running it despite the warnings.
|
| As much as I want to see wide use of bcachefs, it's still years
| away. As someone who actually wants to store data - why would
| you direct resources to bcachefs, which is known to be
| experimental, rather than btrfs, which plainly documented raid5
| as not ready and may now decide to change it to ready... if it
| is?
| denkmoon wrote:
| Do Meta not develop btrfs for their internal use? I don't think
| community sentiment is a big factor for them.
| [deleted]
| seanw444 wrote:
| Glad to see Btrfs getting continual updates. It's my favorite
| filesystem for my personal machines (work and home PCs). The
| feature set is just awesome. I just hope it doesn't get
| completely abandoned as its development seems to have slowed
| significantly.
|
| The only thing it's missing before I consider it full-featured is
| stable RAID 5/6. But it looks like that hasn't been forgotten.
| warmwaffles wrote:
| > The only thing it's missing before I consider it full-
| featured is stable RAID 5/6. But it looks like that hasn't been
| forgotten.
|
| I've been running BTRFS RAID6 since 2016 and have only had one
| issue (an Arch kernel that needed to be rolled back), never
| anything catastrophic. It's perfectly happy humming along with
| a 15x8TB RAID array.
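|
| For anyone trying the same today, the usual advice is raid6 for
| data but raid1c3 for metadata (parity profiles for metadata are
| what the docs warn hardest about), plus regular scrubs - device
| names and mount point below are placeholders:
|
|     mkfs.btrfs -d raid6 -m raid1c3 /dev/sd[a-o]
|     btrfs scrub start /mnt/array
|     btrfs device stats /mnt/array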
___________________________________________________________________
(page generated 2022-12-12 23:00 UTC)