[HN Gopher] Building a Budget Homelab NAS Server
___________________________________________________________________
Building a Budget Homelab NAS Server
Author : mtlynch
Score : 280 points
Date : 2022-05-29 13:04 UTC (9 hours ago)
(HTM) web link (mtlynch.io)
(TXT) w3m dump (mtlynch.io)
| planb wrote:
| Thanks for including energy usage in the article. I carry USB-C
| SSDs around the house for backups and storage of archived files.
| Of course this is a bit of a hassle and I played with the idea of
| either buying or building a NAS. My current standby consumption
| for all computer stuff (access points, router, switches, a dozen
| or so microcontrollers and various SmartHome stuff, but not TVs,
| running computers or gaming consoles) is already above 100 W and
| I would really like to bring this number down. An extra 30-60 W
| makes it really hard to justify the purchase of a NAS (that I
| don't really need). I thought at least the Synologys would use
| way less power when not in use, so thanks for making me aware of
| this.
| mtlynch wrote:
| Thanks for reading!
|
| Yeah, I've never thought much about power consumption, but I've
| done a few write-ups of previous builds, and I received a lot
| of questions about power draw, so I decided to measure it on
| this one. I was surprised at how much power the system
| consumed, and it will be something I think about more up-front
| on future builds.
| KaiserPro wrote:
| There are a number of ways around this.
|
| If you are not after speed, then you can do a redundant array
| of cheap nodes. Instead of using RAID, just shove an 8-12 TB
| disk into a number of thin clients.
|
| The key is that they spend most of the time turned off.
| Spooky23 wrote:
| Check out used Linux thin clients on eBay. They are at a
| Raspberry Pi price point but more performant and unusual.
|
| Energy use is very low.
| memcg wrote:
| I agree and plan to buy a Kill A Watt P4460 meter. My HPE Gen 9
| servers were free, but I still would like to know the operating
| cost of a single server.
| bombcar wrote:
| Some of that off-lease equipment idles pretty low - be sure
| to check the bios for all "energy saving" options.
| farmerstan wrote:
| I think using 1 disk redundancy is a mistake. It's not only
| physical failure you're worried about, it's an error upon rebuild
| when you lose a drive. Bit rot on your remaining drives can occur
| which wouldn't be detected until rebuild time when you lose a
| drive, and that could cause you to lose your entire volume. Bit
| rot can be checked for, but you can't always be sure, and with
| larger and larger sets of data the checks get slower.
|
| I use raid 6 and also backup my data externally to another nas as
| well as backup to a static usb drive. Backup requires multiple
| different types since failures are so catastrophic and can occur
| in ways you don't expect.
| kalleboo wrote:
| > _Bit rot on your remaining drives can occur which wouldn't be
| detected until rebuild time_
|
| ZFS can perform periodic scrubs to detect and repair bit rot,
| and I'm pretty sure TrueNAS is configured to do this by default
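|
| On a plain Linux/FreeBSD box it's a one-liner to schedule
| yourself; the pool name "tank" here is just a placeholder:
|
|     # run a scrub on the first of every month at 03:00
|     0 3 1 * * /sbin/zpool scrub tank
|     # check progress / results later with
|     zpool status tank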
| bobcostas55 wrote:
| Doesn't ZFS have a mechanism for periodically checking for and
| correcting bit rot?
| loeg wrote:
| Yes.
| herpderperator wrote:
| > I chose raidz1. With only a handful of disks, the odds of two
| drives failing simultaneously is fairly low.
|
| Is this how the math works? Does having more drives mean the
| individual drives themselves are more likely to fail? Is running
| 4 drives safer than 100?
| branon wrote:
| Depends on how you look at it I suppose. The lifespan of a
| singular disk is likely rather long, but put a dozen of them in
| the same place and you'll see a failure or two every few years.
|
| Of course, we know that having a larger sample size and seeing
| more failures doesn't _actually_ mean that groups of disks are
| less reliable, but it could seem that way if you don't think
| too hard about it.
| ScottEvtuch wrote:
| The number of parity drives is often fixed, so the odds of the
| number of failures being higher than the number of parity
| drives goes up as you increase drive count.
| UmbertoNoEco wrote:
| > Based on Backblaze's stats, high-quality disk drives fail at
| 0.5-4% per year. A 4% risk per year is a 2% chance in any given
| week. Two simultaneous failures would happen once every 48 years,
| so I should be fine, right?
|
| Either I misunderstood or there are some typos but this math
| seems all kinds of wrong.
|
| A 4% risk per year (assuming failure risk is independent of disk
| age) is less than 0.1% per week: 1 - 0.96^(1/52) is roughly
| 0.08%. A 2% risk per week would be a 65% risk per year!
|
| Two simultaneous failures in the same week for just 2 disks
| (again with the big assumption of age-independent risk) would be
| on the order of less than 1 in 10^6 per week, so once in more
| than 20k years (about 31k years, to be confirmed).
|
| Of course, either you change your drives every few years so the
| age-independent AFR still roughly holds, or you have to model
| the probability of failure with a proper lifetime distribution
| (exponential, Weibull, etc.). Estimating the numbers in that
| case is left as an exercise for the reader.
| NicoJuicy wrote:
| > I chose raidz1. With only a handful of disks, the odds of two
| drives failing simultaneously is fairly low.
|
| Only if you buy different hard drives or at least from different
| production batches. I had a lot of trouble on the same premise
| and I won't make that mistake again.
|
| Edit: He mentioned it though (a bit later in the article)
|
| > The problem is that disks aren't statistically independent. If
| one disk fails, its neighbor has a substantially higher risk of
| dying. This is especially true if the disks are the same model,
| from the same manufacturing batch, and processed the same
| workloads. Given this, I did what I could to reduce the risk of
| concurrent disk failures.
|
| > I chose two different models of disk from two different
| manufacturers. To reduce the chances of getting disks from the
| same manufacturing batch, I bought them from different vendors. I
| can't say how much this matters, but it didn't increase costs
| significantly, so why not?
| kalleboo wrote:
| > _I had a lot of trouble on the same premise and I won 't make
| that mistake again._
|
| Please elaborate, I'd love to hear your story!
|
| I hear a lot of advice around RAID/Z levels, and it often seems
| backed by shaky math that isn't borne out in reality (like the
| blog posts that claim that a rebuild of an array of 8 TB drives
| will absolutely have hard read errors, no exceptions, and yet
| monthly ZFS scrubs pass with flying colors?)
| benlivengood wrote:
| About 7 years ago there was an Amazon sale ($300) on Lenovo TS140
| towers with the low-powered Xeon chip and ECC RAM and 4 drive
| bays. Ever since I've been unable to find a similar price point
| for the same quality, but wanted a backup server. I recently got
| a Raspberry Pi 4 (8GB model) and external USB hard drive (8TB)
| mirrored with a s3backer volume on backblaze B2 for about $300
| total, and as a backup server it's fast enough (performance
| limited by the Internet speed to B2) and probably idles at
| 10W-15W.
|
| One of the nice benefits of ZFS native encryption + s3backer is
| that if I had a total outage locally and needed to recover some
| files quickly I could mount the s3backer-based zpool from any
| machine, decrypt the dataset, and pull the individual files out
| of a filesystem. It's also a weird situation with cloud providers
| that convenient network-attached block storage is ~10X the price
| of object storage at the moment but performance can be similar
| using s3backer.
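|
| Roughly, a recovery from scratch would look something like this
| (bucket, mountpoint and dataset names are made up, and the exact
| s3backer options depend on how the volume was created):
|
|     s3backer <options> my-backup-bucket /mnt/s3b
|     zpool import -d /mnt/s3b backuppool
|     zfs load-key backuppool/data    # prompts for passphrase
|     zfs mount backuppool/data
|     cp /backuppool/data/some-file ~/restored/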
| AviationAtom wrote:
| Appreciate the insights on your S3 backup solution.
|
| I will mention that I am one of those folks with a TS140. Love
| that it's a power sipper. I maxed out the processor and memory,
| as well as loading it up with two 10 TB rust disks and two 512
| GB SSDs.
| louwrentius wrote:
| > While I obviously don't want my server to corrupt my data in
| RAM, I've also been using computers for the past 30 years without
| ECC RAM, and I've never noticed data corruption.
|
| You never noticed precisely because you can't know about data
| corruption if you don't run with ECC memory.
| Mister_Snuggles wrote:
| I've gone the TrueNAS route, but I'm running it on a QNAP TS-451.
| I'm running TrueNAS off of a USB stick hanging off the back, so I
| didn't have to do anything with the hardware, and reverting back
| to QTS is just a matter of setting the boot order in the BIOS.
|
| I really like seeing other people's builds, but I know that
| building my own computer isn't something I want to do. I was
| happy to see the comparison between the DIY model and the
| roughly-equivalent commercial units. I'll likely buy another QNAP
| (to run TrueNAS on) when the time comes, and the comparison tells
| me that I won't get screwed too badly by doing so.
| mtlynch wrote:
| Thanks for reading!
|
| > _I 've gone the TrueNAS route, but I'm running it on a QNAP
| TS-451. I'm running TrueNAS off of a USB stick hanging off the
| back_
|
| Oh, I didn't realize that QNAP allows that. Synology makes it
| pretty hard to boot any other OS, and I assumed the other
| vendors were similar. I'll keep that in mind for my next build
| because I do have fun building servers, but I also really
| appreciate systems like Synology and QNAP where the hardware
| and case are optimized for the NAS use-case.
| xedarius wrote:
| I have pretty much this exact setup. One thing I'd add is that
| it's quite noisy. If you have somewhere you can put it, and your
| house is all Cat 6'ed up, then great. But if, like me, you have
| it in the same room as you, you will notice it. And it's not the
| PC, the Fractal case has very quiet 120mm fans, it's the HDDs.
| kettleballroll wrote:
| Are you me? I have almost the exact same build as discussed in
| the post, and I am super annoyed with how loud the disks are. I
| have a cronjob that puts them to sleep every night (the NAS is
| in my bedroom)... For some weird reason they never stop
| spinning otherwise.
| nicolaslem wrote:
| I also run a home NAS in a Node 304. I went with a supermicro
| mainboard for ECC support which means I had to swap the three
| fans that come with the case because the mainboard only supports
| PWM fans. Non-PWM fans would only spin at full speed otherwise.
|
| Regarding the SLOG device, you probably don't need it for a file
| server, but if you do you can definitely free a drive bay for an
| HDD by just using double sided tape somewhere like on the PSU.
| I'm sure it's also possible to put three more HDDs above the CPU,
| right in front of the exhaust fan. If I had a 3D printer I would
| try to build something bringing the total to nine HDDs.
|
| If you need more SATA ports but are running out of PCIe slots,
| you may be able to reuse an empty M.2 slot. An M.2 with two lanes
| of PCIe 3 gives you 5 SATA ports with an adapter[0].
|
| [0] https://www.aliexpress.com/item/1005004223466933.html
| mrslave wrote:
| I occasionally look at the PCIe-to-SATA market and find it
| confusing. It appears polarized: either cards from a reputable
| brand that are very expensive even with few SATA ports, or
| cards from an unknown brand that are relatively affordable.
| What's your experience with this and what can you recommend
| (2-port or 4-port)? Are the cheap cards safe & reliable or are
| they to be avoided?
| kalleboo wrote:
| Basically: they can be buggy. Either they work fine for you,
| or you hit an edge case and have trouble. They can also have
| worse performance as they only have one SATA controller and
| split it amongst the drives.
|
| The fancier more expensive ones are typically referred to as
| HBAs instead of "SATA cards":
| https://unraid-guides.com/2020/12/07/dont-ever-use-cheap-pci...
|
| If you're doing this at home, you can get used enterprise
| gear on eBay (like an LSI SAS HBA) for the same price or
| cheaper than brand-new consumer gear, and it will probably
| still be more reliable (I built a 130 TB NAS for my friend's
| video production business and literally everything aside from
| the drives and the cables was bought used on online auction,
| and it's been humming along fine for a while now - the only
| part that was bad was one stick of RAM, but the ECC errors
| told me that before I even got around to running my tests on
| the sticks)
| magicalhippo wrote:
| I've been running older SAS cards for years and they've been
| doing just fine. They go for cheap on eBay. Each SAS port
| serves four SATA drives, using SAS-to-SATA cables.
|
| Just make sure to get one that runs in IT mode or you have to
| mess with the firmware.
| vladvasiliu wrote:
| > Just make sure to get one that runs in IT mode or you
| have to mess with the firmware.
|
| In case some people wonder what "IT mode" is, as I did some
| years ago: what you basically want is a card that will
| expose the drives directly to the OS, as opposed to
| "volumes".
|
| In other terms, if the card is a RAID controller, it may
| insist on you creating arrays and only expose those. You
| can circumvent it by creating single-drive arrays, but it's
| a pain.
|
| Some cards can do both, but it's usually not advertised.
| Non-RAID cards also tend to be cheaper. Others (usually
| LSI) can be flashed with a non-RAID firmware, but again,
| it's less of a hassle to not have to do it.
| Deritiod wrote:
| I don't know, I no longer feel safe with >4 TB drives and raidz1.
|
| I run two offline NAS boxes (I power them down and do scrubbing
| every month) and have one with raidz2 for all critical things
| like my photos.
|
| Resilvering 8 TB takes ages, and while he wrote his thoughts on
| it, I was missing the rebuild risk calculation.
| mtlynch wrote:
| Author here.
|
| I consider myself an intermediate homelabber and a TrueNAS
| beginner. I just built my first NAS server, so I wrote this to
| capture everything I wish I'd known at the start. I hope it's
| helpful for anyone else thinking about building their first NAS
| server.
|
| Any questions or feedback about the post are more than welcome.
| InvaderFizz wrote:
| A few points from someone with years managing raid and ZFS in
| arrays all the way up to 50 disks:
|
| RAID-Z1 is something I never consider without a solid backup to
| restore from and a plan to execute that process at least once
| in the lifecycle of an array.
|
| If you suffer a total disk failure of one of those disks in the
| array, you have likely lost some data. The good news is that
| ZFS will tell you exactly which files you have lost data for
| and cannot rebuild. If you have those files, you can overwrite
| them with the backups to get your integrity back.
|
| The reason is, with a total loss of a single disk, any read
| error on any of the remaining disks is a lost/corrupted file.
|
| For this reason, you need a strong (easily accessible,
| consistent, current) backup strategy and an acceptance of
| downtime with Z1.
|
| As for ECC, it's better, but your absolute worst-case scenario
| is that you get a bit flip before the sync and hash happens,
| and now that bit-flipped data is committed to disk and you
| think it's OK. I prefer ECC to avoid this, but you are still
| reaping a multitude of benefits from ZFS without ECC.
|
| The only valid rule for RAM and ZFS is that more RAM = more
| caching of recently read data. Single, or very few user
| appliances will see little benefit past 8GB even with 100TB
| unless you happen to be reading the same data over and over.
| Where ZFS shines is having hundreds of gigabytes of RAM and
| tens or more concurrent users mostly accessing the same data.
| That way the vast majority of reads are from RAM and the
| overall disk IOPS remain mostly idle.
|
| Most of the ZFS RAM myths come from Deduplication, which should
| be disregarded as a ZFS feature until they allow storing the
| DDT on an Optane-like latency device. Even better would be
| offline deduplication, but I doubt that will be a thing in ZFS
| this decade.
| anarcat wrote:
| > If you suffer a total disk failure of one of those disks in
| the array, you have likely lost some data. [...] The reason
| is, with a total loss of a single disk, any read error on any
| of the remaining disks is a lost/corrupted file.
|
| Wait, what? If a RAID-(z)1 ZFS array loses one disk, there's
| data loss? I've run so many RAID-1 and RAID-10 arrays with
| mdadm that I can't even begin to count them, and I had many
| drive failures. If any of those arrays had corrupted data, I
| would have been mad as hell.
|
| What am I missing here? How is this even remotely acceptable?
| InvaderFizz wrote:
| > any read error on any of the remaining disks is a
| lost/corrupted file.
|
| That is the meat of it. With traditional RAID it is the
| same issue, except you never know it happens because as
| long as the controller reads something, it's happy to
| replicate that corruption to the other disks. At least with
| ZFS, you know exactly what was corrupted and can fix it,
| with traditional RAID you won't know it happened at all
| until you one day notice a corrupted file when you go to
| use it.
|
| RAID-Z1 is better than traditional RAID-5 in pretty much
| every conceivable dimension, it just doesn't hide problems
| from you.
|
| I have encountered this literal scenario where someone ran
| ZFS on top of a RAID-6 (don't do this, use Z2 instead). Two
| failed drives, RAID-6 rebuilt and said everything was 100%
| good to go. A ZFS scrub revealed a few hundred corrupted
| files across 50TB of data. Overwrote the corrupted files
| from backups, re-scrubbed, file system was now clean.
| KennyBlanken wrote:
| You don't need to fix anything.
|
| ZFS automatically self-heals an inconsistent array (for
| example if one mirrored drive does not agree with the
| other, or if a parity drive disagrees with the data
| stripe.)
|
| ZFS does not suffer data loss if you "suffer a total disk
| failure."
|
| I have no idea where you're getting any of this from.
| lazide wrote:
| If the data on disk (with no redundant copies) is bad,
| you've (usually) lost data with ZFS. It isn't ZFS's
| fault, it's the nature of the game.
|
| The poster built a (non redundant) zfs pool on top of a
| hardware raid6 device. The underlying hardware device had
| some failed drives, and when rebuilt, some of the
| underlying data was lost.
|
| ZFS helped by detecting it instead of letting the bad
| data through, as would normally have happened.
| KennyBlanken wrote:
| You're not missing anything. They're completely wrong.
|
| In RAID-Z, you can lose one drive or have one drive with
| 'bit rot' (corruption of either the parity or data) and ZFS
| will still be able to return valid data (and in the case of
| bit rot, self-heal. ZFS "plays out" both scenarios,
| checking against the separate file checksum. If trusting
| one drive over another yields a valid checksum, it
| overwrites the untrusted drive's data.)
|
| Regular RAID controllers cannot resolve a situation where
| on-disk data doesn't match parity because there's no way to
| tell which is correct: the data or parity.
| 1500100900 wrote:
| They mean: lose one drive and have another with bit rot.
| anarcat wrote:
| ah. right. that's the bit I was missing (pun intended).
|
| thanks for the clarification.
|
| in that sense, yes, of course, if you have bit rot and
| another disk failing, things go south with just two disk.
| ZFS is not magic.
| InvaderFizz wrote:
| The situation I laid out was a degraded Z1 array with the
| total loss of a single disk (not recognized at all by the
| system), plus bitrot on at least one remaining disk
| during resilver. Parity is gone, you have checksums to
| tell you that the read was invalid, but even multiple
| re-reads don't give a valid checksum.
|
| How does Z1 recover the data in this case other than
| alerting you of which files it cannot repair so that you
| can overwrite them?
| dhzhzjsbevs wrote:
| Did similar recently.
|
| Some suggestions for anyone else looking to do the same:
|
| An i3 runs a bit cooler than Ryzen, still 8 threads. 8 TB WD
| Blues (they're SMR at 8 and up). You can find ATX boards with 8
| SATA ports and dual NVMe slots for caching / fast pools.
| giantrobot wrote:
| I'd be really careful about SMRs in a RAID. You can end up
| with no end of performance issues. It's all the downsides of
| a single SMR drive multiplied by however many drives are in
| the pools.
| bcook wrote:
| I think @dhzhzjsbevs meant that 8TB and higher is CMR. A
| quick google search seems to support that.
| giantrobot wrote:
| You actually have to be careful. There's disk sizes where
| the manufacturer will do say 8TB CMR and essentially the
| same drive with different firmware as a 10TB SMR. They'll
| _also_ have a 10TB CMR model. You have to pay close
| attention to the model numbers. It's even more of a
| crapshoot if you shuck drives. You have to carefully
| research what externals are known to have CMRs.
|
| SMRs are a fucking blight.
| 1500100900 wrote:
| - "RAID is not a backup" primarily because "you could rm -rf".
| ZFS snapshots cover that failure mode to the same extent that
| synchronization with offsite does, but cheaper. ZFS snapshots
| obviously don't cover other failure modes like natural
| disasters or a break in, so RAID is still not a backup.
|
| - for ZIL to do its work properly, you need the disks not to
| lie when they claim that the data has been truly saved. This
| can be tricky to check, so perhaps think about a UPS
|
| - if you have two M.2 slots you could use them to mirror two
| partitions from two different disks for your data pool's SLOG.
| The same could be done to form a new mirrored ZFS pool for the
| OS. In my case I even prefer the performance that a single-copy
| SLOG gives me at the risk of losing the most recent data before
| it's moved from the SLOG to the pool.
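|
| A rough sketch of both variants (pool and partition names are
| just examples):
|
|     # mirrored SLOG across a partition on each M.2 drive
|     zpool add tank log mirror /dev/nvme0n1p2 /dev/nvme1n1p2
|     # or a single-device SLOG, as I run it
|     zpool add tank log /dev/nvme0n1p2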
| walrus01 wrote:
| > - "RAID is not a backup" primarily because "you could rm
| -rf".
|
| or your house could burn down
|
| or somebody could steal the computer while you're away on
| vacation
|
| or lightning could strike your electrical grid service
| entrance or a nearby pole/transformer, causing catastrophic
| damage
|
| or your house could flood
|
| lots of other things... if you really have important data it's
| important to plan for the total destruction of the storage
| media and the server holding it.
| DarylZero wrote:
| > - "RAID is not a backup" primarily because "you could rm
| -rf". ZFS snapshots cover that failure mode to the same
| extent that synchronization with offsite does
|
| Not really. You need to be synchronizing to a _write-only_
| backup archive. A local ZFS snapshot can be deleted locally.
|
| (Also fire, compromise, police confiscation, etc.)
| [deleted]
| AdrianB1 wrote:
| I have had a couple of home-built TrueNAS systems for many years
| (since FreeNAS, ~10 years ago); here is some feedback:
|
| - with same disk size, but just 3 disks, I get around 240
| MB/sec read speed for large files (with 10 Gbps NIC). I guess
| the biggest difference is the CPU power, your NAS seems very
| slow. On 1 Gbps NIC I get 120 MB/sec transfer speed. My system
| is even virtualized, on bare metal may be a little bit faster.
|
| - you cannot expand your pool: if you add one more disk, there
| is no way to cleanly migrate to a 5-disk raidz1. There is some
| new development that kind of does something, but it is not what
| is needed
|
| - unless aesthetics are a big deal for you, there are still $30
| cases around. The extra $70 can be used for something else *
|
| - * with a small percentage cost increase, an investment in CPU
| and RAM can give you the capability to run some VMs on that
| hardware, so that CPU will not sit at idle 99.9% of the time
| and be underpowered when you do use it. Using a dedicated
| computer just for a NAS is not very cost and power efficient,
| but if you group multiple functionalities it becomes a great
| tool. For example I run 3-4 VMs at all times, up to ~ 12 when I
| need it.
|
| - that motherboard and the comparison to a B450 is wrong. The
| MB restricts you to 4 SATA, while the B450 I bought for ~ $120
| has 6 SATA ports
|
| - TrueNAS does not *require* a HBA firmware change, that is
| needed if you want to convert a RAID controller to plain HBA
| mode or with certain old HBA that need newer firmware. However
| for your setup a HBA is not needed. If you want to add many
| disks and have a good performance (like more than 500-1000
| MB/sec) then you need the HBA
|
| - your math is wrong. You calculate available space using ~3.8
| TB disks and divide by 4 TB. The 4 TB disks don't hold 4 TiB,
| but 4x10^12 bytes, so the percentages in your table are exactly
| 80%, 60% and 40%.
|
| - that CPU does not work with 32GB DIMMs. This works only with
| newer Ryzen generations, not with Zen+ in this CPU.
|
| - GPU is not missing. TrueNAS does not render anything on a
| GPU, there is no need for one. I did run TrueNAS for a couple
| of years on a computer with no video capability at all (a Ryzen
| 2700) without any problem, I just used a GPU for the initial
| installation and then removed it.
|
| - unless you store a database for a SQL server or similar,
| there is no benefit in a SLOG; it is not a tiered cache, so it
| does not speed up file transfers in any way. You can have a
| disk dedicated as a read cache, but the cache content is
| currently wiped at every restart (a documented limitation), and
| it is only needed if you want very good performance with small
| files over the network
| XelNika wrote:
| > Performance topped out at 111 MiB/s (931 Mbps), which is
| suspiciously close to 1 Gbps.
|
| That's because of overhead in TCP over IPv4. You're testing the
| payload throughput, not the physical throughput. The
| theoretical maximum performance without jumbo frames is around
| 95%.
|
| https://en.wikipedia.org/wiki/Jumbo_frame#Bandwidth_efficien...
| mtlynch wrote:
| Ah, good to know. Thanks!
| sneak wrote:
| The SLOG is only used for synchronous writes, which most writes
| are not (as I understand it). Most workloads (i.e. non-DB
| servers) won't see much improvement with one.
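|
| You can check or force the behaviour per dataset (dataset name
| is an example):
|
|     zfs get sync tank/vms         # standard / always / disabled
|     zfs set sync=always tank/vms  # push every write through
|                                   # the ZIL/SLOG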
| srinathkrishna wrote:
| Just wanted to share my appreciation for not just this post but
| all your work in recent times! Been following your trail since
| your post about Google promos and the set of useful projects
| you've been working on since then.
| mtlynch wrote:
| Thanks so much for the kind words and for following along!
| DeathArrow wrote:
| I just use pCloud mounted as a network drive and throw everything
| I am not currently working on onto it. With the 10 Gbps I have
| at home, it works wonders.
|
| Plus, the storage is unlimited. Plus, it is more resistant to
| failures and disaster than anything home made. Plus, I don't have
| to store and take care of another noisy box in my home.
| red0point wrote:
| Where did you find the unlimited storage offering? I couldn't
| find it on their website.
| DeathArrow wrote:
| It's the 2 TB plan, but unlimited in the sense that I can grow
| it to whatever I need. I got 8 TB with 75% off on Black Friday.
| aborsy wrote:
| Lifetime plan on their website.
| red0point wrote:
| No that can't be it, it's limited to 2TB. Could you post
| the link? Thanks!
| aborsy wrote:
| Sorry I misread it!
| mtlynch wrote:
| Nice, that's a cool setup!
|
| What area of the world do you live where you get 10 Gbps to the
| Internet? Can you reliably get 10 Gbps transfers to pCloud?
|
| I got 1 Gbps fiber in the last year, but it's more like 700-800
| Mbps in practice. I consider myself lucky to even get that, as
| my experience before that has always been 100-200 Mbps even on
| a "1 Gbps" plan. I'm super jealous of people who get a full 10
| Gbps Internet connection.
| DeathArrow wrote:
| I live in Bucharest, Romania. We can get 10Gbps FTTH since
| the beginning of the year.
|
| Uploads to pCloud are about half of that while downloads can
| be over 1GB/s.
| mrg2k8 wrote:
| You can get 25 Gbps in Switzerland :)
| kalleboo wrote:
| > _And if you're dumb like me, and you've used a Synology-
| proprietary storage format, you can't access your data without
| another Synology system_
|
| I wonder what he means by this. If he's referring to SHR, then
| it's just standard mdraid and Synology themselves have
| instructions on how to mount the volume in Ubuntu
| https://kb.synology.com/en-us/DSM/tutorial/How_can_I_recover...
|
| edit: He later mentions encrypted volumes but those are also just
| using standard eCryptfs
| https://www.impedancemismatch.io/posts/decrypt-synology-back...
|
| This is one of the reasons I feel comfortable recommending
| Synology devices - there's not a lot of lock-in
| mtlynch wrote:
| Oh, cool! I was referring to SHR. I thought it was a
| proprietary format and didn't realize you could access it from
| non-Synology systems. I've updated the post:
|
| https://github.com/mtlynch/mtlynch.io/pull/920
| AviationAtom wrote:
| It's pretty cool in that it's a mostly "COTS" implementation.
| LVM and MD, IIRC.
| pronoiac wrote:
| I'm fairly happy with my 4-bay Synology NAS. When I last looked
| at ZFS, it seemed that piecemeal upgrades - like, upgrade a 4TB
| drive to 8TB, and get more available space - wouldn't work in
| ZFS, but it would in SHR, at least if you had more than 2 drives.
|
| Having scheduled checks is a good idea: I have weekly short SMART
| tests, monthly long SMART tests, and quarterly data scrubs.
|
| The TinyPilot device looks nifty - it's a Raspberry Pi as a
| remote KVM switch. I stumbled on that last night as I was banging
| my head against a familial tech support issue.
| mrb wrote:
| Oh, my, you are right, the TinyPilot seems awesome! I see it
| was developed by the author of this ZFS NAS server blog post. I
| just ordered one to play with :)
| pronoiac wrote:
| Note that The TinyPilot hit the front page with its own post:
| https://news.ycombinator.com/item?id=31549368
| hddherman wrote:
| > I see people talking about snapshotting, but I haven't found a
| need for it. I already have snapshots in my restic backup
| solution. They're not especially convenient, but I've been using
| restic for two years, and I only recall needing to recover data
| from a snapshot once.
|
| The ease with which you can revert mistakes using ZFS snapshots
| is much better compared to restic. You can pretty much navigate
| to the correct snapshot on your live filesystem and restore
| whatever you need to restore.
|
| It also makes backups easier as you can just send the snapshots
| to the backup device (another server or external storage device).
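|
| Something like this, with made-up dataset and snapshot names:
|
|     # browse a snapshot directly on the live filesystem
|     ls /tank/docs/.zfs/snapshot/daily-2022-05-28/
|     cp /tank/docs/.zfs/snapshot/daily-2022-05-28/report.odt \
|        /tank/docs/
|     # replicate incrementally to a backup box
|     zfs send -i tank/docs@2022-05-27 tank/docs@2022-05-28 \
|       | ssh backuphost zfs recv -u backuppool/docs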
| geek_at wrote:
| Not only that but also
|
| > I chose raidz1. With only a handful of disks, the odds of two
| drives failing simultaneously is fairly low.
|
| Which is not really the case if you bought x amount of the same
| disks and always use them together. I had that happen to me
| just a few months ago. 4 identical disks bought at the same
| time. Raidz1 reported one dead/dying disk, so I replaced it and
| started resilvering, which can take days and leaves the disks
| at 100% utilization.
|
| So after 12 hours or so a second one failed and the data was
| gone.
|
| Lesson learned: mix up your disks
| madjam002 wrote:
| In my case even mixing up the disks might not help but I
| agree it's still helpful.
|
| I bought 4x Seagate Ironwolf Pro 12TB drives from different
| vendors, one failed after a year, then when I got the
| replacement another drive failed during the rebuild, and then
| 6 months later the replacement failed. Now another one of the
| original drives is also reporting reallocated sectors.
|
| Same system has 4x WD Red drives which have been running fine
| with 0 reallocated sectors for almost 7 years.
| idoubtit wrote:
| I'm okay with claims that snapshots are much better than
| backups for many uses. But in this case the GP was explaining
| that they only used their backups once in several years, so
| they did not need to change their backup system.
|
| I'm in the same boat. I configured remote backup systems on a
| handful of computers. I think I reached for backups only twice
| over the last ten years. Of course I need something, backups or
| snapshots, but for my use case snapshots (with a network copy)
| would need work to set up. And if the remote storage is worse,
| that would be more of a problem than the changes in the restore
| process.
| whoopdedo wrote:
| I think of a backup like a fire extinguisher. It's better to
| have one and never need it than to one day need it and it's
| not there.
| eminence32 wrote:
| I have personally been saved by ZFS snapshots (multiple times!)
| because sometimes I do dumb things, like running:
| rm -rf tmp *
|
| Instead of: rm -rf tmp*
| tenken wrote:
| I never do "rm prefix*" in a dir. I always do "rm
| ./aDir/prefix*" for example. This assures I'm not globbing
| outside a directory (or just a directory) and tries to help
| assure I'm not shooting myself in the foot.
|
| Yea, i love up 1 directory before I delete anything.
| mmastrac wrote:
| > i love up 1 directory
|
| It took me a minute, but I assume this should be "move up".
| This seems like a good habit.
| linsomniac wrote:
| This. I'm always going into my backup server and looking in the
| ".zfs/snapshot" directory to look at the history of how files
| have changed over the backups. Love restic, but the ZFS
| snapshots are fantastic.
| zaarn wrote:
| If you set up SMB right (TrueNAS configures this out of the box,
| SCALE is great if you need a Linux NAS), you can use Windows
| Shadow Copies to access ZFS snapshots and browse or restore
| file contents from them.
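|
| On a hand-rolled Samba setup, the relevant smb.conf bits look
| roughly like this (the share path and the shadow:format string
| have to match how your snapshots are actually named):
|
|     [documents]
|       path = /tank/documents
|       vfs objects = shadow_copy2
|       shadow:snapdir = .zfs/snapshot
|       shadow:sort = desc
|       shadow:format = auto-%Y-%m-%d_%H-%M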
| MaKey wrote:
| Also possible with BTRFS. I set this up once for a small
| business with hourly snapshots during working hours. This way
| users could just restore older versions of files they
| accidentally deleted, overwrote or messed up in some other
| way. Another benefit: Those snapshots were read-only, so they
| also served as a protection against ransomware.
| zaarn wrote:
| I don't think BTRFS supports NFSv4 ACLs yet (i.e., Windows
| ACLs are natively supported on ZFS; there is a patchset so
| Linux also supports them, but BTRFS obviously has no
| integration for a patchset that only exists for ZFS).
|
| Having NFSv4 ACL access is a huge plus since you can
| configure permissions natively from windows and have them
| enforced even on the shell.
| vetinari wrote:
| Not sure how Synology implemented it, but they do support
| Windows ACLs on btrfs volumes.
| zaarn wrote:
| They likely use XATTRs to store the ACL (that is an
| option in Samba), but it's not native like it's on the
| TrueNAS systems with the kernel. I bet if you log into
| the Syno's via SSH you don't get the ACLs enforced on the
| shell. With the NFSv4 ACL patchseries, they would and you
| could benefit from the better options that the NFSv4 ACLs
| give you.
|
| Storing them in metadata is not the same as having them
| natively.
| gravypod wrote:
| I wish there was a good guide for buying JBOD HBA cards. I want
| to replace my Drobo with all SATA SSDs.
| NoNotTheDuo wrote:
| Is this what you're looking for?
|
| https://forums.serverbuilds.net/t/official-recommended-sas2-...
| dsr_ wrote:
| Get LSI SAS 2000 or 3000 series cards. Several manufacturers
| make them approximately to the reference spec. The drivers are
| in the Linux kernel. If they don't come flashed to the IT spec
| firmware (no RAID capabilities), do that, but the cheap ones
| usually do. The 4i models sometimes come with ordinary
| individual SATA connectors; the 8i will have one of two kinds
| of combo connectors that can accept cables that go to a
| backplane or breakout cables to ordinary SATA connectors.
|
| There you go.
| lvl102 wrote:
| I went down this rabbit hole about a decade ago. Spent a lot of
| time and money on a home lab. While it's cool, the payoff is just
| not there. I switched to Google/AWS a few years ago and never
| looked back.
| walterbell wrote:
| Are there current JBOD products in 1U short-depth (15" for wall
| mounted rack) form factor, e.g. 4 x 3.5" hotswap drive bays with
| a mini-SAS connection to the unit?
|
| This would be useful as a backup device, or low-power NAS when
| connected to a Linux thin-client with LSI HBA.
|
| There were some 1U products which included RAID support, priced
| around $500, which is a bit much for 1U chassis + SATA/SAS
| backplane + Pico power supply. 1U chassis with ~11" depth (seems
| to be a telco standard?) start around $100.
|
| StarTech 1U JBOD was discontinued,
| https://www.startech.com/en-us/hdd/sat35401u
|
| Silverstone RS431 JBOD,
| https://www.silverstonetek.com/product.php?pid=482&area=en
|
| iStarUSA upcoming (shipping in June) 1U JBOD is $400,
| http://www.scsi4me.com/istarusa-m-140ss-jb-1u-3-5-4-bay-tray...
|
| For ~$600, QNAP has an Arm-based NAS with 2x10GbE and 2x2.5GbE
| networking, plus dual M.2 NVME slots. _Maybe_ Armbian will run on
| that SoC. https://www.qnap.com/en-us/product/ts-435xeu
|
| The $100 ODROID M1 SBC has an M.2 NVME slot with 4x PCIe lanes.
| In theory, this could be bridged to a PCI slot + LSI HBA, within
| a small case, as a DIY low-power NAS.
| walrus01 wrote:
| I would recommend anyone building a home NAS like this in 2022 to
| look into buying some slightly older 10GbE network interfaces on
| ebay (an intel X520-DA2 with 2 x 10Gbps SFP+ ports can be found
| for $55) as a PCI-E card. It's not hard to exceed the transfer
| ability of an ordinary 1000BaseT port to a home switch these
| days.
|
| And if you have just a few powerful workstation desktop PCs it's
| also worth it to connect them at 10GbE to a new switch.
|
| here's a fairly typical one. these have excellent freebsd and
| linux kernel driver support.
|
| https://www.ebay.com/itm/265713815725?epid=1537630441&hash=i...
| semi-extrinsic wrote:
| Or go a little further and spend ~$200 on a used passive FDR
| Infiniband switch, $50 per used dual port FDR IB NIC, $40 for a
| 10 meter fibre optic cable including transceivers at each end.
|
| Then run IP over IB on each host and you have a 56 Gbit network
| that all your applications will just see as another network
| interface on each host.
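|
| The IPoIB part itself is tiny (interface name and addresses are
| examples):
|
|     modprobe ib_ipoib              # IP-over-InfiniBand module
|     ip addr add 10.0.44.1/24 dev ib0
|     ip link set ib0 up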
| AdrianB1 wrote:
| I bought brand new 10 Gbps NICs ($47/pcs) and switch ($120)
| for less than that and replacements are readily available.
| The 20m AOC cables were indeed $40.
|
| A NAS home build of this size will never exceed 10 Gbps, I
| barely get ~ 2 Gbps out of the spinning disks.
| walrus01 wrote:
| the main application where one will see real 10Gbps speeds
| is if the NAS also has a fast NVME SSD used for smaller
| file size needs at high speeds...
|
| for instance I have a setup which is meant for working with
| uncompressed raw yuv420 or yuv422p 1080p and 4K video,
| there's a 512GB NVME SSD and a 1TB SSD set up as individual
| JBOD and exposed to the network for video editing scratch
| file storage, and it will definitely saturate 10GbE.
|
| this is actually needlessly complicated, if/when I build a
| more powerful desktop pc again I'm just going to put the
| same work file space storage on a 2TB NVME SSD directly
| stuck into the motherboard.
| AdrianB1 wrote:
| Very valid point, but I don't know how fast you will hit
| the chipset speed limits; that NIC is connected to the
| chipset that is connected to the CPU via a 4x PCIe link
| and from there to the NVMe with a 4x PCIe link. In theory
| you have 4 or 8 GB/sec max bandwidth, but the CPU-chipset
| link is not dedicated. If you go for Threadripper the
| math is very different, you have lots of direct CPU
| connections for (multiple) NICs and multiple NVMe drives.
| walrus01 wrote:
| you may be referencing the original post's
| cpu/motherboard combo which is not the same as the pci-e
| bus/lanes, slot and setup on my home file server and
| desktop PC.
| axytol wrote:
| Can you share what your experience with implementing IPoIB
| with used gear was? I'm asking mainly because I actually got
| interested recently with such setups however I got rather
| discouraged by the driver support.
|
| As an example here is the driver page for Mellanox, now owned
| by Nvidia, since they are a major Infiniband equipment
| supplier:
| https://network.nvidia.com/products/infiniband-drivers/linux...
|
| It seems that some decent support only exists for more recent
| generations. The older ones like ConnectX-3 or earlier, which
| typically show up on ebay are either not supported any more
| or maybe available for older kernel versions and soon to be
| EOLed.
|
| So do I understand it correctly that to use such adapters one
| has to actually downgrade to an older kernel version?
|
| Or is there some basic support in the latest Linux kernels
| for older generations still?
| semi-extrinsic wrote:
| Yes, if you want to use the officially supported driver for
| ConnectX-3 (mlx4_xxx kernel modules, LTS release of v4.9
| available from Nvidia's page), you need to go with
| something like Ubuntu 20.04 LTS (which should be good until
| at least end of 2025). However, the latest Mellanox drivers
| (mlx5_xxx kernel modules) work just fine with the
| ConnectX-3, at least for basic functionality.
|
| I've not actually used IPoIB on such gear myself, but we
| have been working quite a bit on reusing old/ancient HPC
| clusters with IB adapters, and you can generally make
| things work if you spend enough time on trial and error and
| you are not afraid of compiling code with complicated
| dependencies. As long as you can get the IB stuff talking,
| and the driver is using OFED, the IPoIB part should Just
| Work.
|
| It is always going to be an adventure working with used
| gear. But HPC has such a high decommissioning tempo and low
| resale value that there will always be quite a few other
| enthusiasts toying about.
| walrus01 wrote:
| for home use I'd highly recommend sticking with just 10GbE
| because you're not locking yourself into a dead-end solution
| of used weird previous gen infiniband stuff.
|
| if you get a $200 switch with a few 10GbE interfaces in it
| you can easily expand things in the future by trunking vlans
| to another newer 10GbE capable switch, or connecting to a
| switch that has multi-gig copper ports for access ports to
| 2.5/5GBaseT capable desktop PCs and laptops, etc.
|
| $40 for a 10 meter fiber optic cable is a high price when you
| can buy LC-LC UPC 9/125 duplex 2 meter cables for $3.50 to
| $4.70 a piece (or a few cents more for additional meters) and
| connect them between $20 transceivers. No matter what route
| someone goes with, I would recommend buying $30-40 of basic
| fiber connector cleaning supplies.
|
| https://www.fs.com/products/40192.html?attribute=193&id=3026...
|
| if one wants to buy used weird previous gen dead-end stuff
| there are also tons of very cheap 40GbE mellanox ethernet
| adapters on ebay with the QSFP to go with them, and if you
| have a place to put a switch that doesn't matter if it's
| noisy like in a wiring closet somewhere, cheap 1U switches
| with 40GbE ethernet ports on them that can also be used as
| individual 10GbE when broken out.
| semi-extrinsic wrote:
| You make several good points that I agree with. This is not
| an easy setup for the inexperienced.
|
| I'll just clarify that I meant you can get a 10m fiber
| optic cable _with two transceivers_ for $40.
| sascha_sl wrote:
| >If you're new to the homelab world or have no experience
| building PCs, I recommend that you don't build your own NAS.
|
| >Before building this system, I had zero experience with ZFS, so
| I was excited to try it out.
|
| Sorry, but this is amusing to me. ZFS on TrueNAS is probably
| fine, but you're building your production NAS, to replace the
| Synology device you've become "so dependent on". Don't become
| dependent on ZFS without knowing the implications!
|
| I was facing this choice recently, and I agreed with the other
| tech-savvy person in the household that we should just use good
| old LVM + Btrfs. Not only does it run like a charm, but it also
| allowed us to switch the LV from single (during the data move) to
| RAID 1 and eventually to RAID 5/6 with zero issues. It will also
| be much easier to recover from than ZFS.
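|
| If you do the redundancy at the Btrfs layer, the conversion is
| a single (long-running) command; the mount point is an example:
|
|     # convert data and metadata profiles in place
|     btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool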
|
| On another note, it's a bad market to buy NAS drives, especially
| from Seagate. Seagate Exos drives are at this point in time often
| cheaper than IronWolf, even non Pro IronWolf. They're slightly
| more noisy and don't come with the free data recovery, but
| otherwise they're a straight upgrade over the IronWolf drives.
| kalleboo wrote:
| Has someone yet created open source patches for LVM + Btrfs
| like what Synology does to pierce the layers and use the btrfs
| checksums as a tie-breaker to tell lvm what disk to trust to
| repair errors?
| loeg wrote:
| My tiny NAS is still using ext4 + md raid1. It's the third
| incarnation of essentially the same design (previously used
| raid10 when drives were smaller).
|
| When it fills up, I delete some files rather than adding disks.
| Teknoman117 wrote:
| I chose the same case for my NAS. Main thing I did differently was
| rather than buying a consumer board, I bought a mini-ITX Xeon-D
| board from supermicro which had integrated dual 10G NICs, 6x
| SATA, and an ASPEED IPMI for remote management. Was $400 for that
| board a few years ago (soldered CPU).
| sch00lb0y wrote:
| Home labs are always fun to build
| CraigJPerry wrote:
| Back around 2004 there was a technology called VRRP, then
| OpenBSD got a similar thing called CARP - a quick Google
| suggests these still exist, but I never see mention of them in
| my filter bubble for some reason.
|
| I was obsessed with making an instant failover cluster. I never
| managed to get it working exactly how i wanted and it relied on
| two old UPSs with dead batteries to operate as STONITH devices
| (they had well supported rs232 interfaces).
|
| I sometimes think about investigating that idea again but maybe
| with Raspberry Pis and cheap IoT plugs.
| barbazoo wrote:
| This is really cool. I've been tinkering, trying to get away from
| Dropbox and repurposed an old server to SMB share a disk that I
| occasionally rsync with another disk via ssh. I feel like it's
| not sufficient to protect against errors. What's a reliable, easy
| to maintain NAS solution for that purpose, Synology?
| NegativeLatency wrote:
| I have a similar setup to yours, but with more disks in the
| machine and a hot swap bay for offline backups.
|
| Did the price comparison for Synology a few years ago and felt
| it just made more sense to build my own. It's just the current
| LTS Ubuntu release and it runs Plex, Pi-hole, file sharing, a
| CUPS print server and some other stuff
| bsder wrote:
| Unfortunately, he punted on ECC.
|
| Is there a pointer to someone who does this but actually goes
| through the ECC grief?
|
| It's really hard to chop through all the ECC "marketing" aka lies
| from the different motherboard manufacturers.
|
| What's a cost effective CPU/mobo/ECC for NAS?
| layer8 wrote:
| > With only a handful of disks, the odds of two drives failing
| simultaneously is fairly low.
|
| The problem is when the second drive fails while you're
| recomputing the parity after having replaced the first faulty
| drive, a process which may stress the disks more/differently than
| regular operation, and also tends to take some time. Raidz 2 (or
| Raid 6) helps to provide some redundancy during that process.
| Otherwise you don't have any until the Raid has been rebuilt.
| rajandatta wrote:
| Thinking of doing the same for a similar scale! Thanks for
| sharing.
| mtlynch wrote:
| Thanks for reading! Glad to hear it was helpful.
| lousken wrote:
| raidz1 - depending on the type of data; for stuff more important
| than movies and audio I wouldn't use it
|
| it is OK for something like an offsite backup that you'll touch
| maybe once in years; if it blows up one day, just upload a new
| backup
| simonjgreen wrote:
| With over 25 years of large scale *nix sysadmin experience:
| please please please don't fall in to the trap of thinking
| RAID5/Z is a good idea. It almost never is.
|
| The number 1 trap you fall into is during rebuild after a failed
| drive. In order to rebuild, every byte on every other drive has
| to be read. On massive arrays this process invariably throws up
| additional errors, but this time you might not have the parity
| data to recover them, and the problem snowballs. This is
| exacerbated by using unsuitable drives. This author seems to
| have chosen well, but many select drives for capacity over
| reliability in a quest for the most usable TB possible. A few
| years ago there was also the scandal of the WD Red drives that
| were totally unsuitable for RAID usage.
|
| And to make matters worse there is the performance impact.
| Writing consists of 4 operations: read, read parity, write,
| write parity. That gives a /4 penalty on the sum of your
| array's drives' IOPS.
|
| RAID6/Z2 gives you slight relief from the above risk, however at
| the increased cost of an additional performance hit (a /6
| penalty)
|
| If going RAID(Z), it is generally considered best practice to go
| for a model that includes a mirror. There are decisions to be
| made whether you stripe mirrors or mirror a stripe. Personally my
| preference for reducing complexity and improving quick rebuild is
| to stripe across mirrors. So that is RAID10. You pair your drives
| up in mirrors, and then you stripe across those pairs. The
| capacity penalty is 50%. The performance penalty is close to
| zero.
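|
| In ZFS terms that's just a pool made of mirror vdevs (device
| names are examples):
|
|     zpool create tank mirror sda sdb mirror sdc sdd
|     # grow later by striping in another mirrored pair
|     zpool add tank mirror sde sdf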
|
| The author also chose to skip a separate ZIL (SLOG) drive. This,
| imo, is a mistake. They are a trivial cost to add (you only
| require a capacity that gives you the maximum amount of data you
| can write to your array in 15 seconds (tunable)) and they offer a
| tremendous advantage. As well as gaining the benefit of SSD IOPS
| for your writes you also save wear on your data array by
| coalescing writes in to a larger chunk and buy yourself some
| security against power cuts etc as faster IOPS give you a reduced
| likelihood of coinciding with an environmental issue. And if you
| are especially worried you can add them as a mirrored pair.
|
| You can also add SSDs as a cache (L2ARC) drive (I think the
| author missed this in their article) to speed up reads. In the
| case of the authors use case this would really help with things
| like media catalogs etc as well as buffering ahead when streaming
| media. The ARC in ZFS always happens, and the L1 is in RAM, but a
| L2ARC is very beneficial.
|
| The author did comment on RAM for the ARC and sizing this. ZFS
| will basically use whatever you give it in this regard. The
| really heavy use case is if you turn on deduplication but that is
| an expensive and often unnecessary feature. (An example good use
| case is a VDI server)
|
| Last tip for ZFS: turn on compression. On a modern CPU it's
| practically free.
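|
| For reference, the log/cache/compression suggestions above boil
| down to something like this (pool and device names are
| placeholders):
|
|     # mirrored SLOG, L2ARC read cache, cheap compression
|     zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
|     zpool add tank cache /dev/nvme0n1p2
|     zfs set compression=lz4 tank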
| AviationAtom wrote:
| This article seems right up my alley, so here are some thoughts:
|
| - ZFS is pretty amazing in its abilities, with it ushering in
| the age of software RAID over hardware RAID
|
| - ZFS shouldn't be limited to FreeBSD. The Linux port has come
| quite a long way. I'd advise you to use the PPA over the distro
| repo though, as many key features are missing from the version
| in the repos.
|
| - TrueNAS is more targeted towards enterprise applications. If
| you want good utility as a home user then give Proxmox or the
| like a look. Then you can make it into more than just a NAS (if
| you're open to it).
|
| - If you want to make things even more simple then consider
| something like UnRAID.
|
| - ZFS' snapshotting can really shine on a virtualization server
| application, with the ability to revert KVM VMs to a previous
| state in a matter of seconds. Look up Jim Salter's (great dude)
| Sanoid project to see a prime example.
|
| - I don't recall why, but I've heard that RAIDZ should be
| avoided, in favor of striped mirrors.
| barrkel wrote:
| Raidz (or preferably raidz2) is good for archival / media
| streaming / local backup and has good sequential read/write
| performance, while striped mirrors - raid10 - are better for
| random access read and write and are a little bit more
| redundant (i.e. reliable), but costs more in drives for the
| same usable space.
|
| Raidz needs to read all of every drive to rebuild after a drive
| replacement while a striped mirror only needs to read one.
| However if you're regularly scrubbing zfs then you read it all
| regularly anyway.
|
| Raidz effectively has a single spindle for random or concurrent
| I/O since a whole stripe needs to be read or written at a time.
| Raidz also has a certain amount of wastage owing to how stripes
| round out (it depends on how many disks are in the array), but
| you still get a lot more space than striped mirrors.
|
| For a home user on a budget raidz2 usually makes more sense
| IMO, unless you need more concurrent & random I/O, in which
| case you should probably build and benchmark different
| configurations.
|
| I've been using zfs for over 10 years, starting with Nexenta, a
| defunct oddity with Solaris kernel and Ubuntu userland. These
| days I use ZFS on Linux. I've never lost data since I started.
| gjulianm wrote:
| > - TrueNAS is more targeted towards enterprise applications.
| If you want good utility as a home user then give Proxmox or
| the like a look. Then you can make it into more than just a NAS
| (if you're open to it).
|
| I have questions about this. I'm thinking of building my own
| NAS server, and I don't know which OS to use. On the one hand
| it looks like people recommend TrueNAS a lot, which is nice now
| that they have a Linux version, but I'm not really sure what
| it offers over a raw Debian apart from web/configuration
| and some extra tools? I have quite some experience in running
| Debian systems and managing RAIDs (not with ZFS but doesn't
| seem too much of a jump) and I worry that TrueNAS, while nice
| at the beginning, might end up being limiting if I start to
| tweak too much (I plan on using that NAS for more things than
| just storage).
| AviationAtom wrote:
| If you want it to be strictly a NAS then TrueNAS should
| suffice. If you want to do anything more then I'd consider
| Proxmox or Ubuntu.
| aborsy wrote:
| What you miss with raw Debian compared to TrueNAS is
| compatibility. TrueNAS makes sure that all pieces are
| compatible with one another so that when you update each
| piece or OS, the storage doesn't break. The whole package is
| tested thoroughly before release.
|
| Also, TrueNAS makes setup painless: users, permissions,
| shares, vdevs, ZFS tuning, nice dashboard etc. With Debian,
| you get a lot of config files and ansible playbooks that
| become hard to manage.
|
| Ideally you won't run other stuff on a NAS, outside Docker.
| AviationAtom wrote:
| There's been a movement in the industry to bring storage
| back onto servers. They use the fancy buzz term of "hyper
| convergence" now though.
|
| I will definitely argue that TrueNAS gives stability and
| ease of management. Some of that can be found with Proxmox
| too though. I think it just really depends on which medium
| you prefer. Perhaps trying both is the best option?
| mardifoufs wrote:
| I know software raid is better overall but are there any
| advantages to hardware raid anymore? Is it just worse at
| everything?
| simonjgreen wrote:
| I would go as far as to say "hardware RAID" these days is
| limiting, expensive, and less performant than what can be
| achieved with software RAID.
| Anthony-G wrote:
| I'm in the process of setting up a home server after buying
| a pair of matching 3TB Western Digital "Red" disks. I plan
| on installing them in a HPE ProLiant MicroServer G7 Server
| / HP Micro G7 N40L that I was gifted a couple of years ago.
| Even though it comes with a hardware RAID, I was
| considering setting up RAID 1 using Linux software RAID.
| However, according to the _Linux Raid Wiki_ 1, Hardware
| RAID 1 is better than Software RAID 1.
|
| > This is in fact one of the very few places where Hardware
| RAID solutions can have an edge over Software solutions -
| if you use a hardware RAID card, the extra write copies of
| the data will not have to go over the PCI bus, since it is
| the RAID controller that will generate the extra copy.
|
| I was intending to use these disks for local backup and for
| storing rips of my extensive CD and DVD collection. As
| sibling comments mention, the possibility of the hardware
| controller failing is a worry, so I'd need to have a backup
| strategy for the backup disks. Since it's going to be a
| home server, down-time wouldn't be a problem.
|
| I don't have much experience with either hardware or
| software RAID so I'd welcome any advice.
|
| 1 https://raid.wiki.kernel.org/index.php/Overview#What_is_R
| AID...
| zepearl wrote:
| > _according to the Linux Raid Wiki1, Hardware RAID 1 is
| better than Software RAID 1._
|
| Don't take that too much into consideration - the article
| was last updated in 2007 ( https://raid.wiki.kernel.org/i
| ndex.php?title=Overview&action... ) so it lacks some
| details (the same can be said for a lot of the ZFS-
| related info you might find) => nowadays double-checking
| articles related to RAID and ZFS is a must.
|
| In my case I bought some HBA (Host Bus Adapter) cards
| (e.g. LSI SAS 9211-8i), set their BIOS to not do anything
| special with the HDDs connected to it (to be able to use
| them as well with other controllers) and used mdadm
| (earlier) or ZFS (now) to create my RAIDs => it works
| well, I get max throughput of ~200MiB/s per disk, and I have
| all the fancy features of ZFS without the problem of
| proprietary stuff related to the controller card :)
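|
| As a minimal sketch of that setup (pool and device names here
| are placeholders, not my actual ones):
|
|   # create a raidz1 pool across four HBA-attached disks,
|   # referencing them by stable IDs rather than sdX names
|   zpool create tank raidz1 \
|     /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
|     /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
|
|   # check layout and health
|   zpool status tank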
| AviationAtom wrote:
| I think the worst point is vendor lock-in. If your
| controller fails, and a replacement isn't available, then
| you may be dead in the water. That kind of goes against the
| very point of RAID.
| GekkePrutser wrote:
| But this is why businesses spend so much on their RAID
| controllers. To make sure they're in warranty and that
| kind of thing doesn't happen.
|
| Incidentally it's also pretty great because no business
| buys them second hand without warranty. So they're
| usually available for half nothing.
|
| I don't use raid cards right now but I do use fibre
| channel, which is also dirt cheap second hand.
| AdrianB1 wrote:
| 3 weeks ago I had a controller failure in a manufacturing
| plant in Latin America. The contract with the
| manufacturer was to provide a replacement on site in 4
| hours. Guess what, 8 hours later the technician with the
| replacement controller was still on the way.
|
| With TrueNAS I can move my drives to any other computer
| with the right interface and they will just work. I have
| done this over the past 10 years of using TrueNAS.
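|
| For anyone who hasn't done it, the move is just an export on
| the old box and an import on the new one (pool name is an
| example):
|
|   # on the old machine, if it still boots
|   zpool export tank
|
|   # on the new machine, after connecting the drives
|   zpool import          # lists pools found on the attached disks
|   zpool import tank     # add -f if the pool was never exported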
| SkyMarshal wrote:
| _> - ZFS shouldn 't be limited to FreeBSD. The Linux port has
| come quite a long way. I'd advise you to use PPA over repo
| though, as many key features are missing from the version on
| repos._
|
| Agreed. Also for anyone using NixOS, I've found its ZFS support
| is first class and easy to set up:
|
| https://www.reddit.com/r/NixOS/comments/ops0n0/big_shoutout_...
| KennyBlanken wrote:
| > I don't recall why, but I've heard that RAIDZ should be
| avoided, in favor of striped mirrors.
|
| Most people care about random IO (also once your filesystem has
| been populated and in use for a while, true linear IO really
| ceases to be due to fragmentation.) Striped arrays lose random
| IO performance as drive count goes up; an array of mirrored
| pairs gains random IO performance. This is less of an issue
| with tiered storage and cache devices, especially given you
| almost have to work to find an SSD less than 256GB these days.
|
| You can only upgrade a vdev by upgrading all its drives; it's a
| lot nicer cash-flow-wise to gradually upgrade a mirrored pair
| here and there, or upgrade exactly how many pairs you need to
| for the space you need.
|
| With RAID-Z you have a drive fail and pray a second doesn't
| fail during the resilver. With RAID-Z2 you can have any two
| drives fail. With mirrors you can lose 50% of your drives
| (provided that they're the right drives.)
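|
| For reference, the two layouts differ only in how the pool is
| created (disk names are placeholders):
|
|   # two mirrored pairs, striped together
|   zpool create tank mirror disk0 disk1 mirror disk2 disk3
|
|   # the same four disks as a single raidz1 vdev
|   zpool create tank raidz1 disk0 disk1 disk2 disk3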
| magicalhippo wrote:
| > also once your filesystem has been populated and in use for
| a while, true linear IO really ceases to be due to
| fragmentation
|
| Enough concurrent clients doing sequential IO also looks like
| random IO to a storage server.
| agilob wrote:
| >- ZFS shouldn't be limited to FreeBSD. The Linux port has come
| quite a long way. I'd advise you to use PPA over repo though,
| as many key features are missing from the version on repos.
|
| FreeBSD migrated from its own ZFS to OpenZFS, so there is a
| single ZFS implementation on BSD and Linux:
| https://openzfs.github.io/openzfs-docs/Getting%20Started/Fre...
| vetinari wrote:
| > I'd advise you to use PPA over repo though, as many key
| features are missing from the version on repos.
|
| I would advise using ZFS only with distros that come with it
| (i.e. Ubuntu, Proxmox), especially if you plan to have your /
| on it. I wasted too much time on CentOS with ZFS, would not do
| it again.
| aborsy wrote:
| Yeah, Ubuntu has done a pretty good job with ZFS on root
| installation.
|
| Zero setup, works out of the box. Highly recommend Ubuntu
| with ZFS!
| AviationAtom wrote:
| I think for me I am more concerned about trying to get the
| OS bootable again, if something becomes corrupted on the OS
| level. Even with MD RAID it can be a bit of a struggle to
| recover, but ZFS on Root seemed much harder to troubleshoot
| and repair. Perhaps I am mistaken in this belief though?
| aborsy wrote:
| Isn't ZFS there precisely to address your concern?!
|
| If OS doesn't boot, you boot from the latest snapshot!
| Every time you run apt-get upgrade, a system snapshot is
| taken automatically and an entry is added to boot menu.
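|
| Assuming the default layout Ubuntu's installer creates
| (rpool/ROOT for the system datasets), you can list those
| automatic snapshots with:
|
|   zfs list -t snapshot -r rpool/ROOT -o name,creation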
| AviationAtom wrote:
| I guess I was referring more to corruption resulting
| in an unbootable system. If you can't boot in then how
| would you roll it back?
| GekkePrutser wrote:
| That's where backups come in. Any filesystem can get
| corrupted. Though for ZFS it's less likely than with
| something like ext4. Even though both have journalling,
| only ZFS has copy on write.
| aborsy wrote:
| In Ubuntu's implementation, root and boot are separate
| pools (bpool, rpool). Both are (and can be manually)
| snapshoted. So if boot is corrupted, you roll back. I
| should say I haven't tried it though, to see how boot
| selection works (rolling back rpool is straightforward
| though).
|
| The boot corruption could occur with the default file
| system ext4 also, except with ext4 there is no
| recourse.
|
| Needless to say, you can always boot from a live USB and
| mount your ZFS pool (and perhaps roll back).
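|
| A rough sketch of that live-USB recovery path (the pool,
| dataset, and snapshot names follow Ubuntu's defaults and are
| only examples):
|
|   # import the root pool under a temporary mountpoint
|   zpool import -f -R /mnt rpool
|
|   # find the snapshot you want, then roll back to it
|   zfs list -t snapshot -r rpool/ROOT
|   zfs rollback -r rpool/ROOT/ubuntu_abc123@autozsys_example
|
|   zpool export rpool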
| mustache_kimono wrote:
| I've had to recover a ZFS on root system, whose
| bootloader installation I had somehow screwed up, and the
| process is pretty straightforward.
|
| See: https://openzfs.github.io/openzfs-
| docs/Getting%20Started/Ubu...
| AviationAtom wrote:
| ZFS on Root just sounds like pain to me. I opt for MD RAID on
| root and then ZFS for my other volumes.
|
| I would also say Ubuntu is probably the better choice for
| Linux ZFS, as CentOS seems to be lacking good support.
| GekkePrutser wrote:
| ZFS on root is really amazing on FreeBSD and the advantage
| is that you can snapshot your boot drive.
| mvanbaak wrote:
| have a look at Boot Environments. It really is amazing.
| mustache_kimono wrote:
| Once you try it, you're never going back. Snapshots are
| made for things like system administration. Upgrade borked
| your system? Just rollback.
|
| Want to use the last version of your firewall config? I
| wrote a utility you might like to try, httm[1], which
| allows you to restore from your snapshotted unique
| versions.
|
| If you like ZFS, then _trust me_ you have to have ZFS on
| root.
|
| [1]: https://crates.io/crates/httm
| AviationAtom wrote:
| Had you previously done a Show HN on this? I feel like I
| saw it once before.
| mustache_kimono wrote:
| Someone else posted about it awhile ago:
| https://news.ycombinator.com/item?id=31184404
| AviationAtom wrote:
| I also forgot to mention, your guidance on ZFS memory
| requirements is outdated. From what I have heard, recent
| releases have drastically reduced the amount of ARC cache
| needed. One person reported it working phenomenally on a
| newer Raspberry Pi.
| GekkePrutser wrote:
| True, but COW is even harder on an SD card than ext4 is, so I
| would really not use it on a Pi unless it's not using SD storage :)
| agapon wrote:
| Been using ZFS on eMMC with things like Orange Pi-s and
| Rock64-s for a few years; so far it works well for me.
| AviationAtom wrote:
| I do think many folks using such applications are booting
| from disk. IIRC, Raspberry Pi 4 supports disk booting
| natively.
| eminence32 wrote:
| Nice build. I recently built my second NAS (from a used R720 from
| ebay). The total (without disks) is pretty similar to the build
| documented in this article.
|
| Having a large NAS has had an interesting (though predictable)
| impact on all the computers around it: Pretty much every bit of
| data lives on the NAS (accessed either by CIFS, NFS, or iSCSI).
| When I had to reinstall Windows, it was mostly painless because
| all my important data and my Steam games library were on a remote
| iSCSI disk. When I replaced the drives on my Linux servers, I
| hardly had to back up anything, as I worked almost
| exclusively on NFS-mounted directories. When bringing up a new
| Raspberry Pi for projects, it also has instant access to more
| terabytes of storage than it could ever need.
|
| Also, for a homelab, getting 10GbE fiber between two machines is
| surprisingly cheap and easy. For certain workloads, it can be a
| noticeable speed boost over 1GbE.
| Macha wrote:
| How is performance, especially with regards to load times, if
| your steam library is mounted remotely?
|
| I ask because the difference between a SSD and a hard drive can
| be massive in this regard, so I'd be really interested to know
| if the network latency is also a comparable hit.
| vladvasiliu wrote:
| I'm not a hardcore gamer by any means, but I really wonder
| how much influence drives actually have on games.
|
| My gaming computer had an old SATA SSD (Samsung 840 Evo
| IIRC). Some games took ages to load (particularly Fallout 4).
| I switched to a much faster NVME drive, and subjectively,
| it's not any faster loading games. I'd say this was a very
| underwhelming purchase.
| Macha wrote:
| There's certainly diminishing returns between "an SSD" and
| "a faster SSD" (unless the slower one is DRAMless or QLC),
| but hard drive to SSD is still a big gulf
| eminence32 wrote:
| The performance is fine. It's been years since I ran my steam
| library on HDD, so I don't have anything really to compare to
| (except my own expectations and impatience). The NAS is
| running 7 SSD drives in zfs raid, and exports a 1TB volume
| over iSCSI, via a dedicated 10GBe fiber link. Anecdotally, I
| will often boot into games faster than friends who have a
| local SSD (so I think this means that I've gotten disk perf
| to be "good enough" that other hardware components start to
| dominate things like load times)
| kennywinker wrote:
| If your budget server starts with a first step of buying new
| hardware, I'm going to ignore your advice. A 500W PSU? Nope. Buy
| a used ThinkCentre with a 250W PSU for $50-$100 and spare the
| planet more of this e-waste.
| kstenerud wrote:
| I set up my most recent NAS using a TerraMaster unit. It's
| basically a nifty case (9x5x5 inches) around a low power Intel
| board with a USB stick for a boot device (which I replaced with a
| mini 100GB USB SSD).
|
| I don't know and don't care about TerraMaster's software (it
| might be awesome - I have no idea). I just rolled my own NixOS
| install with ZFS so that I could have a deterministic
| installation (I've heard good things about the TrueNAS OS as
| well, but I'm a control freak and like being able to rebuild the
| entire server with a single command and a config file, so I stick
| with NixOS).
|
| The nice thing is that I essentially got a motherboard, CPU, PSU,
| and compact case for $350 (for the F2-422). All I had to do was
| upgrade the RAM (SO-DIMM) and add the drives.
|
| I've long since reduced to only two drives for my NAS. At one
| point I was up to 7 drives before I realized my madness. It's
| cheap enough to get the storage I need with two mirrored drives,
| is quieter and uses less energy (I can keep it in the same room),
| and when I finally outgrow them in 5 years or so, the old drives
| will be re-purposed as backup via an external USB enclosure I
| keep around.
| aborsy wrote:
| Can you install TrueNAS on it?
|
| It's unclear if one could install TrueNAS on a Synology, QNAP,
| TerraMaster, etc. Sometimes the hardware is not supported.
| kstenerud wrote:
| I don't know, but if I could install NixOS without
| difficulty, it should be possible. I installed Ubuntu server
| on it at first and that also worked fine. No tweaking
| necessary at all. You just flash the standard x64 installer
| on a USB stick, plug it in, and install like you would on any
| PC (because it basically is a PC - it even has a working HDMI
| port).
|
| Edit: Looks like someone did a writeup for TrueNAS on a
| TerraMaster: https://joelduncan.io/freenas-on-
| terramaster-f2-221/
|
| Also: https://mightygadget.co.uk/how-to-upgrade-the-
| terramaster-f4...
| aborsy wrote:
| It's an attractive option, because it might be cheaper than
| a TrueNAS mini from iXSystems (which is also difficult to
| ship outside US) or a DIY NAS.
|
| You get an affordable TrueNAS server.
| jen20 wrote:
| Nice article - though one misstatement is that ZFS does not allow
| you to add disks to a pool. It does [1] [2], by adding new vdevs.
| The linked issue is about adding support for expanding existing
| vdevs instead.
|
| [1]: https://openzfs.github.io/openzfs-docs/man/8/zpool-
| add.8.htm...
|
| [2]: https://docs.oracle.com/cd/E53394_01/html/E54801/gayrd.html
| linsomniac wrote:
| It's a hard pill to swallow, adding two more drives in a vdev
| to get one more drive's worth of storage (the author's case maxes
| out at 6 drives and currently has 4). So often you will bite the
| bullet and just completely rebuild.
|
| True RAIDz expansion is something that's supposed to be coming,
| possibly in Q3 2022, so it may be that by the time one needs to
| expand a volume, that ability will have landed. That'll be a
| game changer.
| tristor wrote:
| I would advise using ZFS on Linux over ZFS on FreeBSD. You may
| find this somewhat surprising if you know my post history as a
| major FreeBSD advocate, but I have run into a somewhat surprising
| and persistent (and known, but not to me when I started building)
| issue with FreeBSD's USB Mass Storage support. This issue does
| not happen on Linux. This is among several issues I noted which
| affected my ability to make a budget-friendly homelab NAS.
|
| Since you are using an M.2 drive rather than a USB drive for your
| boot drive, you are not affected by the issue that affected me.
| But I've reached a point where I would not trust FreeBSD to not
| have weird and esoteric hardware issues that could affect
| performance or reliability for storage. I'd recommend using ZFS
| on Linux (Note, I still use FreeBSD as my primary OS for my
| personal laptop).
| vermaden wrote:
| Did something similar but with a lot less power consumption and
| price:
|
| - https://vermaden.wordpress.com/2019/04/03/silent-fanless-fre...
| acheron wrote:
| I've been wanting to build a NAS recently, this looks pretty
| good.
|
| On the other hand, I can't stand people who say "homelab". Ugh.
| pxeger1 wrote:
| Why?
| hamandcheese wrote:
| > ZFS doesn't let you add a new drive to an existing pool, but
| that feature is under active development.
|
| This is not true at all. You can't add new drives to an existing
| _vdev_ , but you are free to add new vdevs to an existing pool
| whenever you want.
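|
| For example (placeholder device names):
|
|   # grow an existing pool by adding a second raidz1 vdev;
|   # new writes are then striped across both vdevs
|   zpool add tank raidz1 disk4 disk5 disk6 disk7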
| aborsy wrote:
| I wonder if it's possible to back up ZFS (eg, a ZFS server) to
| btrfs (eg, a synology nas)?
|
| I mean backing up the file system (as with ZFS send) not scanning
| all files (using rsync or restic).
| dsr_ wrote:
| The target of a zfs send can be a plain file on another
| filesystem- btrfs, ext4, xfs, exfat, whatever.
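|
| E.g. (a sketch with made-up names):
|
|   # dump a snapshot into an ordinary file on any filesystem
|   zfs send tank/data@2022-05-29 > /mnt/btrfs/data.zfs
|
|   # restore later by feeding the file back to zfs receive
|   zfs receive tank/data-restored < /mnt/btrfs/data.zfs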
| aborsy wrote:
| Yes, but it's usually not recommended, because the receiver
| side doesn't verify that the data is identical to that at the
| sender side, and a small error could corrupt the file system.
|
| ZFS send and receive is a good way to do it, but there is no
| ZFS send and btrfs receive!
| termios wrote:
| This costs as much as every PC I've ever owned put together!
|
| my budget homelab is 100% recycled (ewaste):
|
| - dual core pc (free)
|
| - hard drives (free)
|
| - ram (free)
|
| - lcd monitor (free)
|
| - mdadm + ext4
| diffeomorphism wrote:
| What is the power consumption?
|
| At the beginning of the year 0.35EUR/kWh was a good estimate.
| An extra 10W for one year then costs about 30EUR.
|
| I get the "recycled" motivation, but at that point you might be
| wasting lots of electricity (and as a result also money).
| RobLach wrote:
| If you're going for budget, decommissioned data center rack
| servers work well.
|
| I purchased a dual xeon (24 cores total) with 64gb of memory, 12
| 3.5" bays, dual power supplies, for about $250 from a liquidator.
|
| Filling it with HDDs was pricey, but you can expand as you need
| to, spreading out the expenditure.
| mobilio wrote:
| Same here - 32 cores/64G ram/4 3.5" bays, dual power 460w for
| $300.
|
| Results:
|
| - idle: 55W
|
| - full usage: 200W
|
| Not bad for a 10-year-old server.
| fuzzy2 wrote:
| I have to say, I'm put off by the power benchmark. I have a way
| older system with way more stuff (6 3.5 inch HDD, 2 2.5 inch HDD,
| 2.5 inch SSD, 8-port SAS controller, Intel NIC card) and it idles
| (all drives spun down) at ~30 watts.
|
| When I first bought the system over 10 years ago, ZFS on Linux
| wasn't really a thing, so I used FreeBSD. I later switched to
| Linux, and with the switch came substantial power savings.
| mtlynch wrote:
| Oh, that's interesting. TrueNAS is available on a Debian base now, so
| I wonder if there would be a big drop in power consumption.
|
| Lawrence Systems just ran benchmarks[0] between TrueNAS Core
| (FreeBSD) and TrueNAS Scale (Debian), but they didn't include
| power consumption, unfortunately.
|
| [0] https://www.youtube.com/watch?v=BoiHHnBDg0E
| mrb wrote:
| I am sure the author will appreciate ditching the proprietary
| Synology to go instead with a custom ZFS server, as the
| reliability, recoverability, and feature set of ZFS are quite
| frankly hard to beat. I have been using ZFS to build my custom
| NASs for the last... _checks notes_ 17 years. I started back when
| ZFS was only available on Solaris /OpenSolaris. My builds usually
| have between 5 and 7 drives (raidz2).
|
| However I do not recommend his choice of 4 x 8TB drives in a
| raidz1. Financially and technically it doesn't make sense. He
| spent $733 for 24TB usable ($30.5/TB).
|
| He should have bought fewer, larger drives. For example 14TB
| drives sell for $240. So a config with 3 x 14TB in a raidz1 would
| total $720 for 28TB usable ($25.7/TB). Lower cost, more
| storage, one fewer drive (= increased reliability)! It's win-win-
| win.
|
| Especially given his goal and hope is in a couple years to be
| able to add an extra drive and reshape the raidz1 to gain usable
| space, a 14TB drive will then be significantly cheaper per
| TB than an 8TB drive (today they are about the same cost per TB).
|
| Actually, with only 8.5TB of data to store presently, if I were
| him I would probably go one step further and go with a simple zfs
| mirror of 2 x 18TB drives. At $320 per drive that's only $640
| total for 18TB usable ($35.6/TB). It's a slightly higher cost per
| TB (+17%), but the reliability is much improved as we have only 2
| drives instead of 4, so totally worth it in my eyes. And bonus:
| in a few years he can swap them out with 2 bigger-capacity
| drives, and ZFS _already_ supports resizing mirrors.
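|
| The later swap-out is just a pair of replace operations
| (sketch; device names are placeholders):
|
|   # let the pool grow once every device in the vdev is bigger
|   zpool set autoexpand=on tank
|
|   # replace each mirror member in turn, letting the resilver
|   # finish in between
|   zpool replace tank old-disk-1 new-disk-1
|   zpool replace tank old-disk-2 new-disk-2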
| octopoc wrote:
| > For example 14TB drives sell for $240
|
| Where? Also, is it worthwhile to buy hard drives explicitly for
| NAS when you're using ZFS? For example, Seagate has the
| IronWolf product line explicitly for NAS, and it costs more.
| gen220 wrote:
| https://diskprices.com/?locale=us&condition=new&capacity=14-.
| ..
| mrb wrote:
| I looked at a popular line (Seagate X16) here:
| https://www.amazon.com/dp/B07T63FDJQ - $237.99 right now
|
| Drives branded for NAS applications differ slightly from
| mainstream drives. For example Seagate claims the IronWolf is
| "designed to reduce vibration, accelerate error recovery and
| control power consumption" which essentially means the drive
| head actuators will be operated more gently (reduced
| vibration) which slightly increases latency and slightly
| reduces power consumption, and also the firmware is
| configured so that it does fewer retries on I/O errors, so
| the disk commands time out more quickly in order to pass the
| error more quickly to the RAID/ZFS layer (why wait a minute
| of hardware retries when the RAID can just rebuild the sector
| from parity or mirror disks?) IMHO for home use, none of this
| is important. Vibration is only an issue in flimsy chassis,
| or extreme situations like dozens of disks packed tightly
| together, or extreme noise as found in a dense data center
| (see the video of a Sun employee shouting at a server). And
| whether you have to wait a few seconds vs a few minutes for
| an I/O operation to timeout when a disk starts failing is
| completely unimportant in a non-business critical environment
| like a home NAS.
| philjohn wrote:
| Thought I'd chime in here with my low-cost NAS/backup server/home
| server.
|
| It's in a 2U case I got from servercase UK that takes 6
| hard drives, and it's running:
|
| - Core i3 9100T (35w TDP, configurable down to 25W)
|
| - Asrock Rack WS246I (mini itx workstation board, no need for an
| HBA as there are 8 SATA ports on board, 4 standard and another 4
| from the OCuLink)
|
| - 32GB ECC DDR4 (2 16 GB sticks)
|
| - Solarflare 7 series 10Gb SFP+ NIC (second hand, from ebay)
|
| - 6 ironwolf 4TB NAS drives
|
| Total cost was just a shade under 1000 GBP and it's racked up
| with my networking gear in the garage.
| js2 wrote:
| New drives are around $16-$20/TB depending on if you catch a sale
| and are willing to shuck. You can pick up used SAS drives in
| 3-4TB capacity for around $5/TB. I'm a crazy person, so I built
| this to hold 11 3TB SAS drives:
|
| https://imgur.com/a/JqbBN1p
|
| https://pcpartpicker.com/list/dLhNvf
|
| I used a Supermicro MB and ECC RAM. It's not much more expensive
| and it's nice having IPMI. I personally think it's crazy to
| forego ECC. The SAS controller, expander, and drives were used,
| everything else was new. Prices have gone up. The new parts were
| $638 at the time. The drives were ~$20/ea. The HBA and expander
| were ~$85 for both. After the fans, cables, extra drive cage and
| brackets, total cost was around $1K. This hardware is all
| supported out-of-the-box by TrueNAS. I haven't done the math to
| figure out when the cost of running this will exceed having
| purchased higher capacity drives.
|
| This is what a typical used SAS drive looks like. 30K hours
| but very few on/off cycles. Zero defect list:
|
| https://pastebin.com/WcUYX4JR
|
| The failed SMART test turned out to be a firmware issue. I had to
| update the firmware on all the drives. That was a bit of an
| adventure:
|
| https://pastebin.com/j3AGX0xN
|
| A few drives arrived with a non-zero defect list or otherwise
| failed burn-in. I contacted the seller on eBay and they sent me
| replacements w/o any fuss. I'm not necessarily recommending used
| SAS drives, but I'm not recommending against them either. I will
| recommend the serverbuilds forum for generally good advice on all
| this. I think this post got me started:
|
| https://forums.serverbuilds.net/t/another-nas-killer-4-0-bui...
|
| The current NAS killer version is 5.0:
|
| https://forums.serverbuilds.net/c/builds/18
| gjsman-1000 wrote:
| The catch with shucking drives from cheap NAS systems like WD's
| is that those drives have a strong likelihood of being SMR
| drives instead of CMR. Basically, they'll work fine until
| something goes wrong requiring, say, a RAID rebuild, and then
| you'll be out for weeks while they rebuild, praying they don't
| fail in the process, because the random write performance is
| abominable.
| Nextgrid wrote:
| SMR only affects writes, so assuming you're originally fine
| with SMR (let's say you only write to the pool very
| sparingly), you can start off with SMR but then use CMR for
| replacements.
| krnlpnc wrote:
| Is anyone running a similar scale setup with cloud storage and
| rclone? I've been considering retiring my 16TB NAS.
| RektBoy wrote:
| Isn't it illegal to rip DVDs in your country? Just curious,
| because I live in a country where it's "legal" to download movies,
| etc., for your own use.
| 627467 wrote:
| How do you handle backups of such amount of data?
| Joel_Mckay wrote:
| "CephFS supports asynchronous replication of snapshots to a
| remote CephFS file system via cephfs-mirror tool. Snapshots are
| synchronized by mirroring snapshot data followed by creating a
| snapshot with the same name (for a given directory on the
| remote file system) as the snapshot being synchronized." (
| https://docs.ceph.com/en/latest/dev/cephfs-mirroring/ )
|
| We found ZFS led to maintenance issues, but it was probably
| unrelated to the filesystem per se, i.e. culling a rack
| storage node is easier than fiddling with degraded raids.
| mtlynch wrote:
| I have a nightly restic backup from my main workstation to
| buckets on Backblaze and Wasabi. It backs up the few local
| folders I have on my workstation and all the files I care about
| on my NAS, which the workstation accesses over Samba. I've
| published my scripts on Github.[0]
|
| I don't back up my Blu-Rays or DVDs, so I'm backing up <1 TB of
| data. The current backups are the original discs themselves,
| which I keep, but at this point, it would be hundreds of hours
| of work to re-rip them and thousands of hours of processing
| time to re-encode them, so I've been considering ways to back
| them up affordably. It's 11 TiB of data, so it's not easy to
| find a good host for it.
|
| [0] https://github.com/mtlynch/mtlynch-backup
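|
| (For anyone curious, the core of a restic push to B2 looks
| roughly like this; the bucket, path, and credential values are
| made-up placeholders, not the ones my scripts use.)
|
|   export B2_ACCOUNT_ID=xxxx
|   export B2_ACCOUNT_KEY=xxxx
|   export RESTIC_PASSWORD_FILE=/root/.restic-password
|
|   # one-time repository setup, then nightly incremental runs
|   restic -r b2:example-bucket:nas init
|   restic -r b2:example-bucket:nas backup /mnt/nas/documents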
| farmerstan wrote:
| As it gets bigger and bigger, the only thing that makes sense
| to me is getting another NAS and replicating that way.
| willis936 wrote:
| I use B2 + E2EE. TrueNAS can push and pull pools to many
| different targets, but Backblaze is the cheapest I've found.
| fuzzy2 wrote:
| Buy another, use ZFS send/receive. It's only double the price!
| Better yet, put it elsewhere (georedundancy). With ZFS
| encryption, the target system need not know about the data.
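|
| A sketch of what that looks like with a raw (still-encrypted)
| send; the host and dataset names are invented:
|
|   # -w ships the blocks as stored on disk, so the receiving
|   # pool never sees plaintext and never needs the key
|   zfs snapshot tank/data@nightly
|   zfs send -w tank/data@nightly \
|     | ssh backup-host zfs receive -u backup/data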
|
| For critical data though I use Borg and a Hetzner StorageBox.
| throw0101a wrote:
| A 22TB pool can perhaps be backed up to a single 26TB drive
| (over USB? Thunderbolt?):
|
| * https://www.techradar.com/news/larger-than-30tb-hard-
| drives-...
|
| Buy multiple drives and a docking station and you can rotate
| them:
|
| * https://www.startech.com/en-us/hdd/docking
|
| ZFS send/recv allows for easy snapshotting and replication,
| even to the cloud:
|
| * https://www.rsync.net/products/zfs.html
|
| * https://arstechnica.com/information-
| technology/2015/12/rsync...
| Macha wrote:
| However, such a drive is getting heavily into diminishing
| returns territory.
|
| e.g. a 20TB drive from Seagate is $500. A 4TB drive is $70,
| 8TB is $140. Getting the same spend in smaller capacity
| drives would give you 28TB in the 4TB drives and 24TB/32TB in
| the 8TB drives (for $80 under/$60 over).
|
| Add in a second to rotate and you're spending $1000 in
| drives, assuming these 26TB drives replace the 20TB drives at
| a similar price when they trickle down to consumer hands.
| trollied wrote:
| You have to factor in the power usage of having multiple
| drives spinning. Though I'd agree that smaller drives are
| better when you have a drive failure, as resilvering is
| quicker.
| throw0101a wrote:
| OpenZFS 2.0's sequential resilver may help:
|
| * https://github.com/openzfs/zfs/pull/10349
|
| * https://github.com/openzfs/zfs/releases/tag/zfs-2.0.0
| Vaslo wrote:
| I have tried a few different OSes but my favorite by far is
| unRaid. It's really easy to set up and maintain, and it gave me a
| lot of really good experience with server maintenance and the
| whole container ecosystem. I bought a 24-drive server chassis and
| am slowly filling it up. Up to 80 TB now, and I only have to have
| one extra drive for local backup (I also back up to another box
| periodically).
| rkagerer wrote:
| Although I understand, this made me sad:
|
| _I ultimately decided against ECC RAM_
|
| Also unsurprised to see load-related issues stemming from his
| embedded Realtek hardware.
| lazzlazzlazz wrote:
| My conclusion from this was that the Synology is actually
| excellent value, and a newer one would likely have been superior
| on all dimensions (including time spent).
| ulnarkressty wrote:
| Right, apart from the entertainment/hobby value I'm not sure I
| understand these guides. It might be cheaper to build, but in
| the end what you pay for is the software and not having to
| spend your time configuring it.
|
| At some point I wanted to go the TrueNAS / FreeNAS / OwnCloud
| etc. route but after seeing the pages upon pages of
| troubleshooting and lost data horror stories I stuck with a
| commercial solution.
| aborsy wrote:
| It's hard to beat synology: small form factor, low power,
| quiet, excellent DSM software, web interface for file browsing,
| expandable array, a lot of apps (including mobile apps for photo
| backup and dedicated backup apps), etc.
|
| But Synology doesn't use ZFS, which is a better filesystem than
| btrfs. In particular ZFS offers native encryption (instead of
| the clunky ecryptfs in synology), and allows ZFS send from
| Linux servers.
| throw0101a wrote:
| 22TB of total capacity is interesting because we're now getting
| >26TB on _single drives_ :
|
| * https://www.techradar.com/news/larger-than-30tb-hard-drives-...
|
| Crazy.
| copperfoil wrote:
| Yes but there's a price/reliability/performance trade off.
| Also, with disks that big failures become qualitatively
| different. For example, when a disk fails in a mirror, the
| bigger the disk size the higher the chance the 2nd disk will
| have unreadable blocks.
| kstenerud wrote:
| This is what ZFS scrubbing is for. If a drive develops
| unreadable sectors, ZFS will alert you.
| giantrobot wrote:
| If the drives are in a RAID ZFS will not only alert you but
| fix the corruption from parity on other disks.
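|
| In practice that's a periodic scrub plus checking the status
| output (pool name is an example):
|
|   zpool scrub tank      # re-read everything, verify checksums
|   zpool status -v tank  # shows repairs and any damaged files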
| 2OEH8eoCRo0 wrote:
| Why is there a need to proselytize about ZFS in every thread on
| this topic?
| Joel_Mckay wrote:
| Because people don't know about cephfs yet, and the silliness
| of degraded raid setups. i.e. trusting a zfs community edition
| in a production environment can be a painful lesson. ;-)
| magicalhippo wrote:
| I know about CephFS, but performance was abysmal compared to
| ZFS for a home server. On a single box with 4-8 drives I
| didn't come close to saturating a 10G link, which ZFS managed
| just fine.
|
| It was also very complex to manage compared to ZFS, with many
| different layers to consider.
|
| I'm sure it shines in a data center, for which it has been
| designed. But unless something radical has changed in the
| last year, it's not for a budget homelab NAS.
| Joel_Mckay wrote:
| The cephfs per-machine redundancy mode is usually the
| preferred configuration. i.e. usually avoids cramming
| everything into a single point of failure, buying specialty
| SAS cards, and poking at live raid arrays to do
| maintenance.
|
| Seen too many people's TrueNAS/FreeNAS installs glitch up
| over the years to trust the zfs community edition as a sane
| production choice. ZFS certainly has improved, but Oracle
| is not generally known for their goodwill toward the
| opensource community. ;-)
| zamalek wrote:
| > Oracle
|
| BTRFS seems to be maturing nicely, hopefully we can start
| using it for these types of workloads in the next few
| years.
| Joel_Mckay wrote:
| BTRFS? You have to be joking... ZFS, despite its history,
| has rarely achieved that level of silliness.
| magicalhippo wrote:
| I've never run TrueNAS/FreeNAS in proper production, but
| I have run it at home for over a decade and never lost
| data, despite generally running on old hardware, multiple
| drive failures, motherboards dying and power
| outages/lightning strikes.
|
| Overall it's been very little fuss for my home NAS system.
| linsomniac wrote:
| I'm sure it'd be painful, but let's throw "infrequent" onto
| your description of the lesson. :-)
|
| I've run ZFS for home storage and work backups for ~15 years,
| across Nexenta, ZFS-fuse, FreeBSD, and OpenZFS, backing up
| hundreds of machines, and have never lost data on one of
| them.
| dsr_ wrote:
| It's almost entirely because people really like technology that
| not only promises to reward you with excellent features and
| stability, it follows through on those promises.
| Deritiod wrote:
| Because it works so well. At least that's why I talk about it.
| mmastrac wrote:
| > I purchased the same model of disk from two different vendors
| to decrease the chances of getting two disks from the same
| manufacturing batch.
|
| I prefer mixing brands/models instead. Two vendors _might_ get
| you a different batch, but you could be choosing a bad model. I
| ended up building mine from three different WD models and two
| Seagate ones. I'm paranoid and run with two spares.
| zamalek wrote:
| > Power usage
|
| A 500W PSU won't necessarily draw more than a 250W PSU; that is
| merely its maximum sustained load rating (what the rest of the
| system asks for). The Bronze 80+ rating is likely part of the
| problem here; it indicates what the power draw from the wall is
| compared to what is being provided to your system. Titanium 80+
| would net you about a 10% reduction in wall power usage. Keep in
| mind that manufacturers play fast and loose with the
| certification process and a consumer unit may not actually be
| what it says on the box, so you need to rely on quantitative
| reviews.
|
| Other than that, spend some time in the firmware settings.
| Powertop also does a great job at shaving off some watts.
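|
| For example (review the suggestions before making them
| permanent):
|
|   # report the biggest consumers and tunable settings
|   powertop
|
|   # apply all of powertop's suggested tunables in one go
|   powertop --auto-tune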
| KennyBlanken wrote:
| Switch-mode PSUs are very inefficient at the low end of their
| duty cycle.
|
| A 250W 80-bronze PSU for a 60W load will be operating at 25%
| capacity and 82% efficiency or better.
|
| A 500W 80-titanium PSU at 60W will be at around 12% and 90%
| efficiency or better.
|
| So, an 8% difference in _minimum required_ efficiency...for a
| _huge_ increase in cost.
|
| It's much better to buy a high "tier" PSU (for reliability and
| safety), sized so that it spends most of its time at or above
| 20% duty cycle (which in OP's case would indeed be 250W.)
|
| 80-gold is very common in the marketplace and where most people
| should probably be buying.
| mrb wrote:
| " _A 500W PSU won 't necessarily draw more than a 250W PSU_"
|
| Mostly true, but not exactly. Most computer PSUs are more
| efficient when operating around 50% of their rated load. So if
| a computer consumes 125W internally, a 250W PSU would translate
| to lower power consumption measured at the wall than a 500W
| PSU, typically by about 2-5%.
|
| For example see the chart https://www.sunpower-
| uk.com/files/2014/07/What-is-Effciency.... (115 VAC input) :
| 88% efficiency at 25% load, vs 90.5% efficiency at 50% load. In
| practice if the consumption is 125W at the PSU's DC output,
| this translates respectively to 142W vs 138W measured at the
| wall.
|
| This 2-5% difference may not seem much, but it's similar to
| upgrading 1 or 2 levels in the 80 PLUS ratings (Bronze, to
| Silver, to Gold, to Platinum, to Titanium).
| Jhsto wrote:
| What is the technical upside of using TrueNAS instead of samba?
| If you want to optimize for control, it seems a bit weird to me
| to settle for an "all in one" software stack.
| loeg wrote:
| You're asking the equivalent of "why use Linux instead of TCP?"
| Thaxll wrote:
| Samba config files are painful, it's pretty old school. On top
| of that you need to set up users/groups, etc...
| Tomdarkness wrote:
| Not really sure the comparison is valid. TrueNAS uses Samba to
| provide SMB network shares.
| Jhsto wrote:
| I see, so I assume the upside is that it's a time saver.
| Thanks! I personally went with Samba on Linux and with
| btrfs. I was wondering if there's something non-obvious in
| TrueNAS that I'm missing out on.
|
| And by my account, I think my upsides are:
|
| - ability to choose the kernel
|
| - no need for SSD for base OS since running off of RAM is
| rather easy on Linux
|
| - samba can run in a container thus a bit more control
| security-wise
|
| - server may run something else as well
|
| Of course, this comes with a lot more technical hurdles. More
| like a side-project than utility really. That's why I was
| wondering whether TrueNAS provides non-obvious upsides that
| would be lacking in a self-rolled one.
| kalleboo wrote:
| There are two flavors of TrueNAS - Core and Scale. Core is
| basically a FreeBSD distro and Scale is basically a Linux
| distro. They're both a base OS with the typical packages
| anyone would need for a NAS, with sane defaults + a user-
| friendly web-based management system.
|
| The upsides are that it's plug-and-play for anyone who
| doesn't want to research all the options available and
| figure out the various pitfalls on their own.
|
| > _no need for SSD for base OS since running off of RAM is
| rather easy on Linux_
|
| I don't understand this sentence. You're running off a RAM
| disk with no boot drive? What if you have a power outage?
|
| > _samba can run in a container thus a bit more control
| security-wise_
|
| Core supports FreeBSD jails and Scale supports Docker so
| you could run samba in a container on either if you're
| willing to set it up yourself.
|
| > _server may run something else as well_
|
| As before, both have jail/container functionality. I
| haven't used Scale myself but Core comes with a bunch of
| "click to install" jail options for stuff like Plex,
| ZoneMinder, etc. Our machine also runs a Windows VM (ew)
| and a WordPress install in a jail.
| Jhsto wrote:
| Thanks, this is a great explanation! I wish the blog post
| had described TrueNAS like this.
|
| > You're running off a RAM disk with no boot drive? What
| if you have a power outage?
|
| Yes, the server only has the HDDs which contain the NAS
| data. The server bootloops until it gets an image from
| the router (ipxe boot). The disk images have systemd
| scripts which install everything from 0 on each boot.
| Coincidentally, this means system restart is how I
| upgrade my software.
|
| > Core supports FreeBSD jails and Scale supports Docker
|
| This clarifies the situation -- TrueNAS seems like an
| option that I would recommend for anyone who wants a
| quick OSS NAS setup.
| idatum wrote:
| I use 2 ZFS mirrored 4TB drives mounted on a USB C dual bay
| device (iDsonix brand) for backing up my ZFS pools. I have a
| simple script that imports the backup pool and sends snapshots
| from my main ZFS pools to the backup pool.
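|
| The gist of the script, with made-up pool and snapshot names
| rather than my real ones, is roughly:
|
|   # attach the USB bay, then:
|   zpool import backup
|   zfs snapshot -r tank@backup-2022-05-29
|   # assumes an earlier full send already seeded backup/tank
|   zfs send -R -I tank@backup-2022-05-22 tank@backup-2022-05-29 \
|     | zfs receive -Fu backup/tank
|   zpool export backup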
|
| My question: How do you safely store your physical backup
| drives/devices?
|
| I have a fireproof box, but I don't think it was made for safely
| storing electronics in the event of a fire.
| Sylveste wrote:
| The same solution, only powered by PoE and buried in a Pelican
| case filled with desiccant sacks in your back garden.
| idatum wrote:
| :-)
|
| To be clear I meant the drives backing up the NAS, not the
| actual NAS.
|
| I think backing up online ultimately is the safest choice,
| and it takes getting comfortable doing that and being okay
| with paying a fee. This is for data that I can't lose, like
| family photos, etc.
|
| I started looking into using rclone directly from my FreeBSD
| NAS device. rclone seems to support many providers.
| amluto wrote:
| As far as I can tell, building a similar system using NVMe is
| vastly more complicated. If you can fit everything into M.2
| slots, it's easy. Otherwise you need barely-standardized PCIe
| cards, backplanes, connectors ordered off a menu of enterprise
| things with bizarre model numbers, and possibly "RAID" cards even
| though you don't actually want hardware RAID.
| jandrese wrote:
| 22TB of NVMe drives is going to be a bit more expensive than
| the system in the article however.
|
| I do wonder what the power consumption figures would be though.
| His system was drawing an annoyingly large amount of power and
| I suspect that was mostly those HDDs.
| zekica wrote:
| You can use PCIe switches to go from a PCIe x4 slot to four
| x4 M.2 drives.
| loeg wrote:
| The active pcie switches are a lot more expensive than the
| passive splitters that only work on systems with pcie
| bifurcation.
| amluto wrote:
| For a large scale storage system (more NVMe devices than
| PCIe lanes), a switch is mandatory. Or a "RAID" card or
| another switch-like device.
| [deleted]
| willis936 wrote:
| If your motherboard is new enough. I run my home NAS+server
| on 2013-era enterprise hardware and use a Supermicro AOC-
| SHG3-4M2P to make it work.
| wojciii wrote:
| Good article. He chose a Fractal Design case, which I really like
| (the company, not the specific model).
|
| I had all kinds of thermal problems with the too-small case that I
| used for my TrueNAS build. It would turn off without any trace in
| server logs (I have real server HW and therefore expected
| something in logs since there is a whole separate computer for
| this).
|
| I changed the case from a NAS case to another Fractal Design
| case with lots of space for drives and heatsink. All thermal
| issues disappeared.
|
| I just wanted to warn anyone who is building to take this
| seriously. Some hard drives generate a lot of heat.
| diekhans wrote:
| Not using ECC is not a good tradeoff in my experience. The only
| ZFS corruption I have experienced was with direct-attached ZFS on a
| Mac with memory errors undetected by Apple's diagnostics.
| sandreas wrote:
| Building your own NAS Server by hand may be a nice project, but
| if you would like to get something up and running quickly, you
| should consider prebuilt servers like the Dell T* series or HP
| MicroServer. It is real server hardware, supports ECC RAM, is
| far less work to build, and often provides (semi-)professional
| remote management.
|
| If you plan to build a budget NAS and enough room is not a
| problem, I personally would recommend getting an old, used Dell
| T20 Xeon E3-1225v3 with min. 16GB ECC DDR3 RAM, 2x10TB Seagate
| Exos ZFS RAID and a bootable USB Stick with TrueNAS or if you
| prefer Linux, TrueNAS Scale / OpenMediaVault.
|
| If room IS a problem, you could get a HP Microserver Gen8 or
| higher with a Xeon and the config above.
|
| - Server Cost: 150 Bucks
|
| - Total Cost: 650 Bucks (150 for server, 500 for HDD)
|
| - Power Consumption: 33W Idle, 60W Heavy File Transfer
|
| - Silent enough without modding the fans
|
| - Ethernet-Transfer-Speed: 110MB/s on 30% System Load
|
| I do not own a 10Gbit Ethernet card, but I'm pretty sure
| transfer speeds with 10Gbit would be acceptable, too.
| [deleted]
| marius_k wrote:
| What would you recommend for power-efficient HDDs (when idle)?
|
| I have recently built a TrueNAS box with 2x4TB SSDs. But I
| think I will want to expand it. Currently it runs at 14W idle.
| If I add 2 HDDs I expect it to increase to 40W(?). How can I
| optimize this?
| jazzythom wrote:
| the_only_law wrote:
| > who needs to store 32tb of data?
|
| People hosting large private media libraries, I assume. Think
| people who eschew music/movie streaming services and instead
| download FLACs/videos and stream those over their local
| network.
___________________________________________________________________
(page generated 2022-05-29 23:00 UTC)