[HN Gopher] 7 watts idle - building a low powered server/NAS on ...
___________________________________________________________________
7 watts idle - building a low powered server/NAS on Intel 12th/13th
gen
Author : ryangibb
Score : 167 points
Date : 2023-12-31 12:10 UTC (10 hours ago)
(HTM) web link (mattgadient.com)
(TXT) w3m dump (mattgadient.com)
| ggm wrote:
| Sort of not surprising how divergent chipsets are when it
| comes to power states and other things.
|
| How does he get raidz2 to spin down without busting the raidset?
| Putting drives into sleep states isn't usually good for software
| ZFS, is it? Is the l2arc doing the heavy lifting here?
|
| Good comments about ECC memory in the feedback discussion too.
| phil21 wrote:
| I've found ZFS to be extremely forgiving for hard drives having
| random slow response times. So long as you are getting a
| response within the OS I/O timeout period, it's simply a matter
| of blocking I/O until the drives spin up. This can honestly
| cause a lot of issues on production systems with a drive that
| wants to half fail vs. outright fail.
|
| I believe this is on the order of 30-60s from memory.
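|
| (For reference, on Linux that timeout is the per-device SCSI
| command timeout exposed in sysfs -- a rough sketch, the device
| name is just an example:)
|       cat /sys/block/sda/device/timeout         # default is 30 (seconds)
|       echo 60 > /sys/block/sda/device/timeout   # tolerate slow spin-ups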
|
| l2arc likely works quite well for a home NAS setup allowing for
| the drives to be kept spun down most of the time.
|
| Strangely I also built (about 10 years ago now) a home NAS
| utilizing a bunch of 2.5" 1TB Seagate drives. I would not
| repeat the experiment, as the performance downsides were
| simply not worth the space/power savings.
|
| Then again, I also built a ZFS pool out of daisy chained USB
| hubs and 256 (255?) free vendor schwag USB thumb drives. Take
| any advice with a grain of salt.
| paulmd wrote:
| yup. the problem is really with the SMR drives where they can
| (seemingly) hang for _minutes at a time_ as they flush out
| the buffer track. ordinary spin-down isn't really a problem,
| as long as the drives spin up within a reasonable amount of
| time, ZFS won't drop the disk from the array.
|
| ZFS is designed for HDD-based systems after all. it actually
| works kinda poorly for SSDs in general - a lot of the
| design+tuning decisions were made under the assumptions of
| HDD-level disk latency and aren't necessarily optimal when
| you can just go look at the SSD!
|
| however, tons and tons of drive spin-up cycles are not good
| for HDDs. Aggressive idle timeout for power management was
| famously the problem with the WD Green series (wdidle3.exe
| lol). Best practice is to leave the drives spinning all the
| time; it's better for the drives and doesn't consume all that
| much power overall. Or I would certainly think about, say, a
| 1-hour timeout at least.
|
| https://www.truenas.com/community/threads/hacking-wd-
| greens-...
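|
| (If you do want a long spin-down timeout, hdparm can set one --
| a sketch, the drive letter is an example; -S values of 241-251
| mean (n-240) x 30 minutes:)
|       sudo hdparm -S 242 /dev/sdX   # standby after ~60 minutes idle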
|
| However, block-level striping like ZFS/BTRFS/Storage Spaces
| is not very good for spinning down anyway. Essentially all
| files will have to hit all disks, so you have to spin up the
| whole array. L2ARC with a SSD behind it might be able to
| serve a lot of these requests, but as soon as any block isn't
| in cache you will probably be spinning up all the disks very
| shortly (unless it's literally 1 block).
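|
| (Adding an SSD as L2ARC is a one-liner if you go that route --
| a sketch; pool and device names are just examples:)
|       sudo zpool add tank cache /dev/nvme0n1
|       zpool iostat -v tank    # per-vdev I/O, including the cache device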
|
| Unraid is better at this since it uses file-level striping -
| newer releases can even use ZFS as a backend but a file
| always lives on a single unraid volume, so with 1-disk ZFS
| pools underneath you will only be spinning up one disk. This
| can also be used with ZFS ARC/L2ARC or Unraid might have its
| own setup for tiering hot data on cache drives or hot-data
| drives.
|
| (1-disk ZFS pools as Unraid volumes fit the consumer use-
| case very nicely imo, and that's going to be my advice for
| friends and family setting up NASs going forward. If ZFS
| loses any vdev from the pool the whole pool dies, so you want
| to add at least 2-disk mirrors if not 4-disk RAIDZ vdevs, but
| since Unraid works at a file-stripe level (with file
| mirroring) you just add extra disks and let it manage the
| file layout (and mirrors/balancing). Also, if you lose a
| disk, you only lose those files (or mirrors of files) but all
| the other files remain intact, you don't lose 1/8th of every
| file or whatever, and that's a failure mode that aligns a lot
| better with consumer expectations/needs and consumer-level
| janitoring. And you still retain all the benefits of ZFS in
| terms of ARC caching, file integrity, etc. It's not without
| flaws, in the naive case the performance will degrade to 1-
| or 2-disk read speeds (since 1 file is on 1 disk, with eg 1
| mirror copy) and writes will probably be 1-disk speed, and a
| file or volume/image cannot exceed the size of a single disk
| and must have sufficient contiguous free space, and
| snapshots/versioning will consume more data than block-level
| versioning, etc. All the usual consequences of having 1 file
| backed by 1 disk will apply. But for "average" use-cases it
| seems pretty ideal and ZFS is an absolutely rock-stable
| backend for unraid to throw files into.)
|
| anyway it's a little surprising that having a bunch of
| individual disks gave you problems with ZFS. I run 8x8TB
| shucked drives (looking to upgrade soon) in RAIDZ2 and I get
| basically 8x single-disk speed over 10gbe, ZFS amortizes out
| the performance very nicely. But there are definitely
| risks/downsides, and power costs, to having a ton of small
| drives, agreed. Definitely use raidz or mirrors for sure.
| justsomehnguy wrote:
| > home NAS utilizing a bunch of 2.5" 1TB Seagate drives. I
| would not repeat the experiment, as the performance downsides
| were simply not worth the space/power savings.
|
| 5400 RPM drives? How many, and how bad was the performance?
| louwrentius wrote:
| I have the same amount of storage available in a ~9-year-old
| 24-bay NAS chassis that does 150 Watt idle (with drives
| spinning).
|
| My NAS is powered down most of the time for this reason, only
| booted (IPMI) remotely when needed.
|
| Although the actual idle power consumption in the article seems
| to be a tad higher than 7 watts, it's still so much lower than
| mine that it's not a big deal to run it 24/7 and enjoy the
| convenience.
|
| Loved the write-up!
| newsclues wrote:
| I had the same issue of picking a motherboard with limited SATA
| ports and then having to deal with extra expansion cards.
|
| 4 is not enough for homelab type servers.
| chx wrote:
| Why not the N100?
|
| Even an N305 fits the purpose; the N100 would use even less.
| https://www.reddit.com/r/MiniPCs/comments/12fv7fh/beelink_eq...
| cjdell wrote:
| I'm very impressed with my N100 mini PC (fits in your palm)
| that I bought from AliExpress. Takes between 2-8W and uses just
| a plain old 12V plug-style power supply with a DC barrel.
| Perfect for Home Assistant and light virtualisation.
|
| Performance is actually better than my quad core i5-6500 mini
| PC. Definitely no slouch.
| imglorp wrote:
| Because author said they wanted a bunch of disks.
| hrdwdmrbl wrote:
| +1 for the Nx00 series of chips. I just bought myself a pre-
| built mini pc with an N100. Low power, good price, great
| performance.
|
| I wonder if in a few years they might not eat the whole mini PC
| market. If the price can come down such that they're
| competitive with the various kinds of Pis...
| arp242 wrote:
| N100 wasn't yet released when this was written in May (or was
| only _just_ released).
|
| Also the N100 only supports 16G RAM, and this guy has 64G.
| The number of PCIe lanes (9 vs. 20) probably matters for their
| use case as well. And the i5 does seem quite a bit faster in
| general.
|
| Comparison:
| https://ark.intel.com/content/www/us/en/ark/compare.html?pro...
| s800 wrote:
| I'm running a home server on an N100 (ITX model) with a 32GB
| DIMM, works well.
| adrian_b wrote:
| It has been widely reported that Alder Lake N actually
| works with 32 GB, but for some reason Intel does not
| support this configuration officially.
|
| The same happened with the previous generations of Intel
| Atom CPUs, they have always worked without any apparent
| problems with more memory than the maximum specified by
| Intel.
| trescenzi wrote:
| I bought a tiny, fits in the palm of my hand, N100 box on
| Amazon for $150[1]. It currently does basically everything I
| need and idles at 7.5W.
|
| I've got Cloudflare set up for DNS management and a few simple
| sites hosted on it like blogs and Gitea. It has an SD card slot
| that I use as extra storage.
|
| Sure it's not nearly as awesome as the setup detailed here but
| I couldn't recommend it more if you just want a small simple
| home server.
|
| [1] https://a.co/d/cIzEFPk
| MochaDen wrote:
| Low-power is great but running a big RAID long-term without ECC
| gives me the heebie-jeebies! Any good solutions for a similar
| system but more robust over 5+ years?
| ianai wrote:
| Agree. Didn't even see ECC discussed.
|
| Apparently this board supports ECC with this chip: Supermicro
| X13SAE W680 LGA1700 ATX Motherboard.
|
| Costs about $550.
|
| One option is building around that and having some PCIe 4.0 to
| NVMe adapter boards hosting as many NVMe drives as needed. Not
| cheap, but home-affordable.
| ThatMedicIsASpy wrote:
| You need workstation chipsets to have ECC on Intel desktop
| CPUs.
|
| And yes they start at around 500.
| philjohn wrote:
| If you go back a few generations, the C246 chipset can be
| had on boards costing 200, and if you pair it with an
| i3-9100T you get ECC as well as pretty damn low power
| usage.
| ianai wrote:
| You are limited to PCIe 3.0 speeds there though. But good
| suggestion.
| faeriechangling wrote:
| Embedded SoCs like AMD's V2000, which are used by Synology and
| the like.
|
| If you want to step up to being able to serve an entire case or
| 4U of HDDs, you're going to need PCIe lanes though, in which
| case W680 with an i5-12600K, a single ECC UDIMM, and a SAS HBA
| in the PCIe slot with integrated Ethernet is probably as low
| wattage as you can get. Shame the W680 platform cost is so high;
| AM4/Zen 2 is cheaper to the point of still being viable.
|
| You can also get Xeon, embedded Xeon, AM5, or AM4 (without an
| iGPU).
|
| There's nothing inherently wrong with running a RAID without
| ECC for 5 years; people do it all the time and things go fine.
| eisa01 wrote:
| Been thinking of just getting a Synology with ECC support, but
| what I find weird is that the CPUs they use are 5+ years old.
| Feels wrong to buy something like that "new"
|
| Same with the TrueNAS Mini.
| cpncrunch wrote:
| It depends what your requirements are. I've been using a low-
| end Synology box for years as a home dev server and it is
| more than adequate.
| faeriechangling wrote:
| For the most part, these are computers which are meant to
| stick around through 2-4 upgrade cycles of your other
| computers. Just doing various low power 24/7 tasks like
| file serving.
|
| You could be like "well that's stupid, I'm going to make a
| balls-to-the-wall build server that also serves storage
| with recent components", but the build server components
| will become obsolete faster than the storage components. It
| can lead to incidental complexity to try and run something
| like Windows games on a NAS operating system because you
| tried to consolidate on one computer; being forced to use
| things like ECC will compromise absolute performance;
| you'll want to have the computer by your desk potentially
| but also in a closet since it has loud storage; you're
| liable to run out of PCIe lanes and slots; you want to use
| open cooling for the high performance components and a
| closed case for the spinning rust. It's all a bit awkward.
|
| Much simpler is to just treat the NAS as an appliance that
| serves files, maybe runs a plex server, some surveillance,
| a weather station, rudimentary monitoring, and home
| automation. Things for which something like a V2000 is
| overkill. Then use bleeding-edge chips in things like cell
| phones and laptops, and have the two computers do
| different jobs. Longer product cycles between processors
| make things like support cheaper to maintain over long
| periods of time and allow for lower prices.
| hypercube33 wrote:
| I have a 3U NAS I built in 2012 or something with a two-
| core Sempron running Windows and using Storage Spaces, and
| it still holds up just fine.
| tyingq wrote:
| If you're on a budget, a used HP Z-Series workstation supports
| ECC RAM. A bare-bones one is cheap, though the ECC memory can
| be expensive since it's not the (plentifully available) server
| type RDIMMs. Not a low-power setup either :)
| philjohn wrote:
| That's why I went with an i3-9100T and an Asrock Rack
| workstation board, ECC support (although UDIMM vs RDIMM)
| a20eac1d wrote:
| This sounds similar to a build I'm planning. I cannot find
| the workstation mainboards at a reasonable price though. They
| start at like 400EUR in Europe.
| philjohn wrote:
| There's an Asus one that's available as well, the ASUS C246
| PRO - it's about 250 GBP.
|
| I did build mine 2 years ago, so the C246 motherboards are
| less available now; the C252 is another option, which will
| take you up to 11th gen Intel.
| rpcope1 wrote:
| I think the trick is to go with a generation or two old
| Supermicro motherboard in whatever ATX case you can scrounge
| up, and then use either a low power Xeon or a Pentium/Celeron.
| Something like the X11SAE-F or X12SCA-F (or maybe even older)
| is plenty, though maybe not quite as low power. I still use an
| X9SCA+-F with some very old Xeon for a NAS and to run some LXC
| containers. It idles at maybe 20-30W instead of 5, but I've
| never had any issues with it, and I'm sure it's paid itself off
| many times over.
| j45 wrote:
| I would never run a self-hosted NAS when a Synology/QNAP is
| available as a dedicated appliance for around the same price.
|
| The hardware is much more purpose-equipped to store files long
| term, not just for the 2-3 years you get from consumer SSDs.
|
| It's not to say self-hosting storage can't or shouldn't be
| done, it's just about how many recoveries and transitions you
| have been through, because it's not an if, but a when.
| justinsaccount wrote:
| > The hardware is much more purpose equipped to store files
| long term
|
| What hardware would that be, specifically? The low end
| embedded platforms that don't even support ECC?
|
| > how many recoveries and transitions have you been through
|
| 3 or 4, at this point, using the same 2 disk zfs mirror
| upgraded from 1TB to 3TB to 10TB.
| dbeley wrote:
| The hardware is basically the same as a self-hosted NAS; the
| motherboard could even be of lower quality. The software,
| though, is closed source, and most consumer NASes only get
| support for 4-5 years, which is outrageous.
| jhot wrote:
| I'm running TrueNAS on a used E3-1245 v5 ($30 on eBay) and an
| Asus workstation mobo with 32 GB ECC and 4 spinning drives.
| Not sure about it individually, but the NAS along with an
| i5-12400 compute machine, router, and switch use 100W from
| the wall during baseline operation (~30 containers). I'd
| consider that hugely efficient compared to some older
| workstations I've used as home servers.
| NorwegianDude wrote:
| I've been running an E3-1230 v3 for over 10 years now. With
| 32GB ECC, 3 SSDs, 4 HDDs, and a separate port for IPMI, I'm
| averaging 35 W from the wall with a light load. Just ordered
| a Ryzen 7900 yesterday, and I guess the power consumption
| will be slightly higher for that one.
| Dachande663 wrote:
| Is there not an element of penny-wise, pound-foolish here where
| you end up optimizing the CPU/mobo side of things but then run
| drives vs fewer larger ones?
| bluGill wrote:
| You should buy drives in multiples of 5 or 6. Of course this is
| subject to much debate. Drives fail, so you need more than one
| extra for redundancy - I suggest 2, so when (not if!) one fails
| you still have production while replacing the bad drive. 3
| drives in a raid-1 mirror for the price of one is spendy, so
| most start looking at raid-6 (dual parity). However, putting
| more than 6 drives in those starts to run into performance
| issues better handled by striping across two raids (if you do
| try more than 6, your odds of 3 drives failing become
| reasonable, so start adding more parity stripes), which is why
| I say 6 drives at once is the sweet spot, but others come up
| with other answers that are not unreasonable.
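|
| (Worked example of that sweet spot, assuming 6 x 16TB drives in
| a dual-parity layout: two drives' worth goes to parity, so
| usable space is roughly (6 - 2) x 16 = 64TB before filesystem
| overhead.)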
|
| Of course, one input is how much data you have. For many, one
| modern disk is plenty of space, so you go raid-1 and redundancy
| is only there so you don't need to wait for off-site backups to
| restore after a failure.
| jnsaff2 wrote:
| I have a 5-node ceph cluster built out of Fujitsu desktops that I
| got for 50 euro a piece.
|
| 4 nodes have 8gb ram and one has 16gb.
|
| CPU in each is i5-6500.
|
| Each has an NVMe that is split for OS and journal and a spinning
| HDD.
|
| The cluster idles at 75W and at full load draws about 120W.
| That is with intense Ceph traffic, not other workloads.
| Throw839 wrote:
| That Fujitsu part is important. Many mainstream brands do not
| implement power states correctly; Fujitsu seems to be focused
| on power consumption quite a lot.
| skippyboxedhero wrote:
| The NUC and Optiplex aren't bad either. There are also very
| good AsRock boards (I can't remember what the modern ones are
| called but H110T is one, I used this for a bit, idled at 6W,
| laptop memory and power brick). But Fujitsu is the S-tier.
|
| In practice, I found I needed a bit more power but you can
| get some of the Fujitsu boards with a CPU for $30-40, which
| is hard to beat.
| ThatMedicIsASpy wrote:
| I have an HP ProDesk; powertop shows support up to C10, which
| I never reach. Must be the SSD or NVMe I have. But yeah, the
| BIOS in those is super cut down and there are hardly any
| energy settings I can change.
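|
| (One way to chase that down -- a sketch; the controller name is
| an example and assumes powertop and nvme-cli are installed:)
|       sudo powertop                  # the "Idle stats" tab shows package C-states
|       sudo nvme get-feature /dev/nvme0 -f 0x0c -H   # is APST enabled?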
| paulmd wrote:
| fujitsu has always been underappreciated in the mainstream
| tbh. there has always been a thinkpad-style cult following
| (although much smaller) but japanese companies often do a
| pretty terrible job at marketing in the west (fujifilm being
| another fantastic example).
|
| my university issued T4220 convertible laptops, with wacom
| digitizers in the screens. I rarely used it but the pivot in
| the screen made it indestructible, it survived numerous falls
| hitting the corner of the screen/etc because the screen
| simply flops out of the way and pivots to absorb the energy.
| I later got a ST6012 slate PC that my uni bookstore was
| clearing out (also with a wacom digitizer, and a Core2Solo
| ULV!). Both of them are extremely well-thought-out and
| competently designed/built hardware. Doesn't "feel" thinkpad
| grade, but it absolutely is underneath, and featured PCMCIA
| and bay batteries and other power-user features.
|
| https://www.notebookcheck.net/Fujitsu-Siemens-
| Lifebook-T4220...
|
| https://www.ruggedpcreview.com/3_slates_fujitsu_st6012.html
|
| They also did a _ton_ of HPC stuff for Riken and the other
| japanese research labs, they did a whole family of SPARC
| processors for mainframes and HPC stuff, and pivoted into ARM
| after that wound down. Very cool stuff that receives almost
| no attention from mainstream tech media, less than POWER
| even.
|
| https://www.youtube.com/watch?v=m0GqCxMmyF4
|
| Anyway back on topic but my personal cheat-code for power is
| Intel NUCs. Intel, too, paid far more attention to idle power
| and power-states than the average system-integrator. The NUCs
| are really really good at idle even considering they're using
| standalone bricks (my experience is laptop bricks are much
| less efficient and rarely meet 80+ cert etc). A ton of people
| use them as building blocks in other cases (like HDPlex H1 or
| Akasa cases), they don't have a _ton_ of IO normally but they
| have a SATA and a M.2 and you can use a riser cable on the
| M.2 slot to attach any pcie card you want. People would do
| this with skull canyon f.ex (and HDPlex H1 explicitly
| supports this with the square ones). The "enthusiast" style
| NUCs often have multiple M.2s or even actual pcie slots and
| are nice for this.
|
| https://www.amazon.com/ADT-Link-Extender-Graphics-Adapter-
| PC...
|
| And don't forget that once you have engineered your way to a
| PCIe card form factor, you can throw a Highpoint Rocket R1104
| or a SAS controller card in there and run multiple SSDs (up
| to 8x NVMe) on a single pcie slot, without bifurcation. Or
| there are numerous other "cheat code" m.2 devices for
| breaking the intended limits of your system - GPUs (Innodisk
| EPV-1101/Asrock M2_GPU), SATA controllers, etc.
|
| https://www.youtube.com/watch?v=M9TcL9aY004 (actually this
| makes the good point that CFExpress is a thing and is very
| optimized for power. No idea how durable they are in
| practice, and they are definitely very expensive, but they
| also might help in some extreme low power situations.)
|
| Personally I never found AMD to be that efficient at idle. Even
| with a monolithic APU you will want to dig up an X300TM-ITX
| from aliexpress, since this allows you to forgo the chipset.
| Sadly AMD does not allow X300 to be marketed directly as a
| standalone product, only as an integrated system like a nuc
| or laptop or industrial pc, despite the onboard/SOC IO being
| already quite adequate for beige box usage. Gotta sell those
| chipsets (hey, how about _two_ chipsets per board!?). But OP
| article is completely right that AMD's chipsets just are not
| very efficient.
| eurekin wrote:
| What effective client speeds are you getting?
| jnsaff2 wrote:
| Currently only gigabit network and I can easily saturate
| that.
|
| Thinking about chucking in 25gbit cards.
| jeffbee wrote:
| I wonder if this system is connected with wired ethernet or wifi.
| I found that it makes a large difference on my NAS. With a wired
| link the SoC can't reach a deep sleep state because the ethernet
| peripheral demands low-latency wakeup from the PCIe root port.
| This is power management policy that is flowing from the link
| peer all the way to your CPU! I found that wifi doesn't have this
| problem, and gives better-than-gigabit performance, sometimes.
| skippyboxedhero wrote:
| If you have a network card over PCIe then there may be an
| issue with the card. I have never had an issue reaching low
| sleep states, and you can modify WoL behaviour too. Wifi,
| again in my experience, uses significantly more power. I have
| seen 3-5W and usually switch it off.
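|
| (For reference, WoL can be queried and toggled per interface
| with ethtool -- a sketch, the interface name is an example:)
|       sudo ethtool eth0 | grep Wake-on   # supported/active WoL modes
|       sudo ethtool -s eth0 wol d         # disable Wake-on-LAN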
| jeffbee wrote:
| I don't think it's an issue with the card. It's a combination
| of ethernet and PCIe features that make this happen. There is
| a standard called "energy efficient ethernet" that makes it
| not happen, but my switch doesn't do it.
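|
| (For anyone wanting to check their own link and PCIe behaviour
| -- a sketch, the interface name is an example:)
|       ethtool --show-eee eth0       # is Energy Efficient Ethernet active?
|       cat /sys/module/pcie_aspm/parameters/policy   # current ASPM policy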
| jauntywundrkind wrote:
| I feel like I see a good number of NAS builds go by, but rarely
| are they anywhere near as technical. Nice.
| squarefoot wrote:
| For those interested in repurposing a small mini-PC with no
| Mini PCI ports available as a NAS, I recently purchased an ICY
| IB-3780-C31 enclosure (USB 3.1 to 8x SATA), and although I
| still have to put it into operation (will order new disks
| soon), I tested it with a pair from my spares and can confirm
| it works out of the box with both Linux and XigmaNAS (FreeBSD).
| Just beware that although it can turn back on after the
| connected PC goes to sleep and then wakes up (the oddly named
| "sync" button on the front panel does that), it doesn't after
| an accidental loss of power or power outage _even if the
| connected PC is set up to boot automatically_, which to me is
| quite bad. Having recently moved to a place where power
| outages aren't uncommon and can last longer than a normal UPS
| could handle, I'll probably modify the enclosure by adding a
| switchable monostable circuit that emulates a short press of
| the power button after power is restored. That would mean a
| big goodbye to the warranty, so I'll have to think about it,
| but the problem can indeed be solved.
| ThatMedicIsASpy wrote:
| 7950X3D, X670E Taichi, 96GB 6400MHz CL32, 2x4TB Lexar, 4x18TB
| Seagate Exos X18, RX570 8G, Proxmox.
|
| Idle no VM ~60-70W.
|
| Idle TrueNAS VM drives spinning ~90-100W.
|
| Idle TrueNAS & Fedora Desktop with GPU passthrough ~150W
|
| In a few weeks the RX 570 gets replaced by a 7900 XTX. The RAM
| adds a lot of watts: 3-5W per 8GB of RAM, depending on the
| frequency, is common for DDR5.
|
| I was expecting around 50-100W for Proxmox+TrueNAS. I did not
| consider the power draw of the RAM when I went for 96GB.
| eurekin wrote:
| What about networking? Did you go over 1gbit?
| ThatMedicIsASpy wrote:
| It has 2.5G. There are X670E boards with 10G if you desire more.
|
| My home net is 1G with two MikroTik hAP ax3s, which are
| connected via the single 2.5G PoE port they have (and one
| powers the other).
| hypercube33 wrote:
| I really want the Ryzen Embedded and/or Epyc 3000(?) series
| that have dual 10GbE on package for something like a NAS, but
| both are super expensive or impossible to find.
| ThatMedicIsASpy wrote:
| AsRock Rack B650D4U-2L2T/BCM, 2x10G, 2x1G, IPMI
|
| For less power consumption, Ryzen 8000 is coming up (wait for
| Jan 8th, CES) and the APUs tend to be monolithic and draw a
| lot less power than the chiplet parts.
| tw04 wrote:
| Even that uses Broadcom 10GbE, not the embedded AMD
| Ethernet. It's really strange, I can only assume there's
| something fatally wrong with the AMD Ethernet.
| AdrianB1 wrote:
| Or it just tells us that customers of this kind of equipment
| want proven solutions instead of other (novelty) options,
| so the manufacturers build their products with that in
| mind. Stability and support are very important to most
| buyers.
| tw04 wrote:
| If that were the case I'd expect to still see at least
| SOME products utilizing the AMD chipset, even if budget
| focused. I have literally not seen a single board from
| any MFG that utilizes the built-in NIC. Heck, there are
| Intel Xeon-D boards that utilize both the onboard NIC
| and an external Broadcom to get 4x for cheap.
| j45 wrote:
| It may be possible to install one, or add an external 2.5 or
| 10GbE device.
|
| Either way, it's awful that there is not more 10GbE
| connectivity available by default. There's no reason it
| shouldn't be the next level up; we have been at 1 / 2.5 for
| far too long.
| ThatMedicIsASpy wrote:
| You can find what you desire but you always have to pay for
| it.
|
| ASUS ProArt X670E-Creator WIFI, 10G & 2.5G at 460EUR
|
| 10G simply isn't that cheap. The cheapest 5 port switch is
| 220EUR. Upgrading my home net would be rather expensive.
| vetinari wrote:
| What makes it more expensive is insisting on 10GBase-T.
| 10G over SFP+ is not that expensive; the cheapest 4 port
| switch (Mikrotik CRS305) is ~130 EUR.
| MrFoof wrote:
| You can go down to 50W idle, but it requires some _very
| specific_ hardware choices where the ROI will never
| materialize, some of which aren't available yet for Zen4.
|
| I have...
|
| * AMD Ryzen 7 PRO 5750GE
|
| * 128GB ECC DDR4-3200
|
| * Intel XL710-QDA2 _(using QSFP+ to a quad-SFP+ passive DAC
| breakout)_
|
| * LSI 9500-16i
|
| * Eight WD 16TB HDDs (shucked)
|
| * Two 2TB SK Hynix P41 Platinum M.2 NVMe SSD
|
| * Two Samsung 3.84TB PM9A3 U.2 NVMe SSD
|
| * Two Samsung 960GB PM893 SATA SSD
|
| So that's the gist. Has a BMC, but dual 40GbE, and can sustain
| about 55Gb/s over the network _(in certain scenarios; 30-35Gb/s
| for almost all)_, running TrueNAS SCALE purely as a storage
| appliance for video editing, a Proxmox cluster _(on 1L SFFs
| with 5750GEs and 10GbE idling at 10W each!)_ mostly running
| Apache Spark, a Pi4B 8GB k3s cluster, and lots more. Most of
| what talks to it is either 40GbE or 10GbE.
|
| There is storage tiering set up so the disks are very rarely
| hit, so they're asleep most of the time. It mostly is serving
| data to or from the U.2s, shuffling it around automatically
| later on. The SATA SSDs are just metadata. It actually boots
| off a SuperMicro SuperDOM.
|
| ----
|
| The Zen 3 Ryzen PRO 5750GEs are _unicorns_, but super low
| power. Very tiny idle _(they're laptop cores)_, integrated
| GPU, ECC support, and the memory protection features of EPYC.
| 92% of the performance of a 5800X, but all 8C/16T flat out
| _(at 3.95GHz because of an undervolt)_ caps at just under 39W
| package power.
|
| The LSI 9500-16i gave me all the lanes I needed _(8 PCIe, 16
| SlimSAS)_ for the two enterprise U.2 and 8 HDDs, and was very
| low idle power by being a newer adapter.
|
| The Intel dual QSFP+ NIC was deliberate as using passive DACs
| over copper saved 4-5W per port _(8 at the aggregation switch)_
| between the NIC and the switch. Yes, really. Plus lower latency
| _(than even fiber)_ which matters at these transfer speeds.
|
| The "pig" is honestly the ASRock X570D4U because the BMC is
| 3.2W on its own, and X570 is a bit power hungry itself. But all
| in all, the whole system idles at 50W, is usually 75-80W under
| most loads, but can theoretically peak probably around 180-190W
| if everything was going flat out. It uses EVERY single PCIe
| lane available from the chipset and CPU to its fullest! Very
| specific chassis fan choices and Noctua low profile cooler in a
| super short depth 2U chassis. I've never heard it make a peep,
| disks aside :)
| ianai wrote:
| Is the "e" for embedded? Ie needs to be bought in a package?
| I'm not seeing many market options.
| MrFoof wrote:
| Nope. The extra E was for "efficiency", because they were
| better binned than the normal Gs. Think of how much more
| efficient 5950Xs were than 5900Xs, despite more cores.
|
| So the Ryzen PRO line is a "PRO" desktop CPU: typical
| AM4 socket, typical PGA (not BGA), etc. However they were
| never sold directly to consumers, only to OEMs. Typically they
| were put in USFF (1L) form factors, and some desktops. They
| were sold primarily to HP and Lenovo _(note: Lenovo PSB
| fuse-locked them to the board -- HP didn't)_. For HP
| specifically, you're looking at the HP ProDesk and
| EliteDesk _(dual M.2 2280)_ 805 G8 Minis... which now have
| 10GbE upgrade cards _(using the proprietary FlexIO V2
| port)_ available straight from HP, plus AMD DASH for IPMI!
|
| You could for a while get them a la carte from boutique
| places like QuietPC who did buy Zen 3 Ryzen PRO trays and
| half-trays, but they are _long_ gone. They're also well
| out of production.
|
| Now if you want one, they're mostly found from Taiwanese
| disassemblers and recyclers who part out off-lease 1L
| USFFs. The 5750GEs are the holy grail 8-cores, so they
| command a massive premium over the 6-core 5650GEs. I
| actually had a call with AMD sales and engineering about a
| year ago on being able to source these directly, and though
| they were willing, they couldn't help because they were no
| longer selling them into the channel themselves. The
| engineering and sales folks were really thrilled to see
| someone who used every scrap of capability of these CPUs.
| They were impressed that I was using them to sustain 55Gb/s
| of actual data transfer _(moving actual data, not just random
| network traffic)_ in an extremely low power setup.
|
| -----
|
| Also, I actually just logged in to my metered PDU, and the
| system is idling right now at just 44.2W. So less than the
| 50W I said, but I wanted to be conservative in case I was
| wrong. :)
|
| 44.2W for a box with over 84TiB usable storage, with fully
| automagic ingest and cache that helps to serve 4.5GiB/sec
| to 6.5GiB/sec over the network _ain't bad_!
| ianai wrote:
| Nice! Wish they were easier to obtain!!
| MrFoof wrote:
| Agreed! Despite being PCIe 3.0, these were _perfect_ home
| server CPUs because of the integrated GPU and ECC
| support. The idles were a bit higher than 12th gen Intels
| _(especially the similarly tough-to-find "T" and
| "TE" processors)_ mostly because of X570's
| comparatively higher power draw, but if you ran DDR5 on
| the Intel platform it was kind of a wash, and under load
| the Zen 3 PRO GEs won by a real margin.
|
| My HP ProDesk 405 G8 Minis with a 2.5GbE NIC (plus the
| built in 1GbE which supported AMD DASH IPMI) idled at
| around 8.5W, and with the 10GbE NICs that came out around
| June, are more around 9.5W -- with a 5750GE, 64GB of
| DDR4-3200 (non-ECC), WiFi 6E and BT 5.3, a 2TiB SK Hynix
| P31 Gold (lowest idle of any modern M.2 NVMe?), and
| modern ports including 10Gb USB-C. Without the WiFi/BT
| card it might actually get down to 9W.
|
| The hilarious thing about those is they have an onboard
| SATA connector, but also another proprietary FlexIO
| connector that can take an NVIDIA GTX 1660 GB! You want
| to talk a unicorn, try _finding_ those GPUs in the wild!
| I've _never_ seen one for sale separately! If you get
| the EliteDesk (over the ProDesk) you also get a 2nd
| M.2 2280 socket.
|
| I have three of those beefy ProDesk G8s in a Proxmox 8
| cluster, and it mostly runs Apache Spark jobs, sometimes
| with my PC participating _(how I know the storage server
| can sustain 55Gb/s of data transfer!)_, and it's hilarious
| that you have this computerized stack of napkins making
| no noise that's fully processing _(reading, transforming,
| and then writing)_ 3.4GiB/sec of data -- closer to
| 6.3GiB/sec if my 5950X PC is also participating.
|
| -----
|
| If you want a 5750GE, check eBay. That's where you'll
| find them, and rarely NewEgg. Just don't get Lenovo
| systems unless you want the whole thing, because the CPUs
| are PSB fuse-locked to the system they came in.
|
| 4750GEs are Zen 2s and cheaper _(half the price)_, and
| pretty solid, but I think four fewer PCIe lanes. Nothing
| "wrong" with a 5750G per se, but they cap more around
| 67-68W instead of 39W.
|
| Just if you see a 5750GE, grab it ASAP. People like me
| hunt those things like the unicorns they are. They go
| FAST. Some sellers will put up 20 at a time, and they'll
| all be gone within 48 hours.
|
| -----
|
| I really look forward to the Zen 4 versions of these
| chips, and the eventual possibility of putting 128GiB of
| memory into a 1L form factor, or 256GiB into a low power
| storage server. I won't need them _(I'm good for a
| looooong time)_, but it's nice to know it'll be a thing.
|
| Intel 15th gen may be surprising as well, as it's such a
| massive architecture shift.
|
| Obscenely capable home servers that make no noise and
| idle in the 7-10W range are utterly fantastic.
| ThatMedicIsASpy wrote:
| I'm looking at an LSI 9300-16i which is 100EUR (refurbished)
| including the cables. I just have to flash it myself. Even a
| 9305 is triple the cost for around half the power draw.
|
| My build is storage, gaming and a bunch of VMs.
|
| Used Epyc 7000 was the other option for a ton more PCIe. I
| have no need for more network speed.
| MrFoof wrote:
| Yep. 9300s are very cheap now. 9400s are less cheap. 9500s
| are not cheap. 9600s are new and pricey.
|
| As I said, you can't recoup the ROI from the reduced power
| consumption, even if you're paying California or Germany
| power prices. Though you can definitely get the number
| lower!
|
| I had this system _(and the 18U rack)_ in very close
| proximity in an older, non-air-conditioned home for a
| while, so less power meant less heat and noise. I also
| deliberately chased _"how low can I go (within reason)"_
| while still hitting the goal of local-NVMe performance over
| the network. Which makes the desire to upgrade non-
| existent, even 5+ years from now.
|
| Not cheap, but a very fun project where I learned a lot and
| the setup is absolutely silent!
| dist-epoch wrote:
| > 3-5W per 8GB of RAM
|
| I think that's wrong. It would mean 4*6 = 24W per DIMM.
|
| I also have 48GB DDR5 DIMMs and HwInfo shows 6W max per module.
| 1letterunixname wrote:
| I feel like a petrochem refinery with my 44-spinning-rust-unit
| NAS (847E16-RJBOD), 48-port PoE+ 10 GbE switch, 2 lights-out and
| environmental monitoring UPSes, and Deciso OPNsense router using
| a combined average of 1264W. ]: One UPS is at least reporting a
| power factor with an efficiency of 98%, while the other one isn't
| as great at 91%.
|
| APM is disabled on all HDDs because it just leads to delay and
| wear for mythological power savings that aren't going to happen
| in this setup. Note that SMART rarely/never predicts failures,
| but one of the strongest signals of drive failure is slightly
| elevated temperature (usually as a result of bearing wear).
|
| This creates enough waste heat that one room never needs
| heating, but cooling isn't strictly needed either, because
| there's no point in reducing datacenter ambient below 27 C.
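|
| (The APM knob in question, for anyone curious -- a sketch; the
| drive letter is an example, and 255 disables APM only on drives
| that support doing so:)
|       sudo hdparm -B /dev/sdX       # query the current APM level
|       sudo hdparm -B 255 /dev/sdX   # disable APM entirely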
| syntheticnature wrote:
| I was looking into water heaters that use heat pumps recently,
| and a lot of them function by sucking heat out of the room.
| While water and computers don't mix, it might be an even better
| use for all that waste heat...
| Palomides wrote:
| amusing to read this very detailed article and not have any idea
| what OP actually does with 72TB of online storage
|
| 1GbE seems a bit anemic for a NAS.
| alphabettsy wrote:
| Very cool write-up. Good timing too, as I find myself attempting
| to reduce the power consumption of my homelab.
| vermaden wrote:
| Good read.
|
| Tried something similar in the past:
|
| - https://vermaden.wordpress.com/2019/04/03/silent-fanless-fre...
|
| - https://vermaden.wordpress.com/2023/04/10/silent-fanless-del...
| uxp8u61q wrote:
| I know nothing about building NASs so maybe my question has an
| obvious answer. But my impression is that most x64 CPUs are
| thoroughly beaten by Arm or RISC-V CPUs when it comes to power
| consumption. Is there a specific need for the x64 architecture
| here? I couldn't find an answer in TFA.
| hmottestad wrote:
| You can very easily run Docker containers on it. That's why I
| went with a Ryzen chip in mine.
|
| You could always use an RPi if you want to go with ARM, and
| you'll want something with ARMv8.
| adrian_b wrote:
| Most Arm or RISC-V CPUs (with the exception of a few server-
| oriented models that are much more expensive than x86) have
| very few PCIe lanes and SATA ports, so you cannot make a high
| throughput NAS with any of them.
|
| There are some NAS models with Arm-based CPUs and multiple
| SSDs/HDDs, but those have a very low throughput due to using
| e.g. only one PCIe lane per socket, with at most PCIe 3 speed.
| arp242 wrote:
| > my impression is that most x64 CPUs are thoroughly beaten by
| Arm or RISC-V CPUs when it comes to power consumption
|
| Not really.
|
| ARM (and to a lesser degree, RISC-V) is often used and optimized
| for low-power usage and/or low-heat. x64 is often more
| optimized for maximum performance, at the expense of higher
| power usage and more heat. For many x64 CPUs you can
| drastically reduce the power usage if you underclock the CPU
| just a little bit (~10% slower), especially desktop CPUs but
| also laptops.
|
| There are ARM and RISC-V CPUs that consume much less power, but
| they're also much slower and have a much more limited feature-
| set. You do need to compare like to like, and when you do the
| power usage differences are usually small to non-existent in
| modern CPUs. ARM today is no longer the ARM that Wilson et al.
| designed 40 years ago.
|
| And for something connected to mains, even doubling the
| efficiency and going from 7W to 3.5W doesn't really make all
| that much difference. It's just not a big impact on your energy
| bill or climate change.
| pmontra wrote:
| I'm using an Odroid HC4 as my home server. It has an ARM CPU
| and it's idling at 3.59 W now with a 1 TB SATA 3 SSD and some
| web apps that are basically doing nothing, because I'm their
| only user. It's got a 1 GB network card, like my laptop. I can
| watch movies and listen to music from its disk on my phone and
| tablet.
|
| There is no need to have something faster. The SATA 3 bus would
| saturate a 2.5 Gb card anyway. The home network is Cat 6A so it
| could go up to 10 Gb. We'll see what happens some years from
| now.
| sandreas wrote:
| There is a german forum thread with a google docs document
| listing different configurations below 30W[1]. Since there are
| very different requirements, this might be interesting for many
| homeserver / NAS builders.
|
| For me personally I found my ideal price-performance config to be
| the following hardware:
|       Board:   Fujitsu D3417-B2
|       CPU:     Intel Xeon E3-1225 v5 (better: the also-compatible
|                E3-1275 v6, but it's way more expensive)
|       RAM:     64GB ECC RAM (4x16GB)
|       SSD:     WD SN850X 2TB (consumer SSD)
|       Case:    Fractal Design Define Mini C
|       Cooling: Big no-name block, passively cooled by the case fan
|       Power:   Pico PSU 120W + 120W Leicke power supply
|       Remote administration: Intel AMT + MeshCommander using a
|                DP dummy plug
|
| I bought this config used VERY CHEAP and I am running Proxmox -
| it draws 9.3W idle (without HDDs). There are 6 SATA ports and a
| PCIe port, if anyone would like to add more space or passthrough
| a dedicated GPU.
|
| It may be hard to get, but I paid EUR 380.00 in total. It does
| not work very well for media encoding; for that you should go
| for a Core i3-8100 or above. Alternatively you could go for the
| following changes, but these might be even harder to get for a
| reasonable price:
|       Boards: GIGABYTE C246N-WU2 (ITX), Gigabyte C246-WU4
|               (mATX), Fujitsu D3517-B (mATX), Fujitsu D3644
|               (mATX)
|       Power:  Corsair RM550x (2021 version)
|
| Cheap used workstations that make good servers are the Dell T30
| or Fujitsu Celsius W550. The Fujitsu ones have D3417(-A!) boards
| (not -B) with proprietary power supplies using 16 power pins (no
| 24-pin ATX, but 16-pin). There are adapters on AliExpress for
| 24-pin to 16-pin (Bojiadafast), but this is a bit risky - I'm
| validating that atm.
|
| Ryzen possibilities are pretty rare, but there are reports that
| the AMD Ryzen 5 PRO 4650G with an Asus PRIME B550M-A board
| draws about 16W idle.
|
| Hope I could help :-)
|
| [1]: https://goo.gl/z8nt3A
| manmal wrote:
| For anybody reading this - I think it's a great config, but
| would be careful around pico PSUs in case you want to run a
| bunch of good old spinning disks. HDDs have a sharp power peak
| when spinning up, and if you have a couple of them in a RAID,
| they might do so synchronously, potentially exceeding the
| envelope.
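|
| (Ballpark numbers, assuming roughly 2A on the 12V rail per
| drive during spin-up: 6 drives x 2A x 12V = ~144W of peak draw,
| which already exceeds a 120W pico PSU before the rest of the
| system is counted.)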
| agilob wrote:
| To go deeper: depending on the file system, some won't let
| HDDs go to sleep, so they always draw the power needed for
| max RPM.
| ksjskskskkk wrote:
| A B550M with a Ryzen 5 PRO from 2023 (will double-check models
| and post on that forum).
|
| I get 9W idle, and AMD PRO CPUs have ECC support, which is a
| requirement for me on any real computer. I disable most
| components on the board. It's bottom-tier consumer quality.
|
| Best part is that when I need to burn many more watts, the
| integrated GPU is pretty decent.
| ulnarkressty wrote:
| As exciting as it is to design a low power system, it's kind of
| pointless in the case of a NAS that uses spinning rust as storage
| media - as the author later writes, the HDD power consumption
| dwarfs the other system components.
|
| If one uses SSD or M.2 drives, there are some solutions on the
| market that provide high speed hardware RAID in a separate
| external enclosure. Coupled with a laptop board they could make
| for a decent low power system. Not sure how reliable USB or
| Thunderbolt is compared to internal SATA or PCIe connections
| though... would be interesting to find out.
| V__ wrote:
| Don't they stop spinning when idle?
| layer8 wrote:
| Not by default, but you can have the OS have them spin down
| after a certain idle period. Doing that too frequently can
| affect the lifetime of the drive though. You save maybe 4
| watts per drive by spinning them down.
| orthoxerox wrote:
| They are never idle if the NAS is seeding torrents.
| nabla9 wrote:
| You can shut down HDDs when you don't use them.
| sudo hdparm -Y /dev/sdX
| homero wrote:
| "Crucial Force GT" is presumably supposed to say Corsair.
| treprinum wrote:
| My NAS has a 4-core Pentium J and is way under 7W idle, inside
| some small Fractal case with 6x20TB HDDs. Why would you need
| 12th/13th gen for file transfers?
| wffurr wrote:
| For encoding maybe? OP says "reasonable CPU performance for
| compression" and also it was a CPU they already had from a
| desktop build.
| dbeley wrote:
| Interesting. I assume that's with all drives spun down - how
| many watts with some disk usage?
| jepler wrote:
| Author seems to have built 5 systems from 2016 to 2023, or around
| every other year.
|
| Some parts (e.g., RAM) are re-used across multiple builds.
|
| It's interesting to wonder: How much $$ is the hardware cost vs
| the lifetime energy costs? Is a more power-hungry machine that
| would operate for 4 years better than one that would operate for
| 2 years?
|
| The motherboard + CPU is USD 322 right now on pcpartpicker. At
| USD 0.25/kWh (well above my local rate but below the highest
| rates in the US), 36W continuous over 4 years is also about $315.
| So, a ~43W, 4-year system might well be cheaper to buy and
| operate than a 7W, 2-year system.
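|
| (The arithmetic as a one-liner, assuming 24/7 uptime and the
| $0.25/kWh rate above:)
|       awk 'BEGIN { print 36 * 24 * 365 * 4 / 1000 * 0.25 }'   # ~315.36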
___________________________________________________________________
(page generated 2023-12-31 23:00 UTC)