[HN Gopher] SolidRun 1U 2 node Arm Server
___________________________________________________________________
SolidRun 1U 2 node Arm Server
Author : cameron_b
Score : 78 points
Date : 2021-03-01 12:46 UTC (10 hours ago)
(HTM) web link (www.servethehome.com)
(TXT) w3m dump (www.servethehome.com)
| usr1106 wrote:
| The ARM "server" space for the hobbyist willing to spend < 10
| EUR/USD a month is a bit of a sad story.
|
| Scaleway had their nicely priced C1 instances. Well, they
| still do, and they run pretty solidly in my (limited)
| experience. But they have not updated the kernel in ages. It
| seems pretty clear that they will shut them down sooner or
| later without any successor.
|
| Is there any other comparable offering out there?
|
| (I intentionally mix owning and renting a server. Aren't we in
| the age of the cloud...)
| yjftsjthsd-h wrote:
| Honestly, is there anything _at_ 10 USD/EUR per month? I mean,
| yes, I'd rather pay the same as I do for cheap Digital Ocean or
| whatever, but outside of AWS options are thin even if you _are_
| willing to pay a little bit more.
| usr1106 wrote:
| > Honestly, is there anything at 10 USD/EUR per month?
|
| In case you didn't know, a Scaleway C1 is
|
| * 2.99 per month, billed hourly
|
| * 1.00 per month for the network (unlimited) and you can drop
| that if you have other machines in the same datacenter and do
| not need direct internet access
|
| * ~ 1.00 per month for VAT (I guess that's 0.00 if you are
| outside the EU)
|
| Of course they are not for high-performance computing, but
| there are enough use cases where the performance is perfectly
| good enough.
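|
| A rough back-of-the-envelope in Python (prices as listed
| above; the 24% VAT rate is just an example for an EU
| customer):
|
|     compute = 2.99          # EUR/month, C1 instance, billed hourly
|     network = 1.00          # EUR/month, unlimited traffic
|     vat = (compute + network) * 0.24  # ~0.96 EUR at a 24% rate
|     total = compute + network + vat   # ~4.95 EUR/month all-in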
|
| I use them for ssh jumpboxes and find their latency better
| than AWS EC2, which is much more expensive.
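|
| If you want to check that latency claim yourself, here's a
| quick Python sketch (host names are placeholders):
|
|     import socket, time
|
|     def tcp_rtt(host, port=22, samples=5):
|         # Median TCP connect time as a rough latency proxy
|         times = []
|         for _ in range(samples):
|             t0 = time.monotonic()
|             with socket.create_connection((host, port), timeout=5):
|                 pass
|             times.append(time.monotonic() - t0)
|         return sorted(times)[len(times) // 2]
|
|     print(tcp_rtt("your-c1.example.org"))
|     print(tcp_rtt("your-ec2.example.org"))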
| kingosticks wrote:
| Technically you could get a Raspberry Pi at
| https://www.mythic-beasts.com/order/rpi for £7.25 per month,
| but I'm not sure if this is what you had in mind. It's not
| exactly a server and it's not going to win any prizes for
| performance.
| selfhoster11 wrote:
| For the type of workloads a Pi is capable of, and given the
| low capex of setting one up, it only makes sense to pay for
| this if you have extremely rubbish broadband at home.
| csunbird wrote:
| I agree. for 7.45 per month, you can just buy your own
| RPi.
| usr1106 wrote:
| Yes, something like this. No IPv4 might be inconvenient at
| times. The price is nearly twice as much as a Scaleway C1;
| not sure how the specs really compare.
|
| What does Brexit mean for EU customers? Do we skip paying
| VAT now? 24% in my case.
| TacticalCoder wrote:
| > Honestly, is there anything at 10 USD/EUR per month?
|
| Well, you can have an Intel Atom N2800 / 4GB DDR3 / 2 TB HDD
| dedicated server at OVH for 8 EUR / month.
|
| And, like all their entry-level servers, it's 100 Mbps max,
| which really hurts when you've got fiber at home.
|
| But in some cases they can be convenient.
|
| They have a lot of different offers at various prices
| (Kimsufi / So you start / OVH: all the same company
| basically).
| yjftsjthsd-h wrote:
| Ah, sorry, I meant ARM
| keithlfrost wrote:
| My current selection is https://contabo.com/en/vps
| StillBored wrote:
| Thanks! I usually look for machines with unlimited transfer,
| because otherwise it seems like they are trying to make all
| their money on transit.
|
| Those VPS, while limited, actually come with what I would
| consider a fairly reasonable monthly quota (32 TB) for a
| low-cost plan.
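|
| For a sense of scale, a quick Python check of what 32 TB per
| month means as a sustained rate:
|
|     quota_bytes = 32 * 10**12
|     secs_per_month = 30 * 24 * 3600
|     mbps = quota_bytes * 8 / secs_per_month / 10**6  # ~98.8 Mbit/s
|
| i.e. the quota is roughly what a 100 Mbps port can push if
| saturated 24/7, which is generous for a cheap plan.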
| usr1106 wrote:
| The price sounds good for the spec. But what do I do with 8
| GB RAM and 200 GB disk to run an IRC client, a minimal web
| server, and maybe an ssh jumpbox? Those happily run with 1-2
| GB RAM and 10 GB of disk.
|
| On the non-quantitative side, I really prefer to run ARM over
| ugly Intel.
| tecleandor wrote:
| C1 and C2 are EOLed. I read somewhere (here?) that their design
| was problematic and the maintenance was expensive :(
| m01 wrote:
| Apparently they emailed customers, see here:
| https://www.reddit.com/r/selfhosted/comments/g11uuj/scaleway...
| usr1106 wrote:
| Yeah, C2 (ARM64 virtual) is no longer available. Not sure
| whether all existing machines have really been shut down; I
| haven't had any.
|
| C1 (ARM32 dedicated) was still available when I last
| checked. Existing customers' instances are definitely still
| running; I used one today.
| zimpenfish wrote:
| Which is a shame because I really like my C1 - although
| they're not planning on deleting the C1 instances for now
| (according to support a couple of months ago).
| depingus wrote:
| I had a C1 that ran well enough for my usage but eventually
| had to move to a DEV1-S for Wireguard (old kernel on the
| C1).
|
| I just wish they would let me run Alpine Linux on the
| DEV1-S. It's available for the more expensive DEV1
| instances, but they obviously don't want me taking full
| advantage of the meager resources the DEV1-S offers.
| usr1106 wrote:
| Can't you just install it yourself? (Haven't worked with
| DEV1-S yet, so I don't know.)
|
| Of course, if you do it under the time-is-money principle,
| that's a trainwreck compared to using their image. But if
| you do it for the fun of it...
| phamilton wrote:
| The t4g line from AWS is in that range.
| ksec wrote:
| >Scaleway had their nicely priced C1 instances....
|
| They were too far ahead of their time for it to make sense
| (no pun intended against their tagline). Now that AWS
| Graviton 2 has the whole ecosystem ready, Scaleway could
| finally enjoy the benefits; trying to kick-start ARM in the
| server space on their own, Scaleway bit off more than they
| could chew. The same answer applies when people ask why DO
| doesn't do Edge Workers / Containers.
| kev009 wrote:
| There's no mention of out-of-band management support. You
| could of course hook up something to the reset pins and
| serial port, but this places it categorically behind low-end
| x86-64 servers.
| walrus01 wrote:
| No info on cost, and no info on operating system support.
|
| Anything that's COM Express is produced in very small
| quantities and is very expensive. I'll be shocked if it
| isn't 3 to 4x the cost of a single-socket 8-core Ryzen that
| runs circles around it in performance.
| cameron_b wrote:
| The high-speed networking drivers are several proposed
| patches away from being upstreamed into upcoming kernel
| releases. Currently SolidRun says Debian and Ubuntu work;
| I've heard of Arch working with a little setup.
|
| The SolidRun article on their SystemReady work:
|
| https://www.solid-run.com/news/how-honeycomb-lx2k-and-system...
| snuxoll wrote:
| $750, right on SolidRun's website. Given the specs it's a
| fair price, although an 8-core Ryzen CPU would indeed run
| circles around 16 Cortex-A72 cores.
| dragontamer wrote:
| 1U rack-mount cases are actually pretty expensive. That's
| really the main issue with racks: you spend a substantial
| amount of money on just the form factor. 1U is very small,
| so the fans spin fast and are very loud.
|
| It's a form factor specifically designed for server rooms,
| where you put earplugs in before entering. The money spent
| on the racks / physical infrastructure is small compared to
| the ongoing costs of actually powering and cooling the room.
| walrus01 wrote:
| Absolutely agreed.
|
| Not just that, but 1U servers also use a fair percentage of
| their wattage moving air with 40mm fans. The higher the TDP
| of all the electronics in your server, the more air you need
| to move. 40x28mm high-RPM 12VDC fans are quite inefficient
| in terms of cubic meters of air moved per hour versus the
| watts they consume.
|
| In a 2U chassis if you can find a way to pack in multiple
| motherboards and use 60mm height fans the possible
| efficiencies are much greater.
|
| If you look at the 'wall of fans' in the center of
| something like a Dell 1U dual-socket system, there are eight
| or ten fans, each of which can be an 8W to 10W load at full
| speed. This is necessary because you might have two 120W TDP
| CPUs under passive heatsinks that need a LOT of air pushed
| past them.
|
| Additionally, a stack of 30 or 40 1U servers, each with its
| own discrete 110-240VAC-input, DC-output power supply, is
| quite inefficient. A fair bit of wattage is wasted spinning
| the 40mm fan in each power supply and in the densely packed
| AC-to-DC conversion circuitry. This is one reason why things
| like the FB Open Compute platform servers are sometimes
| 1.5RU high (so they can use 60mm fans) and use a single
| large AC-to-DC power supply that can take in 277VAC (or even
| 480VAC!) and output 12VDC to 48VDC to each motherboard in
| the same rack cabinet.
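|
| Plugging in the rough numbers above (Python; full-speed
| worst case, and real fans are speed-controlled, so the
| average is lower):
|
|     fans = 10
|     w_per_fan = 10.0              # 8-10 W each at full speed
|     cpu_w = 2 * 120.0             # two 120 W TDP CPUs
|     overhead = fans * w_per_fan / (cpu_w + fans * w_per_fan)
|     # ~0.29 -> nearly a third of that envelope just moves air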
| dragontamer wrote:
| In general, 1U and even 2U seem very dense to me. I think
| that 4U is the density a typical office building could
| support.
|
| Even 1 node in 2U definitely gets into dedicated 220V
| outlets and dedicated buildings. 1U or 1/2U per node
| certainly exists, but you need to put a good amount of
| thought into power and cooling to actually support that
| kind of infrastructure.
|
| -----------
|
| > If you look at the 'wall of fans' in something like a
| Dell 1U dual socket system, there's eight or ten fans
| each of which if running at full speed can be an 8W to
| 10W load. This is necessary because you might have two
| 120W TDP CPUs under passive heatsinks that need a LOT of
| air pushed past them.
|
| That's possibly an advantage here. ARM chips are known for
| lower power consumption, so maybe the fans can run slower
| (and therefore draw less power). I'd have to think about
| these specs more, though...
|
| A 2-node x 1U ARM server might work out if these ARM
| servers have very low power requirements in a very strong
| "horizontal scale-out" kind of setup.
|
| But it's still hard for me to think of exactly what the use
| case would be. An I/O-heavy server would probably be better
| in 4U: 50 to 100 hard drives on a beefier 4U Xeon or EPYC
| chassis.
|
| So it's one of those "what's the use" products. Under
| typical circumstances, a 2U or 4U beefy server split up
| into multiple VMs is probably a superior architecture.
|
| But having more "tiny" physical machines that are mostly
| independent (maybe sharing a PSU, but otherwise fully
| independent nodes) for "bare metal hosting" has some
| benefits over VMs. I mean... do you want a single
| dual-socket 128-core EPYC in 4U, or 8x 16-core ARMs in 1/2U
| each?
|
| I dunno, I think the 128-core dual-socket 4U EPYC is gonna
| be better for cooling, power, and VM flexibility. At least,
| that's my instinct. Unless you know that you absolutely
| want individual non-VM nodes.
|
| ------
|
| EDIT: The I/O options discussed here are exceptional. If
| low CPU power but high I/O is needed (more SFP+ ports or
| whatever), then that's probably the ARM's advantage over a
| single dual-socket EPYC system.
|
| EDIT2: 16x A72 seems weak. But 4x 10Gbps SFP+? That's...
| actually really, really good. Might be hell to actually
| write software that takes advantage of all that bandwidth
| though.
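|
| EDIT3: a quick Python sanity check of that last point (the
| 2.0 GHz A72 clock is an assumption):
|
|     line_gbps = 4 * 10                  # four SFP+ ports
|     frame_bits = (64 + 20) * 8          # min frame + preamble/IFG
|     pps = line_gbps * 1e9 / frame_bits  # ~59.5 Mpps worst case
|     cores, clk = 16, 2.0e9
|     per_core_budget = clk / (pps / cores)  # ~540 cycles/packet
|
| ~540 cycles per minimum-size packet per core is
| kernel-bypass (DPDK-style) territory, not something a
| normal socket app will sustain.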
| walrus01 wrote:
| The part with four 10Gbps ports is nice. I wish there
| were more mini-itx x86-64 server boards with such. The
| vast majority of mini-itx boards are not designed for
| front-to-back server airflow, and are intended/marketed
| more for consumer enthusiast small gaming PCs and stuff.
|
| Ordinarily in an x86-64 Intel or AMD chipset server, if you
| wanted 4x 10Gbps SFP+ in addition to whatever NICs are on
| the motherboard, you'd end up using one of the Intel-chipset
| low-profile PCI-Express interface cards in a slot. Using
| some long-depth 1U Dells as an example, you might get a
| motherboard that has 2x 1000BaseT and 2x 10GbE SFP+ on a
| daughtercard plugged into the motherboard, and then two or
| three low-profile PCIe slots to add more NICs. Or some
| combination like one full-height PCIe slot and one
| low-profile slot.
| snuxoll wrote:
| The $750 is just for the board. Four SFP+ ports, some
| onboard SATA ports, an open PCIe 3.0 x8 slot, and a 16-core
| Cortex-A72 CPU on a mini-ITX-compatible board is still a
| decent price given it's not a mass-market product. Hell,
| development kits for many ARM SoCs cost more than that for
| way less.
|
| That said, you are absolutely correct that rackmount
| equipment is made expensive by its form factor - but given
| this is a mini-ITX board clearly not designed for use in a
| rackmount chassis (given the clearance needed for CPU
| cooling and the orientation of the memory modules), it's
| not a factor here.
| rjsw wrote:
| The $750 is for one board, the case in the article holds two
| of them.
| mobilio wrote:
| I'm curious what the price is for that server? Or just for one
| node.
| paxswill wrote:
| You can get the board for $750, but it says 8 weeks shipping:
| https://shop.solid-run.com/product/SRLX216S00D00GE064H08CH/
|
| It's mini-ITX, so there's a pile of cases you can use for it if
| you want.
| flatiron wrote:
| Looks to me like bang/buck here is still going to be AMD.
| You could put together a very nice AMD board for $750.
| MartijnBraam wrote:
| It's very interesting for certain use cases. I ordered one
| of those boards to run as a build server for postmarketOS.
| trevorishere wrote:
| Agreed. Unless you're concerned with power consumption, I'm
| not sure where this server fits, especially with the A72
| core, which is a few years old.
|
| I'd love to have one of these to replace my ODroid N2+ just
| for a rack-mount solution, but not at that price.
| walrus01 wrote:
| As a design to put into a 1U case, that location for the
| CPU on the daughterboard above the motherboard is really
| problematic, because it leaves almost no headroom for a
| proper passive heatsink.
|
| In a well-engineered 1U server setup you want the
| motherboard to be as low to the bottom of the chassis as
| possible, then the CPU in its socket, and a big passive
| heatsink (either aluminum or skived copper) occupying
| almost all the rest of the vertical room inside the case.
| The server should be like a front-to-back wind tunnel where
| the 40mm fans move air through the CPU heatsink(s) without
| the need for separate fans on top of the CPUs themselves.
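|
| The back-of-the-envelope sizing here is the standard rule
| of thumb CFM ~= 1.76 x watts / delta-T(degC), from
| Q = m_dot * cp * dT for sea-level air. In Python:
|
|     def airflow_cfm(watts, delta_t_c=10.0):
|         # Airflow needed to remove `watts` of heat at the
|         # given air temperature rise across the chassis
|         return 1.76 * watts / delta_t_c
|
|     airflow_cfm(300)  # ~53 CFM for a ~300 W 1U box, 10 C rise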
|
| The fan unit on top of that is really problematic from an
| above-the-fan vertical clearance and airflow perspective.
|
| Additionally, the SODIMM slots block the typical airflow
| path from the front edge of the motherboard towards the
| rear. A server motherboard for 1U would normally not use
| laptop-size RAM, but full-size DIMMs oriented parallel to
| the path of airflow.
|
| (disclosure: I used to work for a server manufacturer and was
| responsible for procuring components from Taiwanese vendors,
| and designing new generations of 1U single and dual socket
| boxes to custom specs)
| cameron_b wrote:
| It's clear that the COM Express carrier wasn't initially
| designed for this application. SolidRun has simply gotten
| enough requests for help solving rack-density questions
| that they put together an answer.
| snuxoll wrote:
| It _is_ advertised as a workstation board, so the lack of
| design considerations for a rack-mounted chassis isn't
| surprising. That said, given the power draw of the entire
| board, a 2U chassis with a standard circular fan on the CPU
| and modest chassis airflow is likely to be sufficient.
| ojn wrote:
| Unfortunately a low-signal article and announcement: no
| release date, no pricing, nothing in-depth.
| nine_k wrote:
| At least it shows the form factor and the interfaces: 4x
| 10G SFP+, 4x SATA; not bad. Looks like a compact but solid
| board for CPU-light loads, likely comparable to boards
| based on Intel's C37xx. What made me sad is that the memory
| is apparently not ECC :(
|
| Still no estimated price and no performance figures.
| yjftsjthsd-h wrote:
| It's a pity that the whole point seems to be hitting a
| middle price point, but there's no actual pricing info
| available. Still, once they get closer to shipping this
| could be interesting to watch. Anything to get better
| competition in the "normal" server space :)
___________________________________________________________________
(page generated 2021-03-01 23:02 UTC)