[HN Gopher] Home Lab Hardware Guide
___________________________________________________________________
Home Lab Hardware Guide
Author : ashitlerferad
Score : 115 points
Date : 2021-08-23 10:50 UTC (1 day ago)
(HTM) web link (haydenjames.io)
(TXT) w3m dump (haydenjames.io)
| willis936 wrote:
| I'm on board with most of this except the suggestion that rack
| mounted hardware should be kept where people live.
|
| _No._
|
| I can't afford a house but like to tinker with networking. I pay
| more for weaker equipment so it will be low power/low noise. I'd
| love to just buy half a dozen used Dell PowerEdges but rack-
| mounted hardware is insanely loud.
|
| A basement is the ideal spot. Water isn't an issue as long as
| your rack is not directly under anything that could leak
 | (including on higher floors) and has a pallet underneath it. If
| there is the possibility of your basement flooding more than 2
| inches then you have bigger problems you need to address first.
| Keeping a rack with electronic equipment there will motivate you
| to do what you should be doing to the place anyway: dehumidifying
| and managing pests.
| croutonwagon wrote:
 | Yeah, I tend to agree.
|
| I had a small rack in our old house. Mostly just to house the
 | router/switch because the little cabinet wouldn't fit them.
|
| My wife dubbed it the EyeRack, short for eyesore.
|
| For the most part you can easily hide this stuff in a cabinet
| or a bookshelf, no one will be the wiser.
|
 | I'll never really understand people using pizza-box-style
 | servers, especially 1U units, in a home. With the sole
 | exception of the one time I saw one stood vertically behind an
 | entertainment center. I think I was the only person at the
 | party to notice it.
| bluedino wrote:
 | If you live near a co-lo, you might be able to get a half rack
| and go in on it with a buddy or two.
| snuxoll wrote:
| > I'd love to just buy half a dozen used Dell PowerEdges but
| rack-mounted hardware is insanely loud.
|
| This is a common belief that isn't quite correct. My 2U Dell
| R520 is quite quiet after the initial boot (once the BMC takes
 | over fan control), though I had to do some ipmitool magic to
 | stop it ramping the fans up with non-OEM PCIe cards installed.
|
| My 1U R420 and R320 boxes? Yeah, they're a little loud, 40mm
| fans have to run at higher speeds to get air flowing.
|
| Ultimately my lab lives in my home office and the noise doesn't
| really bother me, I wouldn't put it in the bedroom or living
| room though.
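 | For the curious, that "ipmitool magic" on 12th-gen Dells usually
 | boils down to two raw commands: one to take fan control away from
 | the BMC and one to set a fixed duty cycle. A sketch, with the
 | caveat that the 0x30 0x30 raw codes below are community-documented
 | for iDRAC 7-era machines, not an official Dell API -- verify
 | against your own model before running anything:

```python
# Build (without running) the ipmitool invocations commonly used to
# take manual control of fan speed on 12th-gen Dell servers
# (R320/R420/R520 and similar). The raw byte sequences below are
# community-documented, not an official Dell API.

def manual_fan_control_cmds(duty_percent: int):
    """Return two ipmitool commands: enable manual mode, set duty cycle."""
    if not 0 <= duty_percent <= 100:
        raise ValueError("duty cycle must be 0-100")
    # 0x30 0x30 0x01 0x00 -> disable the BMC's automatic fan control
    enable_manual = ["ipmitool", "raw", "0x30", "0x30", "0x01", "0x00"]
    # 0x30 0x30 0x02 0xff <hex%> -> set all fans to the given duty cycle
    set_speed = ["ipmitool", "raw", "0x30", "0x30", "0x02", "0xff",
                 f"0x{duty_percent:02x}"]
    return enable_manual, set_speed

if __name__ == "__main__":
    for cmd in manual_fan_control_cmds(25):  # ~25% duty cycle
        print(" ".join(cmd))
```

 | Each list can then be run with subprocess.run(cmd, check=True);
 | sending raw 0x30 0x30 0x01 0x01 is the commonly cited way to hand
 | control back to the BMC afterwards.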
| theandrewbailey wrote:
| I've kept my server[0] in my basement since I moved into a
| place with one. I keep it elevated off the floor (not just for
 | water concerns, but airflow, too) and under a table. It could
 | make plenty of noise without bothering anyone, since no one's
 | near it most of the time (though it doesn't).
|
 | [0] Which is really just an old desktop.
| deeblering4 wrote:
| I've done this and it wasn't good. The servers heated up the
| basement, and chewed through power like crazy. Plus it was
| heavy as hell and a pain to recycle.
|
 | Rack servers use that form factor to maximize density in
 | expensive datacenter rack space. Once you're not in a
 | datacenter, regular commodity hardware is a better bet.
|
 | For home use, laptops are really ideal. They have a built-in
 | UPS (the battery) and KVM (screen and keyboard).
| Saris wrote:
| Agreed, I started with rack mount stuff and quickly moved away
| from it. Very loud and for the budget stuff frequently used by
| homelabs, very power hungry for not that much performance.
| memetomancer wrote:
 | So many of these posts gainsaying the home-lab practice are
 | spot on. In my view it's plain foolish to try to cram
 | enterprise gear into a living space. It's almost too hot and
 | loud for my office; why would I take any of that home?
|
| The other posts talking about Tiny Mini Micros are on the right
| track but I think it goes further yet - there's good reason to
| have a small rack of crap in the corner:
|
| - ISP hardware.
|
| - pfSense gateway.
|
| - Wifi base station.
|
| - A good gigabit switch for the house.
|
| - Those tiny mini micros or Mac minis for lab stuff.
|
 | - A NAS chassis or two.
|
| - Raspberry Pi clusters.
|
| - PiDP-11 or other such hobby stuff that needs a place to sit and
| blink.
|
| There are plenty of other uses too, like security DVRs, ingest
| stations for cameras/recorders, optical and tape media devices,
| etc.
|
| None of that stuff is hot or loud, but you probably wouldn't want
| it piled up on your desk or spilling out of some bookshelf. And I
| think the article kinda gets at that point, tbh.
| the_third_wave wrote:
| I made a rack out of some dumpster-dived supermarket shelves,
| lumber, a truck air filter and a forced draft fan. The thing
| doubles as drying cabinet for produce (mint, mushrooms, fruit
| etc.) by having the equipment in the top half of the rack
| followed by an air flow divider and 8 rack-sized metal-mesh-
| covered drying frames. From top to bottom the thing contains:
|
| * D-Link DGS-3324SR (managed switch, EUR35)
|
| * HP DL380G7 with 2xX5675 @3.07GHz, 128GB (ECC) RAM and 8x147GB
| SAS drives (EUR450)
|
| * NetApp DS4243 (24x3.5" SAS array, currently populated with
| 24x650GB 15K SAS drives, EUR400)
|
| * the mentioned airflow divider
|
| * 8 drying frames
|
 | It is managed through Proxmox on Debian and runs a host of
 | services, including a virtual router (OpenWRT), serving us here
 | on the farm and the extended family spread over 2 countries.
 | The server-mounted array is used as a boot drive and to host
 | some container and VM images; the DS4243 array is configured as
 | a JBOD running a mixture of LVM/mdadm-managed arrays and stripe
 | sets used as VM/container image and data storage. I chose mdadm
 | over ZFS because of the greater flexibility it offers. The
 | array in the DL380 is managed by the P410i array controller
 | (i.e. hardware RAID); I keep 4 spare drives in storage as
 | replacements for failed drives.
|
 | The rack is about 1.65m high; it looks like this (here minus the
| DS4243 array which now sits just above the air flow divider):
|
| https://imgur.com/a/M4Lbf1K
|
| In the not-too-distant future I'll replace the 15K SAS drives
| with larger albeit slower (7.2K) SAS or SATA drives to get more
| space and (especially) less heat - those 15K drives run hot.
| After a warm summer I added an extra air intake + filter on the
| front side (not visible on the photos), facing the equipment.
| This is made possible by the fact that cooling air is pulled
| through the contraption from the underside instead of being blown
| in through the filter(s).
|
| I chose this specific hardware - a fairly loaded DL380G7, the
| DS4243 - because these offered the best price/performance ratio
| when I got them (in 2018). Spare parts for these devices are
| cheap and easily available, I made sure to get a full complement
| of power supplies for both devices (2 for the DL380G7, 4 for the
| DS4243) although I'm only using half of these. I recently had to
| replace a power supply in the DL380 (EUR20) and two drives in the
 | DS4243 (EUR20/piece); otherwise everything has been working
 | fine for close to 4 years now.
|
 | On the question of whether this much hardware is needed, well,
 | that depends on what you want to do. If you just want to serve
 | media files and have a shell host to log in to, the answer is
 | probably 'no', depending on the size of the library. Instead of using
| 'enterprise class' equipment you could try to build a system
| tailored to the home environment which prioritizes a reduction in
| power consumption and noise levels over redundancy and
| performance. You'll probably end up spending about the same
| amount of money for hardware, a bit more in time and get a
| substantially lower performing system but you'd be rewarded by
| the lower noise levels and reduced power consumption. The latter
| can be offset by adding a few solar panels, the former by moving
| the rack to a less noise-sensitive location - the basement, the
| barn, etc.
|
| As to having 19" rack equipment in the home I'd say this is
| feasible as long as you don't have to sit right next to the
| things. Even with the totally enclosed, forced-draft rack I made
| the thing does produce enough noise to make it hard to forget it
| is there.
| KaiserPro wrote:
| For UK people https://www.bargainhardware.co.uk/ is an
| _excellent_ source of kit.
|
 | Personally I steer away from Cisco. Yes, some people in
 | enterprise swear by it, but I _personally_ hate it with a
 | passion. However, there is a fucktonne of it on eBay.
|
| I use ubiquiti for APs, but I've not tried their switching.
 | Currently I have some D-Link "smart" stuff. It's PoE-capable
 | and has VLANs, which is good enough for my purposes. Can do
 | 10gig; not bad for under £120 (second hand).
|
 | For a firewall, I'm all for pfSense. I've never liked hardware
| firewall/router appliances. They've always sucked.
| ChuckMcM wrote:
| I strongly endorse this notion of equipping your own laboratory
| for your experiments. Learning through doing is always more
| durable than learning through reading only.
|
| While the author is looking at learning about and perfecting
| their skills as an administrator of networked computer systems
| there are other "kinds" of laboratories that people set up.
|
 | Mine, and the kind I'm more familiar with, is an electronics lab. If
| you're going to be learning about circuits and such it helps to
| have the basic kit at hand. Similarly for people doing robotics,
| having a 3D printer in their home lab is essential these days.
| Nearly everything you might do in a home laboratory will involve
| some sort of data processing so the ideas by the author are great
| for creating the lab's "IT infrastructure."
|
 | In California it also makes it easier to defend the "I built
 | this technology on my own gear (picture/description of lab) so
 | you don't own it" claim. But that may be unique to California.
| mcshicks wrote:
| Yeah I think I was expecting a home lab = "home electronics
| lab", although some networking equipment is still required. The
 | rack mount stuff is nice, but yeah, scopes, bench DMMs, power
 | supplies, a soldering station, cable hangers and many drawers
 | for sorting parts were what I was expecting to see. Still, the rack
| mount stuff is pretty cool. I have quite a few raspberry pis
| these days and was always looking for some "rack mount" style
| ways to make the power/ethernet cables nice and still have some
| easy way to temporarily hook up keyboard/mouse/monitor when
| it's occasionally needed.
| tbyehl wrote:
| I've been re-establishing my home lab and decided to get away
| from rackmount gear. I found ServeTheHome's TinyMiniMicro[1]
| series invaluable for choosing some mini PCs that would be right
| for me.
|
| I went with three HP Prodesk 600 G4 that averaged $250/ea with
| the i5-8500T/i5-8600T, 256GB NVMe, and a total of 40GB RAM. They
| can go to 64GB RAM, the dual M.2 M-key plus potentially an SFF
| SATA drive offer plenty of storage potential, they're effectively
| silent, and power consumption is much lower than a big server
| full of fans. vPro potentially offers out-of-band remote
| management but I haven't tried digging into that yet.
|
| I have two dedicated to Frigate with M.2 Coral TPUs. On the third
| I've been consolidating the sprawl of Linux VMs and Docker
| containers running home automation and network management stuff.
| Could probably make do with just two but why buy only two when
| you can have three?
|
| [1] https://www.servethehome.com/tag/tinyminimicro/
| rektide wrote:
| > _AMD has really raised the bar. I'm most impressed with the CPU
| performance of the M715q. They both run quiet and cool, with
| Ubuntu Server and Windows 10._
|
 | The M715q was offered with a fantastic 4750G chip, a Ryzen 7
 | Pro with 8 cores. Today all one can buy in terms of small-form-
 | factor business PCs is an M75n with a low-end, low-power Ryzen
 | 3300U, a multiple-generations-old Ryzen 3 with 4 cores.
|
| Small business PCs are great, and for a while, there was serious
| excitement that AMD was going to make this segment much more
| interesting. Those dreams seem to have all been cancelled. I'm
| glad to see that affordable, competent AMD laptops are about,
| because in many ways it feels like AMD has succeeded so greatly
| that they have vanished from the market. They don't seem to be
 | allocating production capacity to consumer GPUs, and they seem
 | to have withdrawn from this price-conscious market segment...
 | AMD keeps vanishing.
| unethical_ban wrote:
| I just did some work on my main server, and my take is a bit
| different.
|
| I upgraded my desktop and built a server from a Ryzen 1700, put
 | 64GB of RAM in it, and now this one device acts as a DNS
 | filter/cache (Pi-hole), VPN server (PiVPN/WireGuard), and a
 | 10TB ZFS NAS. This is just the base; I also use it for gaming
 | and labs.
|
| The main recommendations:
|
| Fractal Design Define R5 - this is a large case, and is pretty
| wide - but it is a dream to work in. The extra width gives plenty
| of room behind the motherboard for hiding cables. It has quiet
| fans, it is built to minimize noise, and it can hold 8+ hard
| drives.
|
| OS: Proxmox. I use this as the host OS, and configure my ZFS on
| the host. I then expose the ZFS as a NAS via a privileged
| container running Turnkey Linux.
|
 | If you get some multi-port NICs on it, you can put an OPNsense
| firewall as a VM, and use the machine as your router as well. In
| the end, you would only need UPS, modem, small switch, and the
| host.
| dragontamer wrote:
 | Note that 10GbE SFP+ switches have come down to $250 or so, and
| may be worthwhile for homelabs to experiment with. See Mikrotik's
| CRS309-1G-8S+IN, or servethehome's review
 | (https://www.servethehome.com/mikrotik-crs309-1g-8sin-review-...).
|
| If only because most of us probably already know how to use
| Cat5/Cat6 Ethernet, but how many of us have experimented with
| fiber optics?
|
| 10Gb Ethernet over Cat6 exists too. But that may be boring for
| some! Home labs are about experimenting with new things.
| bombcar wrote:
| Another advantage of fiber is it helps prevent lightning and
| other power surges from spreading. If your equipment is
| protected on the power edge, fiber isolates it on the network
| side.
| dragontamer wrote:
| Hmm, maybe "fiber" is the wrong moniker here.
|
| I'm more talking about SFP+ ports, because most of your
| connections within the rack will probably be DAC (copper
| cables pretending to be fiber) for lower costs. Fiber is
| really for longer runs. If you only have a few feet worth of
| cable, I'm not sure if fiber per se is worth it over DAC.
|
| But learning to work with SFP+ hardware is a skill, just like
| learning to strip CAT6 cable or run it around. Working with
| DAC cables, or SFP+ modules and finding what works is the
| "dumb part" of IT, but the kind of stuff you need to practice
| a few times to understand.
|
| ----
|
 | Grabbing a few SFP+ ConnectX-2 cards from eBay (for $30 or
| so), a few DAC cables, and a $250 switch... you can be well
| on your way to a 10Gbit network.
| kazen44 wrote:
 | DAC cables are usually vastly more expensive than SFP+
 | optics and some multimode cable.
 |
 | Singlemode is not really required in a homelab setting
 | because of distance, but DAC cables are more trouble than
 | they're worth in my opinion.
| walterbell wrote:
| _> Enterprise features: Ubiquiti EdgeRouter ER-10X, 10 Port
| Gigabit Router with PoE Flexibility - $110 (specs) - (10) Gigabit
| RJ45 Ports, PoE Passthrough on Port 10, Dual-Core, 880 MHz,
| MIPS1004Kc Processor, 512 MB DDR3 RAM, 512 MB NAND Flash Storage,
| Internal Switch, Serial Console Port_
|
| Has anyone been able to purchase a small Ubiquiti EdgeRouter in
| the last six months? They've been out of stock at Amazon, Newegg,
| B&H. Beginning to wonder if they have deprioritized the consumer
| market, since other vendors are shipping routers.
| tbyehl wrote:
| Ubiquiti appears to have been prioritizing their own online
| store since the supply chain disruptions began. The ER-10X in
| particular may also be suffering from a general lack of
| popularity -- they've only had stock twice this year in fairly
 | low quantities. They sell through fairly quickly, but not nearly
 | as fast as the ER-X, which has been stocked regularly and in
 | much larger quantities.
|
| There's an inventory tracker for the Ubiquiti store on the
| Discord.
|
| https://discord.gg/ui
| mbreese wrote:
| Are they still making them? I thought they had switched to
| pushing their Dream Machine as opposed to the Edge Router
| series.
| nullwarp wrote:
| I _hate_ the Dream Machines. We've been switching to them at
| work and the whole cloud UI is just an absolute mess. It's so
| hard to find anything.
|
| I will be sad when the last of our Mikrotik stuff gets
| swapped out.
| p_j_w wrote:
| EdgeRouter 4 is available on their own store right now. Most of
 its lighter-weight siblings aren't, but their 3 most expensive
| models are also out of stock. None of their consumer wifi gear
| seems out of stock, though. I'd guess Si shortages before
| assuming they deprioritized the consumer market.
| walterbell wrote:
| Yeah, at that $250 price point there are x86 coreboot
| alternatives.
| KingMachiavelli wrote:
| Rack mount hardware is almost always expensive, loud, and power
| hungry. I just have never seen the point of building a home lab
| like this.
|
 | A single ATX desktop can do almost [1] everything a homelab
 | can, at a fraction of the cost & power consumption. I think a
 | lot of the reason for homelabs/server hardware was to get
 | access to more CPU cores; now that 8+ cores are very cheap, it
 | is actually cheaper to buy a new consumer desktop than to run
 | an old server.
|
 | What makes even less sense is _wanting_ to use software like
 | vSphere or ESXi, since it's about 10x more complicated than
 | just using virt-manager/QEMU. It's like using an excavator to
 | dig a fire pit. Server hardware & software makes sense when
 | it's not your home, because then you do need a remote access
 | tool like iDRAC. (There are DIY options if you just need
 | something for personal use.)
|
| That said if you enjoy it as a hobby (or your homelab is actually
| a business thing) then go for it.
| dheera wrote:
| > expensive
|
| This isn't always true. I got a 24-port Aruba 802.3at PoE
| switch with FOUR 10-gigabit ports, for a grand total of $120.
|
| > loud, and power hungry
|
| Again, not always true. Enterprises do care about power a lot
| of the time.
|
| I highly recommend the Ubiquiti stuff for home lab use -- most
| of it is pretty quiet.
|
 | If you buy other rack-mount hardware, try to buy at least 2U
 | hardware; the bigger fans are much quieter than the 40mm fans
 | in 1U equipment.
|
| If you must buy non-Ubiquiti 1U equipment, you can usually
| change the fans out for Noctua fans.
|
| > A single ATX desktop
|
| You can build an ATX desktop into a 4U case. I highly recommend
| the SilverStone RM42-502 for about $250 on Amazon. It takes
 | standard components: a standard ATX power supply; a standard
 | ATX, micro-ATX, or mini-ITX motherboard; standard fans (or even
 | a Corsair liquid cooler). It's basically a standard case. If
 | you use quiet components it will be quiet. My ATX rack
| mount PC is not even noticeable unless I'm running up my GPU
| doing machine learning stuff.
|
| There are much cheaper cases available as well, but the
 | SilverStone case is quality and will last you forever; you can
 | just keep building new PCs into it for as long as ATX/ITX
 | exist.
|
| One of the advantages to building your PC in a rack mount
| configuration is that it's very easy to stack multiple PCs
| along with your network routers, switches, NAS, UPS, in one
| nice rack that's easy to move from apartment to apartment in
| one piece, and all your cables and connections stay nice and
| tidy.
|
| It's also ideal if you play with a lot of smaller devices. For
| example if you want to have a cluster of 10 RPis, a rack mount
| solution is great for keeping the ethernet and power cables
| tidy, and it isn't going to be loud or any more power hungry
| than if you had them spread out across the table.
|
| You can also 3D print rack mounts for non-rack equipment, just
| to keep them tidy.
|
| My rack: https://i.redd.it/xcss9uassrg71.jpg
| actually_a_dog wrote:
| I agree. My setup is very much like the setup shown near the
| end of the article (the one that consists of a couple Synology
| NAS boxes and what looks like a few Mac Minis or other small
| form factor PCs). I have a file server with a decently large
| storage array and a few Raspberry Pis and other small
| electronic gizmos, some of which connect to my wifi, and some
| which plug into a PC via various cables (ethernet, USB, _etc._
| )
|
| The only thing out of this that consumes any amount of power
| worth mentioning is the file server/storage array. I haven't
| measured how much power it uses (I probably should), but I'm
| able to minimize it by allowing the disks to go to sleep and
| the CPU to run slower than when I'm actively using it.
|
| I've never felt limited by this setup at all, but, then again,
| my home lab isn't really my main hobby.
| kaladin-jasnah wrote:
 | > A single ATX desktop can do almost [1] everything a homelab
 | can at a fraction of the cost & power consumption.
|
| So this is true, but the iDRAC/iLO on my big, loud server has a
| virtual KVM feature that lets me lazily sit at my desk and
| click buttons and install my server operating system of choice.
| It saves a lot of space and effort compared to flashing a USB,
| going and plugging in a keyboard, (mouse), and monitor, and
| going through the whole deal. I'd wager that's one of the big
| things that would compel me to buy a rack server. I recently
| built a nice ATX desktop, fitted with a 5950X and everything,
| and I found that the PiKVM project [0] does a pretty good job
| at replacing that "integral" part of the server for me (you can
| also look into an ASRock Rack PAUL [1], but good luck finding
 | one for sale right now).
|
 | > What makes even less sense is wanting to use software
 | like vSphere or ESXi, since it's about 10x more complicated than
 | just using virt-manager/QEMU
|
| A lot of people (not me, I end up using libvirt/QEMU as it
| suits my needs) buy homelabs to work towards having hands-on
| experience for their system administration job, which uses
| ESXi/vSphere. It might also be for working on getting
| certifications from VMware, in which case they really don't
| have any choice but to use ESXi on their servers.
|
 | > Server hardware & software makes sense when it's not your
 | home because then you do need a remote access tool like iDRAC
|
| Now, I addressed this earlier (laziness), but these BMC things
| are very useful--you can monitor the health of various
| components of your server, and I believe even update the BIOS
| without stepping out of your chair. It makes administering a
| homelab much easier, and even the Pi-KVM, a DIY option, I'm
| pretty sure, doesn't have monitoring features. Plus, those DIY
| solutions require wiring stuff into your ATX motherboard, which
| can get janky and might put off people who want a turnkey
| solution.
|
| [0] https://pi-kvm.org/
|
| [1]
| https://www.asrockrack.com/general/productdetail.asp?Model=P...
| PragmaticPulp wrote:
| > So this is true, but the iDRAC/iLO on my big, loud server
| has a virtual KVM feature
|
| This is available on AMD Ryzen motherboards, too.
|
| Just get one of ASRock's motherboards with the BMC controller
 built-in: https://www.asrockrack.com/general/productdetail.asp?Model=X...
|
| No need to mess with PiKVM or add-in cards. It's a server
| board with KVM management that works out of the box with
| Ryzen processors. It might need a BIOS update to support your
| 5950X, but it will work.
| kaladin-jasnah wrote:
| Yeah, I know about those ASRock boards, but they're more
| expensive than the rudimentary Pi-KVM solution I have right
| now (a 35 dollar Pi and a 12 dollar HDMI capture dongle; I
| would use wake-on-lan for powering the board on... if my
| MSI board's WoL worked). Also, not the one you linked, but
| the newer B550 ASRock Rack boards are impossible to find
| for sale--the only place I could find the B550 boards were
| on wisp.net.au, and I don't live in Australia or New
| Zealand so it wouldn't be cost effective. Perhaps I
| should've opted for an X470 board, but it was "older" so I
| was put off.
| [deleted]
| KingMachiavelli wrote:
 | Yea, that's true. ESXi/vSphere can be very relevant, although
 | I found it easy enough to learn on the job. The really
 | complicated stuff in vSphere probably isn't going to come up
 | in a homelab, but if experience is necessary to get the job
 | then it's worth it.
|
 | pi-kvm looks very nice. I would really like to not have to
 | use iDRAC or pay the license fee.
|
 | Normal ATX motherboards do lack a lot of features. I'm not
 | sure why normal mobos don't just use LVFS [1] to update the
 | BIOS, but luckily they can read the update file directly off
 | the vfat EFI partition, so pi-kvm would solve that. I think
 | every modern ATX motherboard also supports UEFI network boot,
 | so you could set up a simple DHCP+iPXE server for onboarding
 | machines.
|
| [1] https://fwupd.org/
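 | That last step can be surprisingly little config. A minimal
 | sketch using dnsmasq as the DHCP/TFTP server (the interface
 | name, addresses, and paths here are illustrative assumptions;
 | undionly.kpxe comes from the iPXE project):

```
# /etc/dnsmasq.conf -- minimal DHCP + TFTP for iPXE network boot
interface=eth1
dhcp-range=192.168.50.100,192.168.50.150,12h
enable-tftp
tftp-root=/srv/tftp
# Plain PXE clients get the iPXE loader first...
dhcp-match=set:ipxe,175
dhcp-boot=tag:!ipxe,undionly.kpxe
# ...then iPXE (which sends DHCP option 175) chainloads a script over HTTP
dhcp-boot=tag:ipxe,http://192.168.50.1/boot.ipxe
```

 | Machines that PXE-boot on that segment load iPXE, which then
 | fetches boot.ipxe over HTTP; from there you can script installer
 | menus for onboarding.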
| PragmaticPulp wrote:
| > Rack mount hardware is almost always expensive, loud, and
| power hungry.
|
| Buying old rack-mount server hardware for home use is almost
| always a mistake. Old server hardware may feel cheap when you
| see an old dual-socket rack mount server on eBay with hardware
| that was fast 8 years ago, but you can probably meet or exceed
| the performance with something like a cheap 8-core Ryzen.
|
| Rack mount servers are also exceptionally loud. Unless you love
| the noise of small, high-RPM server fans, you don't want rack
| mount server hardware in your house.
|
 | And don't forget the power bill. Some old servers idle at
 | hundreds of watts, which adds up over the several years you
 | leave one running. 24/7 server hardware is a good example of
 | where it makes sense to be mindful of power consumption.
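 | To put rough numbers on that (a back-of-the-envelope sketch;
 | the $0.15/kWh rate is an assumption -- substitute your own
 | tariff):

```python
# Annual running cost of an always-on box at a given idle power draw.

def annual_cost_usd(idle_watts: float, usd_per_kwh: float = 0.15) -> float:
    """Cost of running 24/7 for a year at the given constant draw."""
    kwh_per_year = idle_watts * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

if __name__ == "__main__":
    # mini PC vs. efficient server vs. old dual-socket rack server
    for watts in (30, 120, 250):
        print(f"{watts:>3} W idle -> ${annual_cost_usd(watts):7.2f}/year")
```

 | At that rate, a 30 W mini PC costs about $39/year while a 250 W
 | idler costs about $329/year -- the price gap between an old
 | server and a modern efficient box can close within a year or
 | two of power bills.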
|
| > What makes even less sense is wanting use to use software
| like vSphere or ESXi since its about 10x more complicated than
| just using virt-manager/QEMU.
|
| I disagree. ESXi is actually extremely easy to use, as long as
| you pick compatible hardware up front. The GUI isn't perfect,
| but it's intuitive enough that I feel confident clicking around
| to accomplish what I need instead of looking up a tutorial
| first.
| beerandt wrote:
| I've always ignored advice even people say somethings too
| hard or not worth it, and pretty much never regret it.
|
| I absolutely regret trying to get a used rack mount server
| running.
|
| The combination of steep learning curve from workstations to
| server hardware, plus parts that were failing but tested ok,
| made for an extremely difficult path to troubleshooting and
| getting it running right.
|
 | And that's before you get to the quirks of getting it to
| boot and installing an OS and drivers and software.
|
| I love it now that it works, but it easily took 100x the time
 | (yes, 100x) and probably 2-2.5x the total expected cost to get
 | it to that point.
|
| Not counting the additional AC unit I installed to keep it
| (somewhat) quieter.
|
| I usually expect one or two aspects of my projects to have
 | unexpected roadblocks, but this one had issues at what seemed
 | like every single step.
|
 | Its eventual replacement will be factory-new.
| p_j_w wrote:
 >What makes even less sense is wanting to use software like
 | vSphere or ESXi, since it's about 10x more complicated than just
 | using virt-manager/QEMU.
|
| I always just assumed it was as a learning experience. When I
| was in my 20s, I didn't try to get Apache, qmail, and bind
| running because it was practical for me, I wanted a marketable
| skill. There are lucrative jobs out there for people who know
| these technologies.
| R0b0t1 wrote:
| Best to avoid vSphere/ESXi in that case. I learned using qemu
| and was able to step into many roles immediately, including
| some VMware ones. The Linux/qemu/Xen ones pay better.
| awat wrote:
 | That was my experience as well. Home labs seem to be cyclical
 | for a lot of people, including myself. I started with a big
 | overkill rack to learn technologies, and now I'm down to a 10
| inch desk rack that is ATX case size with just a few things
| to run my network and small VMs.
| KingMachiavelli wrote:
 | The application side is more understandable. There are lots
 | of reasons to know and use Apache, Postfix, etc.
|
| As far as learning goes, if you really want to work in
| enterprise IT then vSphere is good to know but if you are
| willing to learn things on your own then you might as well
| learn kubernetes, docker, etc.
|
 | Knowing things like vSphere is cool & perhaps useful, but it
 | also hides how things work. If you want to know and understand
 | things, it is better to stick to open source and interact
 | directly with KVM and Xen. Like you wouldn't use cPanel to
 | learn how a LAMP stack works.
| aftbit wrote:
| > Rack mount hardware is almost always expensive, loud, and
| power hungry.
|
| It really depends on the particular hardware. I recently picked
| up an R720 with 128GB of memory and dual E5-2670v1 for $450 on
| eBay. It idles at 120W (about the same as my brand new Ryzen
| 5950X desktop). It is not much louder than my old air-cooled
| 8700K consumer PC. Of course, it's not much faster either, and
| it's definitely slower than my Ryzen.
|
| I bought it to learn how to use iDRAC and practice ZFS with 10x
| $20 1GB 10k RPM SAS drives. Also maybe to give Proxmox a try.
| All of my practical home-prod stuff runs on an old i5 desktop,
| not my rack servers.
| Lammy wrote:
| > I just have never seen the point of building a home lab like
| this.
|
| https://www.youtube.com/watch?v=38ApYaywLzs
|
| I have to pay PG&E rates for power ($$$) so I'm a big fan of
| lower-power hardware for a system I'm going to leave on 24/7,
| e.g. my 25W-TDP 1U racked 8-core ECC-equipped Atom server built
| on this board:
| https://www.supermicro.com/en/products/motherboard/A2SDi-8C+...
| PaulWaldman wrote:
 >What makes even less sense is wanting to use software like
 | vSphere or ESXi, since it's about 10x more complicated than
 | just using virt-manager/QEMU.
|
| I disagree. Assuming your example of a single ATX desktop, ESXi
| really is easy to setup and modern versions provide a graphical
| web client. This assumes you're staying away from vSAN,
| vMotion, and iSCSI storage.
| kazen44 wrote:
 But assuming you are learning the VMware stack because of
 | employment, not knowing things like vSAN, iSCSI and vMotion
 | makes your lab nearly worthless.
 |
 | Learning vSphere and ESXi properly requires at least a
 | decently sized cluster, especially if you start throwing NSX
 | into the mix.
| deeblering4 wrote:
 | The Lenovo "tiny" hardware recommended in the article is really
 | ideal. It's essentially laptop components in a micro case (no
 | HID/battery/screen), and they're even powered by a laptop-style
 | external DC adapter.
|
| They are affordable, quiet, powerful (modern x86_64 with basic
| gpu) and light on power usage.
|
| I run a pair of m92p myself.
| KingMachiavelli wrote:
 | The downside to these small-form-factor PCs is that you are
 | very connectivity-limited. You can't use one to build a NAS
 | directly, or GPU-connected VMs, etc.
|
 | They are quite good as a cheap thin client that you use to
 | access your more powerful hardware. As hardware ages it tends
 | to lack some of the nice features like dual 4K@60Hz output,
 | Thunderbolt, etc., so having a new but cheap/low-power machine
 | helps.
| deeblering4 wrote:
 | That's true. This model has just one small expansion slot, so
 | you have to be clever. But it's fine for general-purpose
 | compute.
|
 | Yeah, realistically NAS is out, but that's probably OK. I
 | mean, NAS is not a great fit for most general-purpose
 | computers. In a pinch you could go USB 3 JBOD or something.
| But personally I think it'd be better to go with either
| specifically storage oriented hardware, or a scale out
| filesystem on top of a cheap cluster, something like
| odroid-hc2.
|
| Personally I like to keep things compartmentalized even at
| home. So my NAS is a dedicated (off the shelf) system, and
| the lenovo mini servers mount it via NFS/CIFS.
| Steltek wrote:
| Yes! Companies refresh Dells pretty often and you can find an
| i7 for not much money on eBay. Buy two to double up the RAM. If
| you know the right people at a company, you could get them for
| free even.
|
 | Home labs aren't about millions of hits; they're a playground.
| bradstewart wrote:
| I had more space in my network rack than near my desk, so I
| bought a cheap Rosewill rackmountable ATX case, and rebuilt my
| old desktop into it (since I pretty much only use my laptop
| these days).
|
| But I agree that buying old Dell servers for home use is rather
| silly at this point.
| jrm4 wrote:
 | Right? Obviously, everyone should do their thing. But I suppose
 | the tiny, tiny "issue" I might have is that I kind of feel
 | like this reinforces the idea that "having server things in the
 | house is big and complex."
 |
 | So I encourage everyone who does this to also tell the newbs,
 | "I mean, you could also just slap Linux on that old computer in
 | the corner and do 95% of what I'm doing here, BUT MINE WILL
 | LOOK COOLER."
| User23 wrote:
| UPS plus a generator is a must have if you live someplace with
| severe weather.
| sgarland wrote:
 | This is a good, if opinionated, guide. And to be fair,
| r/homelab and its ilk are so full of options that it's easy to
| become overwhelmed.
|
| Personally, I settled on Supermicro because they're modular, and
| don't care what brand of stuff you throw in them (HP is notorious
 | for spinning its fans up to turbo if you put non-HP disks into them),
| although I may be picking up two Dells to complement the one I
| have for a Proxmox HA/Ceph cluster.
| whalesalad wrote:
 | I love Dell. iDRAC is killer.
| sgarland wrote:
| The HTML5 interface is definitely very nice. I haven't had a
| chance to use Supermicro's HTML5 IPMI, as I have an X9 board,
| and to my knowledge the minimum support for it is X10.
| sjackso wrote:
| I have a little experience with both the old and new
| Supermicro stuff. The new x10 IPMI experience is a lot like
| the old x9 java app experience, except you don't have to
| dig out an ancient computer to make it work. (Which feels
| great by comparison!)
| croutonwagon wrote:
 | I don't run server anything.
 |
 | I have Dell 7050s running a VMware/virtualization lab. They
 | can take 64GB of RAM and handle anything I throw at them. No
 | iKVM, though. But I just walk across the room and hook up a
 | monitor the one or two times a year I need to.
|
 | Honestly, Synology has been the best godsend. I was a SAN
 | admin in a previous life. I have run FreeNAS, Openfiler,
 | Openfiler in HA, Linux+NFS+iSCSI, etc. over the years.
 | Synology generally makes it simple and totally integrated, and
 | lets me play with other things rather than getting storage
 | working.
| youngtaff wrote:
 | Serve the Home did a fab set of reviews of the different SFF
 | machines and how useful they are for a homelab:
|
| https://www.servethehome.com/?s=tinyminimicro
___________________________________________________________________
(page generated 2021-08-24 23:00 UTC)