[HN Gopher] Uptime Lab's CM4 Blade Adds NVMe, TPM 2.0 to Raspber...
       ___________________________________________________________________
        
       Uptime Lab's CM4 Blade Adds NVMe, TPM 2.0 to Raspberry Pi
        
       Author : geerlingguy
       Score  : 105 points
       Date   : 2021-08-04 14:30 UTC (8 hours ago)
        
 (HTM) web link (www.jeffgeerling.com)
 (TXT) w3m dump (www.jeffgeerling.com)
        
       | jdoss wrote:
        | I know the Pi home lab hardware ecosystem is not the
        | datacenter ecosystem, but what I really want on a Pi is not
        | TPM but IPMI for remote management.
        
         | geerlingguy wrote:
         | Ping @merocle on Twitter--the board's design is not 100% final,
         | and there may be a few other features he could cram in (I've
         | already thought of moving at least the serial UART connection
         | to the front...).
        
         | mbreese wrote:
         | That's funny. Because one of the more legitimate uses for a Pi
         | in a data center is as a remote management KVM.
         | 
         | https://pikvm.org/
        
       | diarmuidc wrote:
       | "The board gets fairly warm, likely due to the overhead of the
       | PoE+ power conversion (which consumes 6-7W on its own!"
       | 
       | 7 W of losses? That's either wrong or insane for DC/DC
       | regulation. Rough figures, pi4CM (5W) + NVMe (2W) = 7W. So
       | regulation of 50% efficiency?
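        | 
        | Sanity check on that implied efficiency, using my own rough
        | numbers above:
        | 
        |     load_w = 5 + 2   # Pi CM4 (~5W) + NVMe (~2W)
        |     loss_w = 7       # reported conversion overhead
        |     print(load_w / (load_w + loss_w))  # 0.5 -> ~50%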
        
         | shadowpho wrote:
          | PoE can't do "purely" DC/DC regulation. Any reasonable setup
          | requires a transformer for isolation on both ends, which
          | lowers efficiency, especially at the low end. Now with that
          | said, you can definitely get decent efficiency with PoE;
          | it's just that most people don't care, so it's not as big of
          | a focus.
        
         | geerlingguy wrote:
         | I found the same to be the case on the official Pi 4 PoE+ HAT
         | (the new 2021 version)[1].
         | 
         | [1] https://www.jeffgeerling.com/blog/2021/review-raspberry-
         | pis-...
        
       | anoonmoose wrote:
       | Getting kind of sick of OP reviewing products that we can't buy
       | and, imo, probably won't ever be able to buy. Rather, sick of it
       | making it to the HN front page.
       | 
       | https://news.ycombinator.com/item?id=27460885
        
         | geerlingguy wrote:
          | Most of these products can be built by hobbyists (in fact,
          | almost all of them are produced by hobbyists). The reason
          | most fail to launch is not only the extreme difficulty of
          | scaling from prototype to production quantities, but also,
          | from 2020 onwards, the chip shortages.
         | 
         | For each product I've been able to actually touch, there are
         | ten more I've heard about and wanted to see happen, but they
         | just couldn't be made because certain specialty ICs are just
         | not available to small-time hobbyists/makers.
         | 
         | I don't tend to review these boards because I want people to go
         | out and buy them. I review them because I am inspired to try
         | out new ideas and try my hand at new things (circuit designs,
         | PCB design, etc.), and I figure maybe some other people can get
         | inspired to try too.
         | 
         | I try to feature as much open source hardware as possible,
         | because it actually is possible for people to get custom PCBs
         | and put together their own versions of them.
        
         | jimmies wrote:
         | Being off-the-shelf is an added bonus, but not a requirement.
          | This website is called Hacker News, not Best Buy news. As
          | long as someone else can replicate the results from what is
          | given in the post, I do think the post has enough merit.
        
         | rsync wrote:
         | Ugh, what ?
         | 
         | I love seeing the Jeff Geerling posts and his blog is great!
        
         | wila wrote:
         | You don't have to read his posts, the base URL is right next to
         | the title. It is very easy to skip an article if it is not your
         | cup of tea.
         | 
         | Personally I love the posts of Jeff as I think they are very
         | well written and well researched.
        
       | geerlingguy wrote:
       | To be clear, the blade adds TPM 2.0 via an Infineon chip, and you
       | can interact with the chip, but features like secure boot require
       | bootloader-level integration, and at this time, since no other Pi
       | hardware has TPM, the Pi's closed-source bootloader doesn't
       | support it.
       | 
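        | Fun aside: even without bootloader support, you can poke the
        | chip from Linux. A rough sketch (assuming the kernel's TPM
        | SPI driver is loaded and exposes /dev/tpmrm0 -- the device
        | name is an assumption, check your setup):
        | 
        |     # Ask the TPM for 16 random bytes with a raw
        |     # TPM2_GetRandom command.
        |     import struct
        | 
        |     # tag=TPM_ST_NO_SESSIONS, commandSize=12,
        |     # commandCode=TPM_CC_GetRandom, bytesRequested=16
        |     cmd = struct.pack(">HIIH", 0x8001, 12, 0x0000017B, 16)
        |     with open("/dev/tpmrm0", "r+b", buffering=0) as tpm:
        |         tpm.write(cmd)
        |         resp = tpm.read(4096)
        |     tag, size, rc = struct.unpack(">HII", resp[:10])
        |     assert rc == 0, f"TPM error 0x{rc:08x}"
        |     (count,) = struct.unpack(">H", resp[10:12])
        |     print(resp[12:12 + count].hex())
        | 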
       | Also something interesting to noodle on: _technically_ the Pi 4
       | /CM4/Pi 400 have ECC RAM [1] -- but some ECC functionality may
       | require CPU-level integration... which I'm guessing isn't present
       | on the Pi's SoC (but who knows? I posted this forum topic asking
       | for clarification: [2]).
       | 
       | [1] Product brief mentions LPDDR4 with on-die ECC:
       | https://datasheets.raspberrypi.org/rpi4/raspberry-pi-4-produ...
       | 
       | [2] https://www.raspberrypi.org/forums/viewtopic.php?t=315415
        
         | formerly_proven wrote:
          | On-die ECC means that the DRAM array in the chip stores the
          | ECC bits and the chip itself handles them transparently; the
          | interface does not use ECC in this case. This is done partly
          | for higher cell densities (just like in storage) and partly
          | to allow longer refresh intervals (refreshing requires power
          | regardless of usage).
        
         | fakesheriff wrote:
         | Without bootloader integration, what's the difference between
         | adding a TPM vs an HSM like [0]? Does TPM just have a more
         | standardized interface?
         | 
         | [0] https://www.zymbit.com/2020/11/10/blog-security-module-
         | raspb...
        
           | geerlingguy wrote:
           | You can actually add both--there's a partial GPIO header for
           | the Zymkey 4i on the board.
           | 
           | But yeah, I think the idea is TPM is a bit more standardized
           | across hardware, so some software that uses it would not need
           | any tweaks to run on the Pi with a TPM built-in.
        
             | gruez wrote:
             | >But yeah, I think the idea is TPM is a bit more
             | standardized across hardware
             | 
             | But there are USB HSMs, along with smart cards (which are
             | also HSMs). Aren't those pretty standard?
        
               | tadfisher wrote:
               | Sort of; the communication protocol is standard (CCID),
               | but the actual HSM interface varies. Yubikeys implement
               | the OpenPGP smartcard interface, for example, as well as
               | PKCS #11.
               | 
                | The TPM specification has its own crypto interface
                | that is standard across all hardware, so you can do
                | things like generate a key and perform crypto
                | operations without requiring that the hardware
                | implement any particular interface beyond whatever TPM
                | version you require.
               | 
               | There are advantages and disadvantages to both
               | approaches. On Linux, TPMs are implemented in the kernel,
               | and CCID is handled by userspace drivers.
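                | 
                | For a concrete contrast: talking to a PKCS #11 token
                | means loading a vendor-specific module in userspace.
                | A rough sketch with the python-pkcs11 library (the
                | module path and PIN are placeholders):
                | 
                |     import pkcs11
                | 
                |     # Vendor-specific module -- varies per HSM,
                |     # unlike a TPM, which the kernel exposes
                |     # uniformly as /dev/tpm0.
                |     lib = pkcs11.lib("/usr/lib/opensc-pkcs11.so")
                |     token = lib.get_token()  # assumes one token
                |     with token.open(user_pin="123456") as session:
                |         pub, priv = session.generate_keypair(
                |             pkcs11.KeyType.RSA, 2048)
                |         sig = priv.sign(b"hello")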
        
         | Stampo00 wrote:
         | The inclusion of TPM is definitely the most interesting aspect
         | of this to me. I was initially turned off by the idea because I
         | tend to associate them with nasty enterprise-y DRM types of
         | stuff.
         | 
         | But I wonder why it's included here. Maybe for secure storage
         | of private keys? I'm guessing that's the desired purpose for
         | TPM in a server setting.
         | 
         | Obviously I'm not super familiar with TPM. I wonder if it can
         | be used for things like better random number generation or
         | hardware-accelerated hashing. I'm guessing if it supported
         | hashing from userspace, the cryptocurrency crowd would have
         | been all over it already.
        
           | simcop2387 wrote:
            | I believe the 2.0 standard does support RNG, but I'm not
            | sure of the specifics. There might be some hashing
            | functionality, but the interface is going to be too slow
            | for large data. I think it's there to do integrity checks
            | for bootloaders and such.
        
           | megous wrote:
           | TPMs are just SPI connected devices. You can add them to any
           | SBC.
           | 
           | Hardware accelerated hashing would be severely limited by the
           | SPI bus speed.
        
           | duskwuff wrote:
           | > I wonder if it can be used for things like better random
           | number generation...
           | 
           | Too slow to be useful, and the BCM283x SoC already has an
           | internal RNG:
           | 
            | https://github.com/torvalds/linux/blob/master/drivers/char/h...
           | 
           | > ... or hardware-accelerated hashing
           | 
           | The TPM is on a _really slow_ bus (33 MHz, 4 bits, tons of
           | wait states). Even without any hardware acceleration, hashing
           | on the CPU is much faster.
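            | 
            | Back-of-the-envelope from those numbers (my arithmetic,
            | ignoring command overhead and wait states, so reality is
            | worse still):
            | 
            |     clock_hz = 33e6       # 33 MHz bus clock
            |     bits_per_cycle = 4    # 4-bit-wide bus
            |     mb_per_s = clock_hz * bits_per_cycle / 8 / 1e6
            |     print(f"{mb_per_s:.1f} MB/s ceiling")  # ~16.5 MB/s
            | 
            | The CPU hashes well beyond that, so even "free" hashing
            | inside the TPM could never keep up.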
        
       | rsync wrote:
       | Can we talk about the 10" rack please ?
       | 
       | It appears that the rack posts are just chopped/hacked from some
       | other rack ... but where are people sourcing these little 10"
       | wide rack shelves ?
       | 
       | Also, the bottom component - is that a patch panel or a switch ?
        
         | Merocle wrote:
         | I used this one: https://www.rack-magic.com/Mini-
         | Rack-4826mm-19-Rack-Stand-6H...
        
         | youngtaff wrote:
          | Bottom component looks like a patch panel.
          | 
          | Plenty of 10" kit about, e.g.
          | https://datacabinetsdirect.co.uk/soho-10-inch-data-network-r...
          | 
          | 10" soho rack appears to be the magic search phrase.
          | 
          | I've seen a rack similar to the one in the article before
          | and can't find it now, but one way to recreate it would be
          | to buy something like this and use a 10" blanking plate
          | instead of the 19" one:
          | https://www.amazon.com/Procraft-Desktop-System-Washers-DTR-1...
          | 
          | Lots of music gear shops sell desktop racks BTW.
        
       | mbreese wrote:
        | At this level, I don't understand the PoE requirement. If
        | you're already making "blades", why not also make a proper
        | backplane that carries power and network connections? You'd
        | probably be more power efficient and remove the need for so
        | many network ports.
       | 
       | This would also increase the costs for the chassis (because you'd
       | have to add a network switch), but you could probably also pack
       | in more blades... But even if you kept the ethernet PHY ports,
       | you'd probably be more power efficient with even just power on a
       | backplane.
        
         | postpawl wrote:
         | Yup, this is only really practical for a rack you manage
         | yourself. The colocation provider is going to charge you like
         | $5 per network connection.
        
         | silasb wrote:
         | I like this idea. Can we do something similar to how Frame.work
         | laptops work and just provide the connection to/from the
         | chassis as USB-C?
         | 
          | It'd be great if the next version of the compute module
          | provided Thunderbolt 4 instead. I think we'd be able to
          | provide both power + 40Gbps NIC support over one connection.
        
         | q3k wrote:
         | A backplane is nontrivial to design electrically/mechanically
         | and expensive to manufacture (even just some high quality
         | connectors will quickly run up your BOM). If you also want to
         | carry networking there you'll end up having to do some semi-
         | custom network switch design, which locks you into a particular
         | switch/ASIC vendor 'forever', or at least vastly increases
         | friction when wanting to upgrade. Yes, they already have their
         | own PCBA and some mechanical design, but it's very simple
         | compared to what it takes to design a reliable backplane with
         | an integrated power supply and network switch.
         | 
          | At the end of the day, this is a low-cost system for low-
          | cost devices. IMO the little benefit from having a backplane
          | is not worth the R&D cost and the downsides of fully
          | backplaned blade systems.
         | 
         | And this is not just about this project: from what I see, the
         | industry seems to have rejected fully integrated blade systems.
         | Dell's M1000e is dying, and I don't think I've seen HPE
          | bladecenters in years. Instead, semi-integrated systems like
          | Supermicro's high-density offerings are king. No proprietary
          | chassis management system, no proprietary network switches,
          | no locking yourself into whatever the backplane can carry.
        
           | formerly_proven wrote:
           | At work we have oodles of 2U4N (8P) systems. Just going by
           | eye I'd guess these have the same or even higher density than
           | a blade center, while allowing more granular scaling, and
           | being a standard form factor, and you get both front and back
           | access to each node, so you can have whatever I/O you want,
           | and you don't need a lift to handle the chassis.
        
           | Merocle wrote:
           | totally agree, thanks
        
         | geerlingguy wrote:
         | I think it's mostly convention and convenience--in the Pi
         | ecosystem, most people using these things headless or as small
         | servers are used to powering with PoE already.
         | 
         | Networking gear is often powered by PoE, but most servers have
         | higher power requirements so a custom backplane or separate
         | power method would be required.
         | 
         | Boards like the Turing Pi v2 (so far just prototypes and a
         | marketing page on a website) would have a built-in network
         | switch and power backplane (direct to each CM4), and I'm
         | guessing another board or two like it will appear someday. I
         | hope.
        
           | h2odragon wrote:
            | Is there somewhere after the PoE conversion that power
            | could be tapped in? Even just a pair of pins for 5V sounds
            | like it might make for significant power savings.
        
           | mbreese wrote:
           | No, I get that... and there is a different level between
           | hobbyist projects and larger production projects. Using the
           | existing PoE "infrastructure" makes a lot of sense in the
           | prototyping stages.
           | 
           | But if you're already going through the process to build
           | custom boards for a cluster, then a backplane makes much more
           | sense. We're not talking about a small 4-5 RPi cluster where
           | everything can work off of a single PoE gigabit switch with a
           | rat's nest of cat6. This project is talking about 16 of these
           | blades in a 1U chassis. If you're going to go that far, a
           | backplane isn't a big leap and would save power, space, and
           | make cabling much easier.
        
             | toast0 wrote:
             | The trick is, if your backplane is providing ethernet
             | switching, then you need to provide for different levels of
             | network needs.
             | 
             | Some people would be fine with a dumb N + 1 port gigE
             | switch, which would be inexpensive, others will need 10G
             | uplink, some 2x 10G with LACP so there's no bottleneck,
             | some are going to want vlans or other managed switch style
             | offerings, etc.
             | 
              | If the PoE power conversion really eats 6W per board
              | as mentioned in the article though, that seems like a
              | lot of power, and an alternate power arrangement would
              | be a no-brainer.
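              | 
              | And at the 16-blades-per-1U density mentioned upthread,
              | that overhead adds up fast (rough math, using the low
              | end of the article's 6-7W figure):
              | 
              |     blades = 16
              |     loss_per_blade_w = 6
              |     print(blades * loss_per_blade_w, "W")  # 96 W of
              |                                            # pure loss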
        
               | [deleted]
        
       | karmicthreat wrote:
        | What I want to do for a funsies project is have a bunch of
        | Pi 4 CMs plugged into a PCIe switch and a multi-host 10GbE
        | adapter.
        | 
        | I don't think this is possible though; I think I read
        | somewhere that the Pi 4's PCIe root doesn't support
        | host<->host comms.
        | 
        | If anyone knows different, speak up!
        
       | bserge wrote:
       | Wait, since when is using several separate machines an acceptable
       | way of computing? And especially counting total RAM as if all of
       | it can be used when needed?
       | 
       | I've been doing that for a decade now (for cheapness mostly) and
       | people said it was stupid and worse than a single computer with
       | the same number of cores and RAM.
       | 
       | Also would those chips need heatsinks and cooling?
        
         | toast0 wrote:
         | Using several separate machines is what you have to do when
         | your computing needs don't fit in a single computer.
         | 
         | Of course, the total computing power here would fit in a single
         | x86 computer, but it can be fun to play with clustering and
         | this might be a lot less expensive than an x86 cluster.
        
         | 908B64B197 wrote:
         | > Wait, since when is using several separate machines an
         | acceptable way of computing? And especially counting total RAM
         | as if all of it can be used when needed?
         | 
         | It brings joy to Pi owners. There's more to the Pi than just
         | the hardware.
        
         | gh02t wrote:
          | Supercomputers have been built like that for a long time
          | (decades) now, albeit with ultra-fast networking and storage
          | to back them. There is a whole lot of research and money
          | poured into specialized algorithms and software optimized
          | for distributed machines. And that's far from the only form
          | of distributed computing in use.
        
       ___________________________________________________________________
       (page generated 2021-08-04 23:00 UTC)