[HN Gopher] IBM Power10 Coming to Market: E1080 for 'Frictionles...
       ___________________________________________________________________
        
       IBM Power10 Coming to Market: E1080 for 'Frictionless Hybrid Cloud
       Experiences'
        
       Author : rbanffy
       Score  : 53 points
       Date   : 2021-09-09 12:15 UTC (2 days ago)
        
 (HTM) web link (www.anandtech.com)
 (TXT) w3m dump (www.anandtech.com)
        
       | jstx1 wrote:
       | I still don't get what hybrid cloud is and how much of it is
       | marketing and how much is actually useful tools, services and/or
       | infrastructure.
        
         | imglorp wrote:
          | The other commenters mention multi-cloud use cases, but
          | another case is the on-prem/cloud hybrid. You may want some of
          | your business on prem for whatever reason, but still be able
          | to move workloads to various clouds and back for price, load,
          | or other reasons.
        
           | jstx1 wrote:
           | So if that's the set-up (on-prem + a cloud provider), what's
           | being sold as a hybrid cloud solution - some services that
           | allow you to manage them better or something else? What are
           | some examples?
        
             | pm90 wrote:
             | AWS Outpost and GCP Anthos are examples of hybrid cloud
             | services.
             | 
             | At first cloud providers created tools to make it easier to
             | migrate/connect your on prem to your cloud environments.
              | I'm guessing what usually happened was that some workloads
              | would get migrated but others would remain on prem (for
              | various reasons), so cloud providers started building
              | products that put "our cloud in your data center", i.e.
              | they deploy cloud-like services within your data center so
              | your developers have access to the same interface
              | regardless of where the workload is deployed.
        
             | Closi wrote:
             | Chips that are architecturally similar / behave like the
             | cloud platforms, and they are claiming it is designed to
             | allow a higher container density (i.e. run more containers
             | at once, where some of those containers might have very
             | little activity).
        
             | imglorp wrote:
              | AWS is probably further along than the others throwing
              | offerings at the wall. They let you do things like have a
              | single API that can manage your stuff both on prem and in
              | their cloud. It can look like containers, like VMs, like a
              | VPN, all sorts of options. Sometimes it sticks to the wall.
             | 
             | https://aws.amazon.com/hybrid
        
         | pjmlp wrote:
         | Basically having mixed deployments across cloud environments,
         | with services that ease application management and deployment
         | across them.
         | 
          | Sure, one can do that today, but consulting shops and cloud
          | vendors need new products to keep the board happy with
          | exponential growth.
        
         | dakial1 wrote:
          | Cloud was a great option in the beginning because of the
          | elasticity and the lower cost and effort to maintain (in
          | comparison to on prem). But as companies like AWS, Google, MS
          | and IBM worked to lock their clients in, costs rose and the
          | cloud's economic benefit faded. Some companies tried
          | multicloud, which worked for a while, and now some are
          | actually moving parts of their infrastructure back to on prem,
          | which is where the hybrid cloud case comes in. What I've seen
          | is companies covering their base demand with on prem and
          | leaving the cloud to handle the elastic part of their demand.
          | If the cost balance shifts, they can also use more cloud and
          | less on prem, and vice versa.
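          | 
          | A toy Python sketch of that split (all numbers invented):
          | serve a fixed baseline on prem and burst only the peaks to
          | the cloud.
          | 
          |     # assumed hourly demand, in arbitrary capacity units
          |     hourly_demand = [40, 55, 90, 140, 75, 60]
          |     baseline = 80   # on-prem capacity we chose to own
          |     cloud_burst = [max(0, d - baseline) for d in hourly_demand]
          |     print(cloud_burst)  # [0, 0, 10, 60, 0, 0] -> only peaks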
        
         | playcache wrote:
          | It's mostly around federation and abstracting away the
          | lock-ins of a particular cloud provider.
         | 
         | For example, I might want to run my own hardware for a certain
         | reason and then also run workloads on AWS, but I don't want to
          | have to manage different sets of APIs, auth methods, etc.
        
         | hughrr wrote:
         | It's about putting things in the cheapest place with the lowest
         | risk.
        
         | peytoncasper wrote:
          | I agree to some extent with the other commenters. But the
          | reality is that almost every enterprise of any size will be
          | hybrid cloud for quite some time, if not forever. It's
          | impossible to lift and shift everything at once, which means
          | you need the infrastructure to connect those environments
          | regardless.
         | 
          | On top of that, you've still got some applications that run on
          | platforms that just aren't compatible and will need to be
          | either rebuilt or left to run on-premise indefinitely.
          | 
          | Additionally, getting back to the other comments, I think
          | there is some logic around data being viewed as "heavy" given
          | the egress costs on most clouds. Having one place from which
          | you can cheaply push data to any new environment that pops up
          | seems like a decent idea. Then again, a lot of money-related
          | things can be solved by simple contract negotiation, so maybe
          | it's not worth it.
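          | 
          | A rough back-of-envelope of what "heavy" means in dollars (the
          | per-GB price here is an assumption, not a quote from any
          | provider):
          | 
          |     dataset_tb = 50
          |     egress_usd_per_gb = 0.09  # assumed list-price-ish rate
          |     cost = dataset_tb * 1024 * egress_usd_per_gb
          |     print(f"~${cost:,.0f} to move {dataset_tb} TB out once")
          |     # -> ~$4,608 every time that dataset leaves the cloud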
        
           | jstx1 wrote:
           | > you need the infrastructure to connect those environments
           | regardless
           | 
           | What does that infrastructure look like? What are some
           | examples?
        
             | tyingq wrote:
             | One example would be an on-prem object storage device
             | that's capable of replicating data to AWS S3.
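              | 
              | A minimal sketch of that replication direction in Python
              | with boto3 (the local path and bucket name are made up;
              | real appliances such as MinIO do this natively):
              | 
              |     import pathlib
              |     import boto3
              | 
              |     s3 = boto3.client("s3")
              |     # assumed on-prem object store root
              |     local_root = pathlib.Path("/srv/objects")
              |     for obj in local_root.rglob("*"):
              |         if obj.is_file():
              |             key = str(obj.relative_to(local_root))
              |             s3.upload_file(str(obj),
              |                            "my-replica-bucket", key)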
        
             | pm90 wrote:
             | One example would be to use "interconnects" which link your
             | data center to the cloud through a high capacity (maybe low
             | latency?) "dedicated" line. So basically you have private
             | IP connectivity between your DC and a virtual private cloud
             | (VPC) so your workloads "think" they're on the same private
             | IP network. I thought of interconnects as beefy VPNs (but
             | this may not be accurate, just a helpful mental aid).
             | 
             | Note: I'm using mostly GCP terminology,
             | https://cloud.google.com/hybrid-connectivity
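              | 
              | As a toy check of the "same private IP network" idea (the
              | address and port below are made up): from an on-prem host
              | you would expect a workload in the VPC to answer on its
              | private RFC 1918 address, with no public internet hop.
              | 
              |     import socket
              | 
              |     vpc_private_ip = "10.128.0.12"  # assumed VPC address
              |     port = 5432  # e.g. a DB only exposed privately
              |     conn = socket.create_connection((vpc_private_ip, port),
              |                                     timeout=2)
              |     print("reached", conn.getpeername(),
              |           "over the private link")
              |     conn.close()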
        
       | theandrewbailey wrote:
       | Would love to see one of these in a Raptor Talos machine.
        
         | _-david-_ wrote:
         | Here is a relevant article
         | 
         | https://www.phoronix.com/scan.php?page=news_item&px=IBM-POWE...
        
         | mnd999 wrote:
         | I was thinking the same, I dunno if they sold well enough to
         | justify a new version, but I hope so. I've been tempted several
         | times, but it seems like quite a lot of money for a Linux box
         | that probably has quite a lot of subtle incompatibility quirks.
        
           | p_l wrote:
           | The problem currently isn't how many sold, but issues
           | regarding IP that RCS (reasonably) does not want to
           | compromise on. Issues that block, for now, a new POWER10
           | system unless you still want blobs, just not shipped on-board
           | - and want to pay ridiculous prices for unique memory
           | modules.
        
             | mnd999 wrote:
             | Ahh, that's a shame. Blobs would kill the whole thing dead
             | for a large proportion of their customer base. Let's hope
             | they find a way to resolve it.
        
               | dragontamer wrote:
                | There seem to be only two blobs in POWER10: the memory
                | controller and something related to I/O.
               | 
               | The question is what to do about it. Maybe an open source
               | firmware rewrite can happen. Or maybe IBM needs to make a
               | cheaper desktop version first with a more open DDR4/5
               | controller?
               | 
               | That advanced RAM module isn't needed for desktop level
               | workloads anyway.
               | 
                | Either way, 2022 is out. Maybe 2023 will have things
                | line up better?
        
               | p_l wrote:
                | There is only one blob needed for a working system that
                | can't be recreated from source, and that's the OMI-DDR4
                | interface. There's talk of reversing it, but nobody has
                | the time or resources. So long as said chip is the only
                | way to make a motherboard that doesn't require OMI
                | memory modules (which have the same chip anyway),
                | there's a problem.
               | 
                | EDIT: Hah, I missed the recent PPE discussion. There are
                | now some people looking towards reversing that too, but
                | again, not RCS itself. AFAIK nobody has seen a non-OMI
                | POWER10 CPU on the roadmap.
        
               | ksec wrote:
               | What is the intended usage for that? As in used in
               | Consumer PC ?
               | 
                | Would Microwatt [1] fit that purpose? Not only do you
                | have an open ISA in OpenPOWER, you also have an open
                | implementation of that ISA. AFAIK RISC-V doesn't have
                | anything similar, only open source designs for embedded
                | usage.
               | 
                | Otherwise a low cost POWER10 (or now Power10 without the
                | capitals... and I hate it) won't make much sense. You are
                | talking about a ~30mm2+ per core design. It is huge.
               | 
               | [1] https://github.com/antonblanchard/microwatt
        
               | rbanffy wrote:
               | > Or maybe IBM needs to make a cheaper desktop version
               | first
               | 
               | I doubt that will happen. For that to happen, there would
               | need to be a market for low-end POWER servers. Low-end
               | means low-margin and competition from x86 boxes. There
               | may be space for a low-spec IBM machine (a POWER-based
               | descendant of the AS/400), and I imagine the IBM i
               | subsidizes a lot of the development of the AIX-based
               | POWER boxes (because they are the same hardware).
               | 
                | I really think IBM should spend some money on entry-level
                | boxes for their exclusive platforms (P, I and Z). I don't
                | see I or Z going anywhere - they are good enough and too
                | expensive/risky to migrate away from - but P and AIX are
                | too close to generic x86 hardware running Linux to feel
                | too comfortable. Right now, it's hard to justify even
                | suggesting a greenfield project using anything that's not
                | a commodity platform. Being less expensive doesn't help
                | when the minimum sticker price is more than $100K.
                | Chances are development would start on generic boxes or
                | in the cloud and stay there. IBM is the only company that
                | can make POWER competitive with x86 at the entry level,
                | and a single-core SMT8 part would probably be a
                | justifiable expense for many R&D departments.
               | 
               | > with a more open DDR4/5 controller?
               | 
                | My impression is that the PowerAXON interface would be
                | able to control DDR4/5 memory.
        
               | classichasclass wrote:
               | The OMI issue is probably solvable with an open
               | controller. Not a trivial undertaking but OMI, at least,
               | is documented.
               | 
               | The on-chip PPE I/O controller is a bigger problem. I
               | suspect, but don't know, that it's the PCIe interface. If
               | so, I can't think of an easy way of getting around it.
        
               | aww_dang wrote:
               | In an ideal world, IBM would help cultivate the ecosystem
               | Raptor is creating.
        
               | dragontamer wrote:
                | I'd say that releasing all the other POWER10 firmware as
                | open source is still a big deal.
                | 
                | They didn't reach Raptor's high standard of open
                | firmware, but POWER10 is probably one of the most open
                | CPUs in the modern marketplace.
        
       | marcodiego wrote:
       | Don't forget: https://www.talospace.com/2021/02/a-better-theory-
       | on-why-the...
        
       | datameta wrote:
       | What a privilege it is to contribute to P10. OMI is mind-
       | bogglingly fast and will be an even greater advantage during the
       | enterprise DDR5 era.
        
       | ChuckMcM wrote:
       | Such a beast. I would love to play with one of these. Other than
       | some really serious CFD or EM field simulations I don't know if I
       | could keep it busy :-)
        
         | wallacoloo wrote:
         | So how do these stack up against running that same workload
         | (simulations) on a powerful GPU? Often a sim requires running
         | the same operation over thousands of different cells: it seems
         | really suited to the "warp" style of parallelism, where all
         | cores in a set run the same instruction in lockstep. Or SIMD,
          | but that's usually a bit more effort to port. LLVM has a
          | SPIR-V backend these days, so does POWER actually make sense
          | for these workloads outside of places where it's difficult to
          | port your codebase?
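          | 
          | The kind of kernel in question, as a NumPy sketch (grid size
          | and the smoothing rule are arbitrary): one update applied
          | identically across ~1M cells, which is what maps well onto
          | warp/SIMD execution.
          | 
          |     import numpy as np
          | 
          |     grid = np.random.rand(1024, 1024)
          |     new = grid.copy()
          |     # Jacobi-style step: each interior cell becomes the mean
          |     # of its four neighbours -- same op for every cell.
          |     new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1]
          |                               + grid[1:-1, :-2] + grid[1:-1, 2:])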
        
           | ChuckMcM wrote:
           | Generally it depends on the working set. Massively parallel
           | architectures with shared visibility into a coherent memory
           | space can often more efficiently compute volumetric problems.
            | Doing the same problems on GPUs involves a lot of data
            | shuffling, and you end up on the wrong side of Amdahl's law.
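            | 
            | A quick Amdahl's law sketch of that effect (the 20% serial
            | fraction standing in for host<->GPU shuffling is an assumed
            | number):
            | 
            |     def amdahl(parallel_fraction, n_workers):
            |         serial = 1.0 - parallel_fraction
            |         return 1.0 / (serial + parallel_fraction / n_workers)
            | 
            |     print(amdahl(0.80, 64))         # ~4.7x
            |     print(amdahl(0.80, 1_000_000))  # approaches the 5x cap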
        
       ___________________________________________________________________
       (page generated 2021-09-11 23:02 UTC)