[HN Gopher] Silicon innovation and custom ASICs at Amazon [pdf]
       ___________________________________________________________________
        
       Silicon innovation and custom ASICs at Amazon [pdf]
        
       Author : mad44
       Score  : 87 points
       Date   : 2022-11-08 18:33 UTC (2 days ago)
        
 (HTM) web link (mvdirona.com)
 (TXT) w3m dump (mvdirona.com)
        
       | ancharm wrote:
       | Is there any video of this posted somewhere?
        
       | mk_stjames wrote:
       | Whenever I see these modern cloud centers with racks and racks of
       | GPU servers, or 64-core custom ARM CPU blade servers with
       | terabytes of RAM... I can't help but wonder how many years it
        | will be until I can just pick one rack up off eBay for a few
       | hundo and play with it, like I used to do with old 80s and 90s
       | obsolete surplus. The way things are going... probably never.
        
         | arbuge wrote:
         | > The way things are going... probably never.
         | 
         | Why not? What way are things going then?
        
           | mk_stjames wrote:
            | I simply do not see corporations selling off obsolete
            | equipment the way they used to. It seems it has become
            | cheaper and less of a burden to take EOL hardware, shred
            | it, and sell it for scrap than to sell it off in any way
            | that could wind up in a normal person's hands. I've seen
            | this firsthand recently: a very, very well-known company
            | had a new plant with a whole warehouse of industrial
            | equipment just old enough to be considered obsolete,
            | literal highbays full of what were once very advanced
            | robotics and specialized machines, perfectly usable for
            | hobbyist engineers or even small startups, but too much
            | of a burden for this company to try to sell off, as that
            | would take too long; the space was needed yesterday. So a
            | service came in that literally sawed things off the floor
            | with grinders and torches and used excavators to load up
            | railcars with anything metal. Electronics and cables were
            | simply cut and piled in separate bins. It all literally
            | went for scrap value, and in ways that could never be
            | repurposed.
        
             | pyrolistical wrote:
              | Sounds like an opportunity for a "scrapping" company to
              | make under-market bids and resell the equipment.
        
               | Arrath wrote:
               | Disposal contracts often have clauses prohibiting
               | activities like that.
        
           | sn0wf1re wrote:
            | It seems the hyperscalers are more in favour of shredding
            | their hardware rather than putting it up for sale on
            | eBay. But I'm not sure this holds universally; I have
            | seen old Google servers on /r/homelab.
        
             | count wrote:
             | They have data sanitization requirements that become
             | difficult to manage at scale if they do anything else. Are
             | you SURE there was no customer data stored in a
             | recoverable-by-modern-physics manner on that machine you
             | sold? Would you stake billions of dollars on it?
        
               | dafelst wrote:
               | Pretty much everything is encrypted at rest in the big
               | datacenters these days
        
             | monocasa wrote:
              | Have you seen actual Google servers, or the whiteboxed
              | Dell search appliances they sold for a while for other
             | people to run in their own datacenters?
             | 
             | These are some examples of the search appliances:
             | https://www.ebay.com/itm/155187480321
             | https://www.ebay.com/itm/284745813810
        
           | UncleOxidant wrote:
           | I'm wondering the same. Maybe they're implying that things
           | are going to get so bad that Amazon will just never be able
           | to afford to replace them and they'll stick around like an
           | IRS mainframe?
        
         | Analemma_ wrote:
         | Cloud hardware lasts longer than you think. There's a
         | widespread misconception that it gets cycled out constantly,
         | but the truth is, new hardware almost always supplements,
         | rather than replaces, the old stuff. When I was at AWS we
         | occasionally had to code workarounds for very old SKUs.
        
         | jeffbee wrote:
         | These things last 10+ years in the cloud so you'd need to be
         | willing to buy something truly obsolete that's already been
         | stripped for spare parts. Also, I imagine you'd have to be
         | willing to cart off an entire rack, which won't be a standard
         | rack but a weird shape, and then you'll need somewhere to plug
         | it in, which won't be a plug as such but a point to which you'd
         | need to wire 600VAC, or whatever that cloud operator was using.
         | Finally, you're going to need some way to connect its weird
         | network to your plain old network.
         | 
         | Oh, and all the management features of the rack won't work
         | because I am sure they would wipe their proprietary software
         | before resale. Basically you're buying raw materials in an
         | inconvenient package.
        
           | xxpor wrote:
           | You're more likely to be able to pick up an Outposts rack,
           | which is at least designed to go into a more "normal"
           | datacenter.
        
         | qbasic_forever wrote:
          | Unfortunately, if you got a rack of Amazon's custom ARM
          | servers you'd need access to their software to get it to
          | boot. You need the device tree description of the hardware
          | to get a Linux kernel booting, and there's no standard for
          | distributing or discovering those for ARM boards.
        
           | monocasa wrote:
           | It's deeper than that since they probably have hardware root
           | of trust watching over boot. I'd be shocked if you could get
           | it to boot at all without Amazon's signing keys.
        
       | amelius wrote:
       | What does "Hello World" look like in the silicon world these
       | days? (Looking for a complete example containing everything
       | necessary to go from code to tapeout, I know multiple answers are
       | possible).
        
         | lnsru wrote:
          | The free ASIC program sponsored by the big search engine:
          | https://efabless.com/open_shuttle_program However, it
          | might be much more complex than a simple "hello world".
        
         | reportingsjr wrote:
         | This is a pretty recent thing, but for a beginner in the VLSI
         | world I'd say this is a good "hello world":
         | https://tinytapeout.com/
        
         | zerohp wrote:
         | I don't think it exists for advanced nodes like the ones Amazon
         | is using.
         | 
         | What does "Hello World" look like for a skyscraper?
        
           | fragmede wrote:
            | More of an MVP than "hello world", but: a 6-story
            | building with girders, concrete, and an elevator, built
            | with materials and construction techniques that would
            | scale to a 110+ story building.
        
       | rob-olmos wrote:
        | What's the next turn of the cycle going to be? Mainframes
        | become even more specialized and powerful, and then the
        | cloud builds new, more specialized silicon to match?
        
       | brooksbp wrote:
       | > Where Have I Been? 2012 to 2022 around the world in a small
       | boat. Worked full time at AWS. Only in North America 3 to 4
       | times/year. Great to be back!
       | 
       | WFH is over folks.
       | 
       | Joking aside, that's awesome, and I hope some flexibility remains
       | for all. Especially for those with little kids and two working
       | parents.
        
       | sokoloff wrote:
       | The slides are interesting; I found the text commentary a good
       | supplement: https://perspectives.mvdirona.com/2022/11/hpts-2022/
        
       | jeffbee wrote:
        | I like the concrete numbers in this deck. Over 20 million
        | Nitro cards installed, over 12 GW of power capacity. It
        | gives us a chance to compare scale with the other bigs.
        
         | nsteel wrote:
         | Genuinely curious, why do you want to directly compare? Does
         | this make them more attractive to work for?
        
           | jeffbee wrote:
           | I think it's interesting to see how big these clouds are in
           | absolute terms because it gives us an idea of when the cloud
           | has finished eating the world. We have global IT equipment
           | energy consumption estimates, and we have scattered data
            | points on cloud energy consumption. Looking at the two,
            | you can gauge the overall progress.
           | 
           | Also as an investor I like to have a general idea of Amazon
           | vs. Google in terms of overall size, to combine with their
           | revenue figures, because that helps me understand how much of
           | Google is being sold as GCP and how much is being used by
           | Google itself.
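
        That gauge can be made concrete with back-of-the-envelope
        arithmetic. A minimal sketch: the 12 GW AWS capacity comes
        from the deck, but the 300 TWh/yr global datacenter figure
        below is an assumed number for illustration, not a sourced
        estimate.

```python
# Upper-bound estimate of one cloud's share of global datacenter
# electricity. AWS_CAPACITY_GW is the deck's stated figure; the
# global total is an assumption for illustration only.

AWS_CAPACITY_GW = 12.0
HOURS_PER_YEAR = 8760

# If that capacity ran flat out all year (an upper bound; real
# utilization is lower), convert GW * hours -> TWh:
aws_max_twh_per_year = AWS_CAPACITY_GW * HOURS_PER_YEAR / 1000
print(round(aws_max_twh_per_year, 1))  # 105.1

GLOBAL_DATACENTER_TWH = 300.0  # assumed, illustrative only
share = aws_max_twh_per_year / GLOBAL_DATACENTER_TWH
print(f"{share:.0%}")  # 35%
```

        With better utilization and global figures substituted in, the
        same two lines of arithmetic give the comparison described in
        the comment above.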
        
       | latchkey wrote:
       | AWS uses 12,000MW of power.
        
       | evancox100 wrote:
        | For those wondering, the definition of HPTS: "High
        | Performance Transaction Systems (HPTS) is an invitational
        | conference held once every two years at the Asilomar
        | Conference Center near Monterey, California."
        
       | pwarner wrote:
        | The cloud is the new mainframe. The difference vs. the old
        | mainframe is that it's so much more accessible to anyone.
        | The barrier to entry to build one is high, but to consume
        | one it is very low. I'm trying to decide if the lock-in
        | problem becomes bigger, but I think where people follow
        | modern software engineering best practices, they can move if
        | needed.
        
         | js8 wrote:
         | > The difference vs the old mainframe is it's so much more
         | accessible to anyone.
         | 
          | As someone who works on zSeries mainframes, I am not sure I
          | agree. For developers, this is true, no doubt. But for
          | organizations, in the cloud your data are locked away in
          | ways they are not on an on-prem mainframe.
          | 
          | Vendor lock-in seems to be similar: you use some
          | middleware, you're locked in.
          | 
          | With a mainframe, there is more control over the
          | infrastructure than in the cloud. I see our management (I
          | work for a MF utilities vendor) constantly wrestle for
          | control over our customers' environments, control that
          | would be easily given up in the cloud (e.g. telemetry).
          | 
          | What's also interesting (though slowly changing for the
          | worse): the mainframe infrastructure (z/OS, for instance)
          | is quite open to custom modification, IMHO more than the
          | cloud (but it depends on the type: IaaS/PaaS/SaaS).
        
           | akira2501 wrote:
           | > Vendor lock-in seems to be similar - you use some
           | middleware, you're locked in.
           | 
            | Is there that much difference between building your
            | infrastructure on DynamoDB vs. using IBM Db2? To me,
            | they seem to create similar levels of "lock-in" and an
            | equal barrier to switching to a new system... and if you
            | want your data, you're going to dump it, reformat it,
            | and start over.
           | 
           | Or am I misunderstanding your point?
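
        That "dump it, reformat it, and start over" step can be
        sketched. DynamoDB reads and exports items in a typed
        attribute-value format ({"S": ...}, {"N": ...}, and so on),
        which a migration has to flatten into plain values before
        loading them into another system. The item layout and field
        names below are invented for illustration, and a real
        migration would also handle the B, SS, NS, NULL, and other
        type tags.

```python
# Flatten DynamoDB's typed attribute-value format into plain Python
# values, as a migration's "reformat" step would. Handles only a
# subset of the type tags; illustrative, not production code.

def unwrap(value):
    """Convert one DynamoDB-typed attribute value to a plain value."""
    (tag, inner), = value.items()
    if tag == "S":
        return inner                        # string
    if tag == "N":
        # numbers are transmitted as strings
        return float(inner) if "." in inner else int(inner)
    if tag == "BOOL":
        return inner                        # already a bool
    if tag == "L":
        return [unwrap(v) for v in inner]   # list of typed values
    if tag == "M":
        return {k: unwrap(v) for k, v in inner.items()}  # nested map
    raise ValueError(f"unhandled DynamoDB type tag: {tag}")

def flatten_item(item):
    """Turn a whole exported item into an ordinary dict."""
    return {k: unwrap(v) for k, v in item.items()}

# Example item in the typed export/wire format (fields invented):
exported = {
    "user_id": {"S": "u-123"},
    "visits": {"N": "42"},
    "tags": {"L": [{"S": "a"}, {"S": "b"}]},
}
print(flatten_item(exported))
# {'user_id': 'u-123', 'visits': 42, 'tags': ['a', 'b']}
```

        The point being that the dump-and-reformat work is mechanical
        either way; the lock-in cost is in everything built on top of
        the store's query model, not in the bytes themselves.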
        
         | ausudhz wrote:
         | Without multi-year contacts.
         | 
         | I think the only problem reside in applications that require
         | longevity. Many apps nowadays get rewritten every now and then
         | (especially front ends and Middleware)
         | 
         | The problem is different for database and core systems.
        
           | pwarner wrote:
           | I think it still comes down to proper software engineering.
           | If you have good interfaces, abstractions, and automated
           | tests, you can move to new systems. I've seen teams struggle
           | to move from DB version x to x+1, taking many many months,
           | but it's because they have no idea if it works after they
           | upgrade. On the flip side you have people like Snowflake who
           | are building a database that runs across multiple clouds.
           | From the outside it appears both portable and optimized for
           | each platform. Thoughtful software engineering, with the
            | right abstractions and test automation, is a big deal...
        
             | ausudhz wrote:
             | Some databases and enterprise application have penalizing
             | contract agreements that prevent cloud migrations from
             | happening.
        
       ___________________________________________________________________
       (page generated 2022-11-10 23:02 UTC)