[HN Gopher] Servers as they should be - shipping early 2022
       ___________________________________________________________________
        
       Servers as they should be - shipping early 2022
        
       Author : ykl
       Score  : 501 points
       Date   : 2021-05-26 18:45 UTC (4 hours ago)
        
 (HTM) web link (oxide.computer)
 (TXT) w3m dump (oxide.computer)
        
       | rcarmo wrote:
       | Well, congrats then! I've been waiting for news (and listening to
       | the "On The Metal" podcast) for a long while now, and this seems
       | like a great way to push the envelope on server hardware.
       | 
       | (plus I suspect there will be more to come...)
        
       | newaccount2021 wrote:
       | Bravo! Better servers for people who want to own their infra. Too
       | many people seek out cloud services just to get a modern control
       | plane. Server "UI" has been long neglected.
       | 
        | And finally, it's nice to see people with brains building real
       | things with nary a mention of "blockchain".
        
       | transfire wrote:
       | Estimated price?
        
         | wmf wrote:
         | Cheaper than AWS Outposts I guess. (Cheaper per core, not
         | necessarily per rack.)
        
       | keyle wrote:
       | Side note, great web page, I couldn't stop scrolling like an
       | addict on visual adrenaline.
        
       | kaliszad wrote:
       | As for the website, some of the animations could use a spring
       | like movement profile to feel more physical. The website also
       | isn't reachable over IPv6, so I would be very careful with the
       | promised IPv6 capabilities of the server too ;-)
        
       | psanford wrote:
       | Congrats Oxide team! More competition in this space is always a
       | good thing.
       | 
       | I'm curious about management. Can the rack operate completely
       | standalone? I assume when you have multiple there will be some
       | management abstraction above the rack layer?
       | 
        | The closest direct equivalent that I can think of to this is AWS
        | Outposts. Are there any others that I'm forgetting?
        
         | wmf wrote:
         | There are a bunch of enterprisey "private cloud" aka "converged
         | infrastructure" racks like VxRack, Nutanix, etc.
        
         | _delirium wrote:
         | The density they're getting here is significantly higher than
         | AWS Outposts, which is interesting. The top-end (~$600k) AWS
         | Outposts seem to max out at around 1k CPUs and 4.5 TB RAM in a
         | rack (e.g. 12x m5.24xlarge = 12x 384 GB), while this rack can
         | house 2k CPUs and 30 TB (!) RAM.
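          | 
          | Rough arithmetic (instance specs are my assumption: an
          | m5.24xlarge is 96 vCPUs / 384 GB):
          | 
          |   # Outposts (12x m5.24xlarge) vs. the Oxide rack figures
          |   outpost_vcpus, outpost_ram_gb = 12 * 96, 12 * 384
          |   oxide_cores, oxide_ram_gb = 2048, 30 * 1024
          |   print(outpost_vcpus, outpost_ram_gb)  # 1152 4608
          |   print(oxide_cores, oxide_ram_gb)      # 2048 30720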
        
           | psanford wrote:
            | Outposts seem like a solution to the problem, "for regulatory
            | or compliance reasons, data must reside and be processed
            | within a physical space we control." For that problem, an
            | organization that is otherwise on AWS might find Outposts
            | appealing. I can imagine an engineering team's response to
            | such a requirement as "Oh yeah? Fine, but it's going to cost
            | you $600k per year per rack!"
           | 
           | I believe Oxide is attempting to capture a much broader
           | market than that.
        
           | ptomato wrote:
           | it's better than that, actually, as an AWS vCPU is a
           | hyperthread, not a full core, and these would have 4096
           | hyperthreads.
        
           | kaliszad wrote:
           | Yes, well it isn't that dense either. As I have written, it's
           | 32 CPUs (16x 2 CPUs). 1 TB of RAM per CPU is not that huge a
           | deal, it's perhaps 16x 64 GB (Milan uses 8 channels, 2 DIMMs
           | per channel is reasonable), if you consider that is 16 GB of
           | RAM per core. In HPC, you would probably shrink it to 1/4 of
           | the volume (half width, 1 U dual socket server). Oxide
           | probably focuses on optimal thermal efficiency since their
           | limit isn't the space so much as the power density/ max.
           | power per rack in existing DCs, which they are already
           | pushing hard. (Of course they have lower power options too
           | but they probably will not use 2048 cores.)
        
             | Tuna-Fish wrote:
             | There are 32 half-width compute blades there, so probably
             | single socket servers with 64 cores each.
        
               | kaliszad wrote:
                | It does look like it. Perhaps it is easier for
                | maintenance, if you don't want to start with a full rack.
                | The upgrade granularity is finer.
        
             | spamizbad wrote:
             | The problem with pushing higher compute density is you're
             | running into the limits of what most DCs can provide in
             | terms of power and cooling for a single rack. Usually it's
             | specialized HPC facilities or hyperscalers pushing the
             | power and cooling to handle stuff like that. Those people
             | aren't likely Oxide's customers - they've already got their
             | own hardware solutions.
        
       | _jal wrote:
       | It sure looks pretty, but appears to be -
       | 
       | - dedicated to virtualization, done their way
       | 
       | - rather inflexible in hardware specs
       | 
       | - vendor-locked at the rack - if you have hardware from someone
       | else, it can't live in the same cabinet
       | 
        | I guess if you just want a pretty data center in a box and you
        | look like what they consider a 'normal' enterprise, it might
        | appeal. But I'm not sure how many people asked for Apple-style
        | hardware in the DC.
        
         | wmf wrote:
         | A lot of customers are asking for private cloud.
        
         | ex_amazon_sde wrote:
         | Vendor lock-in at rack level and custom servers? Sounds like
         | blade servers.
         | 
         | A smart company would stay away from this kind of strong lock-
         | in.
        
         | zozbot234 wrote:
         | > - dedicated to virtualization, done their way
         | 
         | > - rather inflexible in hardware specs
         | 
         | > - vendor-locked at the rack - if you have hardware from
         | someone else, it can't live in the same cabinet
         | 
         | This describes legacy IBM platforms quite well. If they can
         | leverage hyperscaling tech to be better and cheaper than what
         | IBM is currently offering, that's enough to make it worthwhile.
        
         | JeremyNT wrote:
         | > _dedicated to virtualization, done their way_
         | 
         | This is a selling point - if it's actually better (which, why
         | not? most of the existing virtualization management solutions
         | either suck or are hugely expensive).
         | 
         | If it's not better, big deal? I'm assuming you could just throw
         | Linux on these things and run on the metal or use something
         | different, right? Given how much bcantrill (and other Oxide
         | team members) have discussed loving open hardware, I seriously
         | doubt they would intentionally try to lock down their own
         | product!
         | 
          | > _vendor-locked at the rack - if you have hardware from
          | someone else, it can't live in the same cabinet_
          | 
          | This is aimed at players so big that they _want_ to buy at the
          | rack level and have no desire to ever touch or carve up
          | anything. It's a niche market, but for them this is actually
          | a plus.
        
         | rapsey wrote:
          | Why is it important what kind of virtualization? It works,
          | and since it is built for this hardware, it will likely be
          | more reliable than anything you're putting together yourself.
         | 
          | The specs are damn good. When it is all top-of-the-line,
          | inflexibility is kind of a moot point. Where else are you
          | going to go?
         | 
         | > But I'm not sure how many people asked for Apple-style
         | hardware in the DC.
         | 
          | Well-integrated, performant, and reliable hardware that runs
          | VMs where you can put anything on it is pretty much all that
          | anyone running their own hardware is looking for.
         | 
         | Honestly I am surprised how many here completely misunderstand
         | what their value proposition is.
        
           | [deleted]
        
           | _jal wrote:
           | > Why is it important what kind of virtualization?
           | 
            | Because if I ran this, I would have to manage it. Given that
            | I have lots of virtualization to manage already, I would want
            | it to use the same tooling, for rather obvious reasons.
           | 
           | > is pretty much all everyone running their own hardware is
           | looking for.
           | 
           | I don't think you talk to many people who do this, but as
           | someone who manages 8 figures worth of hardware, I can tell
           | you that is absolutely not true.
           | 
            | > The specs are damn good. When it is all top-of-the-line,
            | inflexibility is kind of a moot point. Where else are you
            | going to go?
           | 
            | To some hardware that actually fits my use case and is
            | manageable in an existing environment? Oh wait - I already
            | have that. I mean, seriously - do you think they're the only
            | shop selling nice machines?
           | 
           | The value-add is all wrong, unless you are a greenfield
           | deployment willing to bet it all on this particular single
           | vendor, and your needs match their offering.
        
             | adrianmonk wrote:
             | > _lots of virtualization to manage already, I would want
             | it to use the same tooling_
             | 
             | I'm not saying you would want to, but maybe their
             | expectation is that you'd plan to transition everything to
             | their system. Either gradually as part of the normal cycle
             | of replacing old hardware or all at once if you want to be
             | aggressive.
             | 
              | _If_ their way is actually better, then it might make
              | sense. You'd go through an annoying transition period but
              | be better off in the end.
             | 
             | The hardware options do seem limited, but maybe that would
             | change if their business takes off and they get enough
             | customers to justify it. They're definitely saying
             | simplicity is a good thing, but maybe that's just marketing
             | spin that sounds better than the alternative of saying
             | they're not yet in a position to offer that flexibility.
        
             | mappu wrote:
             | I don't see details on the API, but it seems likely you
             | could write a libvirt provider for it and use existing
             | virsh tooling (Cockpit / CloudStack / ...).
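              | 
              | A minimal sketch of the libvirt-python client code
              | such tooling builds on (shown against a plain local
              | qemu URI; an Oxide libvirt driver is hypothetical):
              | 
              |   import libvirt  # pip install libvirt-python
              | 
              |   # A hypothetical Oxide driver would swap the URI.
              |   conn = libvirt.open("qemu:///system")
              |   for dom in conn.listAllDomains():
              |       print(dom.name(), dom.isActive())
              |   conn.close()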
        
         | tyingq wrote:
          | _"But I'm not sure how many people asked for Apple-style
          | hardware in the DC."_
          | 
          | It's probably selling to the _"Amazon-style hardware in your
          | DC"_ market, which I think should be fairly ripe. Building
          | your own private cloud from parts defeats a lot of the
          | purpose... avoiding doing your own plumbing.
        
       | fierro wrote:
       | When can I run Kubernetes on this to host my personal blog?
        
       | tw04 wrote:
       | The storage is always the difficult part in these architectures.
       | Are you distributing across all nodes? It appears that each sled
       | is an individual compute unit with 10 drives. Are the drives on a
       | proverbial island and only accessible to that local node, or is
       | there some distributed storage going on that you can talk about?
       | 
       | On paper with RDMA and NVMe-OF you could access any drive from
       | any compute unit... but that's easier said than done :)
        
       | MangoCoffee wrote:
       | >AMD MILAN
       | 
        | Intel is losing on the client and the server. Everyone is
        | jumping ship to either ARM or AMD for client/server. Hopefully
        | Intel's new engineer CEO can turn it around like AMD's
        | engineer CEO (Lisa Su) did.
        
         | dsr_ wrote:
          | Few companies buy on pure performance. Right now, AMD has the
          | performance kings _and_ the price/performance kings.
         | 
         | Intel could win price/performance, but they would need to
         | cannibalize their own low-end and mid-range market. If they
         | could make a good bet that they would have high yield in one
         | more cycle, that would make sense. If they don't think that
         | will happen, there's nothing much that will save them, and
         | they're extracting the money that they can right now.
        
           | StreamBright wrote:
           | I thought M1 had the best price/performance numbers, too bad
           | Apple does not sell CPUs.
        
             | dsr_ wrote:
             | Or, more relevant to this discussion, servers.
        
               | zozbot234 wrote:
               | They sell rack-mountable hardware (Mac Pros) running a
               | certified Unix OS. It's only a matter of time until those
               | are based on Apple Silicon too.
        
               | floatboth wrote:
               | Rack Mac Pros are targeted at racks that are mostly
               | filled with audio/video equipment. Apple really doesn't
               | seem to have _any_ interest in selling server products
               | again.
        
               | zozbot234 wrote:
               | So? It's still rack-mountable workstation-class hardware
               | that will probably be running on Apple Silicon at some
               | point. And it will probably be possible to boot Linux on
               | it, similar to existing M1 Macs. That's pretty
               | indistinguishable from many servers.
        
               | simtel20 wrote:
                | You seem to be thinking that a server and a workstation
                | are the same, ignoring that server SKUs need OOB
                | management, APIs, hardware support, and so many other
                | things as table stakes.
        
               | zozbot234 wrote:
               | Did Xserve have any of that stuff?
        
               | kbenson wrote:
               | No, CPUs is more relevant. Linux runs on M1, and if they
               | sold CPUs, someone would make a board they could be put
               | on that fit in standard server form factors. For this
               | type of comparison, people want CPUs, not the next
               | version of Xserve.
        
         | ksec wrote:
          | Intel still has ~90% of the x86 server market in unit
          | shipments, and slightly higher in revenue. And their renewed
          | roadmap from Pat Gelsinger seems to pull a lot of their
          | products forward (rightly so).
          | 
          | That is speaking as someone who wants AMD to grab more market
          | share (and who has been saying the same for nearly three
          | years, constantly being told off by AMD fans that they are
          | doing fine).
        
         | mhh__ wrote:
          | I've seen evidence that people are jumping ship to ARM, but
          | is the same true of AMD? This is the best shot they're going
          | to get at it, and I for one haven't heard all that much
          | pro-AMD noise.
          | 
          | They make really nice chips, but what happens if BigCorpXYZ
          | just gets a quote from AMD and goes straight to Intel to get
          | it matched - i.e. the cloud isn't that performance-intensive,
          | so now they get to stay on the Intel stack for less money.
        
       | zapita wrote:
       | Who is the target customer for this?
        
         | salmo wrote:
         | I get the target of HyperConverged infrastructure. It's a
         | pretty big market: potentially all private/colo datacenters.
         | And there are only a few players left. Dell/EMC/VMWare,
         | Nutanix, Cisco's schizophrenic offerings, a waning HP, cloud
         | providers trying to make a 'hybrid' play, etc. And most don't
         | buy one or two of these things. It's rows and rows.
         | 
          | But most of those are so entrenched and wrapped up in their
          | customers. I imagine the target here is actually acquisition;
          | it would just be too hard to get a foothold as an up-and-comer.
          | 
          | Also, it usually means giving loaner gear to companies for an
          | extended period for them to evaluate pre-purchase, showing your
          | support, etc. That's a lot of up-front cost for someone without
          | a war chest.
         | 
         | I'm also kind of surprised by the site. It sells to geeks well,
         | but isn't the normal "look at our customers in key industry
         | segments!", "something something Gartner magic quadrant",
         | "whitepapers!" thing. Selling to execs on these things is
         | usually a matter of convincing them they're not making a "bad"
         | decision. They're "cool", but enough industry people agree with
         | them that it's not career limiting if it doesn't pan out.
         | 
         | I like the idea of the product, and it would be nice to have
         | another player. But it's like starting a new car company, and I
         | feel like they're selling to mechanics.
        
         | rapsey wrote:
         | Plenty of companies live outside the cloud on their own
         | hardware.
        
         | caeril wrote:
         | Cantrill has always said it's for people who want Facebook
         | class on-premise infrastructure but don't have a $900B market
         | cap and a hundred engineers designing and building custom
         | boxes.
         | 
         | Oh, and open firmware.
        
           | jeffbee wrote:
           | Can definitely see it for a company size of Dropbox, big
           | enough to already be working with ODMs, big enough to be
           | sensitive to the kind of headaches you get from a
           | heterogeneous fleet of ILOM processors designed by deranged
           | engineers.
        
             | qbasic_forever wrote:
             | Dropbox is big enough that they could just acquire Oxide
             | now before they even get to market. That might even be the
             | plan all along. I can't imagine there are more than like a
             | dozen companies that are their target market, i.e. big
             | enough to need Facebook-level datacenters but not big
             | enough (yet) to have that engineering team.
        
         | lumost wrote:
          | This level of integration surely won't come cheap. From what
          | I recall of server purchasing, a target price of ~$200-500k
          | per rack would be expected, with a TCO of roughly 2x the rack
          | price over 3 years (assuming you are buying from
          | Quanta/Supermicro or another commodity integrator).
          | 
          | It's possible the prices are different now, but you would need
          | customers looking to drop >$1 million in CapEx for the
          | management capabilities they are providing. Possibly non-cloud
          | Fortune 500?
        
           | kraig wrote:
           | They'd have to sell these at a significant loss to make up
           | for the risk any company would have to take to build out a DC
           | on first generation hardware from a startup.
        
           | riking wrote:
           | The core customer profile appears to be aging startups that
           | know how their software behaves and are currently way over 2k
           | cores on AWS.
        
             | benlivengood wrote:
             | Startups with no significant use of anything from AWS other
             | than EC2+EBS with an existing well-tested storage migration
             | procedure.
        
           | dcolkitt wrote:
            | That reminds me a lot of the Sun Microsystems mega-servers
            | from 20 years ago. Those were kind of the cure-all solution
            | for high-scalability web services before Google et al.
            | pioneered cloud-like services on commodity hardware.
        
             | nickik wrote:
              | It's kind of like taking those servers, putting a
              | cloud-like software layer on top, and making many of them
              | work together.
        
         | atonse wrote:
         | I can see this also being appealing for large orgs with
         | sensitive data they don't want to put in a public cloud.
        
         | convolvatron wrote:
         | good question. not someone too large to want to pay the per-
         | node margins. not someone too small to want to pay .. not
         | someone who is satisfied with using VMs on a cloud provider.
         | not someone who is selling these as part of a turnkey solution
         | for whatever segment is left.
         | 
         | that said, I do feel persistently sad that we can't fix
         | structural problems because the market is such a gradient
         | descent world.
        
       | calvinmorrison wrote:
        | After the acquisition of Joyent by Samsung, who here is
        | interested in buying extremely locked-down hardware that seems
        | to be a next-gen SmartOS (provisioning etc.), i.e. rethinking
        | networking and nodes in a holistic sense?
        | 
        | The proposition is good; the history is bad.
        
       | gjvc wrote:
       | 0xide Computer Company deserves to do well.
       | 
       | This is a solid private-cloud play aimed at those corporations
       | (probably mainly financials, but other sectors too I'm sure...)
       | who don't want to outsource to the likes of AWS / GCP.
        
         | twoodfin wrote:
         | Not just "don't want to", I'd hope: They should be able to win
         | on the economics, too, assuming customers that care more about
         | TCO for a fixed or steady-growing workload rather than
         | elasticity.
        
         | numbsafari wrote:
         | Healthcare is definitely another.
         | 
         | Government, too (especially non-US).
        
           | tgoneaway wrote:
           | They are going to have to get their employees to stop burning
           | down federal courthouses first. They'll never get a FCL with
           | that history.
           | 
           | Steve is a well-known outspoken anarchist that applauds
           | violence for his politics and Bryan isn't much better with
           | his woke agenda.
        
       | alberth wrote:
       | Who is the target buyer?
       | 
       | Is this to compete against Nutanix / VCE?
        
         | etcet wrote:
         | We use Nutanix where I work and this has made everyone very
         | excited. Though they would need something similar to Nutanix CE
         | to make us switch entirely (i.e. the ability to run non-
         | production unsupported on commodity hardware).
        
         | fuzzylightbulb wrote:
         | I think it would more play in the space of a NetApp/Cisco
         | FlexPod or VCE's Vblock, but what those customers are really
         | purchasing is the certified validation of core enterprise apps
         | on a particular hardware/software stack, as well as the massive
         | engineering and support organizations that those companies can
         | bring to bear to validate firmware upgrade and to swoop in in
         | the event of an issue. You also seem to get a LOT more
         | flexibility.
         | 
         | I am not a hater in the least but I really am failing to
         | understand what is unique about this offering. It seems like
         | you have no options regarding the internals, and so scaling
          | compute separately from storage doesn't seem possible. I am
          | also very skeptical of offerings like this that have not yet
          | released a second version of their key parts. Everyone says
          | that they are going to be backwards compatible, but then the
          | reality of managing heterogeneous generations of gear in a
          | homogeneous fashion strikes, and you get weird behavior all
          | around.
         | 
         | Long story short, I would love to know what a customer of this
         | scale of physical infrastructure is getting with Oxide that
         | they would not be better served by going to one of the major
         | vendors.
        
           | Tuna-Fish wrote:
           | Firmware that does not suck?
           | 
           | Because that's something the current "major vendors" really
           | are irredeemably terrible at.
        
       | diwu1989 wrote:
        | Looks like a white-labeled Viking VDS2249R running custom
        | software.
        
         | ChrisMarshallNY wrote:
         | This is not my area of expertise, but it does look like
         | that[0].
         | 
         | That "custom software," though, is where the magic often lies.
          | As a software person who worked at hardware companies for
          | most of my career, I know all too well how disrespectful
          | hardware people can be of software. If they have a good
          | software-respecting
         | management chain, then it might be pretty awesome.
         | 
         | [0] https://www.prnewswire.com/news-releases/viking-
         | enterprise-s...
        
           | selectodude wrote:
           | Well if Oxide's software stack is going to be open source, I
           | guess we'll get a good look into their secret sauce.
        
       | Sanguinaire wrote:
       | Looks fantastic, and the hardware specs appeal to me greatly -
        | but I'm not sure there is an actual market outside the "cult of
        | personality" bubble. A few SV wannabes will buy into this to
        | trade on a Twitter relationship with the Oxide founders - but
        | does anyone really see the IT teams at Daimler, Procter &
        | Gamble, Morgan Stanley... et al. actually going for this over
        | HPE/Dell and AWS/Azure? We are a long way from "Nobody ever got
        | fired for buying from Oxide".
        
         | nickik wrote:
          | Like everything else: start small and grow, if you have a
          | good product that actually works.
        
         | tyingq wrote:
         | You wouldn't have to pitch it initially as a replacement for
         | your on-prem HPE/Dell. It could be pitched as a replacement for
         | the hosted private cloud you have from IBM, Oracle, etc, that
         | you're unhappy with.
        
       | loudmax wrote:
       | As I understand it, Oxide is going to have deep software
       | integration into their hardware. So the expectation isn't that
       | the servers in this rack will be running Windows or a generic
       | Linux distribution. In case anyone from Oxide is here, is my
       | understanding correct? And if so, will there be a way to run a
       | smaller version of an Oxide system, say for testing or
       | development, without purchasing an entire rack at a time?
       | 
       | Anyway, glad to finally get a glimpse of what Oxide has to offer.
       | Looking forward to seeing a lot more.
        
         | 2trill2spill wrote:
          | My understanding is you will use an API to provision virtual
          | machines on top of the Oxide hypervisor/software stack, which
          | is bhyve running on Illumos. So you can still just run your
          | favorite Linux distro, or Windows, or a BSD if you want[1].
         | 
         | [1]: https://soundcloud.com/user-760920229/why-your-servers-
         | suck-...
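          | 
          | There are no public API docs yet, so purely as a
          | hypothetical sketch of what "provision a VM over an API"
          | tends to look like (the endpoint and fields below are
          | made up, not Oxide's):
          | 
          |   import requests
          | 
          |   # Hypothetical endpoint and payload, for illustration.
          |   payload = {"name": "vm-01", "vcpus": 8,
          |              "memory_gb": 32, "image": "debian-11"}
          |   r = requests.post("https://rack.example/api/instances",
          |                     json=payload, timeout=30)
          |   r.raise_for_status()
          |   print(r.json())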
        
           | SSLy wrote:
           | bhyve? what happened to the KVM port?
        
         | jhickok wrote:
         | Agreed, I would love to hear more about the management plane.
         | I'm glad it's API-driven, but I still have some questions about
         | things like which hypervisor they are using.
         | 
         | If it's a custom software stack, might be nice to get a
         | miniature dev-kit!
        
           | kaliszad wrote:
           | They will use Illumos with Bhyve, @bcantrill said it in a
           | podcast just a few months ago. I have linked it somewhere in
           | my comments (look at my profile).
        
           | [deleted]
        
        | "Only" 2048 CPU cores per rack is actually not that much by
        | today's standards - it's 16 U of 2x 64-core CPUs. Perhaps it
        | could be more U if they used lower-core-count but higher-
        | frequency-per-core SKUs, but I don't think they do (and the
        | picture kind of confirms it). They use 2U servers, though, so
        | they are able to use slower but bigger fans, and perhaps have
        | more expansion cards and 2.5" form-factor drives. They of
        | course also have to fit storage, which needs lots of CPU PCIe
        | lanes for all the NVMe storage and networking (probably 2 or
        | 4 U), plus power conversion to feed the bus bar, and more,
        | somewhere. They probably use standard 42 U+ 19" racks to fit
        | in standard customers' DCs. They also don't have as high a
        | power budget as custom DCs for cloud providers do.
        | 
        | 1 PB of flash is quite a bit, but you could probably get
        | around 5x as much with HDDs (even with a relatively low
        | density of 40x 12x 12 TB). The problem, I think, is really
        | that they wouldn't be able to write HDD firmware in Rust in
        | time (or at all, because no HDD manufacturer would sell them
        | an HDD without making sure their proprietary firmware is
        | used). SSDs don't necessarily have this property, as they are
        | much more like the other components of a modern server.
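        | 
        | Rough numbers behind the above (my assumptions, not
        | confirmed specs):
        | 
        |   cores = 16 * 2 * 64      # 16x 2U dual-socket 64-core Milan
        |   print(cores)             # 2048
        |   flash_tb = 1024          # 1 PB of NVMe as advertised
        |   hdd_tb = 40 * 12 * 12    # 40x 12-bay x 12 TB HDDs
        |   print(hdd_tb / flash_tb) # ~5.6x the raw capacity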
        
       | stefan_ wrote:
       | Why are we hard coupling the hardware to the software? The whole
       | secret of the success of M1 and ARM in servers is that lots of
       | software has long ago stopped being hyper-aware of what hardware
       | it is running on.
       | 
       | What software are we talking about anyways? It's all incredibly
       | vague, but it seems to reach all the way into the Kubernetes
       | sphere. Why would I run this over something I can use on my next
       | job?
        
         | monocasa wrote:
         | It's probs cheaper than AWS if you already have on prem infra.
         | AWS has pretty damn good margins.
         | 
         | And the idea of "these racks are my kubernetes cluster and are
         | supported by the OEM as such" has a lot of value to a lot of
         | the medium sized IT departments I've run across.
         | 
          | Can you expand on what you mean by "coupling the hardware to
          | the software"?
        
         | alanbernstein wrote:
         | Is Apple really a good counterexample for the success of
         | integrating software and hardware?
        
         | tediousdemise wrote:
         | > Why are we hard coupling the hardware to the software? The
         | whole secret of the success of M1 and ARM in servers is that
         | lots of software has long ago stopped being hyper-aware of what
         | hardware it is running on.
         | 
         | The software running on M1 is a bespoke fit for it. That's why
         | the performance in macOS on M1 is phenomenal. It was custom
         | made to execute optimally on it.
        
         | rcxdude wrote:
         | It's basically a mini-ec2 in your server room. The software and
         | hardware give you a platform to deploy VMs configured however
         | you want.
        
       | ukd1 wrote:
        | Wow, I love the graphics on this website. Seems like they could
        | be functional for actual interfaces - is that the plan?
       | 
       | Congrats Bryan et al.
        
       | yabones wrote:
       | Aesthetically, absolutely love it.
       | 
       | In reality I would never want this type of hardware... It reminds
       | me of the old boat anchor bladecenter rigs we used to use. They
       | were great, up until you had to replace one of the blades after
       | the support was up. It's not always practical to replace hardware
       | every 3 years like we're supposed to, so this type of stuff
       | sticks around and gets some barnacles.
       | 
       | What would be fantastic would be if the entire industry committed
       | to an open spec for large chassis like this with a standardized
       | networking and storage overlay... But that would never happen
       | because vendor lock-in is the big money maker in 'enterprise'.
       | 
       | But wow, absolutely gorgeous machines.
        
         | zozbot234 wrote:
         | > What would be fantastic would be if the entire industry
         | committed to an open spec for large chassis like this with a
         | standardized networking and storage overlay
         | 
         | Isn't the Open Compute Project supposed to be working on that
         | kind of stuff?
        
       | fsociety wrote:
       | I kind of want one of these in my home, I wonder what the price
       | range will be.
        
         | salmo wrote:
         | Now I just like the idea of you telling the electrician that
         | you need a 3-phase >5kVA PDU. :)
        
         | gwbrooks wrote:
         | I'm adding one of these to the if-I-win-the-lottery list. Nifty
         | home media server AND it'll probably keep the house warm during
         | the winter.
        
         | softfalcon wrote:
         | "2048 x86 cores per rack"
         | 
         | Probably a lot more than any sane person would pay for a home
         | server.
         | 
         | But I won't stop you from trying! Wouldn't it be cool to have
         | that plugged into your local network?
        
           | bayindirh wrote:
           | I'd get one. Would connect hot side to home HVAC and just run
           | scalability tests of my code. That'd be hot, nice and
           | expensive.
           | 
           | And possibly noisy. Yes, noisy.
        
         | ptomato wrote:
         | for what they're advertising there, on the order of $1mm, based
         | on underlying hardware costs.
        
           | kaliszad wrote:
            | Actually, a standard Azure rack is on the order of $1.1 mm
            | of hardware, depending on SKU, if I am not mistaken. So I
            | would guess it could be more like $2 mm. There is also the
            | aspect of management, and other vendors (Dell/EMC + VMware)
            | like you to pay way more than the hardware cost for e.g.
            | VxRail/vSphere licences. That is the real target.
        
             | salmo wrote:
             | And maintenance. I like the idea of having to give them a
             | key to your house to replace a disk when the SMART alert
             | phones home. :)
        
       | boulos wrote:
       | First, congrats!
       | 
       | But second, I'd love to understand the compute vs storage
       | tradeoff chosen here. Looking at the (pretty!) picture [1], I was
       | shocked to see "Wow, it's mostly storage?". Is that from going
       | all flash?
       | 
       | Heading to https://oxide.computer/product for more details,
       | lists:
       | 
       | - 2048 cores
       | 
       | - 30 TB of memory
       | 
       | - 1024 TB of flash (1 PiB)
       | 
       | Given how much of the rack is storage, I'm not sure which Milan
        | was chosen (and so whether that's 2048 threads or 4096 [edit:
        | real cores, 4096 threads]), but it seems like visually 4U is
       | compute? [edit: nope] Is that a mistake on my part, because dual-
       | socket Milan at 128 threads per socket is 256 threads per server,
       | so you need at least 8 servers to hit 2048 "somethings", or do
       | the storage nodes also have Milans [would make sense] and their
       | compute is included [also fine!] -- and so similarly that's how
       | you get a funky _30_ TiB of memory?
       | 
       | [Top-level edit from below: the _green_ stuff are the nodes,
       | including the compute. The 4U near the middle is the fiber]
       | 
       | P.S.: the "NETWORK SPEED 100 GB/S" in all caps / CSS loses the
        | presumably 100 _Gbps_ (though the value in the HTML is 100
        | gb/s, which is also unclear).
       | 
       | [1]
       | https://oxide.computer/_next/image?url=%2Fimages%2Frenders%2...
        
         | neurotixz wrote:
         | Power footprint also confirms that the compute density is
         | pretty low.
         | 
          | We built a few racks of Supermicro AMD servers (4x compute
          | nodes in 2U), and we load-tested them to 23 kVA peak usage
          | (about 1/2 full with that type of node only; our DC would let
          | us go further).
          | 
          | We're also over 1 PB of disks (unclear how much of this is
          | redundancy), also in NVMe (15.36 TB x 24 in 2U is a lot of
          | storage...).
          | 
          | Other than that, not a bad concept; not sure what premium they
          | will charge or what will be comparable on price.
        
         | jsolson wrote:
         | +1 to congrats -- my read on this:
         | 
         | - There's a bunch of RJ45 up top that I don't quite understand
         | :)
         | 
         | - A bunch of storage sleds
         | 
         | - A compute sled, 100G QSFP switch, compute sled sandwich
         | 
         | - Power distribution (rectifiers, I'd think, unless it's AC to
         | the trays?)
         | 
         | - Another CSC sandwich
         | 
         | - More storage.
         | 
         | I assume in reality we'd have many more cables making things
         | less pretty, given the number of front-facing QSFPs on those
         | ToRs.
        
           | kaliszad wrote:
           | They use a bus bar design. That is what @bcantrill also said
           | in an interview.
        
         | ptomato wrote:
          | It looks like they're doing 2U half-width nodes, so I'd
          | strongly suspect each node is 1 TB of RAM, one EPYC 7713P, and
          | 10x 3.2 TB U.2/U.3 drives.
          | 
          | ETA: I also suspect the 30 TB total just means they're leaving
          | 64 GB of RAM for the hypervisor OS on each node.
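          | 
          | The arithmetic checks out under those assumptions (32
          | half-width sleds, per the guess upthread):
          | 
          |   sleds = 32
          |   ram_tb = sleds * (1024 - 64) / 1024  # 64 GB held back
          |   flash_tb = sleds * 10 * 3.2          # 10x 3.2 TB U.2
          |   print(ram_tb, flash_tb)              # ~30 TB, ~1024 TB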
        
           | kaliszad wrote:
            | Leaving that RAM for the ZFS ARC perhaps? I don't think they
           | would use Illumos as the hypervisor OS without also using
           | OpenZFS with it. They also need some for management, the
           | control UI, a DB for metrics and more.
           | 
           | Btw. if I count correctly, they have 20 SSD slots per node
           | (if a node is full width) and 16 nodes. They would need 2 TB
           | to reach 1 PB of "raw" capacity with the obvious redundancy
           | overhead of ~ 20%.
           | 
           | It is also quite possible, they don't use ZFS at all and use
           | e.g. Ceph or something like it but I don't think that is the
           | case, because that wouldn't be cantrillian. :-) E.g. using
            | Minio, they can provide something S3-like on top of a cluster
            | of ZFS storage nodes too, but they most likely get better
            | latency with local ZFS than with a distributed filesystem.
            | Financial institutions especially seem to be part of the
            | target here, and there latency can be king.
        
             | ptomato wrote:
              | I'm fairly confident the nodes are half-width; if you look
              | at the latches it very much appears you can pull out half
              | of every 2U at once, and if you look at the rear there are
              | two network cables going into each side.
        
         | richardwhiuk wrote:
         | Suspect each node is both storage and compute.
         | 
         | Guessing they aren't counting threads (they say "cores"), so 64
         | cores per socket, 128 cores per server, 16 servers => 2048
         | cores.
        
           | boulos wrote:
           | Duh! I got tricked by the things near the PDU as "oh, these
           | must be the pure-compute nodes".
           | 
           | So maybe that's the better question: what are the 4U worth of
           | stuff surrounding the power? More networking stuff?
           | Management stuff? (There was some swivel to the back of the
           | rack / with networking, but I can't find it now)
           | 
           | Edit: Ahh! The rotating view is on /product and so that ~4U
           | is the fiber. (Hat tip to Jon Olson, too)
        
             | samstave wrote:
             | Control-plane most likely, and having a mid-centered PDU
             | probably adds to heat on the upper stack, which shortens
             | life over time.
             | 
             | As someone who has designed quite a few datacenters, whats
             | more interesting to me in this evolution of computing is
             | the reduction in cabling.
             | 
             | Cabling in a DC is a huge suck on all aspects - plastics,
             | power, blah blah blah - the list is long....
             | 
             | But there are a LOT of cabling companies that do LV out
             | there - so the point is that when these types of systems
             | get more "obelisk" like, are many of these companies going
             | to die? (I'm looking at you Cray and SGI.)
             | 
             | When I worked at Intel - I had a friend who was a proc
             | designer at MIPS - and we talked about rack insertion and a
             | global back-plane for the rack (which we all know to be
             | common now) - but this was ~1997 or so... but when I built
             | the Brocade HQ - cables were still massive and it was an
             | art to properly dress them.
             | 
             | Lucas was the same - so many human work hours spent on just
             | cable mgmt...
             | 
              | Their diagram of system resiliency is odd in my opinion:
             | 
             | https://i.imgur.com/GB0fzIl.png
             | 
             | That looks like a ton of failures that they can
             | negotiate...
             | 
              | What's weird is the SPOF isn't going to be in your
              | DC/HQ/whatever - it's going to be outside - this is why we
              | have always sought 2+ carrier ISPs or built private
              | infra...
             | 
             | A freaking semi truck crashed into a telephone pole in
              | Sacramento the other day and wiped Comcast off the map for
              | half the region.
             | 
             | https://sacramento.cbslocal.com/2021/05/25/citrus-heights-
             | an...
             | 
              | That's ONE fiber line that brought down 100K+ connections...
             | 
             | ---
             | 
              | EDIT: I guess what I am actually saying is that this entire
              | marketing strat is to convince companies that _"failure is
              | imminent, so please buy things that are going to fail, but
              | don't worry, because you bought plenty more things to live
              | beyond the epic failure that these devices will have."_
             | 
             | ---
             | 
             | Not to discredit anything this company has going for its
              | product - but their name is literally "RUST" (_oxide_),
              | which we all know is what kills metal.
              | 
              | And what do we call servers: _Bare Metal_.
        
       | ebeip90 wrote:
       | I wonder if they used Monodraw[1] to create their diagrams?
       | 
       | Looks like they did!
       | 
       | [1]: https://monodraw.helftone.com
        
       | benlivengood wrote:
       | It's interesting that the RAM/CPU ratio is about double the
       | default shapes from AWS/GCP. In practice I have generally seen
       | those shapes run on the low side of CPU utilization for most
       | workloads, so I think the choice makes sense.
       | 
       | I'm curious if ARC will be running with primarycache=metadata to
       | rely on low latency storage and in-VM cache, otherwise I could
       | see ARC using a fair bit of that RAM overhead in the hosts.
        
       | notacoward wrote:
       | All NVMe seems like a good _starting_ point, but I 'd hope that
       | some day there will be a more capacity-oriented variant for
       | people who actually know what they're doing with exabyte-scale
       | storage.
        
       | [deleted]
        
       | hlandau wrote:
       | "Attests the software version is the version that is valid and
       | shipped by the Oxide Computer Company"
       | 
       | So in other words these servers will implement restrictive code
       | signing practices and will be vendor-controlled, not owner-
       | controlled?
       | 
       | This is not my idea of "secure", and really in the wake of things
       | like the Solarwinds or RSA hacks it shouldn't be anyone's idea of
       | secure. Vendor-holds-the-keys is not an acceptable security
       | model.
       | 
        | A comment below mentions open firmware; open firmware is
        | useless without the right to deploy modified versions of it.
       | 
       | Happy to take clarification on this.
        
         | gruez wrote:
         | "Attest" refers to
         | https://en.wikipedia.org/wiki/Trusted_Computing#REMOTE-
         | ATTES..., not forced code signing
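          | 
          | Roughly, the verifier side of remote attestation looks
          | like this toy sketch (nothing Oxide-specific; an HMAC
          | stands in for the device's attestation key):
          | 
          |   import hashlib, hmac
          | 
          |   def extend(reg, measurement):
          |       # hash-extend a register, TPM PCR style
          |       return hashlib.sha256(reg + measurement).digest()
          | 
          |   stages = [b"boot-v1", b"kernel-v1", b"rootfs-v1"]
          |   key = b"device-secret"  # stand-in for a burned-in key
          | 
          |   # Device: measure each stage, then "sign" the result.
          |   reg = b"\x00" * 32
          |   for s in stages:
          |       reg = extend(reg, hashlib.sha256(s).digest())
          |   quote = hmac.new(key, reg, hashlib.sha256).digest()
          | 
          |   # Verifier: recompute from the versions it trusts.
          |   want = b"\x00" * 32
          |   for s in stages:
          |       want = extend(want, hashlib.sha256(s).digest())
          |   good = hmac.new(key, want, hashlib.sha256).digest()
          |   print(hmac.compare_digest(quote, good))  # True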
        
           | hlandau wrote:
           | I'm familiar with the concept. Does this mean that
           | attestation to a different root of trust than Oxide will also
           | be feasible, and that this is just a default?
        
             | hobofan wrote:
             | It's an "enterprise" product, so you can be certain that
             | some amount of extra money will buy you that capability (if
             | it's not already included).
        
             | gruez wrote:
             | Oxide makes the hardware, so it makes sense to use them as
             | the root of trust since you already have to trust them to
             | not make backdoored hardware. Why bother adding more
             | parties? Also, for remote attestation to make sense, it
             | needs to be done in the hardware itself (ie. keys burned
             | into the silicon). I'm not sure how that's supposed to work
             | if you add your own keys, or whether that would even make
             | sense.
        
               | hlandau wrote:
               | "it needs to be done in the hardware itself (ie. keys
               | burned into the silicon)" - this isn't true; this is
               | confusing Trusted Boot and Secure Boot, which are not the
               | same thing (nor is it the only way of implementing Secure
               | Boot).
               | 
               | Owner-controlled remote attestation is entirely viable,
               | e.g. Talos II is capable of this with a FlexVer module.
        
               | gruez wrote:
               | > "it needs to be done in the hardware itself (ie. keys
               | burned into the silicon)" - this isn't true; this is
               | confusing Trusted Boot and Secure Boot, which are not the
               | same thing (nor is it the only way of implementing Secure
               | Boot).
               | 
               | I meant as opposed to keys/signing done in software.
               | 
               | >Owner-controlled remote attestation is entirely viable,
               | e.g. Talos II is capable of this with a FlexVer module.
               | 
                | I skimmed the product brief[1] and it looks like it's
                | basically a TPM that has a secure communications channel
                | (as opposed to LPC, which can be MITMed)? I'm not really
                | sure how this is an improvement, because you're still
                | relying on the hardware vendor to send the PCR values. So
                | at the end of the day you still have to trust the hardware
                | vendor, even though the signing is done by you.
               | 
               | [1] https://www.raptorengineering.com/TALOS/documentation
               | /flexve...
        
         | mmphosis wrote:
         | Open source firmware, vs "Open Firmware". The oxide.computer
         | web page mentions open source firmware.
         | 
         |  _Secure boot chain
         | 
         | Our boot flow is secure by default. Our firmware is open source
         | and attestable._
         | 
         | There is a link to "Explore Repos." Is coreboot the open source
         | firmware?
         | 
         | https://github.com/oxidecomputer/coreboot
         | 
         | Open Firmware is different than coreboot.
         | 
         | https://en.wikipedia.org/wiki/Open_Firmware
        
       | tom_mellior wrote:
       | Something about this website takes my machine to 500% CPU load
       | (Firefox/Linux). Good thing we will get 2048 cores soon...
        
         | faitswulff wrote:
         | Ha! So I'm not the only one. My wife thought I had fired up a
         | video game.
        
       | agentdrtran wrote:
       | I would love a mini version of this for homelabs.
        
         | wmf wrote:
         | If the software stack ends up being open source someone could
         | make a name for themself by porting it to run on Linux + random
         | hardware.
        
       | [deleted]
        
       | discardable_dan wrote:
       | I don't know if I have a use for something like this, but the
       | website aesthetics are just plain awesome.
        
         | tyingq wrote:
         | Yeah, I noticed that too. The green wireframe looking stuff is
         | actually text in spans/divs next to, or overlayed on pictures.
         | The little "nodes" are this character, for example: [?]. The
         | effect is pretty unique.
        
           | goodpoint wrote:
            | Looks like simple ASCII art, typically used by CLI tools.
        
       | jmartrican wrote:
       | Why the elevation constraint?
       | 
       | "The elevation of the room where the rack is installed must be
       | below 10,005 feet (3,050 meters)."
        
         | mikey_p wrote:
         | Perhaps a constraint on cooling effectiveness due to air
         | density?
        
         | dralley wrote:
          | At 10,000 feet the air pressure is about 1/3 less than at sea
          | level; less density means less capacity for carrying heat, so
          | the cooling might not be sufficient.
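          | 
          | Quick check with the standard-atmosphere barometric
          | formula (troposphere constants):
          | 
          |   def pressure_ratio(h_m):
          |       return (1 - 2.25577e-5 * h_m) ** 5.25588
          | 
          |   print(round(pressure_ratio(3050), 2))  # ~0.69
          |   # density falls a bit less, since it's colder up there
          |   t_ratio = (288.15 - 0.0065 * 3050) / 288.15
          |   print(round(pressure_ratio(3050) / t_ratio, 2))  # ~0.74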
        
           | notacoward wrote:
           | Bingo. I've personally had to deal with this in other high-
           | density systems. Less cooling not only has the obvious
           | effects, but also reduces PS efficiency which can cause other
           | problems. Cosmic-ray-induced memory errors can also be a
           | problem at those altitudes (or even half that). That's a bit
           | easier to deal with in principle, but the rate of ECC
           | scrubbing required can start to impact performance. Stack
           | that on top of thermal CPU throttling, and you'll have a
           | system that's just slower than it should be. Just as
           | importantly, the slowdown will be _uneven_ across components,
            | so it's effectively a different kind of system when you're
           | debugging low-level issues.
           | 
           | I think it's a good sign that they're aware of the additional
           | support issues associated with higher altitude. Shows that
           | they've really thought things through.
        
         | ptomato wrote:
          | probably the air density that the cooling solution is
          | qualified for.
        
       | todd8 wrote:
       | This looks great. From a business perspective, I would be
       | concerned that it would be hard to prevent companies like Dell
       | from entering this space as a competitor quite rapidly.
        
         | wmf wrote:
         | Enterprise vendors are culturally incapable of building
         | opinionated hardware or software. That's a moat, but which side
         | are the customers on?
        
           | AlphaSite wrote:
           | Isn't this basically: https://www.vmware.com/products/vmc-on-
           | dell-emc.html
           | 
           | Or https://www.hpe.com/us/en/greenlake.html
           | 
            | Disclaimer: VMware employee, but I don't work in this area.
        
             | mlindner wrote:
             | Sure, but you don't own those.
        
       | ThinkBeat wrote:
       | Hmm.
       | 
       | They basically reinvented mainframes. Seems it has a lot in
       | common with Z series.
       | 
        | Scalable, locked-in hardware; virtualization; reliability;
        | engineered for hardware swaps and upgrades.
        | 
        | A proprietary operating system (?) from what someone said (an
        | offshoot of Solaris?). By that I mean that most of it, or all
        | of it, might be open-sourced forks, but it will be an OS only
        | meant to run on their systems.
        | 
        | (It would be fun to get it working at home, on a couple of PCs
        | or a bunch of Pis.)
        | 
        | They lack specialized processors to offload some workloads to.
        | 
        | Perhaps, in modern terms, shelves of GPUs or a shelf of fast
        | FPGAs/DSPs. The possibilities are huge.
        | 
        | I didn't find any mention of that in what I read.
        | 
        | They also lack the gigantic legacy compatibility burden, which
        | is a good thing.
        
         | zozbot234 wrote:
         | Their approach to reliability isn't quite on par with
         | mainframes, AIUI. At least, not yet. And the programming model
         | is also quite different - a mainframe can seamlessly scale from
         | lots of tiny VM workloads (what Oxide seems to be going for) to
         | large vertically-scaled shared-everything SSI, and anything in
         | between.
        
       | corysama wrote:
       | If you haven't been listening to their "On The Metal" podcast you
       | are really missing out! https://oxide.computer/podcasts
       | 
       | It's all fun stories from people doing amazing things with
       | computer hardware and low level software. Like Ring-Sub-Zero and
       | DRAM driver level software.
        
       | ksec wrote:
        | Maybe instead of asking about the target market or audience,
        | we should ask who their competitors are.
       | 
       | (Edit: Previous Discussions
       | https://news.ycombinator.com/item?id=21682360 )
       | 
        | I'm also wondering if the website is not finished? All the
        | "Read More" links actually hide very little information; if so,
        | why hide it? And it doesn't seem to explain the company very
        | well. Seems like we need to listen to their podcast to find out
        | what is going on. (Edit: found a YouTube video about it:
        | https://www.youtube.com/watch?v=vvZA9n3e5pc )
       | 
       | >Get the most efficient power rating and network speeds of
       | 100GBps without the pain of cable
       | 
       | 100GBps would be impressive, 100Gbps would be ... not much?
       | 
        | An interesting thing is that all the terminal-like graphics
        | are actually HTML/CSS and not images.
        
       | renewiltord wrote:
       | I'm an idiot. I thought this was like the SGI UV300 where you'd
       | view the whole thing as a single computer and everything would be
       | NUMA'd away. It looks like it's not like that, though.
        
       | trhway wrote:
       | >The elevation of the room where the rack is installed must be
       | below 10,005 feet (3,050 meters).
       | 
       | seems to be excluding Bolivia and probably Mars too.
        
       | nickik wrote:
       | So, does anybody think it would be overkill to put Kubernetes on
       | this to host my blog?
        
       | rbanffy wrote:
       | Looks interesting. I wonder if the high integration makes it
       | behave as a single-image machine with that many cores.
        
         | wmf wrote:
         | No, single system image is really expensive (see Superdome
         | Flex) and most workloads can't justify that cost.
        
           | rbanffy wrote:
           | The OS can still do cost-based memory allocation that
           | accounts for the latencies of going between nodes. These
           | Milan chips have tons of memory controllers for local
           | memory, and compute nodes can allocate all those PCIe
           | channels to talk to a shared memory module (IBM's OMI goes
           | in that direction - a little extra latency, but lots of
           | bandwidth and the ability to reach a bit further than
           | DDR4/5 can). I think the bigger POWER9 boxes do this kind
           | of thing. Migrating processes to off-board cores is silly
           | in this case, but core/socket/drawer pinning can go a long
           | way towards making this seamless while enabling
           | applications that wouldn't be feasible in more mundane
           | boxes.
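           | 
           | (Purely as an illustration of the pinning idea - not
           | Oxide's design: on Linux you can bind a process to the
           | CPUs of a single NUMA node. Paths and node numbers below
           | assume the usual sysfs layout.)
           | 
           |   import os
           | 
           |   def cpus_of_node(node: int) -> set[int]:
           |       # sysfs lists a node's CPUs as e.g. "0-15,32-47"
           |       p = f"/sys/devices/system/node/node{node}/cpulist"
           |       with open(p) as f:
           |           spec = f.read().strip()
           |       cpus: set[int] = set()
           |       for part in spec.split(","):
           |           lo, _, hi = part.partition("-")
           |           cpus.update(range(int(lo), int(hi or lo) + 1))
           |       return cpus
           | 
           |   # pin this process to NUMA node 0's cores
           |   os.sched_setaffinity(0, cpus_of_node(0))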
        
             | zozbot234 wrote:
             | > The OS can still do cost-based memory allocation
             | considering the latencies of going between nodes.
             | 
             | That's a rather seamless extension of what OS's have to do
             | already in order to deal with NUMA. Pinning can definitely
             | be worthwhile since the default pointless shuttling of
             | workloads across cores is _already_ killing performance on
             | existing NUMA platforms. But that could be addressed more
             | elegantly by adjusting some tuning knobs and still allowing
             | for migration in rare cases.
        
           | e12e wrote:
           | Aftermarket oxide rack running DragonFlyBSD, anyone ? ;)
        
           | zozbot234 wrote:
           | Could one reimplement SSI at the OS layer, similar to
           | existing distributed OS's? Distributed shared memory is
           | usually dismissed because of the overhead involved in typical
           | scenarios, but this kind of hardware platform might make it
           | feasible.
        
             | wmf wrote:
             | There was Mosix at the OS layer in the 1990s and Virtual
             | Iron at the hypervisor layer in the aughts. I think the
             | cost and performance of software SSI just doesn't intersect
             | with demand anywhere.
        
               | e12e wrote:
               | Plan9 on an Oxide rack might get you close to an SSI
               | workalike - but I think it's unlikely it'd be a
               | practical use of the hardware?
        
               | zozbot234 wrote:
               | AIUI, Plan9 is not quite fully SSI. It is a distributed
               | OS, and gets quite close, but it's missing support for
               | distributed memory (i.e. software-implemented shared
               | memory exposing a single global address space); it also
               | does not do process checkpointing and auto-migration,
               | without which you don't really have a "global" system
               | image.
        
               | rbanffy wrote:
               | Mosix and Virtual Iron worked at a time when 1 Gbps
               | Ethernet was in its infancy. Today 10 Gbps is consumer
               | grade and 40 Gbps can go over Cat 8 copper, roughly
               | equivalent to a DDR4-4000 channel.
               | 
               | Not great, but that's near-COTS hardware. They can do
               | significantly better than that.
        
               | wmf wrote:
               | Uh no, DDR4-4000 (which servers can't use BTW) is ~256
               | Gigabits per second. Latency is also a killer:
               | optimized InfiniBand is ~1 us, which is 10x slower
               | than local RAM at ~100 ns.
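               | 
               | (Rough arithmetic behind those numbers, in Python,
               | ignoring channel count and protocol overhead:)
               | 
               |   # one DDR4-4000 channel: 4000 MT/s x 64 bits
               |   ddr4_gbps = 4000e6 * 64 / 1e9  # ~256 Gbit/s
               |   eth_gbps = 40                  # one 40GbE link
               |   ddr4_gbps / eth_gbps           # ~6.4x gap
               |   # latency: ~1 us fabric vs ~100 ns DRAM -> ~10x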
        
       | ohazi wrote:
       | > Our firmware is open source. We will be transparent about bug
       | fixes. No longer will you be gaslit by vendors about bugs being
       | fixed but not see results or proof.
       | 
       | There are lots of reasons to be enthusiastic about Oxide but for
       | me, this one takes the cake. I hope they are successful, and I
       | hope this attitude spreads far and wide.
        
       | nodesocket wrote:
       | I have a few legacy HP ProLiant rackmount servers (cheap on
       | eBay) in my office closet. Oxide looks awesome, but it's
       | obviously not targeted at home / small business use. I was
       | hoping they would offer single-U servers.
        
       | slownews45 wrote:
       | How can they offer a secure boot solution under the GPLv3? My
       | understanding is that the anti-tivoization clauses mean they
       | need to release their keys, or allow admins (and hackers, and
       | anyone else) to escape the secure boot chain if they are
       | physically in front of the machine or own it.
        
         | mlindner wrote:
         | I don't know for certain, but I remember hearing that secure
         | boot only covers their releases; if you want to run your own
         | software it's no longer secure boot, but you're free to run
         | whatever you want.
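         | 
         | (A toy sketch of that owner-controlled model - not Oxide's
         | actual code or key scheme: the root of trust only boots
         | images whose digests it trusts, and "owning the machine"
         | means being able to enroll your own.)
         | 
         |   import hashlib
         | 
         |   TRUSTED_DIGESTS = {
         |       # vendor release digest (hypothetical value)
         |       "3a7bd3e2360a3d29eea436fcfb7e44c7"
         |       "35d117c42d1c1835420b6b9942dd4f1b",
         |   }
         | 
         |   def sha256_of(path: str) -> str:
         |       h = hashlib.sha256()
         |       with open(path, "rb") as f:
         |           for chunk in iter(lambda: f.read(1 << 16), b""):
         |               h.update(chunk)
         |       return h.hexdigest()
         | 
         |   def enroll_owner_image(path: str) -> None:
         |       # "owning the machine": trust your own image too
         |       TRUSTED_DIGESTS.add(sha256_of(path))
         | 
         |   def may_boot(path: str) -> bool:
         |       return sha256_of(path) in TRUSTED_DIGESTS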
        
       | atonse wrote:
       | This looks interesting (although I'm not in the target market,
       | too small)...
       | 
       | But if I were looking at this, judging from the quality of people
       | they've amassed in their engineering team, is there any chance
       | they won't be acquired in 6 months?
       | 
       | To anyone looking to take a bet on this, what is the answer to
       | "what's your plan for when your stellar team gets acquired?" And
       | what answer will satisfy that buyer?
       | 
       | Update: Adding another question, does this "environment" (where
       | any really great product with great talent in it can be acquired
       | very quickly) have a chilling effect on purchases for products
       | like this?
       | 
       | Hopefully some Oxide people can answer :-)
        
         | paxys wrote:
         | Going by that logic, you should never take a chance on a bad
         | company because it's bad, or on a good company because it's
         | too good and might get acquired. So should you just never
         | rely on a small company for anything?
        
           | atonse wrote:
           | That's the question I was genuinely asking. Do buyers with
           | a longer-term mindset think this way? Our company is too
           | small and just uses AWS, so we're not prospective buyers.
           | But I'm trying to understand the mindset of a CapEx-style
           | buyer whose timelines span multiple years.
           | 
           | This team is, by all measures, going to hit it out of the
           | park. There's just a solid amount of talent, experience and
           | insight all-round.
           | 
           | And to be clear, I am not at all disparaging teams that get
           | acquired - that would be silly. I'm just saying that we are
           | in an environment these days where very few of these kinds of
           | companies get a chance to grow before being acquired and WE
           | are the ones that lose even though the people working at the
           | company rightfully earn a nice payout.
           | 
           | I have the same "fear" about Tailscale, a company whose
           | product we love and have started using, and are about to
           | purchase.
           | 
           | But the fact that a member of the founding team themselves
           | answered my message above in plain English (not
           | surprisingly) is honestly refreshing.
        
         | bcantrill wrote:
         | Hi! So, at every step -- from conception to funding to building
         | the team and now building the product -- we have done so to
         | build a big, successful public company. Not only do we (the
         | founders) share that conviction, but it is shared by our
         | investors and employees as well. For better or for ill, we are
         | -- as we memorably declared to one investor -- ride or die.
         | 
         | Also, if it's of any solace, I really don't think any of the
         | existing players would be terribly interested in buying a
         | company that has so thoroughly and unequivocally rejected so
         | many of their accrued decisions! ;) I'm pretty sure their due
         | diligence would reveal that we have taken a first principles
         | approach here that is anathema to the iterative one they have
         | taken for decades -- and indeed, these companies have shown
         | time and time again that they don't want to risk their existing
         | product lines to a fresh approach, no matter how badly
         | customers want it.
        
           | tw04 wrote:
           | Isn't that what some of your old Sun compatriots thought with
           | DSSD? :)
           | 
           | Congrats on the announcement, here's hoping you're right!
            | This looks too interesting to be swallowed by Oracle or HPE.
        
             | bcantrill wrote:
             | Fortunately, some of those same DSSD folks have joined us
             | at Oxide -- and let's just say that they are of like mind
             | with respect to Oxide's approach. ;)
        
           | atonse wrote:
           | That is awesome to hear. :-) Wishing you all a ton of
           | success!
        
           | [deleted]
        
           | timClicks wrote:
           | Brian, well done for the launch. I am in awe of your team's
           | audacity, patience and execution.
        
           | newsclues wrote:
            | Unfortunately you didn't pivot to becoming a podcast
            | company, but this looks cool!
            | 
            | I hope you drop a new episode soon.
        
             | bcantrill wrote:
             | We're getting there! My second shot was yesterday, and
             | Steve and Jess are both completely done -- so we expect to
             | get back to the garage soon! In the meantime, fellow Oxide
             | engineer Adam Leventhal and I have been doing a Twitter
             | Space every Monday at 5p Pacific; so far, it's been
             | incredible with some amazing people dropping by -- come
             | hang out!
        
           | chuckdries wrote:
            | Your approach to pay is really refreshing and attractive
            | to me as an engineer, and it also seems like the exact
            | type of thing most VCs or larger tech firms would really
            | hate. That alone feels like evidence of your conviction.
        
             | bcantrill wrote:
              | Ha! Well, I think our investors think we're _very_
              | idiosyncratic -- but they also can't help but admire the
              | results: a singular team, drawn in part by not having to
              | worry about the Game of Thrones that is most corporate
              | comp structures. ;)
        
               | kaliszad wrote:
                | Smaller teams will always win the communication-
                | overhead comparison, even before you account for
                | organizational trees and the indirection they add.
                | Communication is one of the biggest problems in
                | organizations and in society, so more direct - and
                | therefore clearer - communication makes an
                | organization more efficient and keeps spirits high.
                | 
                | It also doesn't hurt to have a team made up only of
                | extremely senior engineers and other professionals
                | in their fields - better still if those engineers
                | are great personalities too. There is only one
                | catch: you need a very capable driver to put this
                | powerful engine to good use, so to speak. Drive it
                | in the wrong direction and you put more distance,
                | not less, between your current position and the
                | destination. The goal for Oxide Computer seems
                | clear, and I wholeheartedly wish you the best of
                | luck.
        
             | fierro wrote:
             | https://oxide.computer/blog/compensation-as-a-reflection-
             | of-...
        
       | bri3d wrote:
       | It seems like a lot of Oxide information is currently hiding out
       | in podcasts and other media - does anyone know how the AuthN,
       | AuthZ, ACL system is going to work?
       | 
       | One of the most powerful elements of the trust-root system is
       | auditability and access control, for both service-to-service
       | and human-to-system interactions, and I'm really interested in
       | seeing how this plays out.
       | 
       | For example, a service mesh where hosts can be identified
       | securely and authorized in a specific role unlocks a lot of
       | low-friction service-to-service security. I'm curious what
       | Oxide plans to provide in this space, API- and SDK-wise.
       | 
       | I see some Zanzibar-related projects on their GitHub, so it
       | can be assumed the ACL system will be based on the principles
       | there - but that's more a framework than an implementation.
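       | 
       | (For readers unfamiliar with the model: Zanzibar reduces
       | authorization to relation tuples plus rewrite rules. A toy
       | sketch of a check - object and relation names here are purely
       | hypothetical, not Oxide's API:)
       | 
       |   # relation tuples: (object, relation, subject)
       |   TUPLES = {
       |       ("instance:web-01", "editor", "user:alice"),
       |       ("instance:web-01", "viewer", "user:bob"),
       |   }
       | 
       |   # simplified userset rewrite: editor implies viewer
       |   IMPLIED_BY = {"viewer": {"editor"}}
       | 
       |   def check(obj: str, rel: str, user: str) -> bool:
       |       if (obj, rel, user) in TUPLES:
       |           return True
       |       return any(check(obj, r, user)
       |                  for r in IMPLIED_BY.get(rel, ()))
       | 
       |   check("instance:web-01", "viewer", "user:alice")  # True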
        
       | mindfulplay wrote:
       | Congrats to Oxide computer!
       | 
       | Excited to see tech startups doing actual tech instead of
       | chasing VC-funded growth hacking.
       | 
       | I wonder what sort of enterprise customers this targets...
       | (definitely not for individual devs)
        
       | throwkeep wrote:
       | This looks great. Really well thought out, beautifully designed
       | and presented.
        
       ___________________________________________________________________
       (page generated 2021-05-26 23:01 UTC)