[HN Gopher] Helios: A distribution of Illumos powering the Oxide...
       ___________________________________________________________________
        
       Helios: A distribution of Illumos powering the Oxide Rack
        
       Author : eduction
       Score  : 276 points
       Date   : 2024-01-29 16:47 UTC (6 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | sneak wrote:
       | I know they're ex-Sun, but is there any real technical benefit
       | for choosing not-Linux (for their business value prop)?
       | 
       | I know of the technical benefits of illumos over linux, but does
       | that actually matter to the customers who are buying these?
        | Aren't they opening a whole can of worms for the sake of
        | ideology/tradition that won't sell any more computers?
       | 
       | As someone who runs Linux container workloads, the fact that this
       | is fundamentally not-Linux (yes I know it runs Linux binaries
       | unmodified) would be a reason against buying it, not for.
        
         | kardianos wrote:
          | In one podcast, the reason given was staff familiarity and
          | owning the full stack, not just the kernel, I believe.
        
         | vvern wrote:
         | > yes I know it runs Linux binaries unmodified
         | 
          | Is it that it runs Linux binaries unmodified, or that it runs
          | and manages VMs which run Linux, and as an end user that's
          | what you run your software in?
        
           | tonyarkles wrote:
           | As far as I recall it's not a VM. They run in "LX Branded
           | Zones" which does require a Linux userland so that the
           | binaries can find their libraries etc but Zones are more like
           | "better cgroups than cgroups, a decade earlier" than VMs.
        
             | bcantrill wrote:
              | No, it's a VM, running on a bhyve-based hypervisor,
             | Propolis.[0] LX branded zones were/are great -- but for
             | absolute fidelity one really needs VMs.
             | 
             | [0] https://github.com/oxidecomputer/propolis
        
               | bpye wrote:
               | Do you have a solution for running containers
               | (Kubernetes, etc)? Are you spinning up a Linux VM to run
               | the containers in there, doing VM per container, or
               | something else?
        
               | panick21_ wrote:
                | Customers can decide, I would assume. Most likely you
                | install some Kubernetes and then just have multiple VMs
                | distributed across the rack, and then run multiple Pods
                | in each node.
                | 
                | VM per container seems like a waste unless you need that
                | extra isolation.
        
               | bpye wrote:
               | I wondered if there was any support for running
               | containers built in - something like EKS/AKS/GKE/Cloud
               | Run/etc - but looking at the docs it appears not.
               | 
               | I agree that VM per container can be wasteful - though
               | something like Firecracker at least helps with start
               | time.
        
           | bcantrill wrote:
           | It runs VMs -- so it doesn't just run Linux binaries
           | unmodified, it runs Linux kernels unmodified (and, for that
           | matter, Windows, FreeBSD, OpenBSD, etc.).
        
         | pjmlp wrote:
         | Their customers run virtualised OS on top of this.
         | 
         | This is no different from Azure Host OS, Bottlerocket, Flatcar
         | or whatever.
         | 
          | This matters to them: they know the whole stack, some of the
          | kernel code is still theirs from the Sun days, and making it
          | available matters to the customers that want source code access
          | for security assessment reasons.
        
         | skullone wrote:
         | It seems healthy to have options, almost like the universe is
          | healing a bit after Oracle bought Sun. I can't imagine better
          | hands bringing the Oxide system together than that team. As an
          | engineer who works entirely with Linux these days, I pine for
          | the days of another strong Unix in the mix to run high-value
          | workloads on. Comparing Open vSwitch on Linux to, say, the
          | Crossbow SDN facility on Solaris, I'd take Crossbow any day.
          | Nothing "wrong" with Linux, but it is sorely lacking in "master
          | plan" levels of cohesion, with all the tooling taking its own
          | path, often bringing complexity that requires even further
          | abstraction with yet more complicated tooling on top.
        
         | Extigy wrote:
         | Perhaps Illumos is particularly well suited for a
         | Hypervisor/Cloud platform due to work upstreamed by Joyent
         | originally for SmartOS?
        
         | shusaaafuejdn wrote:
         | As far as performance and feature set, probably not anymore (I
         | would have answered differently 10 years ago, and if I am wrong
         | today would love to be educated about it).
         | 
         | However, if we are considering code quality, which I consider
         | important if you are actually going to be maintaining it
          | yourself, as Oxide will have to do since they need
         | customizations, then most of the proprietary Unix sources are
         | just superior imo. That is, they have better organization, more
         | consistency in standards, etc. The BSDs are slightly better in
         | this regard as well, it really isn't a proprietary vs open
         | source issue, it's more about the insane size of the Linux
         | kernel project making strict standards enforcement difficult if
         | not impossible the further you get from the very core system
         | components.
         | 
          | Regardless of them being ex-Sun (and I am not ex-Sun), if I
         | needed a custom OS for a product I was working on, Linux would
         | be close to the last Unix based OS source tree I would try to
         | do it with, only after all other options failed for whatever
         | reason. And that's not even taking into account the licensing,
         | which is a whole other can of worms.
        
         | bcantrill wrote:
         | Keep in mind that Helios is really just an implementation
         | detail of the rack; like Hubris[0], it's not something visible
         | to the user or to applications. (The user of the rack
         | provisions VMs.)
         | 
         | As for why an illumos derivative and not something else, we
         | expanded on this a bit in our Q&A when we shipped our first
         | rack[1] -- and we will expand on it again in the (recorded)
         | discussion that we will have later today.[2]
         | 
         | [0] https://hubris.oxide.computer/
         | 
         | [1] https://www.youtube.com/watch?v=5P5Mk_IggE0&t=2556s
         | 
         | [2] https://mastodon.social/@bcantrill/111840269356297809
        
           | kaliszad wrote:
           | Perhaps you could talk a bit about the distributed storage
           | based on Crucible with ZFS as the backing storage tonight. I
           | would really love to hear some of the details and challenges
           | there.
        
             | bcantrill wrote:
             | Yes! Crucible[0] is on our list of upcoming episodes. We
             | can touch on it tonight, but it's really deserving of its
             | own deep dive!
             | 
             | [0] https://github.com/oxidecomputer/crucible
        
               | panick21_ wrote:
               | The timing of your podcast is the least convenient thing
               | ever for us poor Europeans. And then the brutal wait the
                | next day until it's uploaded.
               | 
               | The only thing I miss about Twitter Spaces is that you
               | could listen the morning after.
        
               | kaliszad wrote:
               | Yes (hello from Czechia), however there will always be
                | somebody who this is inconvenient for. Also, I have to
                | confess I was at times so immersed in other work that I
                | only made a few Oxide and Friends live. I might stay up
                | tonight.
               | 
               | I am looking forward to the crucible episode. It sounds
                | like it could be a startup on its own; it wouldn't be the
                | first distributed file/storage system company.
        
         | quux wrote:
         | Aren't they also ex-Joyent? Joyent ran customer VMs in prod on
         | Illumos for many years so there's a lot of experience there.
        
           | steveklabnik wrote:
           | Many people, including part of the founding team, are ex-
           | Joyent, yes. Some also worked at Sun, on the operating
           | systems that illumos is ultimately derived from.
        
           | littlestymaar wrote:
            | bcantrill used to work at Sun and then became CTO at Joyent,
            | so the reason why Joyent ran Illumos is probably the same
            | reason Oxide does: because Cantrill likes it and judges that
            | it's a good fit for what they are doing.
        
             | steveklabnik wrote:
             | As I elaborated above, bcantrill did not decree that we
             | must use illumos. Technical decisions are not handed down
             | from above at Oxide.
        
               | littlestymaar wrote:
               | I saw your comment[1] after I wrote mine, but I'm not
               | saying that he's forcing you guys to use it (that would
                | not be a good way to be a CTO at a start-up...), but that
               | doesn't prevent him from advocating for solutions he
               | believes in.
               | 
               | Would you say that Oxide would have chosen Illumos if he
               | wasn't part of the company?
               | 
               | [1]: https://news.ycombinator.com/item?id=39180706
        
               | sunshowers wrote:
               | (I work at Oxide.)
               | 
               | Bryan is just one out of several illumos experts here. If
               | none of those were around, sure, maybe we wouldn't have
               | picked illumos -- but then we'd be unrecognizably
               | different.
               | 
               | I came into Oxide with a Linux background and zero
               | knowledge of illumos. Learning about DTrace especially
               | has been great.
        
               | steveklabnik wrote:
               | > Would you say that Oxide would have chosen Illumos if
               | he wasn't part of the company?
               | 
               | I don't know how to respond to this question, because to
               | me it reads like "if things were completely different,
               | what would they be like?" I have no idea if you could
               | even argue that a company could be the same company with
               | different founders.
               | 
               | What I can say is that this line of questioning still
               | makes me feel like you're implying that this choice was
               | made simply based on preference. It was not. I am
               | employee #17 at Oxide, and the decision still wasn't made
               | by the time I joined. But again, the choice was made
               | based on a number of technical factors. The RFD wasn't
               | even authored by Bryan, but instead by four other folks
               | at Oxide. We all (well, everyone who wanted to, I say
               | "we" because I in fact did) wrote out the pros and cons
               | of both, and we weighed it like we would weigh any
               | technical decision: that is, not as a battle of sports
               | teams, but as a "hey we need to drive some screws: should
               | we use a screwdriver, a hammer, or something else?" sort
               | of nuts-and-bolts engineering decision.
        
               | littlestymaar wrote:
               | > we weighed it like we would weigh any technical
               | decision: that is, not as a battle of sports teams, but
               | as a "hey we need to drive some screws: should we use a
               | screwdriver, a hammer, or something else?" sort of nuts-
               | and-bolts engineering decision.
               | 
               | I'm not saying otherwise.
               | 
               | In fact, when I wrote my original comment, I actually
                | rewrote it multiple times to be sure it wouldn't suggest I
               | was thinking it was some sort of irrational decision
               | (that's why I added the "it's a good fit for what they
               | are doing"), but given your reaction it looks like I
               | failed. Written language is hard, especially in a foreign
               | language, sorry about that.
        
               | steveklabnik wrote:
               | It's all good! I re-wrote what I wrote multiple times as
               | well. Communication is hard. I appreciate you taking the
               | effort, sorry to have misunderstood.
               | 
               | Heck, there's a great little mistake of communication in
               | the title: this isn't just "intended" to power the rack,
               | it does power the rack! But they said that because we
               | said that in the README, because that line in the README
               | was written before it ended up happening. Oops!
        
         | jeffbee wrote:
         | Do you have the same gut reaction to ESXi?
        
           | stonogo wrote:
           | I sure do. We've finally got to a place where we don't need
           | weird hardware tricks to containerize workloads -- this is
           | why a lot of shops pursue docker-like ops for production.
           | When I buy hardware, long-term maintenance is a factor, and
           | when my whole operations fleet relies on ESX, or in this case
           | a Solaris fork, I'm now beholden to one company for support
           | at that layer. Buying a rack of Supermicro gear and running
           | RHEL or SLES with containerized orchestration on top means I
           | can, in a pinch, hire experts anywhere to work on my systems.
           | 
           | I have no reason to believe Oxide would be anything but
           | responsive and effective in supporting their systems, but
           | introducing bespoke software this deep in the stack severely
           | curtails my options if things get bad.
        
             | apendleton wrote:
             | I think the value proposition they're offering is a
             | carefully integrated system where everything has been
             | thoroughly engineered/tested to work with everything else,
             | down to writing custom firmware to guarantee that it's all
             | ship-shape, so that customers don't have to touch any of
             | the innards, and will probably just treat them as a black
             | box. It seems like it's chock-full of stuff that they
             | custom-built and that nobody else would be familiar with,
             | by design. If that's not what you want, this probably isn't
             | the product for you.
        
             | jeffbee wrote:
             | I can somewhat see your point, but in my experience you
             | can't rely on RHEL or whatever vendor Linux to correctly
             | bring up random OEM hardware. You will slowly discover all
             | of the quirks, like it didn't initialize the platform EDAC
             | the way you expected, or it didn't resolve some weird IRQ
             | issue, etc. Nothing about my experience leads me to believe
             | Linux will JFW on a given box, so I don't feel like Linux
             | has an advantage in this regard, or that niche operating
             | systems have a disadvantage. Certainly I feel like a first-
             | party OS from the hardware vendor is going to have a lot of
             | advantages.
        
         | steveklabnik wrote:
         | > does that actually matter to the customers who are buying
         | these?
         | 
         | It's not like we specifically say "oh btw there's illumos
         | inside and that's why you should buy the rack." It's not a
         | customer-facing detail of the product. I'm sure most will never
         | even know that this is the case.
         | 
         | What customers do care about is that the rack is efficient,
         | reliable, suits their needs, etc. Choosing illumos instead of
         | Linux here is a choice made to help effectively deliver on that
          | value. This does not mean that you inherently couldn't build a
          | similar product on top of Linux, by the way, just that we
         | decided illumos was more fit for purpose.
         | 
         | This decision was made with the team, in the form of an RFD[1].
         | It's #26, though it is not currently public. The two choices
         | that were seriously considered were KVM on Linux, and bhyve on
         | illumos. It is pretty long. In the end, a path must be chosen,
         | and we chose our path. I do not work on this part of the
         | product, but I haven't seen any reason to believe it has been a
         | hindrance, and probably is actually the right call.
         | 
         | > the fact that this is fundamentally not-Linux (yes I know it
         | runs Linux binaries unmodified) would be a reason against
         | buying it, not for.
         | 
         | I am curious why, if you feel like elaborating. EDIT: oh just
         | saw your comment down here:
         | https://news.ycombinator.com/item?id=39180814
         | 
         | 1: https://rfd.shared.oxide.computer/
        
           | wmf wrote:
           | The Linux vs. Illumos decision seems to be downstream of a
           | more fundamental decision to make VMs the narrow waist of the
           | Oxide system. That's what I'm curious about.
        
             | amluto wrote:
             | Especially since Oxide has a big fancy firmware stack. I
             | would expect this stack to be able to do an excellent job
             | of securely allocating bare-metal (i.e. VMX _root_ on x86
             | or EL2 if Oxide ever goes ARM) resources.
             | 
             | This would allow workloads on Oxide to run their own VMs,
             | to safely use PCIe devices without dealing with interrupt
             | redirection, etc.
        
               | wmf wrote:
               | I'm not affiliated with Oxide but I don't think you can
               | put Crucible and VPC/OPTE in firmware. Without a DPU
               | those components have to run in the hypervisor.
        
               | amluto wrote:
               | Possibly not.
               | 
               | But I do wonder why cloud and cloud-like systems aren't
               | more aggressive about splitting the infrastructure and
               | tenant portions of each server into different pieces of
                | hardware, e.g. a DPU. A DPU could look like a PCIe target
                | exposing NVMe and a NIC, for example.
               | 
               | Obviously this would be an even more custom design than
               | Oxide currently has, but Oxide doesn't seem particularly
               | shy about such things.
        
           | throwawaaarrgh wrote:
           | A team should always pick the tools they are most familiar
            | with. They will always have better results with that than
            | trying to use something they understand less. With this in
           | mind, using their own stack is a perfectly adequate choice.
           | Factors outside their team will determine if that works out
           | in the long term.
        
             | wmf wrote:
             | A handful of the team are more familiar with Illumos and
             | the next hundred people they hire after that will be more
             | familiar with Linux.
        
               | throwawaaarrgh wrote:
               | A lot of people out there claim to know Linux, yet few
               | can prove it. OTOH, if they gain a cult following with
               | lots of people using their stack, those people might
               | become more familiar with their stack than most Linux
               | people are with theirs. They could grow a captive base of
               | prospective hires.
               | 
               | That's not the big concern though. The big concern is
               | whether vendor integration and certification becomes a
               | stumbling block. You can hire any monkey to write good-
               | enough code, but that doesn't give you millions in
               | return. Partnerships with vendors and compliance
               | certifications can give you hundreds of millions. The
               | harder that is, the farther the money is. A totally
               | custom, foreign stack can make it harder, or not; it
               | depends how they allocate their human capital and
               | business strategy, whether they can convince vendors to
               | partner, and clients to buy in. Anything very different
               | is a risk that's hard to ignore.
        
               | steveklabnik wrote:
                | To be clear, we had already hired people with deep
                | familiarity with Linux at the time this decision was
                | made. In particular, Laura Abbott, as one example.
               | 
               | It is true that the number of developers that know Linux
               | is larger than the ones that know illumos. But this is
                | also true of the number of developers who know C versus
                | the ones who know Rust. Just like some folks need to be
               | onboarded to Rust, some will need to be onboarded to
               | illumos. That is of course part of the tradeoff.
        
               | Jtsummers wrote:
               | If your hiring decisions are always based on what people
               | are currently familiar with, you'll always be stuck in
               | the past. You may not even be able to use present day
               | tooling and systems because they could be too new to hire
               | people for.
               | 
               | You're much better off hiring people who are capable of
               | learning, and then giving them the opportunities to learn
               | and advance their knowledge and skills.
        
               | pjmlp wrote:
                | As someone who has known UNIX since 1993, starting with
                | Xenix: many who are familiar with Linux are actually
                | familiar with a specific Linux distribution, as the Linux
                | wars took over from the UNIX wars.
                | 
                | That being the case, knowing yet another UNIX cousin
                | isn't that big a deal.
        
             | steveklabnik wrote:
             | I do not personally agree with this. I do think that
             | familiarity is a factor to consider, but would not give it
             | this degree of importance.
             | 
             | It also was not discussed as a factor in the RFD.
        
         | thinkingkong wrote:
          | This has been / will be the market education challenge; it's the
          | same one Joyent had with SmartOS. They're correctly pointing out
          | that the end user or operator will basically never interact
          | with this layer, but it does cause some knee-jerk reactions.
          | All that said, there are some pretty great technical benefits
          | to using illumos-derived systems, not the least of which is the
          | team's familiarity and ability to do real diagnosis on
          | production issues. I won't put words in anyone's mouth, but I
          | suspect that's going to be critical for them as they support
          | customer deployments w/o direct physical access.
        
         | spamizbad wrote:
         | Seems strange to me too but it sounds like the end-users
         | basically never interact with this - it's just firmware humming
          | along in the background. As long as it's open source and
          | reasonably well documented, it's already light-years ahead of
          | what else is out there.
        
         | greggyb wrote:
         | If you're running in one of the big 3 cloud providers, the
         | bottom-level hypervisors are not-linux. This is equivalent. Are
         | you anti-AWS or anti-Azure for the same reason?
         | 
         | This is the substrate upon which you will run any virtualized
         | infrastructure.
        
           | qmarchi wrote:
           | Small note, that's not true for Google Cloud, which runs on
           | top of Linux, though modified.
           | 
           | Disclaimer: Former Googler, Cloud Support
        
             | refulgentis wrote:
             | Another Xoogler here: any idea what they mean by it's not
             | Linux at the bottom for other providers? Like, surely it's
             | _some_ common OS? Either my binaries wouldn't run or AWS is
             | reimplementing Linux so they can, which seems odd.
             | 
             | Or are they just saying that the VM my binary runs on might
             | be some predictable Linux version, but the underlying thing
             | launching the VM could be anything?
        
               | qmarchi wrote:
               | Correct, that the Hypervisor isn't running Linux.
               | 
               | I think the only provider where that would make sense
               | would be Microsoft, where they have their own OS.
        
               | p_l wrote:
                | Old AWS used to be Xen; Nitro afaik uses a customised VMM,
                | and I don't recall whether it's a custom OS or hosted on
                | top of something.
               | 
               | Azure is Hyper-V underneath IIRC, a custom variant at
               | least (remember Windows Server Nano? IIRC it was the
               | closest you could get to running it), with sometimes
               | weird things like network cards running Linux and
               | integrating with Windows' built-in SDN facility.
               | 
                | The rest of the bigger ones are mainly Linux with
                | occasional Xen and such, but sometimes you can encounter
                | non-trivial VMware deployments.
        
               | zokier wrote:
               | Nitro is supposed to be this super customized version of
               | KVM.
        
               | bpye wrote:
               | Azure runs a version of Windows, see:
               | 
               | https://techcommunity.microsoft.com/t5/windows-os-
               | platform-b...
        
               | bewaretheirs wrote:
                | When your programs are running on a VM, the Linux that
                | loads and runs your binaries is not at the bottom; that
                | Linux image runs inside a virtual machine which is
                | constructed and supervised by a hypervisor which sits
                | underneath it all. That hypervisor may run on the bare
                | machine (or what passes for a bare machine what with all
                | the sub-ring-zero crud out there), or may run on top of
                | another OS which could be Linux or something else. And
                | even if there is Linux in the middle and Linux at the
                | bottom, they could be completely different versions of
                | Linux from releases made years apart.
        
               | antod wrote:
               | _> Or are they just saying that the VM my binary runs on
               | might be some predictable Linux version, but the
               | underlying thing launching the VM could be anything?_
               | 
               | Yup. eg with Xen the hypervisor wasn't Linux, even if the
               | privileged management VM (dom0) was Linux (or optionally
               | NetBSD in the early days). The very small Xen hypervisor
               | running on the bare metal was not a general purpose OS,
               | and didn't expose any interface itself - it was well
               | hidden and relied on dom0 for administration.
        
             | bewaretheirs wrote:
              | As I understand it, there's Linux running on the Google
              | Cloud hardware, but the virtualized networking and storage
              | stacks in Google Cloud are Google-proprietary and largely
              | bypass Linux -- in the case of networking, see the "Snap: a
              | Microkernel Approach to Host Networking" paper.
             | 
             | In contrast, it appears that Oxide is committing to open-
             | source the equivalent pieces of their virtualization
             | platform.
        
           | wmf wrote:
           | I suspect a lot of people would (irrationally) freak out if
           | they saw how the public cloud works because it's so different
           | from "best practices". Oxide would probably trigger people
           | less if they never mentioned Illumos but that's not really an
           | option when it's open source.
        
           | tptacek wrote:
            | I don't know about EC2, but Lambda and Fargate are presumably
           | Firecracker, which is Linux KVM.
        
             | zokier wrote:
             | AWS "Nitro" hypervisor which powers EC2 is their (very
             | customized) KVM.
             | 
             | https://docs.aws.amazon.com/whitepapers/latest/security-
             | desi...
        
         | StillBored wrote:
         | Linux is a nightmare in the embedded/appliance space because
         | one ends up just having platform engineers who spend their day
         | fixing problems with the latest kernels, drivers, core
         | libraries, etc, that the actual application depends on.
         | 
         | Or one goes the route of 99% of the IoT/etc vendors, and never
         | update the base OS and pray that there aren't any active
         | exploits targeting it.
         | 
          | This is why a lot of medium-sized companies cried about CentOS,
         | which allowed them to largely stick to a fairly stable platform
         | that was getting security updates without having to actually
         | pay/run a full blown RHEL/etc install. Every ten years or so
         | they had to revisit all the dependencies, but that is a far
         | easier problem than dealing with a year or two update cycle,
         | which is too short when the qualification timeframe for some of
         | these systems is 6+ months long.
         | 
         | So, this is almost exclusively a Linux problem; any of the
         | *BSD/etc. alternatives give you almost all of what Linux
         | provides without this constant breakage.
        
           | bcantrill wrote:
           | This is a really, really good point -- and is a result of the
           | model of Linux being only a kernel (and not system libraries,
            | commands, etc.). It means that any real use of Linux is not
            | merely signing up for kernel maintenance (which itself can be
            | arduous) but _also_ making decisions around every _other_
            | aspect of the system (each with its own communities, release
            | management, etc.). This act _is_ the act of creating a
            | distribution -- and it's a huge burden to take on. Both
           | illumos and the BSD derivatives make this significantly
           | easier by simply including much more of the system within
           | their scope: they are not merely kernels, but also system
           | libraries and commands.
           | 
           | This weighed heavily in our own calculus, so I'm glad you
           | brought it up!
        
             | trhway wrote:
             | >including much more of the system within their scope: they
             | are not merely kernels, but also system libraries and
             | commands.
             | 
              | Given the limited resources of the dev team, it may lead to
              | limited support of the system outside of the narrow set of
              | officially supported/certified hardware, with that support
              | falling behind on modern hardware, as happened with Sun,
              | and, as a result, vendor lock-in into overpriced and low-
              | performing hardware.
             | 
              | There is a reason that back then among Solaris devs there
              | was a joke about embedding the Linux kernel as a universal
              | driver for the Solaris kernel, in order to get reasonable
              | support for the hardware out there.
        
               | mardifoufs wrote:
               | Well they aren't burdened by having to make their own
               | processors, like Sun had to do, or their own full custom
               | chips in general. They just have to support the selection
               | of hardware they pick, and they have complete oversight
               | of what hardware runs on their racks. So I'm not sure if
                | the Sun comparison is relevant here, since they can still
               | pick top of the line hardware. Just not _any_ hardware
        
               | trhway wrote:
                | Any issues with funding or whatever, and their customers
                | would get locked in on yesterday's "top of the line
                | hardware" (reminds me of how Oracle used lawyers to force
                | HP to continue supporting Itanic). Sun was a 50K-person
                | company, and they struggled to support even a reasonably
                | wide set of hardware. Vendor lock-in is like one of
                | Newton's laws in this industry.
        
               | cross wrote:
               | This is less of an issue for us at Oxide, since we
               | control the hardware (and it is all modern hardware; just
               | a relatively small subset of what exists out there). Part
               | of Sun's issue was that it was tied not just to a
               | software ecosystem, but also to an all-but-proprietary
               | hardware architecture and surrounding platform. Sun
               | eventually tried to move beyond SPARC and SBus/MBus, but
               | they really only succeeded in the latter, not the former.
        
               | linksnapzz wrote:
               | >that support falling behind on modern hardware, as it
               | happened with Sun, and vendor lock-in as a result into
               | overpriced and low performing hardware.
               | 
               | The Oxide hw is using available AMD SKUs for CPU.
        
           | GrumpySloth wrote:
           | CentOS wasn't used in embedded systems.
        
             | dralley wrote:
             | Sure it was. So is RHEL.
             | 
             | Embedded isn't limited to devices equal or less powerful /
             | expensive than the Raspberry Pi.
        
           | pjmlp wrote:
            | Interesting that you bring up the embedded/appliance space,
            | as I have noticed there are plenty of FOSS alternatives
            | coming up whose key features are not being Linux-based and
            | not using GPL-derived licenses.
           | 
            | FreeRTOS, NuttX, Zephyr, mbed, Azure RTOS, ...
        
         | NexRebular wrote:
          | Not everything needs to be Linux. Besides, if monocultures are
          | supposed to be harmful, why is Linux being thrown at everything
          | nowadays? It's very dangerous to have a single point of failure
          | in (critical) applications.
        
         | mardifoufs wrote:
         | I think it's a good idea to have more choice, especially in
          | OSS. A Linux monoculture isn't any better than a Chromium
          | monoculture. They might be able to do stuff that just isn't
          | practical if they stuck with Linux. They are also probably more
          | familiar with illumos, or at least familiar enough to know that
          | they can use it to do more than with Linux.
        
         | moondev wrote:
         | The main drawbacks to me are
         | 
          | 1. No support for nested virtualization, so running a VM inside
          | your VM is not available. This prevents use of projects such as
         | kubevirt or firecracker on a Linux guest, and WSL2 on a Windows
         | guest.
         | 
         | 2. No GPU support
         | 
         | If the base hypervisor was Linux, it would be way more capable
         | for users it seems. I also wonder if internally Linux is used
         | for development of the platform itself so they can create
         | "virtual" racks to dogfood the product without full blown
         | physical racks.
         | 
         | With all that said, I do not know the roadmap and admittedly
          | there are already quite a few existing platforms built on KVM,
          | so as their hypervisor improves and becomes more capable it
          | could potentially become a strategic advantage.
        
           | steveklabnik wrote:
           | > I also wonder if internally Linux is used for development
           | of the platform itself
           | 
           | Developers at Oxide work on whatever platform they'd like, as
           | long as they can do their work. I will say I am in the
           | minority as a Windows user though, most are on some form of
           | Unix.
           | 
           | > so they can create "virtual" racks to dogfood the product
           | without full blown physical racks.
           | 
           | So one of the reasons why Rust is such an advantage for us is
           | its strong cross-platform support: you can run a simulated
           | version of the control plane on Mac, Linux, and Illumos,
           | without a physical rack. The non-simulated version must run
           | on Helios. [1]
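            | 
            | To give a flavor of how that works (a toy sketch of the
            | general technique, not omicron's actual code), Rust lets you
            | pick a backend at compile time based on the target OS:
            | 
            |     // Toy sketch only -- illustrative, not how omicron is
            |     // actually structured. The same crate builds everywhere;
            |     // the illumos-only pieces hide behind cfg(target_os).
            |     #[cfg(target_os = "illumos")]
            |     fn sled_backend() -> &'static str {
            |         // On Helios this would talk to the real hardware.
            |         "real"
            |     }
            | 
            |     #[cfg(not(target_os = "illumos"))]
            |     fn sled_backend() -> &'static str {
            |         // Everywhere else, an in-memory simulation for dev use.
            |         "simulated"
            |     }
            | 
            |     fn main() {
            |         println!("running the {} backend", sled_backend());
            |     }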
           | 
           | That said we do have a rack in the office (literally named
           | dogfood) that employees can use for various things if they
           | wish.
           | 
           | 1: https://github.com/oxidecomputer/omicron?tab=readme-ov-
           | file#...
        
             | moondev wrote:
             | Interesting thanks for the insight.
             | 
             | > I will say I am in the minority as a Windows user though,
             | most are on some form of Unix.
             | 
              | Now I'm imagining Helios inside WSI - Windows Subsystem for
             | illumos
        
               | steveklabnik wrote:
               | You're welcome. I will give you one more fun anecdote
               | here: when I came to Oxide, nobody in my corner of the
               | company was using Windows. And hubris and humility almost
               | Just Worked: we had one build system issue that was using
               | strings instead of the path APIs, but as soon as I fixed
               | those, it all worked. bcantrill remarked that if you had
               | gone back in time and told him long ago that some of his
               | code would Just Work on Windows, he would have called you
               | a liar, and it's one of the things that validates our
               | decisions to go with Rust over C as the default language
               | for development inside Oxide.
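                | 
                | For the curious, the class of fix was roughly this (a
                | from-memory sketch, not the actual diff): building paths
                | as raw strings bakes in separator assumptions, while the
                | path APIs handle per-platform differences for you.
                | 
                |     use std::path::{Path, PathBuf};
                | 
                |     // Hypothetical helper, just to show the shape of the fix.
                |     // Fragile: format!("{}/{}", dir, file) hard-codes '/'.
                |     // Portable: PathBuf::join uses the host's conventions.
                |     fn artifact(dir: &Path, file: &str) -> PathBuf {
                |         dir.join(file)
                |     }
                | 
                |     fn main() {
                |         let p = artifact(Path::new("target"), "demo.bin");
                |         println!("{}", p.display());
                |     }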
               | 
               | > Now i'm imagining Helios inside WSI - Windows Subsystem
               | for illumos
               | 
               | That would be pretty funny, ha! IIRC something about
               | simulated omicron doesn't work inside WSL, but since I
               | don't work on it actively, I haven't bothered to try and
               | patch that up. I think I tried one time, I don't remember
               | specifically what the issue was, as I don't generally use
               | WSL for development, so it's a bit foreign to me as well.
        
               | panick21_ wrote:
               | > that was using strings instead of the path API
               | 
                | Man, you can't let Bryan live that one down, can you?
               | 
               | :)
        
               | steveklabnik wrote:
               | I didn't bother to git blame the code, I myself do this
               | from time to time :)
        
             | fragmede wrote:
             | How is Oxide for GPU-heavy workloads?
        
               | steveklabnik wrote:
               | There are no GPUs in the rack, so pretty bad, haha.
               | 
               | We certainly understand that there's space in the market
               | for a GPU-focused product, but that's a different one
               | than the one we're starting the company off with. There's
               | additional challenge with how we as a company desire
               | openness, and GPUs are incredibly proprietary. We'll see
               | what the future brings. Luckily for us many people still
               | desire good old classic CPU compute.
        
       | milon wrote:
        | I'm glad this is out, I'm going to deploy this locally and learn
       | as much about it as possible. Oxide is pretty much the company I
       | dream to work at, both for the tech stack, plus the people
       | working there. Thank you Oxide team!
        
         | refulgentis wrote:
         | Can you get me excited? I spent 20 seconds browsing the
         | homepage and walked away with "so the idea is vertical
         | integration for on-premise server purchases? On custom OS? Why?
         | Why would people pay a premium?"
         | 
         | But immediately got myself to "what does a server OS do anyway,
         | doesn't it just launch VMs? You don't need Linux, just the
         | ability to launch Linux VMs"
         | 
         | Tell me more? :)
        
           | throwup238 wrote:
           | The best elevator pitch I've heard is "AWS APIs for on-prem
           | datacenters". They make turn-key managed racks that behave
           | just like a commercial cloud would with all the APIs for VM,
           | storage, and network provisioning and integration you'd
           | expect from AWS, except made to deploy in your company's
           | datacenter under your control.
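            | 
            | To make that concrete (the endpoint and field names below
            | are invented for illustration, not Oxide's actual API),
            | "cloud-style" here means an HTTP control plane you can
            | script against, on hardware you own:
            | 
            |     // Illustrative sketch only: URL, path, and fields are
            |     // made up, not the real Oxide API. Assumes the reqwest
            |     // (blocking + json features) and serde_json crates.
            |     use serde_json::json;
            | 
            |     fn main() -> Result<(), Box<dyn std::error::Error>> {
            |         let client = reqwest::blocking::Client::new();
            |         let resp = client
            |             .post("https://rack.example.internal/v1/instances") // hypothetical
            |             .bearer_auth(std::env::var("RACK_API_TOKEN")?)
            |             .json(&json!({
            |                 "name": "web-01",
            |                 "ncpus": 4,
            |                 "memory_gib": 16,
            |                 "image": "ubuntu-22.04"
            |             }))
            |             .send()?;
            |         println!("provision request returned {}", resp.status());
            |         Ok(())
            |     }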
        
             | kortilla wrote:
              | That's the elevator pitch for OpenStack.
        
               | steveklabnik wrote:
               | You are not wrong that OpenStack is sort of similar in a
               | sense, but the difference is that Oxide is a hardware +
               | software product, and OpenStack is purely software.
        
             | capitol_ wrote:
              | That just sounds like a bunch of APIs on top of Linux.
        
               | throwup238 wrote:
               | Just like Dropbox is a bunch of APIs on top of FTP.
        
             | magnawave wrote:
             | I guess the wildcard is price.
             | 
             | AWS's pricing model works kinda at their OMG eyewatering
             | scale - aka all the custom hardware they design is highly
             | cost optimized, but just doing custom hardware has a
              | notable cost. This is easily covered by their scale, making
              | for their famous margins. [during their low scale
             | times, they did use a good bit of HP/Dell, etc]
             | 
              | Oxide seems to be no different (super custom hardware), the only
             | major difference being the "in your datacenter" part. Since
             | you own the cost of your datacenter, Oxide has to come in a
             | _lot_ cheaper to even compete with AWS, but how do you do
             | that with low volume [and from the look of it not-cost
             | optimized, but instead fairly tank-like] bespoke hardware?
             | Feels like the pricing  / customer fundamentals are going
             | to be pretty rough here outside perhaps a few verticals.
        
               | sarlalian wrote:
               | Datacenter costs are weird. The first big cost is having
                | a datacenter. However, once you have the space, power, and
                | cooling, and that part makes sense, then the actual
                | hardware going into it can carry a pretty decent premium
                | and still be highly competitive with AWS. It will also
                | depend heavily on what you are doing and producing: if the
                | answer is a large amount of data that needs to transit out
                | of AWS, suddenly the cost of a pretty large datacenter is
                | really cheap in comparison.
               | AWS egress fees have a markup that will make your
               | accountants panic. From a hardware standpoint, once you
               | need GPU compute or large amounts of RAM, the prices get
               | pretty dumb as well.
        
           | milon wrote:
           | Having a solid on-prem rack product to me is a great thing. I
           | like IaaS services a lot, don't get me wrong, and I think
           | they're the right pick for a bunch of cases, but on-prem
           | servers also have their "place in the sun", so to speak :) I
           | could present any number of justifications that I don't think
           | I'm qualified enough to defend, but the gist is that at the
           | bare minimum, I'm glad the option exists.
           | 
           | As to why I'm _personally_ excited: I enjoy the amount of
           | control having such an on-prem rack would afford me, and
           | there surely could be a great amount of cost-savings and
           | energy-savings in many scenarios. Sometimes, you just need a
           | rack to deploy services for your local business. I like the
           | prospect of decentralizing infrastructure, applying all the
            | things we've learned with IaaSes.
        
           | SteveNuts wrote:
           | It seems like the folks on HN tend to think the world runs on
           | AWS (I'm not trying to say they don't have a huge market
           | share), but many huge enterprises still run their own
           | datacenters and buy ungodly amounts of hardware.
           | 
           | The products that are on the market for an AWS-like
           | experience on-prem are still fairly horrible. A lot of times
           | the solutions are collaborations between vendors, which makes
           | support a huge pain (finger pointing between companies).
           | 
           | Or, a particular vendor might only have compute and storage,
           | but no offering for SDN and vice-versa. This sucks because
           | then you have two bespoke things to manage and hope they work
            | together correctly.
           | 
           | These companies want a full AWS experience in their
           | datacenter, and so far this looks to be the most promising
           | without dedicating huge amounts of resources to something
           | like Openstack.
        
             | lijok wrote:
             | Wouldn't a "full AWS experience in their datacenter" be AWS
             | Outpost?
        
               | mustache_kimono wrote:
               | > "full AWS experience in their datacenter"
               | 
               | ... Including the bill!
        
               | mardifoufs wrote:
               | Is AWS outpost truly a full AWS stack/experience? I
               | thought it wasn't actually meant to be a "data center in
               | a box" experience, but more so a way to run some
               | workloads locally when you are already using AWS for
               | everything else.
        
             | adfm wrote:
             | With DHH and others promoting a post-SaaS approach
             | (once.com, etc.) we might see hardware refresh as cost-
             | cutting. Astronomical compute bills and lack of granularity
             | bring all things cloudy into sharp focus.
        
             | _zoltan_ wrote:
             | OpenStack is pretty smooth sailing these days and I bet you
             | it would be much cheaper to just get 3 FTEs for your
             | OpenStack install than an Oxide rack
        
               | linksnapzz wrote:
               | Where, exactly, are you getting these 3FTEs qualified to
               | touch production OpenStack infra, for more than a year,
               | where their aggregate cost is less than a rack of
               | equipment?
        
               | gtirloni wrote:
               | The rack doesn't require FTEs?
        
               | linksnapzz wrote:
               | Not three of them; it ought to be about as difficult to
               | administer as a single rack of hw, +Vsphere, if that.
        
               | _zoltan_ wrote:
               | if you need OpenStack you're not running one rack, but a
               | couple dozen.
        
             | refulgentis wrote:
             | The "(finger pointing between companies)" took me from
             | confusion to 100% understanding, was at Google until
             | recently. It was astonishing to me that it was universally
             | acceptable to fingerpoint if it was outside your immediate
             | group of ~80 people.*
             | 
             | Took me from "why would people go with this over Dell?" to
             | "holy shit, I'm expecting Dell to do software and make
             | nvidia/red hat/etc/etc etc/etc etc etc help out. lol!"
             | 
             | * also, how destructive it is. never, ever, ever let ppl
             | talk shit about other ppl. There's a difference between
             | "ugh, honestly, it seems like they're focused on release
             | 11.0 this year" and "ughh they're usless idk what they're
             | thinking??? stupid product anyway" and for whatever reason,
             | B made you normal, A made you a tryhard pedant
        
           | throwawaaarrgh wrote:
           | It's a mainframe. If you can't get excited for mainframes
           | it'll be hard to be excited about this.
           | 
            | illumos is the OS/360 to Oxide's System/360. (It won't get
           | that popular but it's a fair enough comparison for
           | illustrative purposes)
        
             | panick21_ wrote:
              | Except that it uses the same standard CPUs as commodity
              | machines. It doesn't have much of the extra reliability
              | stuff. It can go from vertical to horizontal scaling. The
              | OS is an open-source Unix. And yeah, it's not like a
              | mainframe at all, really.
        
             | linksnapzz wrote:
             | It's a mainframe, for people who do not, actually, know
             | what a mainframe is or does.
        
           | mustache_kimono wrote:
           | > so the idea is vertical integration for on-premise server
           | purchases? On custom OS? Why? Why would people pay a premium?
           | 
           | As I understand it, re: vertical integration, the term is
           | actually "hyperconverged". Here, that means it's designed at
           | the level of the rack. Like -- there aren't per compute unit
           | redundant power supplies. There is one DC bus bar conversion
           | for the rack. There is an integrated switch designed by
           | Oxide. There is one company to blame when anything inside the
           | box isn't working.
           | 
           | In addition, the pitch is they're using open source Rust-
           | based firmware for many of the core components (the base
           | board management controller/service processor, and root of
            | trust), and the box presents a cloud-like API to provision.
           | 
           | If the problem is: I'm running lots of VMs in the cloud. I'm
           | used to the cloud. I like the way the cloud works, but I need
            | an on-prem cloud, this makes that much easier than other DIY
            | ways of achieving it (OMG, we need a team of people to build
            | us a cloud...).
        
             | steveklabnik wrote:
             | The terminology in this space is confusing, but
             | "hyperconverged" isn't really what we're doing. I wrote
             | about the differences here:
             | https://news.ycombinator.com/item?id=30688865
             | 
             | (That said I think other than saying "hyperconverged" your
             | broad points are correct.)
        
           | lijok wrote:
           | > Why would people pay a premium?
           | 
           | I would pay a premium just to not have to deal with HPE,
           | DELL, etc
        
             | _zoltan_ wrote:
             | Dell's been nothing but fantastic for us (compute, not
             | storage.)
        
               | sarlalian wrote:
               | Dell is a mixed bag depending on how well the individual
               | region you are dealing with is doing overall. Things were
               | great for us, but something changed and now getting good
               | support for hardware failures has been a nightmare of
               | jumping through hoops, time zone handoffs to other teams,
               | and forced on-site techs to replace a stick of ram.
        
           | adamnemecek wrote:
           | One company making both HW and SW generally leads to really
           | good, integrated experiences. See e.g. Apple.
        
             | 0cf8612b2e1e wrote:
             | I am really hoping the broader industry takes note. By
             | owning the platform, the Oxide team was able to dump legacy
             | stuff that no longer makes sense.
        
         | EvanAnderson wrote:
         | I'm excited to see how this compares to SmartOS. I'm pretty
         | heavily invested in SmartOS in my personal infrastructure but
         | its future, post-Joyent acquisition, has been worrying me.
         | 
          | I really wish I worked for an org big enough to use Oxide's
          | gear. Not having to futz around with the bogus IBM PC AT-type
          | compatibility edifice, janky BMCs and iDRACs, hardware RAID
          | controllers, etc., would be so unbelievably nice.
        
           | _rs wrote:
           | I had been using SmartOS for a long time but finally had to
           | bite the bullet and give up. I ended up deciding on Proxmox
           | on a ZFS root and am quite happy with it.
        
             | geek_at wrote:
              | The nice thing about the Proxmox + ZFS setup is that it
              | works, and is even recommended, without hardware RAID
              | controllers. Fewer headaches either way.
             | 
              | I recently wrote a guide [1] on how to use Proxmox with ZFS
              | over iSCSI so you can use the snapshot features of a SAN.
             | 
             | [1] https://blog.haschek.at/2023/zfs-over-iscsi-in-
             | proxmox.html
        
             | rjzzleep wrote:
             | I feel the same. I used a SmartOS distro called Danube
              | Cloud for a long time and am looking to move. I looked at
              | Harvester[1] and OpenNebula, but with everything I know
              | about Kubernetes (and Longhorn) I'm reluctant to use
              | something so heavily based on Kubernetes.
             | 
             | At its peak I reached out multiple times to Joyent to fix
             | their EFI support for virtualization. The Danube team had
             | similar experiences with them, working on live migrations
             | for VMs, and a few months back I did a rebase of the
             | platform image to a more recent illumos stack.
             | 
              | One of the fundamental issues with Illumos is that they
              | don't seem to understand that they need to fix the
              | horrendous platform build to get the community support
              | needed to keep up with the pace of development of other
              | OSes. The platform build is a huge nasty mess of custom
              | shell scripts and file-based status snapshots, and it
              | includes the entire userspace in the kernel build.
              | Basically, if your OpenSSL version is out of whack the
              | entire thing will fail. Not because it has to, but because
              | it was never adapted to the modern needs of someone who
              | just wants to hack on a kernel. It's fixable, but
             | I don't see any desire to fix it, and even if that desire
             | eventually shows up it might just be too little, too late.
             | 
             | [1] https://harvesterhci.io/
        
             | icybox wrote:
             | I've been running SmartOS since at least 2015, when I co-
             | located my server. There have been times where I felt like
             | giving up, but people like danmcd, jperkin and others
             | always stepped in and fixed what needed to be fixed for LX
             | to be usable and working. (Keeping Java updated and
             | running is a hard, uphill battle. Thanks!) I always ran a
             | mixture of OS and LX zones, and bcantrill's t-shirt with
             | "Save the whales, kill your VM" made sense. I used zones
             | in Solaris 10 even before, and they just click with me.
             | FreeBSD's jails are nice, but far from it. And Linux's
             | cgroups are a joke. And using KVM/VMs for security
             | containerization is just insane. At my day job, I've
             | implemented multiple Proxmox clusters, because we're a
             | Linux shop and there's no way to "sell" SmartOS or
             | TritonDC to die-hard Debian colleagues, but I've managed
             | to sell them ZFS. With personal stuff, I like my systems
             | to take care of themselves without constant babysitting,
             | and SmartOS or OpenBSD provide just that. I don't dislike
             | Windows; I just love UNIX. You could really feel those
             | extra 20 years UNIX had compared to Linux. I migrated all
             | my stuff to Proxmox for like two months. And then went
             | back to SmartOS, because there was something missing ...
             | probably elegance, sanity, simplicity, or even something
             | you'd call "hack value".
        
           | nwilkens wrote:
           | SmartOS has been actively developed since the acquisition
           | from Joyent[1] in April 2022.
           | 
           | We've released a new version every two weeks post
           | acquisition, and are continuing to develop and invest.
           | 
           | We also hold office hours events roughly every two weeks on
           | Discord[2], and would love for you to stop by and ask any
           | questions, or just listen along!
           | 
           | [1]: https://www.tritondatacenter.com/blog/a-new-chapter-
           | begins-f... [2]: https://discord.gg/v4NwA3Hqay
        
       | busterarm wrote:
       | Not that I'm not rooting for Oxide, but their product is still so
       | niche and early stage that I can't imagine any actual businesses
       | buying their stuff for a long time. They only just shipped their
       | first rack to their first customer at the end of last summer and
       | it's Idaho National Laboratory. State research institutions are
       | basically the only entities positioned to gamble on this right
       | now.
        
         | __float wrote:
         | We have historically had private institutions with impactful
         | research labs. Are there any of those still kicking?
        
         | lijok wrote:
         | This describes every single product in existence in its early
         | days. If you're planning to launch any other way, you've
         | doomed the company before you even launched. A lucky few
         | survive in spite of it, and that's what contributes to the
         | 9-out-of-10-startups-fail statistic.
         | 
         | Laser focus on the first set of customers that will help you
         | cross the chasm. Only then go after the mass market.
        
         | steveklabnik wrote:
         | Just a small note, but from when we announced this back in
         | October, two customers were mentioned:
         | https://oxide.computer/blog/oxide-unveils-the-worlds-first-c...
         | 
         | > Oxide customers include the Idaho National Laboratory as well
         | as a global financial services organization. Additional
         | installments at Fortune 1000 enterprises will be completed in
         | the coming months.
        
         | elzbardico wrote:
         | Large financial institutions, surprisingly, are good
         | customers for new, still-untested computing technology.
         | 
         | I would not be surprised if Oxide's next customers were a few
         | giant banks and funds.
        
           | __d wrote:
           | In my experience, some financial institutions have a very
           | good understanding of risk.
           | 
           | They are able to identify, and most importantly, quantify
           | risk in a way that many businesses cannot.
           | 
           | Consequently, they're able to take risks with new
           | hardware/software that other companies shy away from.
        
         | chologrande wrote:
         | I work at a recently IPO'd tech company. Oxide was a strong
         | consideration for us when we were evaluating on-prem. The
         | pitch lands even among folks who still think "on prem.... ew".
         | 
         | It looks like a cloud-like experience on your own hardware.
         | 
         | If only it were as cheap as Dell...
        
         | RandomChance wrote:
         | My company looked at them, and we were very impressed with the
         | product. The only issue was that they are built for general
         | compute and we really needed the option for faster processors.
        
         | newsclues wrote:
         | I hope they sooner or later release a smaller, cheaper
         | homelab product for people to learn on, or for startups; that
         | could lead to future rack sales or future hires.
        
           | steveklabnik wrote:
           | This is a common request and we absolutely understand the
           | desire, but I suspect such a thing, if ever, will be a long
           | time off. Given that the product is designed as an entire
           | rack, doing something like this would effectively be a
           | different product for a different vertical, and we have to
           | focus on our current business. Honestly it's kind of
           | frustrating not being able to reciprocate the enthusiasm back
           | in more than just words, but it is what it is.
        
             | newsclues wrote:
             | I appreciate the response, I totally understand and don't
             | expect it to materialize soon, but am still hopeful that
             | someday it will be a possibility.
        
         | technofiend wrote:
         | It is somewhat niche, but Broadcom's purchase of VMware now
         | puts Oxide closer to Nutanix, in that you can go buy a fully
         | supported virtualization platform from a vendor who welcomes
         | your business. I don't know the actual numbers, but it _seems_
         | Broadcom is only interested in enterprise customers with huge
         | annual spends.
        
       | temptemptemp111 wrote:
       | What even is Oxide Computer? It makes no sense - it was
       | publicized with all sorts of anti-blob, pro-freedom posts about
       | management engines, and as a sort of alternative to RaptorCS/IBM
       | (which now has blobs again)... Yet most of that stuff is now
       | buried/removed, and Oxide Computer is just a hardware platform
       | with unnecessary lock-in. For the bunker of the rich to be able
       | to run their own mini-cloud? Sure. For anything else it seems
       | like a bad design.
        
         | throwawaaarrgh wrote:
         | It's a mainframe. You use it like you use mainframes, but
         | probably easier, as they're adopting more modern functionality.
         | You won't be aware you're using it, just like you aren't aware
         | when you use a zSystem.
        
         | steveklabnik wrote:
         | > Yet most of that stuff is now buried/removed
         | 
         | Nothing has changed with regards to our anti-blob and pro-open
         | source stances. I am not sure what you're referring to here.
         | 
         | > with unnecessary lock-in.
         | 
         | What lock-in are you referring to here? The way that things run
         | on the rack is via virtual machines, you can run virtual
         | machines on many providers. We even have a terraform provider
         | so that you can use familiar tools instead of the API directly,
         | if you believe that is lock-in (and that stuff is all also
         | fully open source).
        
           | temptemptemp111 wrote:
           | I don't expect anyone to see my comments unless they're
           | really looking since I've been shadow banned for many years
           | now - so I appreciate your reply.
           | 
           | To be clearer regarding my questions:
           | 
           | - What happened to Project X (supposedly coreboot++ for the
           | latest AMD CPUs)? It seems dead, despite being more reported
           | on than Oxide's attempts at working with AMD (to achieve the
           | same outcomes, presumably - what's the difference?). Loads
           | of well-meaning people have approached this with virtue,
           | innocence and skills; perhaps another approach is needed
           | that fully respects the dynamic between the user, the chip
           | manufacturers, and the governments and banks they're in
           | debt to.
           | 
           | - Does Oxide attempt to sandbox, completely remove or 'verify
           | as benign' aspects like the PSP? For example, if someone
           | could verify that the PSP cannot possibly be affected over
           | the network, then peace of mind could be more affordable
           | regarding things like supply chain attacks and bad actors
           | with AMD/Intel/Apple management engine secrets.
           | 
           | I'm not referring to software lock-in, just hardware. And it
           | isn't as nefarious as other hardware lock-in (serialization;
           | see the Rossmann Group). Just hardware at the rack level:
           | replacing Oxide gear & upgrading Oxide gear (not sure about
           | repair, that could be easy). And if the offering were of a
           | less blobby architecture, then many of us would be happy to
           | pay a bit more for the hardware as a system. However, if the
           | hardware platform is FOSS, then it won't be unnecessarily
           | difficult to mix and match and integrate the Oxide gear with
           | other DC-class gear.
        
             | steveklabnik wrote:
             | So, your comment was not dead when I saw it. This reply
             | was, but apparently now has been vouched for.
             | 
             | > What happened to Project X (supposedly coreboot++ for
             | latest AMD CPUs)?
             | 
             | I don't recall what you're referring to specifically, maybe
             | this was a thing before I started at Oxide. I do know that
             | we deliberately decided to not go with coreboot. I believe
             | the equivalent component would be phbl[1]. It boots illumos
              | directly. Bryan gave a talk about how we boot[2][3] with
             | more reasoning and context.
             | 
             | > Does Oxide attempt to sandbox, completely remove or
             | 'verify as benign' aspects like the PSP?
             | 
             | The general attitude is still "remove or work around every
             | binary blob possible," but the PSP is unfortunately not
             | able to be worked around.
             | 
             | > However, if the hardware platform is FOSS
             | 
             | We fully intend to do this, by the way. Just haven't yet.
             | It'll come.
             | 
             | 1: https://github.com/oxidecomputer/phbl
             | 
             | 2: https://www.osfc.io/2022/talks/i-have-come-to-bury-the-
             | bios-...
             | 
              | 3: https://news.ycombinator.com/item?id=33145411
        
               | temptemptemp111 wrote:
               | To see more on "Project X", see the Phoronix article on
               | it. At the very least, it would be resourceful if the
               | Oxide devs had a chat with the Project X devs, who have
               | since given up - lessons can be learned and time can be
               | saved. And yes, coreboot itself is now untenable, but it
               | is also kind of a slang term for a category of deblobbed
               | software.
        
       | lifeisstillgood wrote:
       | I would be interested in _how did you first hear of Oxide_.
       | 
       | I somehow landed on their podcast because it covered <whatever
       | the hell I thought was interesting at that moment>.
       | 
       | The podcast is, for me, amazeballs marketing - it does
       | everything but sell their product (might be a good idea to add
       | a pitch into each outro!)
       | 
       | I mean, they talk about it, like "we had such a tough time
       | getting the compiler to do something something", and then veer
       | off to discuss back-in-the-day stories.
       | 
       | Ah, never mind. Keep talking, guys; hope it works out.
        
         | Hackbraten wrote:
         | Was following @jessfraz on Twitter back then, so I got word of
         | Oxide when they first announced it there.
        
           | __d wrote:
           | Between Jess, Bryan, and Adam, it was hard to miss :-)
        
         | zengid wrote:
         | Their original podcast, 'On The Metal', was infamous for its
         | overly repeated use of 2 or 3 pre-recorded self-promotions,
         | so much so that a fan recorded their own commercial for them
         | to air.
         | 
         | 'Oxide and Friends', however, isn't really what I would
         | consider a podcast, but a recording of live "spaces" or group
         | calls, beginning on Twitter and now happening in Discord. IMO
         | it's not best consumed as a podcast, but rather something to
         | participate in live. If you tune in live you'll pick up on
         | the vibe of the recordings a lot better.
         | 
         | https://oxide.computer/podcasts/oxide-and-friends
        
         | ahmedfromtunis wrote:
         | For me, it was when Pentagram showcased their branding when
         | Oxide was first announced.
        
       | nubinetwork wrote:
       | I'd been hoping for this since they announced the server
       | rack... nobody wants a paperweight if (God forbid) Oxide were
       | to go out of business.
        
         | steveklabnik wrote:
         | To be clear about it, the "paperweight problem" is very
         | important to us as well. It's worth remembering that the MPL
         | doesn't care whether a copy is posted openly on GitHub or not,
         | and (I am not a lawyer!) we have obligations to our customers
         | under it regardless of whether non-customers can browse the
         | code.
        
       | alberth wrote:
       | License
       | 
       | MPL 2.0 is an interesting license choice, for an operating
       | system.
       | 
       | EDIT: why the downvotes?
        
         | steveklabnik wrote:
         | Quoting from an RFD co-authored by bcantrill and myself
         | describing Oxide's policies around open source:
         | 
         | > For any new Oxide-created software, the MPL 2.0 should
         | generally be the license of choice. The exception to this
         | should be any software that is a part of a larger ecosystem
         | that has a prevailing license, in which case that prevailing
         | license may be used.
         | 
         | EDIT: I also am confused about why you are downvoted. Are
         | there any major operating system distributions that are MPL
         | licensed? I can't think of any off the top of my head. Beyond
         | that, it's a simple question.
        
           | alberth wrote:
           | > [if Oxide-created software] is a part of a larger ecosystem
           | that has a prevailing license, in which case that prevailing
           | license may be used
           | 
           | How does that work if the prevailing license is BSD/MIT/ISC?
           | 
           | You're saying that Oxide can then be licensed under
           | BSD/MIT/ISC?
        
             | steveklabnik wrote:
             | So I decided to cut off my quote but the next line has the
             | answer:
             | 
             | > For example, Rust crates are generally dual-licensed as
             | MIT/Apache 2.
             | 
             | We often produce components that we share with the broader
             | open source world. For example, dropshot[1] is our in-house
             | web framework, but we publish it as a standalone package.
             | It is licensed under Apache-2.0 instead of MPL 2.0 because
             | the norm in the Rust ecosystem is Apache and not MPL.
             | 
             | > You're saying that Oxide can then be licensed under
             | BSD/MIT/ISC?
             | 
             | I am saying that we do not have one single license across
             | the company. Some components are probably BSD/MIT/ISC
             | licensed somewhere, and I guarantee that some third party
             | dependencies we use are licensed under those licenses.
             | That's different from "you could choose to take it under
             | BSD," which I didn't mean to imply, sorry about that!
             | 
             | 1: https://crates.io/crates/dropshot
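              | 
              | For readers who haven't seen dropshot, a minimal endpoint
              | looks roughly like this. It's a sketch adapted from the
              | crate's published examples; exact types and server setup
              | vary between dropshot versions, so treat it as
              | illustrative rather than authoritative:
              | 
              |     // Rough sketch of a dropshot HTTP endpoint; see the
              |     // crate docs for the current API.
              |     use dropshot::{endpoint, ApiDescription, HttpError,
              |         HttpResponseOk, RequestContext};
              | 
              |     /// Responds to GET /hello with a greeting.
              |     #[endpoint { method = GET, path = "/hello" }]
              |     async fn hello(
              |         _rqctx: RequestContext<()>,
              |     ) -> Result<HttpResponseOk<String>, HttpError> {
              |         Ok(HttpResponseOk(String::from("hello")))
              |     }
              | 
              |     /// Builds the API description that a dropshot
              |     /// server would be started with.
              |     fn api() -> ApiDescription<()> {
              |         let mut api = ApiDescription::new();
              |         api.register(hello).unwrap();
              |         api
              |     }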
        
           | cosmic_quanta wrote:
           | I had to look up RFD, and I like the idea!
           | 
           | https://oxide.computer/blog/rfd-1-requests-for-discussion
        
             | steveklabnik wrote:
             | Ah thanks! Yeah I should have mentioned this in my comment,
             | thank you for adding the context.
             | 
             | By the way, you can browse public RFDs here:
             | https://rfd.shared.oxide.computer/
             | 
             | I didn't include any links to any RFDs in my comments today
             | because I have only been referencing non-public ones.
        
       | codethief wrote:
       | Can anyone ELI5 what Oxide's offering is? I've looked at their
       | website and still have no clue. Is it hardware + software I can
       | purchase and use on-premise? Is it a PaaS / yet another cloud
       | provider?
        
         | chrishare wrote:
         | On-prem, fully-integrated compute and storage solution with
         | cloud-like APIs to provision resources, all with a commitment
         | to open source.
        
           | mkoubaa wrote:
           | Mainframe 2.0
        
         | steveklabnik wrote:
         | I believe you're being downvoted because there is already a big
         | thread about this here, though I think that's a bit unfair to
         | you. I haven't posted in that thread yet because I wanted to
         | let others say what is meaningful about the product to them,
         | but this seems like a good place to put my reply. Regardless of
         | all that: it is hardware + software you can purchase and use
         | on-premise, that's correct.
         | 
         | The differentiator from virtually all existing on-prem cloud
         | products is that we are a single vendor who has designed the
         | hardware and software (which is as open source as we can
         | possibly make it, by the way, hence announcements like this) to
         | work well together. Most products combine various other
         | products from various vendors, and are effectively selling you
         | integration. We believe that that leads to all kinds of
         | problems that our product solves.
         | 
         | Another factor here is that we only have two SKUs: a half rack
         | and a full rack. You don't buy Oxide 1U at a time, you buy it a
         | rack at a time. By designing the entire rack as a cohesive
         | unit, we can do a lot of things that you simply cannot do in
         | the 1U form factor. There is a running joke that we talk about
         | our fans all the time, and it's true. Because our sleds have a
         | larger form factor than a traditional 1U, we can use larger
         | fans. This means we can run them at a lower RPM, which means
         | power savings. That's the deliberate design choice. But we also
         | have gained accidental benefits from doing things like this:
         | lower RPM also means that our servers are way quieter than
         | others. That's pretty neat. Some early prospective customers
         | literally asked if the thing was on when it was demo'd to them,
         | because it's so quiet. Is that a reason to buy a server? Not
         | necessarily, but it's just a fun example of some of the things
         | that end up happening when you re-think a product as a whole,
         | rather than as an integration exercise.
        
       | criddell wrote:
       | I'm unfamiliar with illumos so I went to their webpage and the
       | very first thing it says is:
       | 
       | > illumos is a Unix operating system
       | 
       | Is illumos an actual Unix (like macOS) or a Unix-like OS (like
       | GNU/Linux)?
        
         | chucky_z wrote:
         | Actual Unix. I believe it is in the Solaris family.
        
         | steveklabnik wrote:
         | Actual Unix. Wikipedia is pretty good:
         | https://en.wikipedia.org/wiki/Illumos
         | 
         | > It is based on OpenSolaris, which was based on System V
         | Release 4 (SVR4) and the Berkeley Software Distribution (BSD).
         | Illumos comprises a kernel, device drivers, system libraries,
         | and utility software for system administration. This core is
         | now the base for many different open-sourced Illumos
         | distributions, in a similar way in which the Linux kernel is
         | used in different Linux distributions.
        
         | Thoreandan wrote:
         | Nobody has paid to have it pass the Open Group UNIX branding
         | certification tests
         | 
         | https://www.opengroup.org/openbrand/register/
         | 
         | so it can't use the UNIX(tm) trademark.
         | 
         | But it's got the AT&T Unix kernel & userland sources contained
         | in it.
         | 
         | PDP-11 Unix System III: https://www.tuhs.org/cgi-
         | bin/utree.pl?file=SysIII/usr/src/ut...
         | 
         | illumos: https://github.com/illumos/illumos-
         | gate/blob/b8169dedfa435c0...
        
       | internetter wrote:
       | I really want one of these racks in my bedroom. Unfortunately,
       | somehow I think I couldn't afford one ;)
        
       | ahmedfromtunis wrote:
       | It is great that the software is open source, but would it be
       | useful to deploy it on other hardware?
       | 
       | And what would happen if, for whatever reason, a company could
       | no longer purchase Oxide racks? Would it need to start its
       | infra over, or could it just build around the Oxide hardware it
       | already has?
        
       | mihaic wrote:
       | I'm really curious: what kind of workload would companies want to
       | run on a custom Unix that isn't Linux/Mac/BSD?
       | 
       | I'm rooting for more mature OS diversity; I just have no idea
       | who the end users would be and what their needs would look like.
        
         | epistasis wrote:
         | ZFS is native on illumos, and the containerization equivalent
         | (zones), etc., is pretty great.
         | 
         | There's a good argument that your servers in the cloud don't
         | need to be on the same OS, as long as you can hire enough
         | talent to work on them.
        
       ___________________________________________________________________
       (page generated 2024-01-29 23:00 UTC)