[HN Gopher] Oxide at Home: Propolis Says Hello
___________________________________________________________________
Oxide at Home: Propolis Says Hello
Author : xena
Score : 187 points
Date : 2022-03-14 12:11 UTC (10 hours ago)
(HTM) web link (artemis.sh)
(TXT) w3m dump (artemis.sh)
| nwilkens wrote:
| I am very excited about what Oxide is doing, including how the
| work is being open sourced and upstreamed.
|
| I also love that they continued to bet on Illumos and am looking
| forward to the continued growth and development in the Illumos
| space.
| asdfljk3ljk wrote:
| gennarro wrote:
| Unrelated, but does anyone remember xoxide.com? The computer-
| building site from the early 2000s? One of my favorite sites
| of all time.
| dcre wrote:
| Oxide is hiring in the following areas: electrical engineering,
| security, embedded, control plane + API, internal systems
| automation, dev tools, and product design. (I work there.)
|
| https://oxide.computer/careers
| dls2016 wrote:
| Curious... if I'm still in the "triage" bucket after 6 weeks
| should I assume the ship has sailed? I was _really_ hoping to
| hear back one way or the other!
| dcre wrote:
| No, you will hear back. We're trying really hard to keep to
| the 6 weeks thing but sometimes we don't succeed.
| [deleted]
| 0des wrote:
| Please get back to them today; consider this a prompt to
| rescue this one from falling through the cracks. You never
| know...
| robocat wrote:
| I would think that startups would want to bias toward people
| who can make good decisions within a short decision window.
| Perhaps hardware startups want more conservative employees?
|
| Also in my experience, a fabulous candidate is sometimes
| only available for a very short time window (they either
| have become available due to unforeseen circumstances, or
| they are snapped up by a faster mover).
|
| Is six weeks fast in your opinion?
| rcarmo wrote:
| Kudos. I've been looking at that stuff but have lacked the
| bandwidth to even think about getting it to run (wish I could,
| really).
| psanford wrote:
| I've not looked at Illumos distros seriously in a long time, but
| the reason given for rejecting OmniOS seems really strange to me.
| My general impression was that OmniOS was a tight, server-
| oriented distro that had a sane release schedule and up-to-date
| security patches. Who cares if they use the word "enterprise" in
| their marketing copy?
| asdfljk3ljk wrote:
| nickdothutton wrote:
| There is still a place for fully integrated and engineered
| systems, if you need high levels of concurrency, performance,
| and availability, and need to know -- not guess -- what your
| opex is going to look like. I'm a user and supporter of
| cloud providers, but there are some fat, fat margins being booked
| there. Not every company can have a storage team, network team,
| and compute team to integrate those things properly.
|
| 0xide is one of the few really interesting new tech companies out
| there.
|
| I have to admit some bias, as I was involved with a company
| offering a "poor man's vBlock" around 2010. We didn't grow
| fast or large, but we never once lost a deal against commodity
| hardware vendors. They were easy to beat.
| BluSyn wrote:
| Is it weird that I'm insanely excited about Oxide as a product,
| even though I have absolutely no need or use case for it?
| mxuribe wrote:
| Same here!
| yjftsjthsd-h wrote:
| Not at all; they're building a cool tech stack, but the only
| thing they sell is super expensive hardware that no individual
| - and not even that many businesses! - is likely to be able to
| afford.
| bcantrill wrote:
| So, the only thing really inherent about our price point is
| that we're selling compute by the rack: as it turns out, a
| whole rack of server-class CPUs (and its accompanying DRAM,
| flash, NICs, and switching ASICs) is pretty expensive! But
| this doesn't mean that it's a luxury good: especially because
| customers won't need to buy software separately (as one does
| now for hypervisor, control plane, storage software, etc.),
| the Oxide rack will be very much cost competitive with extant
| enterprise solutions.
|
| Cost competitive as it may be, it doesn't mean that it hits
| the price point for a home lab, sadly. One of the (many)
| advantages of an open source stack is allowing people to do
| these kinds of experiments on their own; looking forward to
| getting our schematics out there too!
| eaasen wrote:
| It also turns out that not many people have 3-phase power
| and can support a heat/power load of 15kW in their homes ;)
| semi-extrinsic wrote:
| I actually suspect it would be a lot easier to support
| 15kW of power in my home than 15 kW of cooling.
|
| I know several people with 2x 240V 32A 3-phase in their
| garage; that's 20+ kW at any reasonable power factor. But
| a 15 kW cooler that would work in summer would annoy the
| hell out of any neighbours living closer than a mile.
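| (Back-of-the-envelope, assuming 240 V is the line-to-line
| voltage: sqrt(3) x 240 V x 32 A is roughly 13.3 kVA per feed,
| so two feeds come out around 21 kW even at a 0.8 power
| factor.)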
| 0des wrote:
| Simple solution: Turn those neighbours into shareholders
| and they can sleep to the sound of money all summer long
| :)
| mwcampbell wrote:
| Where does this leave companies that would like to take
| advantage of fully integrated software and hardware (yes,
| intentionally referring to your old project at Sun), but
| don't need a full rack's worth of computing power (and
| maybe never will), and don't have the in-house skills to
| roll their own? Or do you think that what you're selling
| really only has significant benefits at a large scale?
| dijit wrote:
| I think the intention is that those people are better
| served by consolidated cloud providers -- or even by single
| digits of physical colocated servers.
|
| It would be nice to have a known price point from a cloud
| provider which, once exceeded, prompts the question: "Should
| we buy a rack and colo it?" Even if the answer is "no",
| it's still good to have that option.
|
| ---
|
| The thing is: datacenter technology has moved on from
| 2011 (when I was getting into datacenters), but only for
| the big companies (Google, Facebook, Netflix). I think
| Oxide is bringing the benefits of a "hyperscale"
| deployment to "normal" (i.e. single/double-digit rack)
| customers.
|
| Some of those benefits are things like much more efficient
| DC converters, so not every machine needs to do its own
| AC/DC conversion.
| mwcampbell wrote:
| What's kind of messed up, at least for tiny companies
| like mine, is that renting an ugly PC-based dedicated
| server from a company like OVH is currently cheaper than
| paying for the equivalent computing power (edit: and
| outgoing data transfer) from a hyperscale cloud provider
| like AWS, even though the hyperscalers are probably using
| both space and power more efficiently than the likes of
| OVH. My cofounder will definitely not get on board with
| paying more to get the same (or less) computing power,
| just for the knowledge that we're (probably) using less
| energy. I don't know what the answer is; maybe we need
| some kind of regulation to make sure that the
| externalities of running a mostly idle box are properly
| factored into what we pay?
| dijit wrote:
| You're amortising a lot of software developers and
| sysadmins with your AWS bill. It's also in fashion, so it
| comes at a bit of a premium.
|
| They're not reasonably equivalent. But I don't doubt that
| Amazon is still laughing all the way to the bank.
| zozbot234 wrote:
| > renting an ugly PC-based dedicated server from a
| company like OVH is currently cheaper than renting the
| equivalent computing power from a hyperscale cloud
| provider like AWS
|
| That's not surprising; you're basically paying for
| scalability. An idle box doesn't even necessarily "waste"
| all that much energy if it's truly idle, since "deep"
| power-saving states are used pretty much everywhere these
| days.
| mwcampbell wrote:
| Sure, the CPU may enter a power-saving state, but
| presumably for each box, there's a minimum level of power
| consumption for things like the motherboard, BMC, RAM,
| and case fan(s). The reason why AWS bare-metal instances
| are absurdly expensive compared to OVH dedicated servers
| is that AWS packs more computing power into each box. So
| for each core and gigabyte of RAM, I would guess AWS is
| using less power (edit: especially when idle), because
| they don't have the overhead of lots of small boxes. Yet
| I can have one of those small boxes to myself for less
| than I'd have to pay for the equivalent computing power
| and bandwidth from AWS.
| zozbot234 wrote:
| Interestingly, I believe that unused DIMM modules _could_
| be powered down if the hardware bothered to support that.
| Linux has to support memory hotplug anyway because it's
| long been in use on mainframe platforms, so the basic OS-
| level support is there already. Since it's not being
| addressed in any way by hardware makers, my guess is that
| RAM power use in idle states is low enough that it
| basically doesn't matter.
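|
| If you want to poke at it, the hotplug plumbing is already
| visible from userspace. A minimal sketch in Rust (assuming a
| Linux kernel that exposes the documented memory-hotplug sysfs
| layout under /sys/devices/system/memory):
|
|     use std::fs;
|
|     fn main() -> std::io::Result<()> {
|         // Each hot(un)pluggable block appears as
|         // /sys/devices/system/memory/memoryN; its "state"
|         // file reports "online" or "offline".
|         for entry in fs::read_dir("/sys/devices/system/memory")? {
|             let entry = entry?;
|             let name = entry.file_name();
|             let name = name.to_string_lossy();
|             if !name.starts_with("memory") {
|                 continue;
|             }
|             let state = fs::read_to_string(entry.path().join("state"))
|                 .unwrap_or_default();
|             println!("{}: {}", name, state.trim());
|         }
|         Ok(())
|     }
|
| Writing "offline" into a block's "state" file asks the kernel
| to vacate it, which is the software half of powering a DIMM
| down; the missing half is hardware that actually cuts power.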
| jjav wrote:
| I'm a huge fanboy of Oxide, hope they succeed, the world needs
| more of this.
|
| I'm very sad about what Silicon Valley has become. We speak of
| "tech companies" but they mostly no longer exist. What are the
| big names in Silicon Valley now? Advertising companies, for the
| most part. A movie company. Online shopping. Social whatevers.
| None of these companies sell tech products, they are not tech
| companies. Sure, they use tech internally but so do law offices
| and supermarkets, those aren't tech companies either.
|
| I miss the Silicon Valley of actual tech companies: Sun, SGI,
| HP (the actual tech HP of back then), etc. Apple survives, but
| it is focused on consumer-level stuff which I don't find
| interesting. Oracle is around, but they were always more of a
| lawyer shop than a tech company. Real hardcore tech companies
| -- do any exist anymore?
|
| Oxide is such fresh air, exciting!
|
| Every week or so I'm about to send an application, I really
| want to work there. My partner would kill me though, so I
| haven't. (They have a flat pay scale that, when living in
| Silicon Valley, would make it very difficult to support a
| family... so I'm stuck cheering from the sidelines.)
| EvanAnderson wrote:
| I'm enthusiastic about their products and the company in
| general, too. I don't often feel like I'd like to be an
| employee, but Oxide sounds like it would be a very exciting gig
| (but I lack any skill set to remotely justify even contacting
| them-- I don't think they're looking for heavily opinionated
| Windows / Linux sysadmins >smile<).
|
| Their gear is targeted at a way larger scale than I'll ever get
| to use (what with the size of environments I work in). What I
| hear about their attitudes re: firmware, for example, makes me
| wish that I could have their gear instead of the iDRACs,
| PERCs, and other closed-source roach-motel hardware I'm stuck
| with.
|
| I'm young enough that I just missed the era of computers that
| Oxide evokes. I put in a couple DEC Alpha-based machines in the
| late 90s and got a glimpse of what it might be like to have a
| vendor who provides a completely integrated hardware/software
| stack and "ecosystem". I'm sure there was operational advantage
| to being a "DEC shop" or a "Sun shop". The PC market crushed
| that old school model by wringing out the margin necessary to
| make that kind of company work. I'd love to see Oxide make a go
| of it, though.
| bityard wrote:
| Nope, it's proof that their marketing team is doing a great
| job.
| bcantrill wrote:
| Especially because there isn't one!
| yjftsjthsd-h wrote:
| Well, on paper :) You, personally, are an amazing marketing
| department no matter what your official title is; "The Soul
| of a New Machine", for instance, is brilliant at getting
| mindshare. To be fair, I'm fairly sure you don't think of
| what you're doing as marketing, but the only difference I
| see is that this is much more natural/sincere than 99.99%
| of similar efforts - you're actually just that passionate
| and good at sharing your passion.
| bcantrill wrote:
| Ha, entirely fair! When we raised our initial round, we
| said that our podcasting microphones were the marketing
| department, which proved prophetic.
| skadamat wrote:
| This is low key the Developer Relations playbook! I heard
| about Oxide thru the awesome On the Metal podcast y'all
| started :]
| caslon wrote:
| They're a tech company that makes actual technology. You're
| allowed to be excited. Better technology upstream has a habit
| of floating down the stack in some form or another, even when
| lawyers make it hard. ZFS, for example, was released under a
| GPL-incompatible license (some will argue intentionally), yet
| it has influenced years of filesystem design in the Linux
| ecosystem -- btrfs, coincidentally also owned largely by
| Oracle from an IP standpoint, being one example.
|
| Who knows? In twenty years, we could see something cool come
| out of this, like better U-Boot tooling. Or maybe they'll be
| purchased by Oracle, which would if nothing else be funny this
| time.
| TimTheTinker wrote:
| > Or maybe they'll be purchased by Oracle, which would if
| nothing else be funny this time.
|
| Oracle will likely be _very_ interested in Oxide, but I
| suspect Bryan Cantrill would do everything in his power to
| prevent that from happening. He's seen the lawn mower in action
| before and knows not to anthropomorphize it :)
| EvanAnderson wrote:
| That is the really cool thing about them-- they're actually
| _making new computers_ and the software and firmware to go
| with them.
|
| Everything else "new" seems to be a rehash of the IBM PC (my
| "server" has an ISA-- ahem-- "LPC" bus... >sigh<). It's so
| refreshing to see something actually new.
|
| The same goes for software and firmware. Any "new" systems
| software of the last 10 years seems to be a thin management
| veneer over real technologies like the Linux kernel, KVM and
| containers, GNU userland, etc. And it all ends up running on
| the same cruddy BMCs, "lights-out" controllers, embedded
| RAID controllers, etc.
|
| I get a little bit of excitement at ARM-based server
| platforms (and RISC-V, for that matter) but everything there
| seems to be at even less of an "enterprise" level (from a
| reliability, serviceability, and management perspective) than
| the PC-based servers I already loathe.
| zozbot234 wrote:
| KVM and containerization are not just "thin management
| veneers", they enable all sorts of new features.
| EvanAnderson wrote:
| I'm sorry I wasn't clear. KVM and containers are the
| technology. The "new" stuff I'm talking about are thin
| management veneers over these features.
| zozbot234 wrote:
| Strictly speaking, kernel-level namespaces are the
| technology. "Containers" are a pattern based on kernel-
| level namespaces, and "thin management veneers" help make
| sense of the underlying technology and implement that
| pattern.
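|
| To make that concrete, a minimal sketch (Rust with the libc
| crate, Linux only, and it needs root or CAP_SYS_ADMIN) -- the
| "container" primitive here is nothing more than a kernel
| namespace:
|
|     fn main() {
|         // Leave the parent's UTS namespace; hostname changes
|         // become private to this process and its children.
|         let rc = unsafe { libc::unshare(libc::CLONE_NEWUTS) };
|         if rc != 0 {
|             eprintln!("unshare failed (not root?)");
|             return;
|         }
|         let name = b"sandboxed";
|         unsafe {
|             libc::sethostname(name.as_ptr() as *const libc::c_char,
|                               name.len());
|         }
|         // Everything a runtime layers on top of calls like this
|         // -- images, cgroups, networking -- is the pattern and
|         // the veneer, not the underlying technology.
|         println!("this process now has its own hostname");
|     }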
| rob74 wrote:
| Only tangentially related: what I find strange about "Oxide"
| (styled as "0xide" - the first character is a zero) is that they
| got very close to actually having a valid hex number in
| C/C++/Rust notation (0x1de) as a logo, but stopped short...
| tonoto wrote:
| It still amazes me that Oxide actually managed to grab the
| perfect PCI vendor ID:
|
| https://pcisig.com/membership/member-companies?combine=01de
| monocasa wrote:
| That's fantastic. And here I thought that Intel's vendor ID
| of 0x8086 was cute.
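|
| Both are easy to spot from userspace, too. A minimal sketch
| (Rust, assuming a Linux host where sysfs exposes PCI functions
| under /sys/bus/pci/devices):
|
|     use std::fs;
|
|     fn main() -> std::io::Result<()> {
|         // Each PCI function reports its vendor ID as a hex
|         // string, e.g. "0x8086" (Intel) or "0x01de" (Oxide).
|         for entry in fs::read_dir("/sys/bus/pci/devices")? {
|             let dev = entry?.path();
|             let vendor = fs::read_to_string(dev.join("vendor"))?;
|             match vendor.trim() {
|                 "0x01de" => println!("{}: Oxide", dev.display()),
|                 "0x8086" => println!("{}: Intel", dev.display()),
|                 other => println!("{}: {}", dev.display(), other),
|             }
|         }
|         Ok(())
|     }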
| kaoladataveez wrote:
| yes it's crazy!
| BaconPackets wrote:
| I'm currently reading up on this, but I'm struggling to pin
| down a use case.
|
| It's not Openstack. It's not VMware. It's not kubernetes. It's
| not proxmox. It's not Xen. It's not Anthos. It's not GCDE. It's
| not Outposts.
|
| So who and what is it for? What is the use case where none of
| these other products fits the bill?
|
| Especially for an on premise use case.
| steveklabnik wrote:
| This article is about technical details of the product that
| aren't user-facing.
|
| The business is fairly straightforward: we sell computers, a
| rack at a time. You as a customer can buy a rack, and put it in
| your data center. The rack offers an in-browser management
| console, built on top of an API you can use too. You use these
| tools to set up virtual machines. You can then use those VMs
| however you want. You get the cloud deployment model but with
| the "I buy the servers" ownership model.
|
| There are a few different advantages depending on how you want to
| look at it.
|
| Starting from a rack as the smallest unit rather than 1U brings
| a lot of advantages, but there aren't really vendors currently
| selling these sorts of things; instead, "the hyperscalers" have
| internal teams building stuff like this. There are a lot of
| organizations who want hyperscale style servers but aren't
| going to start a division to begin making them themselves.
|
| Another advantage is that everything is designed to work with
| the rest of it: you (or the OEM you're buying from) are not
| cobbling together a bunch of hardware, firmware, and software
| solutions from disparate vendors and hoping the whole thing
| works. Think "Apple," or "Sun," rather than "IBM PC
| Compatible." This is easier for users, as well as allows us to
| build systems we believe are more reliable.
|
| There are also smaller things, like "as much as possible
| everything is open source/free software," which matters to some
| folks (and allows for interesting things like the above blog
| post to happen!) and is less important to others.
| BaconPackets wrote:
| Thanks! That gives me a bit more context.
| steveklabnik wrote:
| You're welcome! Sorry you're being downvoted, no idea
| what's up with that, it's a reasonable question. Sometimes
| our stuff can seem opaque, but that's because we're mostly
| focused on shipping right now, rather than marketing.
| Always happy to talk about stuff, though.
| BaconPackets wrote:
| :shrug: That's fine, I would always rather have a
| conversation. Thanks for your time!
| [deleted]
| proxysna wrote:
| Glad to see Oxide on HN. Sick stack.
| stock_toaster wrote:
| I use vm-bhyve on FreeBSD current. The little bit of Propolis
| described in the post reminds me of it a bit, but in Rust
| (with a service/API interface) instead of just a CLI and
| shell. Sounds neat!
|
| I wonder how hard it would be to port to FreeBSD.
| ArchOversight wrote:
| There are some changes they have made to bhyve that would
| need to be ported to FreeBSD first.
| cmdrk wrote:
| I get that Oxide has a lot of ex-Joyent folks, but I can't help
| but wonder how much the choice of a Solaris-derived OS will
| hobble adoption, support for devices, etc. In many ways this
| feels like SmartOS all over again - a killer concept that will be
| eclipsed by a technically inferior but more tractable (for
| contributors, for management) solution.
| wmf wrote:
| If the Oxide stack is good someone could make a name for
| themselves by porting it to Linux to get wider hardware
| support.
| zozbot234 wrote:
| It might make even more sense to run a cutting-edge
| distributed OS on the actual Oxide hardware. With rack-scale
| platforms like this it could be feasible to do SSI with
| distributed memory across multiple "nodes". Current "cloud"
| platforms like Kubernetes are already planning on including
| support for automated checkpointing and migration, which is
| sort of the first step prior to going fully SSI.
| bitbckt wrote:
| I miss IRIX, too. :)
| yjftsjthsd-h wrote:
| Depends how deeply integrated it is; if nothing else, I
| suspect that bhyve and kvm have sufficiently different APIs
| that it would be at least quite annoying to paper over the
| differences.
| porker wrote:
| As someone who got massively excited by SmartOS, only to see
| adoption never reach even the minimal levels I hoped for - yes,
| I hear you.
|
| What would the hosting story look like now if 8(?) years ago
| 25% of servers had adopted SmartOS?
| EvanAnderson wrote:
| I got massively excited by SmartOS, too. Coming from a
| vSphere and Hyper-V world I found it to be a joy to use. The
| way that Illumos zones leverage ZFS is really, really cool.
| The tooling in SmartOS was very nice to use, too.
|
| I never used it in production anywhere, admittedly. I also
| never got a chance to try out Triton. I'm on the fence about
| whether or not to keep my SmartOS hosts in my home network now
| that Illumos is a second-class citizen when it comes to
| OpenZFS.
| NexRebular wrote:
| We run Triton and individual SmartOS boxes in production.
| They have pretty much replaced all of our VMware and Linux-
| based hypervisors, save for some very specific use cases,
| mostly relating to GPU passthrough and NVIDIA.
|
| The case with OpenZFS does worry me as well. I fear the
| developers will slowly start introducing Linuxisms, thus
| sacrificing portability and stability for the great penguin.
| anaisbetts wrote:
| I thought this too, but if the goal of the control plane
| software and blade host OS is solely to create VMs and not to
| actually be a general-purpose OS, this probably doesn't matter
| as much?
| mise_en_place wrote:
| Idk, KVM/libvirt/qemu has worked fine for me. It is very
| lightweight compared to, say, VMware. If I don't want VMs I
| could use Docker/containerd.
|
| What problem does Oxide solve exactly?
| rfoo wrote:
| Sell you a few racks of servers, ready to use out of the
| box, I guess?
| yjftsjthsd-h wrote:
| > but I can't help but wonder how much the choice of a Solaris-
| derived OS will hobble adoption, support for devices, etc.
|
| Does it matter? Oxide is building the whole stack - their own
| hardware with their own firmware to run their own OS with their
| own virtualization layer. They don't _need_ support for
| arbitrary devices, because they control the hardware.
| qbasic_forever wrote:
| I really worry about a startup taking a massive bet on their
| own custom hardware now in 2022. The world was much, much
| different in December 2019 when Oxide started than it is now.
| Let's hope the investment cash keeps flowing and the hardware
| gets to folks that purchased it.
| bsder wrote:
| Right now is, in fact, the _best_ time to be betting on
| custom hardware.
|
| Moore's Law has been dead for a while. Getting
| "performance" now requires design and architecture again
| rather than just sitting back for 18 months and letting
| Moore's Law kill your competitor.
|
| The big problem right now is that custom _chip_ hardware is
| still too stupidly expensive because of EDA software. Fab
| runs are sub-$50K, but EDA software is north of $100K per
| seat and goes up rapidly from there.
| zozbot234 wrote:
| Do you really need proprietary EDA tools to get started
| on designing custom chips? Higher-level design languages
| like Chisel are showing a lot of potential right now,
| with full CPU cores being designed entirely in such
| languages. Of course EDA will be needed once the high-
| level design has to be ported to any specific hardware-
| fabbing process, but that step should still be relatively
| simple since most potential defects in the high-level
| design will have been shaken out by then.
| bsder wrote:
| > Do you really need proprietary EDA tools to get started
| on designing custom chips?
|
| Yes, actually, you do.
|
| The "interesting" bits in chip design aren't the digital
| parts--the interesting bits are all analog.
|
| A RISC core is an undergraduate exercise in digital
| design and synthesis in any HDL--even just straight
| Verilog or VHDL. It's a boring exercise for anyone with a
| bit of industry experience as we have infinite and cheap
| digital transistors. (This is part of the reason I regard
| RISC-V as a bit interesting but not that exciting. It's
| fine, but the "RISC" part isn't where we needed
| innovation and standardization--we needed that in the
| _peripherals_.)
|
| However, the interfaces are where things break down. Most
| communication is now wireless (WiFi, BLE, NB-IoT) and
| that's all RF (radio frequency) analog. Interfacing
| generally requires analog to digital systems (ADCs and
| DACs) and those are, obviously, analog. Even high-speed
| serial stuff requires signal integrity and termination
| systems--all of that requires parasitic extraction for
| modeling--yet more analog. And MEMS are even worse as
| they require _mechanical_ modeling inside your analog
| simulation.
|
| If your system needs to run on a coin cell battery,
| that's _genuinely_ low power and you are optimizing even
| the digital bits in the analog domain in order to cut
| your energy consumption. This means that nominally
| "digital" blocks like clocks and clock trees now become
| tradeoffs in the analog space. How does your debugging
| unit work when the chip is in sleep?--most vendors just
| punt and turn the chip completely on when debugging but
| that screws up your ability to take power measurements.
| And many of your purely digital blocks now have "power
| on/power off" behavior that you need to model when your
| chip switches from active to sleep to hibernate.
|
| All this is why I roll my eyes every time some group
| implements "design initiatives" for "digital" VLSI
| design--"digital" VLSI is "mostly solved" and has been
| for years (what people behind these initiatives are
| _really_ complaining about is that good VLSI designers
| are _expensive_--not that digital VLSI design is
| difficult). The key point is _analog_ design (even and
| especially for high performance digital) with simulation
| modeling along with parasitic extraction being the
| blockers. Until one of these "design initiatives"
| attacks the analog parasitic extraction and modeling,
| they're just hot air. (Of course, you can turn that
| statement around and say that someone attacking analog
| parasitic extraction means they are _VERY_ serious and
| _VERY_ interesting.)
| zozbot234 wrote:
| > It's a boring exercise for anyone with a bit of
| industry experience as we have infinite and cheap digital
| transistors.
|
| Having "infinite and cheap" transistors is what makes
| hardware design _not_ boring. It means designs in the
| digital domain are now just as complex as the largest
| software systems we work with, while still being mission-
| critical for obvious reasons (if the floating point
| division unit you etched into your latest batch of chips
| is buggy and getting totally wrong results, you can't
| exactly ship a _software_ bugfix to billions of chips in
| the field). This is exactly where we would expect
| shifting to higher-level languages to be quite
| worthwhile. Simple RISC cores are neither here nor there;
| practical multicore, superscalar, vector, DSP, AI etc.
| etc. is going to be a _lot_ more complex than that.
|
| Complicated analog stuff can hopefully be abstracted out
| as self-contained modules shipped as 'IP blocks',
| including the ADC and DAC components.
| zozbot234 wrote:
| Why? If anything, commodity/non-custom hardware is what's
| hurting right now. Fat margins on hardware imply a kind of
| inherent flexibility that can be used to weather even
| extreme shocks.
| qbasic_forever wrote:
| There are plenty of commodity chips that go into making a
| full server rack. If any little power regulator, etc. is
| backordered for months and years it's just more
| unexpected pain. And that's before we even get to the
| problems of entire factories shutting down, just look at
| what's happening to Apple & Foxconn of all companies in
| Shenzhen this week. If the big players are struggling the
| small fries are in for pain too.
| bcantrill wrote:
| The supply chain crisis is very, very real, but we are
| blessed with absolutely terrific operations folks coming
| from a wide range of industrial backgrounds (e.g., Apple,
| Lenovo, GE, P&G). They have pulled absolute supply chain
| miracles (knocking loudly on wood!) -- but we have also
| had the luxury of relatively small quantities (we're not
| buying millions of anything) and new design, where we can
| factor in lead times.
|
| tl;dr: Smaller players are able to do things that larger
| players can't -- which isn't to minimize how challenging
| it currently is!
| 0des wrote:
| Just curious, are you all working out of the same place
| or all remote? Curious about hardware startups and how
| that works. Thanks
| steveklabnik wrote:
| We have an office, but many people aren't in the Bay Area
| (myself included). Not everyone is doing hardware, and
| some folks who do have nice home setups they enjoy
| working with. It's a spectrum, basically.
| 0des wrote:
| Thanks steve
| wmf wrote:
| They still have to write a bunch of drivers that they'd get
| for free with Linux. Clearly they think the tradeoff is worth
| it but it's not obvious why.
| KerrAvon wrote:
| Possible better security/performance through better
| architecture?
| wmf wrote:
| From what I've seen so far the same architecture could be
| achieved on Linux (e.g. Firecracker or Intel Cloud
| Hypervisor). To get great performance you often need to
| get elbow-deep in somebody else's driver and that may be
| just as much work as writing your own drivers.
| NexRebular wrote:
| Not everything has to run Linux. There's enough of it in
| IT already.
| pjmlp wrote:
| We don't need Linux monoculture.
| yardie wrote:
| Love what they are doing. And as a cloud and on-prem supporter I
| get what they are trying to accomplish.
|
| If you haven't heard, you should check out their podcast,
| "On the Metal" [0]. It's truly a gift, especially if you are
| an elder millennial.
|
| [0] https://oxide.computer/podcasts
| jjav wrote:
| The podcast is awesome. Wish they would continue with more
| episodes!
| mwcampbell wrote:
| They've been doing Twitter Spaces for several months now,
| with recordings and show notes here:
| https://github.com/oxidecomputer/twitter-spaces Disclosure: I
| was the main speaker on one of their spaces.
| RealityVoid wrote:
| Whaaaaaaattt. And I've been here waiting for a new podcast,
| like an idiot... when they had this... Thanks for the info!
| NexRebular wrote:
| I wonder if that stack would run on an HPE BladeSystem. Definitely
| something to try out after hours...
___________________________________________________________________
(page generated 2022-03-14 23:01 UTC)