[HN Gopher] Illumos to drop SPARC Support
___________________________________________________________________
Illumos to drop SPARC Support
Author : octotoad
Score : 128 points
Date : 2021-05-07 10:22 UTC (12 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| cbmuser wrote:
| Why don't they just upgrade GCC to a more recent version? GCC
| still actively supports SPARC to this day, and Rust support is
| also present; while not perfect, it definitely works.
|
| So, while I don't really have a problem with removing SPARC
| support from Illumos, which I wouldn't be using on SPARC systems
| anyway, the reasons mentioned in the document don't convince me
| at all.
|
| FWIW, we still support sparc64 in Debian Ports:
|
| > https://cdimage.debian.org/cdimage/ports/current/
| sgift wrote:
| How would upgrading GCC help with the problem that they don't
| have SPARC machines available for building Illumos?
| cbmuser wrote:
| Well, they could reach out to the Debian or Gentoo community
| and ask for help. We would be happy to help them.
| spijdar wrote:
| It wouldn't, but they state that one of the benefits of
| dropping SPARC is being able to "[retire] the now-ancient GCC
| 4.4.4 shadow compiler that remains chiefly to support the
| SPARC platform"
|
| I'm guessing the problem isn't that newer GCC lacks SPARC
| support, but that their (now very old and bitrotted) SPARC
| support relies on some kind of undefined behavior or nuance
| of GCC 4 that prevents newer versions from building the
| kernel.
| 1over137 wrote:
| Illumos is a BSD, so perhaps, like other BSDs, they don't
| want to rely on GPLv3 projects like (current) gcc.
| spijdar wrote:
| Illumos is a fork of Solaris, itself a UNIX System V
| derived OS, very much not a BSD, though taking some code
| from BSD. I'm not sure what license they make new
| additions under, but the core is CDDL licensed and not
| BSD licensed, anyway.
|
| (side tangent FWIW, NetBSD has no qualms with using GPLv3
| GCC, only Free/Open do)
|
| (Edit for another historical tangent: Sun helped create
| System V release 4, which specifically combined elements
| of older SysV with BSD. Additionally, Solaris/"SunOS 5"'s
| predecessor SunOS 4 _was_ a straight-up BSD. So Solaris
| is a pretty BSD-y UNIX, in a way...)
| rjsw wrote:
| NetBSD can optionally be built with LLVM/Clang. It can be
| cross-compiled using either toolchain.
| spijdar wrote:
| Well, yes -- it's nice and portable like that. I think at
| some point they got it building with PCC and TCC too?
|
| My point is unlike the other BSDs they haven't made a
| point of deprecating/removing GCC from the source tree,
| or even using LLVM by default where they can. For third
| parties worried about GPLv3, you can easily delete all
| GPLv3 code with a rm -rf src/external/gpl3 :)
| rjsw wrote:
| LLVM doesn't support all the CPU architectures that GCC
| does, ones missing include 68k, VAX and SH3.
| octotoad wrote:
| Illumos is from the System V family tree, as it is a
| direct descendant of OpenSolaris.
| ch_123 wrote:
| While I don't follow Illumos closely, I know that there was
| a project to remove the dependency on the Sun Studio
| compiler suite, so it's possible that the reliance on the
| old gcc version has something to do with that.
| ptribble wrote:
| Because it's not a stock gcc. It's specially modified to do
| things quite differently on SPARC. If it was a case of "just"
| upgrading we would have done it long ago.
|
| As one of approximately 2 people who actually build illumos on
| SPARC, I can testify that the whole thing is enough of a
| maintenance burden that it's causing major problems.
|
| (And I'll just fork a copy and use that; it's not as though the
| SPARC ecosystem is a moving target.)
| yjftsjthsd-h wrote:
| > (And I'll just fork a copy and use that; it's not as though
| the SPARC ecosystem is a moving target.)
|
| This was actually one of my first questions on seeing the
| announcement - is Tribblix SPARC going to continue, or will
| this upstream change eventually EOL that as well?
| ptribble wrote:
| Tribblix SPARC will continue. While you lose any new
| features that go into illumos upstream, at least you don't
| keep getting broken by changes going into illumos upstream.
|
| (This isn't a commitment for all time, naturally. At some
| point the SPARC hardware I have will stop working. But it
| turns out to be solidly built and impressively reliable.)
| yjftsjthsd-h wrote:
| So for the foreseeable future you intend to just freeze
| SPARC Tribblix on the last Illumos to support SPARC? I
| suppose it should stay ABI compatible with other
| components ~forever, so that shouldn't even hold back
| other pieces of the system.
| tytso wrote:
| I hope Debian manages to keep sparc64 in Debian Ports. Just
| last night, I fixed an alignment portability issue for
| e2fsprogs which _only_ showed up on sparc64. The "non-
| portable" code worked just fine on all of the officially
| supported Debian architectures, and the only one where one of
| the alignment problems showed up was sparc64[1]. (Two of the
| alignment problems did show up when running a 32-bit arm chroot
| on a 64-bit arm kernel, but one of them, a failure in the
| regression test j_recovery_fast_commit, could only be
| reproduced on sparc64.)
|
| Sparc64 support is rocky; yes, it has a modern gcc, but stack
| unwinding in gdb is totally broken. (Not sure if that's a gdb
| or a gcc problem, but just try building some trivial program
| where main() calls some trivial function, and then try setting
| a breakpoint inside that function, and then try getting a stack
| trace.) This made finding the root cause of the alignment bug-
| induced crash much harder, but at least sparc64 served as a
| canary in the coal mine. Supporting niche architectures is
| great from a software quality perspective.
|
| [1]
| https://buildd.debian.org/status/fetch.php?pkg=e2fsprogs&arc...
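A sketch of the kind of alignment fix described above (the helper names here are illustrative, not from the actual e2fsprogs patch): on sparc64 a misaligned 32-bit load traps with SIGBUS, while x86 quietly tolerates it, which is why the bug only surfaced on that port.

```c
#include <stdint.h>
#include <string.h>

/* Non-portable: only safe when p is suitably aligned.  On a
 * strict-alignment CPU like sparc64, a misaligned pointer here
 * raises SIGBUS at runtime. */
static uint32_t read_u32_unsafe(const void *p)
{
    return *(const uint32_t *)p;
}

/* Portable: memcpy lets the compiler emit byte loads, or a single
 * aligned load when it can prove the pointer is aligned.  On x86
 * this typically compiles to the same single instruction. */
static uint32_t read_u32_safe(const void *p)
{
    uint32_t v;
    memcpy(&v, p, sizeof(v));
    return v;
}
```

On relaxed-alignment architectures both versions behave identically, which is exactly why this class of bug survives until a sparc64 (or similar) build runs the tests.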
| rbanffy wrote:
| A fun way to make Oracle donate a machine would be to make an
| official POWER port.
|
| That's a lot of work and I don't see IBM making a machine
| available.
| cbmuser wrote:
| > A fun way to make Oracle donate a machine would be to make an
| official POWER port.
|
| We tried to convince Oracle to donate a SPARC machine for
| Debian but that unfortunately never happened for various
| reasons.
| spijdar wrote:
| There are plenty of people who would donate a virtual machine
| or remote work environment for this sort of thing, if anyone
| was (seriously) interested in porting. Realistically, I doubt
| anyone wants to put that kind of effort in. Getting FreeBSD and
| more recently OpenBSD working on PowerNV platforms has taken a
| very large effort by the community, and those OSes already had
| some support for POWER/PowerPC. Illumos only has the x86 and
| (very bitrotted) SPARC support AFAIK; I believe there have been
| some attempts at bringing up ARM, but I don't see anything
| substantial.
|
| More on topic, as someone sentimental for SPARC hardware and to
| an extent solaris, this is sad to see, but it feels like just a
| formality. I don't think illumos has worked properly on SPARC
| hardware since ... Ever? There were a few illumos distributions
| with SPARC builds, but I always had trouble getting them to run
| on even a T2; it seems little work was done for SPARC after the
| rebranding from OpenSolaris. Linux and OpenBSD have been much,
| much better on SPARC than Illumos, tinged with bitter irony...
| 4ad wrote:
| Convince Oracle to donate a machine... for an Oracle Solaris
| competitor?! Oracle was the one that shut down OpenSolaris in
| the first place!
| rbanffy wrote:
| The only way they'd do it is if someone made a Solaris-killer
| that ran on a SPARC-killer that could run an Oracle-killer
| database.
|
| But IBM wouldn't help someone to build an AIX-killer OS.
| hulitu wrote:
| This is really sad. The world is heading to a duopoly x86 - arm.
| Alpha is dead, Mips is almost dead, PA-RISC is dead, POWER is too
| expensive and RISC-V is mostly nice to have.
| macspoofing wrote:
| It was only a few years ago that the world was heading to an
| x86 monopoly.
| spamizbad wrote:
| A lot of these architectures have some drawbacks in modern
| times.
|
| Alpha's loosey-goosey memory model makes multithreaded code on
| SMP systems more challenging. Linux utilizes its Alpha port as
| a worst-case testbed for data race conditions in its kernel.
|
| SPARC's register windows are anachronistic and complicate the
| implementation of CPUs, and I'd guess also make it more
| difficult to build OoOE cores (perhaps that's why so many
| SPARC chips are in-order?)
|
| POWER isn't so bad though. It's open enough that you could
| build your own lower-cost core if you wanted. There's nothing
| intrinsic to the ISA that would mandate an expensive chip other
| than volume constraints.
|
| PA-RISC put up some great numbers back in the day but between
| the Compaq acquisition (bringing with it Alpha) and Itanium it
| was chronically under-resourced. They had a great core in the
| early 90s and basically just incrementally tweaked it until its
| death.
| classichasclass wrote:
| You could even build your own Power ISA system with
| Microwatt, which is fully synthesizable and growing by leaps
| and bounds.
|
| https://github.com/antonblanchard/microwatt
|
| (Disclaimer: minor contributor)
|
| I really liked PA-RISC. I thought it was a clean ISA with
| good performance at the time and avoided many of the pitfalls
| of other implementations. I think HP didn't want to pour lots
| of money into it to keep it competitive, though, and was
| happy to bail out for Itanium when it was viable. My big
| C8000 is a power hungry titan, makes the Quad G5 seem
| thrifty.
| monocasa wrote:
| IDK, I never really liked PA-RISC, but to be fair I was
| always able to look at it from a hindsight perspective.
| Looking back it seems to have most of the RISC issues that
| complicate modern ISA design: branch delay slots, deciding
| that a multiply instruction wasn't RISCy enough to bother
| with, etc.
| userbinator wrote:
| ...and MIPS has the weird branch delay slots as well as
| pretty horrible code density.
|
| If you look at ARM, particularly the 64-bit version, you'll
| notice it attempts to squeeze multiple operations into a
| single 32-bit "instruction". It's still called RISC, but not
| really "reduced" anymore.
| mastax wrote:
| Nowadays RISC seems to mean "load-store architecture" but I
| think the term should be left in the 90s. CS curriculum is
| slow to evolve.
| klelatti wrote:
| Not sure anyone sees "pure" RISC as being an advantage
| these days though. Didn't Intel demonstrate that you could
| get RISC-like performance from a CISC ISA even with all the
| drawbacks of x86 (instruction decoding complexity etc).
| asveikau wrote:
| > Linux utilizes its Alpha port as a worst-case testbed for
| data race conditions in its kernel.
|
| Is that still true in the present tense? Anybody doing this
| in 2021? Seems like alpha has been dead for a long time.
| mhh__ wrote:
| I know that GCC for example is tested on a bunch of wacky
| old - read: obsolete ;) - hardware, so it's certainly
| possible that the same is true for Linux.
| bonzini wrote:
| Not Linux but the Linux formal memory model. The idea is
| that the compiler optimizations can be as nasty as the
| Alpha out of order execution engine and cache. The Linux
| code has to cater for these optimizations even though it
| will not result in an actual assembly instruction on
| anything except the Alpha. Problem is, on Alpha there's
| indeed an actual price to pay in performance for that
| nastiness.
| asveikau wrote:
| I suspect you misunderstand the question. My question is
| if anybody is presently using alpha hardware to verify
| such correctness. I understand memory models and barriers
| etc. and that alpha is one of the most relaxed on this
| front, that it historically influenced the kernel code
| and was previously very important test hardware. But the
| hardware is now very dated, to the point where it might
| not be good test hardware.
| bonzini wrote:
| The answer to that question is no, but the Alpha is still
| considered the least common denominator even though the
| hardware is obsolete. When people write litmus tests for
| the Linux memory model they are _still_ validated against
| Alpha semantics, because compiler optimizations have the
| same reordering effects as the weird caches of Alpha
| processors.
|
| (The stroke of genius of the C++11 memory model, compared
| to the older Java memory model, was that reordering could
| be treated the same way no matter if performed by
| processors or compilers).
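A minimal C11 sketch (my own illustration, not kernel code) of the publication pattern those litmus tests check: the release/acquire pair is what forbids both the compiler and an Alpha-class machine from letting the reader observe the flag before the payload.

```c
#include <pthread.h>
#include <stdatomic.h>

static int payload;          /* plain data being published */
static atomic_int ready;     /* flag guarding it */

static void *writer(void *arg)
{
    (void)arg;
    payload = 42;                                            /* 1: write data */
    atomic_store_explicit(&ready, 1, memory_order_release);  /* 2: publish    */
    return NULL;
}

static void *reader(void *arg)
{
    /* Acquire pairs with the release above, so once the flag is seen
     * the payload write is guaranteed visible too.  With
     * memory_order_relaxed here, the compiler (or an Alpha) could
     * legally hand back a stale payload. */
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;
    *(int *)arg = payload;
    return NULL;
}

/* Run one writer/reader pair and return what the reader observed. */
int publish_once(void)
{
    int seen = 0;
    pthread_t w, r;
    payload = 0;
    atomic_store(&ready, 0);
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, &seen);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return seen;
}
```

The kernel's smp_store_release/smp_load_acquire play the same role; on most architectures only one side costs anything, while on Alpha even the dependent-load side needs a real barrier.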
| Tepix wrote:
| Keep an eye on Tenstorrent AI cpus...
| jhbadger wrote:
| On the plus side, at least it looks like it _will_ be a
| duopoly. For a long time it looked like x86 would eat all other
| architectures.
| tyingq wrote:
| It's mildly interesting to me that there are now really no
| notable big-endian systems left, yet that's still the network
| byte order. I wonder what the math is for the amount of global
| wasted CPU cycles on byte-swapping for things that would do a
| fair amount of that...DNS for example.
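For reference, the swap being wondered about is what htonl/ntohl do on little-endian hosts; a minimal C sketch (my own illustration):

```c
#include <arpa/inet.h>   /* htonl/ntohl: host <-> network byte order */
#include <stdint.h>

/* Network byte order is big-endian, so on a little-endian host each
 * of these calls is a real byte swap (often a single bswap/rev
 * instruction); on a big-endian host they compile to nothing. */
static uint32_t to_wire(uint32_t host)   { return htonl(host); }
static uint32_t from_wire(uint32_t wire) { return ntohl(wire); }

/* What the swap does, spelled out portably: */
static uint32_t bswap32(uint32_t v)
{
    return (v >> 24) | ((v >> 8) & 0x0000ff00U)
         | ((v << 8) & 0x00ff0000U) | (v << 24);
}
```

A DNS server does a handful of these per packet (IDs, counts, TTLs), which is the sort of aggregate cost in question, though on a modern core a fused or single-instruction swap is close to free.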
| cbmuser wrote:
| > It's mildly interesting to me that there are now really no
| notable big-endian systems left
|
| That's not correct. s390x is big-endian and well supported in
| all enterprise distributions such as SLE and RHEL, as well as
| Debian and Ubuntu.
| macksd wrote:
| Though as we recently learned, it's considered sufficiently
| "fringe" by a big chunk of the development community that
| it's not that big a deal to drop support for it. (Not to
| imply IBM couldn't be sponsoring development for it more).
| cbmuser wrote:
| But s390x support isn't dropped anywhere. On the
| contrary, IBM spends a lot of money and effort to make
| sure it is well supported by free software.
| mastax wrote:
| If you're talking about the Python cryptography fiasco, that
| was dropping support for S390 (the 31-bit architecture
| discontinued in 1999). S390X (the 64-bit architecture
| introduced in 2000) is supported by Rust, though not
| necessarily by Python Cryptography.
|
| Incidentally Rust's continued support for S390X is driven
| primarily by cuviper who works for Red Hat (even before
| the IBM acquisition).
| tyingq wrote:
| Notable in terms of global cpu capacity. Linux on zSeries
| is interesting, but only makes financial sense in some
| pretty limited scenarios.
| monocasa wrote:
| I know of at least one micro arch that'll fuse load and byte
| swap instructions into a reverse endian load. There's still
| probably a detectable overhead, but it's not the end of the
| world to hack on later.
| zdw wrote:
| Many CPU's have load/store instructions that perform the
| network byte order swap with no/minimal overhead.
|
| Serialization formats like JSON/YAML/protobuf/etc. would be
| much more costly by comparison.
| pabs3 wrote:
| IBM mainframes are big endian, all the Linux distros support
| them too.
| kens wrote:
| IBM mainframes are big endian essentially because punch
| cards are big endian.
|
| (Punch cards are big endian because the number 123 is
| punched as "123". So that's the order a decimal number will
| be stored in memory. The System/360 mainframes (1964) had a
| lot of support for decimal numbers, and it would be kind of
| bizarre to store decimal numbers big-endian and binary
| numbers little-endian, so everything was big endian. IBM's
| current mainframes are compatible with S/360.)
|
| (On the other hand, in a serial computer, you operate on one
| bit at a time, so you need to start with the smallest bit
| for arithmetic. The Intel 8008 was a copy of a serial TTL
| computer, the Datapoint 2200, so it was little-endian. x86
| is based on the 8008, so it kept the little-endian
| architecture.)
| classichasclass wrote:
| Power ISA systems are still bi-endian and many systems run
| big. In fact, the low level OPAL interfaces _require_ you to
| be in big-endian mode, AIX and i are still BE, and both
| FreeBSD and OpenBSD have BE flavours for current PowerNV
| systems. Even a few Linux distros run big (Adelie comes to
| mind). They're definitely a minority but they're still
| around.
| cestith wrote:
| Power ISA includes an endianness switch in the spec. Power
| and Power64 CPUs can run either BE or LE. Most Linux distros
| only support modern versions on LE though. Debian has a BE
| port but it's not considered a primary release target.
|
| The last PPC64 release of Ubuntu was 16.04, which is now out
| of support by about a month. Even on that, the two major
| web browsers didn't support building on the platform for a
| long time.
|
| Yes, it can be done if you want enough to do it.
|
| https://catfox.life/2018/11/03/clearing-confusion-
| regarding-... for more info
| tyingq wrote:
| Yes, I wasn't claiming that no big endian systems exist.
| Just that they are overwhelmingly in the minority now, and
| so the number of ASM byte-swapping ops happening is mildly
| amusing.
| pabs3 wrote:
| IIRC ARM devices can also be big-endian and GCC can even
| generate big endian 64-bit ARM code:
|
| https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html
| tyingq wrote:
| Yeah, you can find an ARM big endian distribution of, for
| example NetBSD. No Linux that I can find. Apparently boot
| issues are a bit tricky.
| Nursie wrote:
| You used to be able to get debian arm-be, but that was a
| good 15 years ago.
| erk__ wrote:
| I have found a Gentoo distribution with big endian for
| the raspberry pi 3, so it is out there
| https://github.com/zeldin/linux-1/releases
| rjsw wrote:
| I'm fairly sure that NetBSD/arm switches to big endian
| once the kernel is running, the boot process is
| unchanged.
| tyingq wrote:
| The thing holding it back for Rpi4 is UEFI+ACPI, so I
| assume there's some boot process changes.
| rvp-x wrote:
| ACPI is problematic in big endian.
| bombcar wrote:
| There are various open-source and white-box network switches
| and routers - do any of them run big-endian? If not, it must
| be a solved problem (perhaps by fast-path dedicated ASICs).
| insaneirish wrote:
| > If not, it must be a solved problem (perhaps by fast-path
| dedicated ASICs).
|
| Correct. The data plane of all 'real' networking is done in
| ASICs and/or NPUs.
| 1_person wrote:
| Surprisingly less true these days.
|
| Increasingly things seem to be moving towards ASICs for
| switching and general purpose CPUs (usually with a lot of
| support from the NIC offload capabilities) for routing,
| even in 'real' networking hardware.
|
| The vast majority of fabric ASICs would never actually
| utilize the additional TCAM necessary to support full tables
| at line rate in hardware, because top-of-rack switches do
| not have that many addressable targets, so it's a wasted
| cost.
|
| And with DPDK, optimized software implementations are
| achieving zero-drop line rate for even 100G+ interfaces
| for much, much lower cost than full table routing ASICs
| married to fabric ASICs in a chassis switch.
|
| It's not something a lot of users are aware of -- they
| often think they've bought an ASIC-based router! -- but
| essentially all of the big vendors entry and mid-level
| devices are software routers, and they're even trying to
| figure out how to sell their NOS experience on whitebox
| hardware without undercutting their branded hardware.
| kps wrote:
| Last NPU I worked with (admittedly 10+ years ago) was
| little endian! It used load/store-swapped instructions.
| (Why? I can only guess that they licensed a little-endian
| CPU core for other reasons.)
| mavhc wrote:
| x86: 8086, 1978; x64: 1999
|
| ARM: 1985; ARM64: 2011
|
| RISC V: 2010
|
| It took x86 about 10 years (1988) to become the most popular,
| and until 2005 to cause Apple to switch (another 17 years)
|
| It took ARM about 25 years (2010) to become the most popular,
| and until 2020 to cause Apple to switch (another 10 years)
| alanfranz wrote:
| "Switch" to what? Apple is one of the founders of ARM and
| still holds ARM shares IIRC
| mavhc wrote:
| They sold their 40% stake in ARM when they were short of
| cash.
|
| Switch from Power PC to Intel, and then from Intel to ARM.
| I'm using Apple as a tipping point, marking when the new
| architecture was so much better than the old that it
| completely took over. Obviously with 90% of Apple devices
| being ARM already it was an easier choice for them this time.
| But as each architecture becomes more entrenched and the
| market grows many times bigger, it may be more difficult for
| the new entrant.
|
| That's why RISC V's win (if it occurs) will be because it's
| Open Source. Linux won in 30 years against everyone else
| due to that.
| cestith wrote:
| Apple switched from Motorola 68k to PowerPC, too, and Sun
| switched from 68k to SPARC. The Amiga, NeXT, early Palm
| devices, and the ST were also using members of the 68k
| family. That's an ISA born in 1979 and largely replacing
| (and inspired by) the 6800 (1974) which had a 16-bit
| address bus and 8-bit memory bus and its (binary
| incompatible but with the same assembly language) little
| brother the 6809 (1978). The Tandy Color Computer and the
| Dragon were notable 6809 systems.
|
| That, of course, is just with the Mac since Apple
| previously used variants of the MOS 6502 (1975 and
| allegedly an illicit clone of the MC6800). Apple, Atari,
| Acorn, Commodore (the owner of MOS for several years),
| BBC, Oric, and Nintendo used it in multiple systems each.
| Apple, Acorn, and Nintendo built additional systems on
| its updated sibling the WDC65816 series (1983).
|
| The 6800/6809/Hitachi 6300/68k/Dragonball/Coldfire
| dynasty and the bastard MOS6502/WDC65816 families were
| collectively basically the ARM of their day in a way.
| Everyone targeting low priced or power-sipping was
| building platforms around them at one time or another.
| Acorn went from a customer to a major competitor and
| successor.
|
| It should be noted that the PowerPC and the whole POWER
| ISA multi-platform family was largely inspired by Apple
| in the first place. They were talking to IBM about a new
| platform and invited Motorola to the talks as their long-
| time processor provider. They formed the "AIM Alliance"
| that eventually morphed into the POWER Foundation and
| OpenPOWER initiatives. I can't really speak to how much
| of POWER ISA is inspired by Motorola's own "RISC"
| processor, the 88000 series.
| klelatti wrote:
| On the Apple specific case I think any move to RISC V
| would be because it would want more control than it has
| with Arm. It could then take the RISC V ISA in the
| direction it wants.
|
| I'm guessing it already has a lot of influence over Arm
| though and there are other factors that strongly act in
| favour of staying with Arm.
|
| If Nvidia takes over Arm though and starts making life
| difficult for the ecosystem then that could change ....
| mrweasel wrote:
| Apple is really interesting: with chip design moved
| in-house and the ease with which they seem to switch
| architectures, they could move away from ARM if the Nvidia
| purchase happens. I think they'd want to avoid it, at
| least for the next 10 years.
|
| It would be interesting to know how important the ARM
| instruction set is to Apple.
| zepto wrote:
| > they could move away from ARM if the Nvidia purchase
| happens.
|
| The Nvidia purchase is irrelevant to Apple. They have a
| license that won't be impacted.
|
| The only thing that would make them move away would be a
| performance bottleneck in the architecture that
| necessitates a shift.
| klelatti wrote:
| > Still leaves Apple open to potential Nvidia changes
| to the ISA's direction (and the ISA won't stand still). I
| assume a full fork of the ISA isn't on the cards even for
| Apple.
| zepto wrote:
| > I assume a full fork of the ISA isn't on the cards even
| for Apple.
|
| Why do you assume this?
| klelatti wrote:
| If you've seen their license then happy to be corrected
| but typically an architecture license wouldn't permit
| them to do precisely what they want with the ISA with no
| restrictions whatsoever.
| zepto wrote:
| I haven't seen their license but they founded ARM and
| even though they don't retain any ownership of the
| business I have heard that they retain a license that
| allows them to do pretty much whatever they like with the
| architecture in their own products.
| classichasclass wrote:
| Pretty sure Apple has a permanent ARM license. They'll
| watch what happens with Nvidia, but it doesn't really
| affect them because, as you say, all the secret sauce is
| in-house.
| klelatti wrote:
| Does that give them control over the direction of the ISA
| - suspect not. Don't think it's really the case then that
| they're unaffected by the Nvidia takeover.
| lizknope wrote:
| Most companies buy the ARM CPU RTL or an existing
| hardened core for their chip.
|
| Large companies like Apple have an architectural license
| and implement the entire instruction set on their own.
|
| I worked for a couple of companies with ARM architectural
| licenses and there was a large ARM compliance suite of
| tests that had to be run and pass before you could claim
| that you made an ARM instruction set compatible CPU.
|
| I have heard that Apple does not claim ARM compatibility
| and doesn't run the compliance suite which allows them a
| few shortcuts and other optimizations. Apple only cares
| about running Mac OS and iOS on their hardware so if they
| were incompatible with Linux/ARM or Windows/ARM they
| wouldn't really care.
|
| I haven't been able to verify this. Linux/ARM seems to be
| running okay so far on the new Apple M1 chips.
|
| I don't know if Apple would be affected much if Nvidia
| buys ARM. Their architecture license to implement from
| scratch is probably forever, but maybe not.
| cestith wrote:
| I suspect Apple's rights might be spelled out more in the
| sales contract to SoftBank than in a separate license
| agreement. Acorn created the ARM processor, but Apple is
| a cofounder of ARM Holdings (Acorn, Apple, and VLSI
| Technology).
| klelatti wrote:
| Interesting - especially that Apple gets some latitude on
| compliance.
|
| Still leaves Apple open to potential Nvidia changes to
| the ISA's direction (and the ISA won't stand still). I
| assume a full fork of the ISA isn't on the cards even for
| Apple.
| kube-system wrote:
| Given Apple's history, and their business style, I don't
| think they have loyalty to any architecture or any
| specific technology in particular. They care about
| product first, and choose whatever technology they need
| to choose to get there.
| https://youtu.be/oeqPrUmVz-o?t=113
| klelatti wrote:
| It's a great clip - possibly my favourite Jobs clip.
|
| Agree with the point 100% but Apple also has a history of
| long and sustained investments in key parts of the stack
| where it sees long term value - including compilers and
| silicon - and long relationships with suppliers. I
| suspect their relationship with Arm is in that category
| and, in the absence of something demonstrably better,
| that will continue.
| klelatti wrote:
| I wouldn't be completely surprised if there is a box
| running a build of Mac OS for RISC V somewhere in
| Cupertino!
|
| Seriously though, I suspect that the ISA isn't that
| important for Apple but on the other hand I think they're
| probably quite happy with the direction of the Arm ISA
| (probably had a big say in parts of it) and it would take
| quite a lot to push them away.
|
| I think that the odds on the Nvidia takeover are quite
| small by now so don't think a move likely at all.
| mavhc wrote:
| Apple's aggressive removal of legacy stuff means the apps
| they do have are mostly kept up to date, so they have
| that going for them. On the desktop the major Mac OS only
| apps are now a) owned by Apple and b) rewritten from
| scratch, so they are easy to port to ARM and, most likely,
| anything else.
|
| Will RISC V do what ARM did to x86? Start at the low end,
| be more open, and slowly take over.
| klelatti wrote:
| I think that's unlikely - Arm gradually replaced a number
| of in-house ISAs and designs because the economics
| didn't support each firm doing their own thing. I'd be
| surprised if in many cases - except for eg Western
| Digital - the economics of going RISC-V make sense.
| cmrdporcupine wrote:
| Apple has been using ARM on and off since 1993. They have
| more long term organizational experience with ARM than they
| did with x86.
|
| The Newton, then the iPod, then the iPhone, and now the M1.
|
| The iPhone is a more important device for Apple than the Mac
| from a revenue point of view, and they've sold more devices
| with ARM chips in them than they have 68k, PowerPC, or x86.
| They've sold 2.2 billion iPhones. I can't find an easy number
| on how many Macintoshes they've sold totally, but I can't
| imagine it's close to that.
|
| In fact, they used ARM in the Newton (1993) before they used
| PowerPC in the Power Mac (1994).
| pabs3 wrote:
| Raptor Computing has some expensive-but-not-that-expensive
| POWER systems:
|
| https://raptorcs.com/
| PedroBatista wrote:
| Yes, but where can I buy a SPARC CPU? How many of those who
| have/can have it are running Illumos and are putting money/time
| in it? And more importantly what's the outlook for SPARC?
| onedognight wrote:
| Intel (nee Movidius) were selling SPARCs a year or two ago in
| the Myriad 2. SPARC is an open source CPU with solid GCC
| support.
| cbmuser wrote:
| > Yes, but where can I buy a SPARC CPU?
|
| You can buy them used or new in various kind of servers.
|
| > How many of those who have/can have it are running Illumos
| and are putting money/time in it?
|
| Dunno, I'm not really a Solaris guy. I use Solaris as a
| hypervisor for Linux and BSD LDOMs.
|
| > And more importantly what's the outlook for SPARC?
|
| Well, you could make the very same argument about Illumos.
| The Python developers wanted to drop support for Solaris
| already and OpenJDK upstream did actually drop it.
| ptribble wrote:
| For illumos, the sweet spot is the 10-15 year old Sun gear
| you can pick up on eBay. Works well, supported, not overly
| expensive.
|
| Newer SPARC systems are really quite good. And pretty cost-
| effective too. The problem is that the starting price is
| out of reach, and almost nobody is offering a cloud service
| based on SPARC, so you can't hire it either.
|
| I'm running illumos on SPARC. I have some old hardware
| (desktop and server) that I like to make use of. Time, yes,
| but I'm not putting money into it.
|
| And while OpenJDK upstream has dropped support for SPARC
| and Solaris, that was really all about problems with the
| Studio compiler. I'm maintaining an illumos OpenJDK port
| with the gcc toolchain on x86 - it's not excessively hard,
| and realistically if you're using a common toolchain and
| common CPU most standards-compliant code is portable at the
| OS layer.
| rjsw wrote:
| I have SPARC systems, I run NetBSD on them though not
| Illumos.
| roryrjb wrote:
| One thing I've wondered (randomly) and I could be way off the
| mark here, but does Illumos have any kind of place at Oxide
| Computer? The author of the link and the CTO of Oxide both have
| strong links to Illumos in one way or another but on the other
| hand some of their team are Linux kernel developers, or is the
| work they are doing not at this level in the stack?
| wmf wrote:
| https://github.com/oxidecomputer/propolis
| roryrjb wrote:
| Ah interesting, thank you!
| qwerty456127 wrote:
| Sounds crazy. Like if Windows had dropped x86.
| corty wrote:
| Yes, but Sparc was on life-support as soon as Oracle bought
| Sun, and dead soon after. Sparc was already hideously expensive
| and slow compared to x86 during the Sun days. Innovation in
| Sparc came to a halt soon after the T1, which also dropped most
| of the really nice features like CPU and RAM hotswapping. So
| you just got something expensive, slow and incompatible for
| five to six times the price. Oracle then proceeded to cut
| research further, cut rebates, cut off customers from support
| trying to renegotiate higher rates. Which sent everyone running
| to x86 if they weren't already.
|
| Solaris on Sparc was great until ca. 2006. After that it
| started dying.
| binarycrusader wrote:
| The notes about SPARC hardware performance are not accurate;
| there were significant performance improvements to SPARC in
| progress before the Oracle acquisition and some after. Oracle
| made significant investments for a time after the
| acquisition. I don't know at what point that direction
| changed, but it must have been somewhere between 2014-2017.
|
| For anyone that got to use a T3, T4+, etc. performance was
| obviously and substantially improved.
|
| You're also ignoring significant innovations such as ADI.
|
| Regardless, it doesn't matter anymore.
| retrac wrote:
| Much less crazy than when Mac dropped the 68K -- those had been
| joined at birth and separation was believed impossible by even
| the best surgeons.
|
| Solaris was portable by design all along, in the later Unix
| fashion. Sun actually sold their first x86-based system running
| SunOS all the way back in the 1980s, as a low-end complement to
| their new high-end SPARC machines:
| https://en.wikipedia.org/wiki/Sun386i
| wicket wrote:
| With Linux having caught up with key Solaris features in recent
| years (DTrace -> eBPF, Zones -> Namespaces, ZFS -> ZFS on Linux),
| I always thought that the main reason to use Illumos would be
| first-class SPARC support. With that now dropped, I'm concerned
| that Illumos will soon become irrelevant. Are there any compelling
| reasons left to use Illumos, other than being something for those
| who just want a free Solaris alternative?
| jsiepkes wrote:
| In my opinion Linux hasn't caught up.
|
| * Namespaces don't come close to FreeBSD jails or Solaris /
| Illumos Zones. There is a reason Docker hosters put their
| Docker tenants in separate hardware VMs: the isolation is
| too weak.
|
| * Due to CDDL and GPL problems, ZFS on Linux will always be
| hard to use, making every update cycle like playing Russian
| roulette.
|
| And there are other benefits. Like SMF offers nice service
| management while not providing half an operating system like
| systemd.
| tptacek wrote:
| The problem with this jails/zones stuff is that I don't know
| anyone who seriously trusts jails and zones for real
| multitenant workloads anyways. The dealbreaker problem
| remains a shared kernel attack surface between tenants. It's
| one thing to propose that Zones are better than namespaces
| (they probably are), but another thing to cross the threshold
| where the distinction is meaningful in practice.
| [deleted]
| jchw wrote:
| Also, tools for improving Docker for multi-tenant workloads
| exists, like gVisor. I don't think equivalents exist for
| jails/zones really.
| tptacek wrote:
| gVisor isn't a shared-kernel multitenant system; it's
| essentially kernel emulation. It's a much stronger
| design.
| jchw wrote:
| I mostly mean that it is intended to be a solution to run
| containers from multiple tenants on the same host. Though
| I do agree, being essentially a kernel in itself, it is a
| bit in a different wheelhouse. It still is a huge value
| add that you can implement something like that on top of
| Docker, imo.
| tptacek wrote:
| You can run container workloads in "real" VMs too; for
| instance, check out Kata Containers. Containers are a way
| of packaging applications; confusingly, they happen to
| also have a reference standard runtime associated with
| them. But you don't have to use it.
| jchw wrote:
| Of course, and that's the value of the abstraction to me.
| Docker itself is obviously nothing to do with the Linux
| container technologies themselves that make up the
| equivalent functionality of FreeBSD jails, but I'm not
| aware of any equivalent abstraction that works around
| jails or zones even though it might be possible. So the
| way I see containers on Linux is not literally a
| composition of kernel features like cgroups or seccomp,
| but as an abstract thing that can be composed out of
| various primitives. And in practice, there's a number of
| different runtimes around it, including Docker clones
| like Podman, or tools that manage effectively chroots
| much closer to what you would do with jails.
|
| That said, I could just be completely wrong, and there
| could be similar things that can be done using jails and
| zones. But when I looked around for similar art with
| FreeBSD jails, either with regards to Docker's style of
| packaging and distribution, or with regards to additional
| layers like gVisor, it didn't seem like a thing well-
| suited to that kind of composition. In comparison, jails,
| at least, seem kind of like more powerful chroots. To me
| this is a pretty big difference versus Linux
| "containers".
| tptacek wrote:
| My mental model of Zones and Jails is that they are a
| cleaner, more convenient, less error-prone way of
| expressing a modern, minimally-privileged, locked down
| Docker runtime. You won't catch me arguing that Zones
| aren't better than Docker, but the u->k attack surface is
| untenable for multitenant workloads.
| bcantrill wrote:
| At Joyent, we deployed public-facing multitenant workloads
| based on zones (and before that, jails) for many years. We
| seriously trusted it -- and had serious customers who
| seriously depended on it. So, now you know someone!
| wmf wrote:
| There were also tons of providers who trusted Linux
| containers for VPS hosting.
| tptacek wrote:
| How'd that turn out?
| mixmastamyk wrote:
| Security requirements (and awareness) have increased over
| the years, have they not?
| bcantrill wrote:
| They definitely have! And we had a (zones-based) public
| cloud through it all. On that note, Alex Wilson's
| description of working with Robert Mustacchi on
| mitigating Meltdown by adding KPTI to illumos[0]
| definitely merits a read!
|
| [0] https://blog.cooperi.net/a-long-two-months
| unixhero wrote:
| And needless to say it became a billion dollar business,
| with a great product.
| psanford wrote:
| They were acquired for $170m.
| unixhero wrote:
| I stand corrected. Still great product, business and
| team.
| tptacek wrote:
| I'm sure they're great. No part of what I have to say
| about this has anything to do with how competent they
| are.
| psanford wrote:
| To be fair, y'all had some serious vulnerabilities,
| including zone escapes and arbitrary kernel memory reads,
| discovered by @benmmurphy.
| bcantrill wrote:
| Yes, though I would like to believe that Ben's
| responsible disclosure coupled with our addressing those
| vulns (and auditing ourselves for similar) reflect
| exactly that seriousness around multitenant security. And
| for whatever it's worth, one of those vulnerabilities --
| which was a bug in my code! -- very much informed my own
| thinking about the inherent unsafety of C, underscoring
| the appeal of Rust. So I am grateful in several
| dimensions!
| tptacek wrote:
| If you have a kernel implemented in Rust, (1) you should
| shout that from the rooftops and (2) use whatever
| isolation mechanism you like on it.
| bsder wrote:
| They're starting with the bootloader and management
| engine. Give them some time to get Rust above that.
| tptacek wrote:
| To this, all I can say is that I spent from 2005-2014,
| and then from 2016-2020, doing nothing but security
| evaluations of products, probably about 60% of which were
| serverside multitenant SAAS systems of one form or
| another, and I don't remember ever evaluating (or
| overseeing the evaluation of) a system that relied on
| Jails or Zones. Lots of Docker! And, until a few years
| ago, multitenant Docker isolation was an infamous joke!
| I'm not sticking up for it!
|
| You can look at the recent history of Linux kernel LPEs
| --- there has been sort of a renaissance because of
| mobile devices --- and count all the ways any shared-
| kernel multitenant system would have broken down. At the
| end of the day, it's not so much about predicting whether
| your system can get owned up (it can), so much as: "what
| do I need to do when there is a kernel LPE announced on
| my platform". If you're doing shared-kernel isolation,
| the right answer to that question is usually "fire
| drill". It's not a noodley thought-leadership kind of
| question; it's a simple, practical concern.
| schoen wrote:
| > The dealbreaker problem remains a shared kernel attack
| surface between tenants.
|
| Also, now, extremely subtle and hard-to-mitigate timing
| attacks between tenants.
| tptacek wrote:
| In fairness, that's an attack class that's very difficult
| to eradicate even with virtualization.
| laumars wrote:
| > _In my opinion Linux hasn 't caught up._
|
| I completely agree. I love Linux and it's easily my preferred
| desktop OS but when it comes to stuff like ZFS,
| containerisation and other enterprise features, FreeBSD and
| Solaris are just more unified and consistent. A lot of it has
| to do with Linux being a hodgepodge of different contributors
| resulting in every feature effectively being a 3rd party
| feature. Which I think is the problem Poettering was trying to
| solve. And in many ways that's quite a strength too. But
| ultimately it boils down to the old Perl mantra of "There's
| more than one way to do it": fun for hackers, but FreeBSD et
| al add the "but sometimes consistency is not a bad thing
| either" part of the mantra that Linux doesn't.
|
| https://en.m.wikipedia.org/wiki/There%27s_more_than_one_way_.
| ..
| wicket wrote:
| > Namespaces don't come close to FreeBSD jails or Solaris /
| Illumos Zones. There is a reason Docker hosters put their
| Docker tenants in different hardware VM's. Because the
| isolation is too weak.
|
| This is largely a myth; please provide a namespace-related
| CVE that has gone unpatched to support your argument. The
| reason they run as VMs is that hypervisors run at a higher
| privilege level than the kernel, therefore they are
| naturally more secure. Like Namespaces, Zones and Jails are
| also managed by their respective kernels. If there were any
| major hosters running managed services for Zones and Jails,
| you can bet they would implement them in a similar way.
|
| > Due to CDDL and GPL problems ZFS on Linux will always be
| hard to use making every update cycle like playing Russian
| roulette.
|
| You're right in that the CDDL causes complication but I don't
| consider this to be a compelling reason to use Illumos. Many
| who want to use ZFS on Linux will use it and get it to work
| despite the licensing issues and complications.
|
| > Like SMF offers nice service management while not providing
| half an operating system like systemd.
|
| SMF is relatively nice (apart from the use of XML) and like
| you, I would not touch systemd with a barge pole. Despite
| systemd making a lot of noise in major distros, there are
| plenty of alternative distros for those of us who don't want
| to use it.
|
| Don't get me wrong, I'm a Solaris guy, it made my career. I
| just fear that by dropping SPARC, Illumos have put the final
| nail in their own coffin.
| shrubble wrote:
| As someone who has run both side by side on literally identical
| hardware... no, Linux has not caught up.
|
| Lxc containers vs Solaris Zones... zones clearly wins.
|
| SMF vs systemd (I know that you didn't include this, but it
| matters)... SMF is clearly superior as well.
| cyberpunk wrote:
| Not really. SmartOS was nice for a while but tbh you may as
| well go with OpenStack or Proxmox these days.
|
| I used it for hosting a lot of java over the years but these
| days everyone wants a k8s endpoint and really the kind of
| hypervisor you are running doesn't really make a difference.
|
| Shame, it was nice tech.
| pjmlp wrote:
| Solaris SPARC is one of the few OSes in production taming C
| with hardware memory tagging (ADI).
|
| With Linux that will eventually happen on ARM, but currently
| only Android is adopting it; who knows when it will ever come
| to upstream enabled by default like on Android.
| nimbius wrote:
| >ZFS on Linux
|
| It's been said in the thread already, but this was always a
| non-starter. Torvalds even said so himself. CDDL was the last
| poison pill of a dying giant who couldn't pull its foot from
| the well.
|
| What we, er, the Linux community, chose instead, was BTRFS. It
| isn't ZFS, but it's made incredible strides. For most use
| cases, it is a reasonable and working replacement for ZFS.
| nix23 wrote:
| >Torvalds even said so himself.
|
| Torvald's comment about ZFS was as uninformed as it
| gets...and he calls himself an FS-Guy ;(
| arid_eden wrote:
| His comment was more on the wisdom (or otherwise) of
| running an out-of-tree filesystem. I think it's hard to
| disagree with him. He went on to say you would never be
| able to merge the ZFS tree with Linux. Again he's the one
| who would know what code gets in Linux. His only actual
| comment against ZFS was that benchmarks didn't look great -
| which is unsurprising given all the extra work ZFS is doing
| wrt data integrity than other filesystems in production
| use.
|
| https://www.realworldtech.com/forum/?threadid=189711&curpos
| t...
| wang_li wrote:
| From the link:
|
| >[ZFS] was always more of a buzzword than anything else,
| I feel,
|
| This is deeply ignorant. I feel that Linux has been
| handicapped by the fact that many developers have never
| done any serious enterprise administration and thus lack a
| clear understanding of the needs of a segment of their
| users.
| josteink wrote:
| > What we, er, the linux community, chose instead, was BTRFS.
| It isnt ZFS
|
| Speak for yourself. As a part of "the Linux community" I gave
| btrfs a fair chance, but stopped using it because it
| constantly failed on me in ways no other fs had done before
| and didn't protect my data.
|
| ZFS is rock solid and I've never had any of the issues I had
| with btrfs.
|
| So as a member of "the Linux community" you claim to
| unilaterally represent I put such petty license-politics
| aside and choose the file system which serves my needs best,
| and that is ZFS.
| ahl wrote:
| What is the latest on RAID-5/6 support for BTRFS? RAID-Z has
| its issues (saying this as the triple- and double-parity
| author), but it's been stable.
| simcop2387 wrote:
| As far as I've heard it's still pretty iffy. Since you were
| involved with the zfs side of things what are your thoughts
| on the upcoming zfs draid bits? I don't have specific need
| for them myself but they look really attractive for
| building a new pool to replace my aging drives.
| jjav wrote:
| When filesystem integrity matters, the filesystem matters
| more than the OS.
|
| While I mostly use Linux these days, for file servers it must
| be ZFS, which means whichever OS has first-class support for
| ZFS. I'm still on Illumos but perhaps will move to FreeBSD at
| some point.
| arid_eden wrote:
| This is why FreeBSD rebasing its ZFS fork on ZFS-on-Linux
| made me so scared for the future of FreeBSD. It was their one
| major advantage over Linux, and they didn't have the
| developers to maintain their fork themselves.
| laumars wrote:
| ZFS will always be a smoother experience on FreeBSD as
| opposed to Linux because FreeBSD endorses it. Thus the
| user land and documentation is written assuming you're
| running ZFS. As opposed to Linux where some distros might
| ship pre-compiled binaries but everything is written
| assuming you're not running ZFS. Thus everything takes
| that extra couple of steps to set up, fix, and maintain.
|
| For example, if you want to use ZFS as a storage for
| containers on Linux, you have to spend hours hunting
| around for some poorly maintained 3rd party shell scripts
| or build some tooling yourself. Whereas on FreeBSD all
| the tooling around Jails is built with ZFS in mind.
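| To illustrate that integration: ZFS-backed jail storage on
| FreeBSD is just the stock tools; a sketch, with hypothetical
| dataset and jail names:

```shell
# ZFS datasets as per-jail roots on FreeBSD; zroot/jails and
# "web" are hypothetical names chosen for this sketch.
zfs create -o mountpoint=/jails zroot/jails
zfs create zroot/jails/web
# snapshots and clones give cheap base images for new jails
zfs snapshot zroot/jails/web@base
zfs clone zroot/jails/web@base zroot/jails/web2
```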
|
| This is why platforms like FreeBSD feel more harmonious
| than Linux. Not because Linux can't do the job but
| because there are so many different contributors with
| their own unique preferences that Linux is essentially
| loose Lego pieces with no instructions. Whereas FreeBSD
| has the same org who manage the kernel, user land and who
| also push ZFS.
|
| And I say this as someone who loves Linux. There's room
| for both Linux and FreeBSD in this world :)
| bombcar wrote:
| The Linux Community is working on BTRFS but if ZFS emerged
| with a GPLv2-compatible license tomorrow, BTRFS would likely
| be moribund.
| arid_eden wrote:
| My main issue with ZFS is the integrated nature - like
| systemd for filesystems. My 'alternative' for ZFS isn't BTRFS
| (awful performance characteristics for my workloads) but LVM
| coupled with ext4 and mdraid. I get snapshots, reliability,
| performance and a 'real UNIX' composable toolchain. I miss
| out on data checksums.
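| Concretely, the snapshot part of that stack looks like this
| (VG/LV names and sizes are hypothetical):

```shell
# LVM + ext4 snapshot workflow; vg0/data and the sizes are
# hypothetical values chosen for this sketch.
lvcreate -L 100G -n data vg0
mkfs.ext4 /dev/vg0/data
mount /dev/vg0/data /srv/data
# point-in-time snapshot with 10G of copy-on-write space reserved
lvcreate -s -L 10G -n data-snap /dev/vg0/data
mount -o ro /dev/vg0/data-snap /mnt/snap
```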
| riku_iki wrote:
| No compression..
| bombcar wrote:
| You could use VDO (I never have)
| https://access.redhat.com/documentation/en-
| us/red_hat_enterp...
| _hyn3 wrote:
| I lost data and had to restore from 12 hour old backups one
| too many times with BTRFS. XFS + Ext4 for me from here on
| out, but that's one of the great things about Linux: lots of
| choices.
| nix23 wrote:
| XFS all the way down...or ZFS (on FreeBSD)
| acomjean wrote:
| Lamentably, for those of us that just use Linux, the many
| choices seem weird and frankly a little scary (I've heard
| stories of data loss with BTRFS before). I use the
| default Ext4? I think?
|
| But as a believer in open source, I really would love it if
| the choices we had were so much superior to the proprietary
| stuff that was out there that it made using an open source
| OS a no brainer.
| guenthert wrote:
| BTRFS is the only fs I recall having lost data to (rather it
| corrupts data, i.e. data of one file is found in another) as
| late as 2018.
| guardiangod wrote:
| >What we, er, the linux community, chose instead, was BTRFS.
|
| Isn't that putting politics before technical excellence,
| something the Linux crowd is proud of? Other than in-place
| volume expansion, there is no technical reason to choose
| BTRFS over ZFS (for now.)
|
| I don't really see a killer feature from BTRFS that would
| persuade me to take a chance with it.
| blihp wrote:
| It's being pragmatic. Linux has typically placed freedom
| ahead of pretty much everything else.[1] All else being
| equal, sure you want the best technical solution. But if it
| doesn't fit the definition of freedom that Linux requires,
| how otherwise good a solution is doesn't matter. So the
| main 'killer' feature of BTRFS is that it fits the
| licensing requirements for integration into Linux. Linux
| has a great many problems, but being sticklers for a
| particular type of license isn't one of them IMO.
|
| [1] This isn't just idealism. See Oracle v. Google for an
| example of what happens if you play fast and loose with
| licenses and a malicious actor. Google eventually won, but
| how many millions of dollars did that victory cost them?
| Oracle would _love_ Linux developers to blunder their way
| into the receiving end of a lawsuit.
| rnd0 wrote:
| >Isn't that putting politic before technical excellence,
| something the Linux crowd is proud of?
|
| It's not unprecedented. The adoption of systemd was forced
| on distros through political pressure, and not for
| technical reasons.
|
| If you want a truly non-political OS community these days,
| I think you're basically stuck with OpenBSD. No CoC, no
| systemd, no political BS at all -just pure tech.
|
| (there's other problems with OpenBSD -performance, mostly;
| that's why I use windows and Ubuntu instead. But the way
| they run things is admirable IMO. Blatant BS isn't
| tolerated.)
| dale_glass wrote:
| systemd wasn't "forced" on distros. The distros adopted
| it because they liked it.
|
| The thing is that systemd did something quite clever --
| it sold itself to the people actually building
| distributions, which are the people that actually matter
| the most in regards what system software gets used. It
| made their jobs easier and less annoying in many ways.
|
| As somebody who's done a lot of packaging and writing of
| SysV scripts, I can tell you that it's a tiresome and
| annoying task even for a small amount of software, let
| alone a whole distro. At that point the unix philosophy
| loses its luster quite a bit.
| laumars wrote:
| > _for most use cases, it is a reasonable and working
| replacement for ZFS._
|
| That is a huuuuge overstatement for the current state of
| Btrfs. In some specific domains it is a working replacement.
| But for most domains it still falls far behind ZFS in terms
| of stability, resiliency or even ease of use.
|
| By all means if you want to use btrfs then go for it. But the
| favourable comparisons people make between btrfs and ZFS
| are a combination of wishful thinking and not having
| really bullied their fs into those extreme edge cases where
| the cracks begin to show. And frankly I'd rather depend on
| something that has had those assurances for 10+yrs already
| than have the hassle of explaining downtime to restore data
| on production systems.
| zdw wrote:
| While the primary issue is likely developer time and hardware
| availability to test on, there are other OSs like OpenBSD
| which support much newer SPARC64 hardware:
| https://www.openbsd.org/sparc64.html
| mrweasel wrote:
| I can't really tell if they're just dropping older SPARC
| systems or the architecture altogether.
|
| OpenBSD will eventually face the same issues with older
| systems, and I believe they already dropped platforms because
| hardware couldn't be replaced.
|
| For newer SPARC system you could "just" buy one. Oracle doesn't
| need to donate them, it would be nice if they did, but the
| community around Illumos, Debian and OpenBSD could raise money
| to buy these systems.
| cptnapalm wrote:
| OpenBSD is the only OS ever to run on my Tadpole laptops
| without any modification necessary. Even Solaris 8 and 10
| needed special software to run on them. OpenBSD works right out
| of the box.
| cyberpunk wrote:
| SPARC was where Theo cut his teeth in the NetBSD years;
| anecdotal, but I think it's his favourite pet, so you'd imagine
| it'll be well supported on his OS.
| nix23 wrote:
| >The other architectures that OpenBSD supports have
| benefited because some kinds of bugs are exposed more often
| by the 64-bit big endian nature of UltraSPARC.
|
| https://www.openbsd.org/sparc64.html
| [deleted]
| [deleted]
| Ericson2314 wrote:
| > Without ready access to build machines, one might consider
| cross compilation. Though we have some support for cross-
| architecture software generation in the tools, the operating
| system does not currently support being cross compiled in full.
|
| SPARC or not SPARC, I would love to help with that!
| ahl wrote:
| I have a uniquely soft spot for SPARC, having written and
| disassembled a bunch of SPARC early in my career. If this is its
| swan song, I'll take the moment to share some code that takes
| advantage of the odd (today) delay slot architecture to implement
| instruction picking:
|
| https://github.com/illumos/illumos-gate/blob/master/usr/src/...
|
| The trick uses a branch in the delay slot of a jmp--a decidedly
| unusual construction. At the time I found this to be extremely
| clever and elegant... but apparently not so clever as to warrant
| a comment.
| mattst88 wrote:
| Can you explain how that works or what it does? I understand
| delay slots, but I didn't know it was legal to have a branch in
| a delay slot, so I don't really know what this means :)
| ahl wrote:
| You mean the total lack of comments didn't help? ;-)
|
| SPARC has two attributes (I hesitate to call them features)
| that this code interacts with: register windows and a delay
| slot. Register windows are a neat idea that leads to some
| challenging pathologies, but in short: the CPU has a bunch of
| registers, say 64, only 24 of which are visible at any
| moment. There are three classes of windowed registers: %iN
| (inputs), %lN (local), %oN (output). When you SAVE in a
| function preamble, the register windows rotate such that the
| caller's %os become your %is and you get a new set of %ls and
| %os. There are also 8 %gN (global) registers. Problems?
| There's a fixed number of registers so a bunch of them
| effectively go to waste; also spilling and filling windows
| can lead to odd pathologies. The other attribute is the delay
| slot which simply means that in addition to a %pc you have an
| %npc (next program counter) and the instruction _after_ a
| control flow instruction (e.g. branch, jmp, call) is also
| executed (usually, although branches may "annul" the slot).
|
| This code is in DTrace where we want to know the value of
| parameters from elsewhere in the stack, but don't want to
| incur the penalty of a register window flush (i.e. writing
| all the registers to memory). This code reaches into the
| register windows to pluck out a particular value. It turns
| out that for very similar use cases, Bryan Cantrill and I
| devised the same mechanism completely independently in two
| unrelated areas of DTrace.
|
| How's it work?
|
| We rotate to the correct register window (note that this
| instruction is in the delay slot just for swag):
| https://github.com/illumos/illumos-
| gate/blob/master/usr/src/...
|
| Then we jmp to %g3 which is an index into the table of
| instructions below (depending on the register we wanted to
| snag): https://github.com/illumos/illumos-
| gate/blob/master/usr/src/...
|
| The subsequent instruction is a branch always (ba) to the
| next instruction. So:
|
| %pc is the jmp and %npc is the ba. The jmp sets %npc to an
| instruction in dtrace_getreg_win_table and %pc is the old
| %npc thus points to the ba. The ba sets %npc to be the label
| 3f (the wrpr) and %pc is set to the old %npc, the instruction
| in the table. Finally the particular mov instruction from the
| table is executed and %pc is set to the old %npc at label 3f.
|
| Why do it this way? Mostly because it was neat. This isn't
| particularly performance critical code; a big switch
| statement would probably have worked fine. In the Solaris
| kernel group I remember talking about interesting
| constructions like this a bit around the lunch table which is
| probably why Bryan and I both thought this would be a cool
| solution.
|
| I'm not aware of another instance of instruction picking in
| the illumos code base (although the DTrace user-mode tracing
| does make use of the %pc/%npc split in some cases).
___________________________________________________________________
(page generated 2021-05-07 23:01 UTC)