Post B2GUtzSoWN0oJsdgrw by gentoobro@shitpost.cloud
 (DIR) More posts by gentoobro@shitpost.cloud
 (DIR) Post #B2G1mlisJWZr0NM7xQ by OpenComputeDesign@linuxrocks.online
       2026-01-13T22:06:04Z
       
       1 likes, 0 repeats
       
       @kabel42 @rl_dane @amin @sotolf @dusnm Maybe it's just an expectation I've gotten from x86, where the name of the architecture, to me, implies there is at least _some_ level of intercompatibility. Arm does not, to any practical degree, have this. But maybe that's just an x86 thing
       
 (DIR) Post #B2G1mnDOlWaTdKe1Am by gentoobro@shitpost.cloud
       2026-01-13T23:32:34.909193Z
       
       0 likes, 0 repeats
       
       It's because Intel, AMD, and Cyrix have a legal cartel over a certain scheme for ordering bits that can be used to command a CPU, somehow, and don't want to share very much. There is a new Chinese x86 CPU licensed through AMD, and occasionally whatever bankruptcy firm happens to own Cyrix at the moment teases some vaporware.

       Don't be surprised if some Chinese companies decide to ignore the patents on x86 in the near future; what's the US going to do, sanction Huawei double hard with extra sprinkles on top?

       ARM, on the other hand, built their whole business model around licensing out core designs for custom CPUs. They don't really care about keeping the instruction set closed because they don't make the chips themselves, and the whole reason you license an ARM core is that you can't or don't want to design a CPU from scratch yourself. It's much easier to choose from ARM's fast food menu and then tack on whatever extra goodies you need.
       
 (DIR) Post #B2GUCWPQfZyzkxKQvw by OpenComputeDesign@linuxrocks.online
       2026-01-14T01:47:23Z
       
       1 likes, 0 repeats
       
       @gentoobro @amin @rl_dane But like, everyone who licenses ARM has to make the actual hard parts themselves. I see ARM's _sole_ "value" as being that they let companies pretend to use an industry standard, while at the same time not actually having to be compatible with anything.

       Companies want more than anything else to _not_ be commodity, and consumers always want commodity, so ARM is a way for companies to scam everyone with "Hey, it's a standard CPU!" but then still charge for an SDK.
       
 (DIR) Post #B2GUCXpLOiJ49cSdxw by gentoobro@shitpost.cloud
       2026-01-14T04:51:00.224951Z
       
       0 likes, 0 repeats
       
       Not really. ARM licenses all sorts of things, including virtual core designs that run on FPGAs. You can just sort of buy a cookie-cutter, ready-to-go ARM CPU design from them, slap it into an FPGA or paste it into your VHDL design and ship it off to a fab. Then you can use existing ARM libraries and codebases, and you're targeting an ISA that's well known and easy to find developers for. Differences between ARM versions are a lot smaller than differences between the ARM family and some other architecture family.

       Remember, most of ARM's real customers are "little" guys making custom chips for their TVs or routers or whatever, not Apple or Samsung, who are trying to make high-end CPUs for phones and such. The big guys license the instruction set for toolchain reasons and then design their own cores with whatever special sauce they want.

       RISC-V is the one to watch. It's truly open, and extremely well designed.
       
 (DIR) Post #B2GUDAcx8iZGUTxnZA by OpenComputeDesign@linuxrocks.online
       2026-01-14T04:06:34Z
       
       1 likes, 0 repeats
       
       @amin @rl_dane @sotolf @dusnm Even modern Nokia dumb phones aren't as good as they used to be tbh
       
 (DIR) Post #B2GUtxzhz68VlK0vrc by OpenComputeDesign@linuxrocks.online
       2026-01-14T04:55:35Z
       
       1 likes, 0 repeats
       
       @gentoobro @amin @rl_dane Without timers and peripheral controllers and stuff (all the hard parts) it is emphatically _not_ ready-to-go.

       If RISC-V is anything like ARM, I no longer care about RISC-V. 100% not useful.
       
 (DIR) Post #B2GUtzSoWN0oJsdgrw by gentoobro@shitpost.cloud
       2026-01-14T04:58:52.368662Z
       
       0 likes, 0 repeats
       
       It's a fair bit different. The extension mechanism is built into the spec from the start. There are variable-width SIMD instructions too, allowing implementers to go as wide as they want. It's not particularly RISC-y in the grand scheme of things, except when looking down on it from x86's complete clusterfuck of instructions.
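       For a concrete picture of the variable-width part, here's a rough C sketch using the RVV vector intrinsics. Assumptions: a RISC-V toolchain that ships <riscv_vector.h>, and intrinsic names from the v1.0 intrinsics spec (older compilers spell them differently). The point is that the same binary runs on a core with narrow or very wide vectors; software just asks how many elements it gets each pass.

       /* Sketch only: vector-length-agnostic SAXPY (y = a*x + y) with RVV intrinsics. */
       #include <riscv_vector.h>
       #include <stddef.h>

       void saxpy(size_t n, float a, const float *x, float *y) {
           while (n > 0) {
               /* Ask the hardware how many elements it will handle this round.
                  A narrow core might answer 4, a wide one 32; same binary either way. */
               size_t vl = __riscv_vsetvl_e32m1(n);
               vfloat32m1_t vx = __riscv_vle32_v_f32m1(x, vl);  /* load x chunk */
               vfloat32m1_t vy = __riscv_vle32_v_f32m1(y, vl);  /* load y chunk */
               vy = __riscv_vfmacc_vf_f32m1(vy, a, vx, vl);     /* vy += a * vx */
               __riscv_vse32_v_f32m1(y, vy, vl);                /* store result */
               x += vl; y += vl; n -= vl;
           }
       }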
       
 (DIR) Post #B2GVEQqMQvg11oOsyW by gentoobro@shitpost.cloud
       2026-01-14T05:02:35.148336Z
       
       0 likes, 0 repeats
       
       Depends what you're targeting. If it's an FPGA, there are tons of commercial and open source peripheral controller designs that you can just paste in. If you're designing an ASIC, then timers and peripheral controllers are kind of the point of what you're doing with it and should be in your wheelhouse. Licensing an ARM core means you don't have to deal with it, and that you have an entire pre-existing ecosystem that can target your device.
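       From the software side those peripherals mostly boil down to memory-mapped registers, wherever the RTL came from. Rough sketch in C with an entirely made-up timer: the base address, offsets, and control bit below are hypothetical stand-ins for whatever a real SoC's datasheet defines.

       /* Hypothetical memory-mapped timer: addresses and bits invented for illustration. */
       #include <stdint.h>

       #define TIMER_BASE        0x40001000u                                /* hypothetical */
       #define TIMER_LOAD        (*(volatile uint32_t *)(TIMER_BASE + 0x0))
       #define TIMER_CTRL        (*(volatile uint32_t *)(TIMER_BASE + 0x4))
       #define TIMER_VALUE       (*(volatile uint32_t *)(TIMER_BASE + 0x8))
       #define TIMER_CTRL_ENABLE (1u << 0)                                  /* hypothetical bit */

       static void timer_start(uint32_t ticks) {
           TIMER_LOAD = ticks;               /* count down from this value */
           TIMER_CTRL |= TIMER_CTRL_ENABLE;  /* start the counter */
       }

       static uint32_t timer_read(void) {
           return TIMER_VALUE;               /* current count */
       }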
       
 (DIR) Post #B2GWmBTcAo8xAWQ3Hc by OpenComputeDesign@linuxrocks.online
       2026-01-14T05:01:46Z
       
       1 likes, 0 repeats
       
       @gentoobro @amin @rl_dane Yeah, looking at pretty much any still-developed instruction set these days, "RISC" is a hilarious joke of a lie more often than not. Although, from what I've heard, pretty much all modern CPUs, RISC and CISC alike, don't actually implement the instruction set directly, but instead have some form of internal conversion or interpretation, Crusoe style.

       Which honestly just makes instruction sets even more of a joke.
       
 (DIR) Post #B2GWmCYGB1SAVDbHA8 by gentoobro@shitpost.cloud
       2026-01-14T05:19:48.708977Z
       
       1 likes, 0 repeats
       
       What happened was that both sides were right. Simpler instructions do allow faster clocks, and complicated instructions do allow advanced hardware optimizations. The big development is that you need to decode the CISC instructions from the user into highly specialized, model-specific internal RISC instructions that we now call microcode. The translation step is essentially free at modern CPU scales.

       You have to do something like this anyway to get a superscalar design (introduced, IIRC, with the Pentium 1, and considerably later on ARM cores); superscalar cores need multiple simultaneously operating execution units for various things, and as such you have to track what they're doing, where all the results go, and how to put it all back together. The difference between this bookkeeping and "microcode" is vanishing and semantic.

       The big gains come in when you can figure out how to make special hardware that does a complicated instruction in fewer or faster micro-ops. Consider FMA (fused multiply-add), a super common operation. If you have an FMA instruction then it's really easy for early or simple designs to microcode it as a multiply followed by an add, whereas later or more advanced designs can have specialized FMA hardware that does it all at once. Lacking the instruction, the compiler emits some sequence of small instructions, likely spread apart and using more registers, that would be much harder for the CPU to then figure out, verify the semantics of, and optimize onto special hardware.

       Zen 1's 256 bit AVX instructions are microcoded to run sequentially on the 128 bit hardware, but Zen 2 had full width lanes, with both generations being binary compatible. Zen 2 users got a free performance boost, even on software that predates the core design, because the ISA designers thought ahead.
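       To make the FMA point concrete, here's a small C sketch. fmaf() is standard C99 <math.h>: on an FMA-capable target it can compile to a single fused multiply-add instruction, and whether the core then runs that on dedicated FMA hardware or cracks it into multiply-then-add micro-ops is the CPU designer's call, which is the flexibility described above.

       /* Dot product written so each step is one explicit multiply-add. */
       #include <math.h>

       float dot(const float *a, const float *b, int n) {
           float acc = 0.0f;
           for (int i = 0; i < n; i++) {
               /* One ISA-level operation; the core decides whether this runs on a
                  fused FMA unit or gets cracked into separate multiply and add ops. */
               acc = fmaf(a[i], b[i], acc);
           }
           return acc;
       }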
       
 (DIR) Post #B2GWtbJEWjABJX6Fxw by OpenComputeDesign@linuxrocks.online
       2026-01-14T05:05:58Z
       
       1 likes, 0 repeats
       
       @gentoobro @amin @rl_dane "Licensing an ARM core means you don't have to deal with it, and that you have an entire pre-existing ecosystem that can target your device."

       Or it would, if it wasn't for the fact that, regardless of the reason, ARM code is simply unreasonably unportable between chips and implementations.

       And also every ARM chip ever made uses a completely different (although, from what I've heard and found personally, equally unusable) SDK.
       
 (DIR) Post #B2GWtccPeu6DNJ55V2 by gentoobro@shitpost.cloud
       2026-01-14T05:21:05.132713Z
       
       0 likes, 0 repeats
       
       Sure, but you're not writing a whole toolchain from scratch, and devs aren't having to learn the entire architecture anew.
       
 (DIR) Post #B2GXgoqiyPntUvEaFk by OpenComputeDesign@linuxrocks.online
       2026-01-14T05:23:37Z
       
       1 likes, 0 repeats
       
       @gentoobro @amin @rl_dane So I guess the question is, why the fuck don't all modern CPUs use programmable instruction sets like Crusoe did.
       
 (DIR) Post #B2GXgq6iISBhOniroW by gentoobro@shitpost.cloud
       2026-01-14T05:30:05.468262Z
       
       0 likes, 0 repeats
       
       They kinda do, at a hardware level, if you happen to have the trade secrets of exactly how each model of CPU runs its internal microcode firmware. The microcode decoding bits are programmable, and are uploaded by the BIOS into the CPU at boot. This is how they implemented the Spectre and Meltdown mitigations. Exactly how programmable any given CPU is isn't known to the public, but it's probably not fully programmable.

       In practice, that's closely guarded because none of the x86 players want you to wander off the farm. Big ARM designs are in the same camp, and small designs of any architecture don't have the transistor/gate budget for the extra overhead.

       Personally, I would love to see it, and I suspect there will be some cheeky fabs out there one day that officially target RISC-V but also happen to ship, and document, their conveniently flexible microcode programming.
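       Side note on the "uploaded at boot" part: on x86 Linux you can at least see which microcode revision is currently loaded, since the kernel reports it in /proc/cpuinfo (assuming that field is present on your system). Minimal sketch:

       /* Print the "microcode" field from /proc/cpuinfo on x86 Linux.
          The update blob itself is applied by the BIOS or early in boot;
          this only shows the revision the kernel says is active. */
       #include <stdio.h>
       #include <string.h>

       int main(void) {
           FILE *f = fopen("/proc/cpuinfo", "r");
           if (!f) { perror("fopen"); return 1; }
           char line[256];
           while (fgets(line, sizeof line, f)) {
               if (strncmp(line, "microcode", 9) == 0) {
                   fputs(line, stdout);  /* e.g. "microcode : 0x..." */
                   break;                /* one core's value is enough here */
               }
           }
           fclose(f);
           return 0;
       }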
       
 (DIR) Post #B2GXvTI5PVCyyj1Fxo by OpenComputeDesign@linuxrocks.online
       2026-01-14T05:32:17Z
       
       1 likes, 0 repeats
       
       @gentoobro @amin @rl_dane I guess what I'm saying in all this is, I hate any and all companies that have more control over my computer than me, regardless of the technical reasons. And I feel like we should start storming more corporate headquarters
       
 (DIR) Post #B2GYarfXg2dwSXjDm4 by gentoobro@shitpost.cloud
       2026-01-14T05:40:14.937109Z
       
       0 likes, 0 repeats
       
       There is some hope on the horizon. Fab tech has already basically plateaued, and it's only a matter of time before every two-bit chipmaker is shitting out 1.7nm EUV chips for bargain basement prices, just like everyone else.

       At that point, some poorer, less technologically developed, but not exactly 100% NATO- or China-aligned countries will hopefully come to the realization that it's in their national interest to manufacture extremely open chips and let the open source community subsidize the toolchain, software development, and security of it all. If, for example, Russia started making fully programmable, open ISA, no-secret-bullshit CPUs and GPUs, then lots of people and countries would be interested in them and would build a community around them.

       Catch is, last time I checked, Russia was only making 130nm class chips in small quantities. In 15 years, who knows?
       
 (DIR) Post #B2GZjREDS6vZ10bBiq by OpenComputeDesign@linuxrocks.online
       2026-01-14T05:46:56Z
       
       1 likes, 0 repeats
       
       @gentoobro @amin @rl_dane What a strange world this is, where this is what hope looks like. But western companies clearly have no intention whatsoever of making things better instead of worse, so...

       ALL HAIL EASTERN CHIP FABS

       (Realizes the irony that actually most of our chips are already made in the east, just in countries we've stuck our western flags in and claimed for our own)
       
 (DIR) Post #B2GZjSj5snDlf43MUS by gentoobro@shitpost.cloud
       2026-01-14T05:52:58.545303Z
       
       0 likes, 0 repeats
       
       China makes a shit-ton of chips now and is only a little behind Taiwan. They make chips on more advanced processes than the CPU I'm writing this response from, which is perfectly fine for all of my *checks notes* game development needs.