[HN Gopher] In 1981, Intel released the iAPX 432
___________________________________________________________________
In 1981, Intel released the iAPX 432
Author : mepian
Score : 115 points
Date : 2023-04-20 16:47 UTC (6 hours ago)
(HTM) web link (oldbytes.space)
(TXT) w3m dump (oldbytes.space)
| walnutclosefarm wrote:
| Any discussion of the iAPX 432 should mention IBM's earlier stab
| at building a minicomputer around a single-level object store
| hardware architecture - the System/38. It too struggled in the
| market, although, being a sibling of the System/3X -> AS/400
| line, it didn't die quite as ignominiously as the iAPX 432.
| rektide wrote:
| Intel's first attempt at a 32-bit (thanks for the save chris_st)
| CPU, a huge two-chip CISC beast. Read it, well worth it.
| chris_st wrote:
| Do you mean their first _32-bit_ CPU? The 4004/8008/8080 all
| precede it (_edited, thanks chihuahua_).
| [deleted]
| zymhan wrote:
| Recent related post:
| https://news.ycombinator.com/item?id=35412342
|
| "iAPX432: Gordon Moore, Risk and Intel's Super-CISC Failure"
| e40 wrote:
| When I was at UCB I was roommates with a grad student working
| on/with (not sure which) the 432. I was learning CS at the time
| and what he described did seem like a wild alternate universe of
| computing.
| kens wrote:
| Was your roommate working on the 432 performance paper? (Paul
| M. Hansen, Mark A. Linton, Robert N. Mayo, Marguerite Murphy,
| and David A. Patterson)
| e40 wrote:
| Not one of those people... honestly, having trouble
| remembering his name now, but I would recognize it if I saw
| it. And, I know Dave Patterson wasn't my roommate. :) His
| office was just down the hall, though, when I was at UCB.
|
| EDIT: I think it was Karl (Carl?) something.
| Zanni wrote:
| Favorite bit: "What if the 432 had won? Computing would be very
| different. Many security problems wouldn't exist. You can't have
| a buffer overflow because every data structure is a separate
| object with memory segment size enforced in hardware. You can't
| smash the stack or make bad pointers."
|
| In the early 80s, speed was everything and security a non-issue.
| LANs hadn't even taken off yet, let alone the internet. But
| that's almost flip-flopped today.
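|
| To make the hardware-bounds claim concrete, a minimal Rust
| sketch (hypothetical code, not from the article; Rust's safe
| subset would reject it, hence the unsafe block) of the class
| of bug a per-object segment limit would trap. On the 432 the
| access past the object's end faults in hardware; on a flat
| memory it silently reads whatever happens to be adjacent:
|
|     fn main() {
|         let buf = [0u8; 4]; // conceptually, a 4-byte "object"
|         let p = buf.as_ptr();
|         unsafe {
|             // Undefined behaviour: reads past the end of a
|             // 4-byte object; nothing checks the bound here.
|             let oob = *p.add(8);
|             println!("{oob}");
|         }
|     }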
| pjmlp wrote:
| Really?
|
| "A consequence of this principle is that every occurrence of
| every subscript of every subscripted variable was on every
| occasion checked at run time against both the upper and the
| lower declared bounds of the array. Many years later we asked
| our customers whether they wished us to provide an option to
| switch off these checks in the interests of efficiency on
| production runs. Unanimously, they urged us not to--they
| already knew how frequently subscript errors occur on
| production runs where failure to detect them could be
| disastrous. I note with fear and horror that even in 1980
| language designers and users have not learned this lesson. In
| any respectable branch of engineering, failure to observe such
| elementary precautions would have long been against the law."
|
| C.A.R. Hoare in his 1980 Turing Award speech.
|
| In 1988, the Morris worm took over UNIX systems:
|
| https://en.m.wikipedia.org/wiki/Morris_worm
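|
| For contrast, the checked-subscript behaviour Hoare describes
| survives in today's bounds-checked languages; a minimal Rust
| sketch (a hypothetical example; the index is taken from the
| environment so the check can't be folded away at compile time):
|
|     fn main() {
|         let a = [1, 2, 3];
|         // An index not known at compile time (>= 7 here).
|         let i: usize = std::env::args().count() + 6;
|         // Every subscript is checked against the declared
|         // bounds at run time; this panics rather than
|         // reading arbitrary memory.
|         println!("{}", a[i]);
|     }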
| aidenn0 wrote:
| On a side note, my dad observed that when he was getting his
| Master's degree in CS, they had to read Hoare's CSP paper. He
| said the class was divided into two groups: those who didn't
| understand it, and those who thought the problems with
| mutable-by-default multithreading could be solved with proper
| programmer discipline.
| WorldMaker wrote:
| I'm really curious at this point what would happen if someone
| tried to resurrect a design like the 432. Maybe not the 432
| _exactly_, because things like bit-aligned instructions are
| still awkward/weird and didn't turn out to be as useful as the
| designers hoped. (It seems like an obvious compromise: with
| object tagging eating so much RAM in an era when RAM was so
| expensive, it's an easy bet that they felt a need to nickel-
| and-dime/golf program code size wherever they could.)
|
| But even just reusing the architecture as-was, it would
| certainly be cheaper and need fewer chips today. It might be
| fun to have a cheap Raspberry Pi-like board with a 432 to
| experiment with coding against.
| bbatha wrote:
| The CHERI extension for ARM does this:
| https://www.cl.cam.ac.uk/research/security/ctsrd/cheri/. The
| Rust language is experimenting with adding language-level
| pointer provenance APIs that provide the same info to the
| compiler, and would presumably compile to those instructions if
| available, https://doc.rust-
| lang.org/nightly/std/ptr/index.html#strict-...
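|
| A minimal sketch of what those (at the time, nightly-only)
| strict-provenance APIs look like; map_addr is from the page
| linked above, while the tag-bit trick is just a hypothetical
| use of it. The point is to derive new pointers from old ones
| rather than casting through usize, so the compiler - or
| CHERI-style hardware - can keep tracking which allocation a
| pointer is allowed to touch:
|
|     #![feature(strict_provenance)] // nightly as of this thread
|
|     fn main() {
|         let x = 42u32;
|         let p: *const u32 = &x;
|         // Stash a tag in the low bit, then strip it again.
|         // Both derived pointers inherit p's provenance.
|         let tagged = p.map_addr(|a| a | 1);
|         let clean = tagged.map_addr(|a| a & !1);
|         unsafe { assert_eq!(*clean, 42) };
|     }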
| kens wrote:
| The ironic thing is that the performance analysis paper [*]
| found that the iAPX 432's weird bit-aligned instructions didn't
| actually help code size.
|
| "Although the 432 has bit-variable length instructions, it
| requires more space than either the 68000 in Pascal or the
| VAX in C. Reasons include the lack of immediates and the
| inability to refer to a local variable or constant using
| fewer than 16 bits of address."
|
| (From a modern perspective, the test programs are absurdly
| small: 120-2900 bytes. Nowadays, you probably couldn't even
| create a program that small.)
|
| [*] https://archive.org/details/PerformanceEvaluationOfTheInt
| elA...
| Joker_vD wrote:
| Well, I can make a guess: porting existing C code to it would
| be a horrible experience, so it would never get off the ground
| because, you know, ain't nobody got time to rewrite the whole
| world from scratch.
| WorldMaker wrote:
| My curiosity has nothing to do with running existing C
| code. The reference to Raspberry Pi should have made that
| obvious, I think. The idea would be to have something to
| play with as a hobbyist. Something to develop new compilers
| for to see what you can build and if what you can build is
| fun/interesting, even if it doesn't have commercial
| aspirations or "productive" uses and is only ever a "toy".
| You know, hobbyist fun.
| Joker_vD wrote:
| Ah, well, you can do that with FPGA prototyping even
| today, you know.
| WorldMaker wrote:
| That is something I know. It's why I suggested it might be
| cheap to explore the concept (at the bottom of my post), and
| part of why I'm curious to see others try. I _personally_
| have college experience in FPGA prototyping but have focused
| my efforts on software development then and since, so I
| wouldn't have much fun building an FPGA prototype of
| something like the 432 by myself. But if someone sent me a
| cute RPi-like board and an okay (doesn't have to be great)
| assembler/debugger for it, I'd certainly try to build
| something interesting against it, like a compiler for a next
| higher-level language. And if that same someone built enough
| boards to seed an entire community of hobbyist developers,
| that would be even better: that's the biggest reason to
| mention it as a curiosity on a site like this, not to do a
| one-off thing to scratch a personal itch, but to wonder what
| a community of hobbyists could do together.
| rwmj wrote:
| What would have _actually_ happened is that once the first
| programmer realised you could get a speed bump by putting all
| your objects inside a single segment, we'd have been back to
| where we are now. Which is roughly what happened to the 286,
| which borrowed some of the 432 concepts -- every practical OS
| ignored the call gates, segments, and all but two of the rings.
| pjmlp wrote:
| That is what happens without liability; cybersecurity laws
| are finally fixing this.
| AnimalMuppet wrote:
| They are? In what jurisdictions?
| pjmlp wrote:
| US and EU, slowly getting there.
|
| https://www.sonatype.com/national-cybersecurity-strategy-
| wha...
|
| https://digital-strategy.ec.europa.eu/en/library/cyber-
| resil...
|
| The rest will follow.
| AnimalMuppet wrote:
| That doesn't seem to point me to any actual laws in the
| US...
| singleshot_ wrote:
| The state of liability for insecure software in America
| is woefully bad.
|
| You're not going to be able to apply breach of warranty
| under UCC. As a threshold matter, software might not even
| be considered a "good", and so it might not even be governed
| by the UCC. But if you did manage to apply the UCC to
| software, the contract or license under which the software is
| provided would have to be incompetently drafted for it to be
| found not to effectively disclaim liability, or to be found
| unconscionable. There are some additional procedural hurdles
| concerning privity of contract, but let's leave it there.
|
| Under basic tort law, a software provider would need to
| be found to have a duty of care in order to be found
| negligent, but surprisingly, this is seldom found to
| exist. There is also no clear standard to measure whether
| a software provider may have breached that duty.
| Intervening or superseding cause exists in almost every
| breach, although for security-specific software, this
| might not be the case (e.g., a firewall that fails open might
| foreseeably cause damage by means of a hacker, against whom
| the firewall is meant to protect).
|
| Finally, even if you get to this point, recovery may be
| barred or limited by the economic loss doctrine.
|
| Defective product law is even worse, due to the
| (inscrutable) fact that while most courts would call
| software a good for UCC purposes, they would not call it
| a product for the purposes of product defect law. There
| are some additional wrinkles here concerning design of
| software versus manufacture of software in the context of
| defective product design law, but we're starting to
| wander out of HN post territory and into CLE territory
| here.
|
| Frankly, if Joe Biden wanted to change everything I just
| described, it would be superb, but I think we have to be
| realistic about how much influence the federal government
| can apply here without causing massive, terrible second-
| and third-order effects. For example, look back to Bush 43
| and Obama's heavy reliance on cybersecurity insurance to
| advance national security interests, and how quickly that
| turned into free payouts for everyone who could manage to
| get hit by ransomware at work.
| kens wrote:
| Author here if there are any questions. Also, if anyone happens
| to have other iAPX 432 chips sitting around, I'd be happy to take
| high-resolution die photos. There aren't good 432 die photos
| around, so I was happy to get access to the 43201 chip.
|
| The thread is also on Twitter:
| https://twitter.com/kenshirriff/status/1649075486008524801
| panick21_ wrote:
| Tom Lyon said in a podcast that he has some; his wife was in
| Intel Marketing and they were giving them away. You should be
| able to find him.
| kens wrote:
| Thank you for this very helpful suggestion! I got in touch
| with Tom and he does have some 432 chips.
| panick21_ wrote:
| Glad my memory for useless information I retained from
| podcasts paid off for once.
|
| I think it was the Oxide and Friends episode on SPARC, but
| I'm not sure.
| helf wrote:
| Just wanted to say that I love your content and look forward to
| new stuff. Thanks so much for putting the effort forward that
| you do! I've learned tons over the years following you :)
| Greatly appreciate it.
| JoeAltmaier wrote:
| My brother worked there at the time. He wanted in on the project,
| but Intel mgmt had already begun to pull back from it,
| controlling costs.
|
| I recall him saying at the time, "A JMP . instruction takes 500
| machine cycles". This was apparently because it was largely an
| interpreter - the 'assembly' wasn't run directly in hardware but
| by some kind of firmware. I suppose the OP covered all that.
| sedatk wrote:
| To be fair, all modern CISC CPUs are microcode interpreters
| too.
| dfox wrote:
| Implementing a CPU by writing an interpreter in lower-level
| microcode is an obvious implementation strategy for CISC ISAs.
| But the mechanism by which modern OoO x86 CPUs (and even many
| notionally RISC CPUs) work is different: the incoming
| instruction stream is converted into internal "more RISC"
| uOps, and these are then executed by the OoO pipeline. This
| happens almost entirely in hardware (on current x86
| implementations, with a somewhat incredible amount of
| parallelism), and the decoder can in fact combine multiple
| instructions into one uOp (or even discard an instruction
| entirely). There is usually some microcode, but it is there
| to handle instructions that cannot be efficiently mapped to a
| block of uOps (too much work done in one instruction;
| interestingly, ARM has a bunch of such cases) and to handle
| interrupts and exceptions.
| aidenn0 wrote:
| Someone observed to me once that the dominance of ARM has
| proven both the CISC _and_ RISC sides wrong, as ARM has
| instructions of fairly intermediate complexity.
| kimixa wrote:
| I've generally heard it said that the two major successful
| architectures today are the most complex RISC in ARM, and
| the simplest CISC in x86.
| monocasa wrote:
| Awesome, I was really hoping that kens would take a look at the
| 432 chipset given his experience reversing the 8086 and the fact
| that the 432 is on a similar Intel HMOS process. Particularly
| because there is a lot of commentary on why the 432 failed, but
| unfortunately not much hard detail on what it actually
| was.
|
| If people want to read more, some of the best docs I've found are
| the academic literature on the chips.
|
| Instruction Decoder: https://doi.org/10.1109/JSSC.1981.1051633
|
| Execution Unit: https://doi.org/10.1109/JSSC.1981.1051631
|
| Interface Processor: https://doi.org/10.1109/JSSC.1981.1051632
| Findecanor wrote:
| The book "Capability-Based Computer Systems" by Henry M. Levy
| has 28 pages on how to program the iAPX 432. Quite a lot to
| digest.
|
| <https://homes.cs.washington.edu/~levy/capabook/Chapter9.pdf>
| kens wrote:
| Yes, those papers were very informative. Also the iAPX 432
| patents have a lot of detail:
| https://patents.google.com/patent/US4325120A
| https://patents.google.com/patent/US4367524
| https://patents.google.com/patent/US4415969
| tibbydudeza wrote:
| The days of Byte Magazine, when people and especially venture
| capitalists funded all sorts of new ideas, and the market had
| space for things like the Amiga.
|
| My favourite one was a 3-chip object-oriented system called
| Objekt, Numerik and something else, by hi-fi manufacturer Linn -
| the owner at the time loved the VAXen running his factory and
| funded a new architecture.
|
| The fixation with the letter "k" I found rather intriguing.
| EdwardCoffin wrote:
| The four chip system you are thinking of was the Logik, Objekt,
| Numerik, and Klock chips that made up Rekursiv [1]. There was
| some crazy backstory to it that I read up on a few years ago,
| which I no longer remember, but I do remember it was
| outlandish.
|
| [1] https://en.wikipedia.org/wiki/Rekursiv
| forgotmypw17 wrote:
| https://archive.is/imnEY
| chris_st wrote:
| And then in 1989, they released the i860 [0], with similar
| promises and a similar failure.
|
| 0: https://en.wikipedia.org/wiki/Intel_i860
| kjs3 wrote:
| Intel and Microsoft thought so much of the i860's future that it
| was the architecture Microsoft started developing Windows NT
| on. Obviously... that didn't work out.
| WorldMaker wrote:
| Similarly, Microsoft allegedly invested a lot into NT support
| for the even later (and more goofily named) Itanium [1]
| (IA-64) architecture.
|
| In general, Intel's history of failed architectures is almost
| more interesting than its history of winning architectures.
|
| [1] https://en.wikipedia.org/wiki/IA-64
| Findecanor wrote:
| The i960 embedded CPU series (that came out at about the same
| time as the i860) has been called the successor to the iAPX432.
| It had the same lead designer, and was also intended to run
| Ada, but except for its use of tagged memory it was a
| relatively conventional RISC architecture:
| <https://en.wikipedia.org/wiki/Intel_i960>
| jandrese wrote:
| The i860 failed for entirely different reasons though. It was
| really a vector engine that could manage some general compute
| with some grumbling, whereas the 432 was built for general-
| purpose applications written in Ada.
|
| But I guess they did both fail for the same reason: they were
| more expensive and slower than x86 when running regular
| applications. Itanium would follow this trend. Come to think of
| it, the i860 and Itanium are far more closely related, both
| suffering greatly from the Intel hardware engineers punting on
| the architecture's difficult instruction scheduling issues and
| the compiler writers not being able to pull a rabbit out of
| the hat and magically fix them. So you end up with chips that
| only ever perform properly on trivially parallelizable tasks
| and are slow on the vast majority of code.
| chris_st wrote:
| As I understand it, the hardware design for the i860 made it
| _incredibly_ hard to write a compiler for. This also helped
| it fail.
| cpeterso wrote:
| For more of the story behind the development of the iAPX 432,
| check out "iAPX432 : Gordon Moore, Risk and Intel's Super-CISC
| failure":
|
| https://thechipletter.substack.com/p/iapx432-gordon-moore-ri...
|
| https://news.ycombinator.com/item?id=35412342
| kabdib wrote:
| I was taking a VLSI design course around that time; our
| instructor came into class with a fist-size block of plexiglass,
| with an iAPX 432 chipset embedded in it. A gift from Intel. The
| chips were HUGE.
|
| I have Elliot Organick's book on the 432. It is not fun reading.
| pkaye wrote:
| I found the book on Bitsavers.
|
| http://www.bitsavers.org/components/intel/iAPX_432/Organick_...
| jandrese wrote:
| Even one chip wasn't enough: the amount of die area needed and
| the limited lithography of the time meant Intel had to split it
| into three huge and expensive chips.
|
| One of the crazier aspects of the design was that it basically
| required the Ada programming language to take over the world.
| As far as predictions of the future go, that has to be one of
| the most expensive misses of all time.
| rwmj wrote:
| The first IBM POWER was also split over several chips
| (Wikipedia says up to 10! -
| https://en.wikipedia.org/wiki/POWER1#Physical_description)
| aidenn0 wrote:
| POWER2 was originally multiple chips as well.
| Taniwha wrote:
| Remember that at the time no one put a mainframe-class chip
| on a single die. The 68Ks/32Ks, and much later the 386s, were
| the first generation to pull off putting a large architecture
| on a chip (I'm trying to make a distinction here between an
| architecture and what particular implementations could do).
| In the early 70s mainframes were the size of basketball
| courts; in the late 70s/early 80s they were the size of a
| large fridge (and disk drives the size of a washing
| machine).
|
| So splitting it all across 3 chips wasn't necessarily the
| crazy idea we might think it is today - it still all fitted
| on one circuit board!
| jandrese wrote:
| It depends on your perspective. From a
| mainframe/minicomputer standpoint it was pretty good. From
| the microcomputer perspective it was nuts. Unfortunately
| for Intel they've always been closely associated with
| microcomputers so that is the standard by which they were
| judged.
| twoodfin wrote:
| At the time it wasn't at all clear that microcomputers
| and their architecture would graduate from being toys for
| hobbyists to ruling the computing world. It was assumed
| in many circles (including Intel, apparently) that minis
| and mainframes--really, more "modular" less "integrated"
| architectures--would own the enterprise.
|
| (The dependence on Ada makes a lot more sense in this
| context: Your mainframe vendor telling you what language
| you could use wasn't odd at all in that space.)
|
| DEC also essentially fell victim to this misapprehension,
| and unlike Intel didn't have a Plan B.
| chx wrote:
| One of my favorite crazy facts: in 1991 a Hungarian
| university got an IBM 3090 mainframe as a gift, while
| outdated it was still insanely more advanced than anything
| else available in the country. They found a spot on the top
| floor for it but it was impossible to get it up there so
| _they removed the roof_ and used a crane.
|
| Memorial page with photos:
| http://hampage.hu/nosztalgia/netto/ursus/index.html
|
| In 1995, when another university got a VAX 9000 cluster for
| the bargain basement price of 50 000 CHF (~130 000 USD
| equivalent today) -- the Swiss basically discarded it as
| outdated, this truly was cheap -- and they tried to switch
| it on, it blew the fuse ... of the district. They needed to
| run a new power line from the nearest substation.
| aidenn0 wrote:
| The 68k was so much simpler than the 432 that it came out
| over a year earlier, despite starting two years
| later.
| WorldMaker wrote:
| From my understanding, that wasn't necessarily a bad
| prediction at the time. Ada was one of the most advanced
| languages of its day and was built with a lot of things that
| wouldn't become standard in other languages for decades, in
| some cases. From that perspective, even in hindsight, it was
| very much "the future".
|
| It was built for the US Department of Defense and it is easy
| to believe that something with US government backing might
| have massive staying power, even if only just for (sometimes
| lucrative) defense contracts.
|
| The biggest problem with Ada at the time was how expensive it
| was to purchase compilers, which is obviously par for the
| course for something built for defense contracts on a defense
| budget but would have been a lot harder to swallow for
| general software companies. Even then there was room to
| imagine that competitor compilers might be built (especially
| if you think you've got a chipset designed to make that in
| theory easy to do by moving many of the language
| intrinsics/underlying virtual machine directly into hardware
| as a physical machine).
| jandrese wrote:
| Ada had a lot of companies that missed the boat on earlier
| languages seeing their opportunity to really get the
| monetization right this time. It helped that Ada was so
| complex that some rando in his basement couldn't write a
| good compiler for it like they could with C, BASIC, Pascal,
| Fortran, LISP, etc... It was expensive to develop so it was
| expensive to purchase. Even better, such a huge compiler
| for a new language is doomed to be buggy, especially at
| first, so they can make even more money on expensive
| support contracts.
|
| The only thing missing was a reason for programmers to
| switch to it. There were promises that the code would be
| safe, and that if it compiled it should just work, but the
| buggy nature of the compilers undercut these promises.
| Actually, it sounds a lot like Rust minus the buggy
| compiler part.
|
| The idea that the defense contractors would drive the
| industry over-estimates just how much code they write. Sure
| it's millions of lines every year, but compared to
| worldwide C development it is a drop in the bucket.
| WorldMaker wrote:
| > The idea that the defense contractors would drive the
| industry over-estimates just how much code they write.
| Sure it's millions of lines every year, but compared to
| worldwide C development it is a drop in the bucket.
|
| The implication I was making, in case it wasn't overt
| enough, had a lot more to do with money/budget/revenue
| than lines of code. The US DOD has massive budgets and
| over-spends those budgets on all kinds of wild things (so
| long as they can pass Congressional oversight, at least).
| The amount of money the US DOD spends on defense
| contractors will always attract a lot of attention, even
| if the amount of code that the US DOD gets back from all
| that spending will always be a drop in the bucket to the
| overall software industry.
|
| A different, unrelated, implication that is often used is
| that the DOD has increased requirements for safety,
| security, and oversight and that "defense-grade" or
| "defense-level" signifies something stronger than
| consumer-grade. You can see some of that in the 432's
| attempted marketing: imagine having access to "defense-
| grade" Ada-capable hardware in the microcomputer form
| factor at a fraction of then-current mainframe costs. Of
| course, in reality, not everyone needs "defense-grade"
| and the "dream" of using the same language and similar
| tools to the US DOD never had quite the appeal that Intel
| marketers must have hoped for. But you can certainly see
| why Intel marketers might have thought there was some
| advantage in "defense grade".
|
| The US DOD and its defense contractors have never exactly
| "led" the industry, but they've certainly had a large
| shadow over of it for most of computing history. (Says
| the person sending this little collection of thoughts and
| opinions over a vast and incredible inter-network of
| networks that began as a Defense advanced research
| project.)
| ajxs wrote:
| > ...some rando in his basement couldn't write a good
| compiler for it like they could with C, BASIC, Pascal,
| Fortran, LISP...
|
| I see Ada as having a fundamentally different philosophy
| from languages like C and BASIC. Ada compilers were
| allowed to not implement all of the standard library
| features, and many didn't.
|
| > ...such a huge compiler for a new language is doomed to
| be buggy, especially at first...
|
| Was this actually what happened though? There was always
| the Ada Conformity Assessment Test Suite (ACATS), which
| tested a compiler's implementation. If compilers ever
| hindered Ada's adoption, it wasn't because they were
| buggy, it was how expensive and resource intensive they
| were.
|
| > There were promises that the code would be safe, and
| that if it compiled it should just work, but the buggy
| nature of the compilers undercut these promises...
|
| ...except that it did kind of work in the end though,
| didn't it? Ada is still very widely used in a lot of
| places, and has been for a long time. Lots of new code is
| still being written in Ada too. There's lots of
| literature available about how well Ada integrates in
| environments where code is audited against safety
| regulations.
|
| > The idea that the defense contractors would drive the
| industry over-estimates just how much code they write.
|
| This is certainly true now, but was this true when Ada
| was being designed in the late-70s/early-80s? My
| understanding is that C didn't become the dominant
| embedded-systems language until much later. I wasn't
| there, mind you. I could have some historic details
| wrong.
|
| Edit: It's worth adding that of the languages the parent
| poster mentioned above, the only language which still
| sees mainstream use is C. I know Fortran and LISP have
| very healthy communities and enjoy a lot of use in niche
| industries, of course. I don't think this is what the
| parent poster meant exactly, however it's very common to
| hear people talk about Ada as a failure on the basis that
| it didn't completely overtake the software industry. You
| don't really hear people condemn BASIC, or Pascal in the
| same way though, even though Ada enjoys much more modern
| usage.
| nradov wrote:
| Even the defense industry is migrating away from Ada. The
| F-22 Raptor software was mostly written in Ada. The F-35
| Lightning II (JSF) software is mostly written in C++.
| While the JSF program has had some major software delays,
| overall the choice of C++ seems to have worked out.
| ajxs wrote:
| This is true. Ironically, when we're talking about an
| area as heavily regulated as aerospace, I don't think
| language choice matters as much as it does elsewhere. My
| understanding is that if you're aiming for certification
| with a standard like DO-178C and have to demonstrate
| compliance on a line-by-line basis, ultimately only
| compliance matters, not what language it's written in.
| Some languages will achieve that result more easily than
| others though. Someone please correct me if I'm off track
| here.
| bitwize wrote:
| Wow, it's a Burroughs machine on a chip!
|
| Well, two chips actually, but the idea is the important thing.
| pjdesno wrote:
| I remember reading all about it in my dad's issues of Spectrum.
| suid wrote:
| That was the starting point of my undergraduate independent
| seminar topic.
|
| Back then the complexity was even more mind-boggling, as I had
| barely any appreciation of the object-oriented world, and the
| idea of pushing all that into the micro-architecture was really
| hard to get my mind around, let alone explain to a
| class full of fellow students.
___________________________________________________________________
(page generated 2023-04-20 23:00 UTC)