[HN Gopher] The cult of Amiga and SGI, or why workstations matter
___________________________________________________________________
The cult of Amiga and SGI, or why workstations matter
Author : kgerzson
Score : 113 points
Date : 2022-04-05 13:24 UTC (9 hours ago)
(HTM) web link (peter.czanik.hu)
(TXT) w3m dump (peter.czanik.hu)
| bitbckt wrote:
| I still boot up my maximum-spec Octane2 (2x600MHz R14k, 8GB RAM,
| VPro V12, PCI shoebox) every so often to bask in the good ol'
| days.
|
| After nekochan went offline, there isn't really a central
| gathering place for SGI fans anymore, but we are out there.
| im_down_w_otp wrote:
| That's a beast of a machine!
|
| In my home office I have a little mini museum that consists of a
| display of esoteric 90's workstations:
|
| * Apple Quadra 610 running A/UX (Apple's first UNIX)
|
| * NeXT NeXTstation Turbo
|
| * SGI Indigo2 IMPACT10000
|
| * Sun Ultra 2 Elite3D
|
| * UMAX J700 (dual 604e) running BeOS
|
| * HP Visualize C240
|
| All working and all fun to fire up and play around with from
| time to time. Tracking down software to play with is a
| challenge at times, since most of what I want to fiddle around
| with is proprietary and long since abandoned (Maya, CATIA, NX,
| etc.). If by some chance we were to end up on a conference
| call, you'd see them displayed in the background. :-)
| em-bee wrote:
| neat, here is my museum:
|
| * Apollo Domain 4500
|
| * HP 9000
|
| * m68k/25 NeXTstation
|
| * NeXTstation Turbo
|
| * NeXT Cube with NeXTDimension card
|
| * SPARCstation 5
|
| * SGI OEM machine from Control-Data with mips R2000A/R3000
|
| * SGI Indy
|
| * IBM RS6000/320H
|
| * IBM RS6000/250
|
| * Cobalt Qube 2700D
|
| * Sun JavaStation1
|
| * Sun Ray1
|
| * SPARC IPC
|
| * Alpha-Entry-Workstation 533
|
| they are in storage at my grandmother's now, and i don't know
| if any of them still run. some of these i was using actively
| as my workstation at home. some were just to explore. as i
| got more and more into free software, dealing with the
| nonfree stuff on those machines got less and less appealing.
| though i was also running linux on machines that were
| supported.
| NexRebular wrote:
| My museum is quite small in comparison:
|
| * HP 9000 712/60
|
| * SUN Ultra 1 Creator (With SunPC DX2)
|
| * Mac Quadra 610
|
| These are still working, albeit waiting for recapping. The
| Gecko is running NeXTSTEP for the original id Software DooM
| map editor and the Ultra and Quadra are on the original OS
| they came with. Would love to get more SUN and SGI hardware
| but the prices are getting quite out of hand...
| justinlloyd wrote:
| Sounds like it will run circles around my Indigo2 R10K in the
| workshop. What do you do with all that power?
| anthk wrote:
| https://forums.sgi.sh/
| classichasclass wrote:
| My 900MHz V12 DCD Fuel says hi. I miss Nekochan.
| bitbckt wrote:
| Forgot to add the DCD. :)
| anthk wrote:
| https://forums.sgi.sh/
| sleepybrett wrote:
| Sometimes I miss 3dwm, though apple stole a lot of its best
| ideas and put them into the original osx
| jacquesm wrote:
| There were some attempts at getting 3dwm to be ported to
| Linux, but I'm not sure what came of them.
| jacquesm wrote:
| That's like driving a classic in modern day traffic, it's a bit
| slower but it does the job with elegance. Nice rig!
| lasereyes136 wrote:
| SGIs were the first Unix systems I used. We had a lab of them in
| school. They worked wonderfully for me and I never lost any files
| on them. There was also a really good Unix Sysadmin that
| maintained them and was available if you had any questions.
| postexitus wrote:
| what kind of low bar is that "never losing any files" - what
| files did you lose in other systems?
| peckrob wrote:
| I remember seeing an SGI Octane [0] at Comdex. I think it was
| 1997 or 98? I was still in high school, but I remember thinking
| that it was just the absolute coolest thing ever. When my home
| computer could barely play DOOM, this thing was just blowing up
| over here with beautiful video animations that didn't stutter
| at all. Things _my_ PC wouldn't be capable of until a few
| years after that. Not to mention it just _looked_ cool. In an
| era of beige boxes, you had this striking blue cube.
|
| [0] https://en.wikipedia.org/wiki/SGI_Octane
| tannhaeuser wrote:
| Actually, SGI released the first affordable, decent flatscreen
| monitors in 2001 or so (the 1600SW TN monitors that Apple
| rebranded with translucent cases at the time).
| giantrobot wrote:
| Monitors that required a special video card to use with a PC.
| At least the 1600sw did. IIRC it came with a Revolution9 card
| or some such.
| tannhaeuser wrote:
| You could order an external DVI-to-LVDS converter (or
| whatever the native connector/signalling was) later when it
| became clear DVI was set to become the standard.
| giantrobot wrote:
| It was several years after the 1600sw came out that I had a
| machine with DVI. I've still never seen one in person. The
| industrial design looked really nice but I have no idea if
| the panel quality was decent.
| tannhaeuser wrote:
| The TN panel, especially the black level, was crap
| compared to modern IPS panels, let alone OLED, but was
| decent enough that SGI guaranteed a reference color
| gamut, with compensation algorithms over the panel's
| lifetime. I bought mine towards the end of the product's
| lifecycle at a good price with converter included, and
| ran it with high-end (at the time) nvidia cards on Linux
| without problem.
| jeffbee wrote:
| Most of what I remember about 90s RISC workstations is how
| unbelievably slow they were. An SGI Octane in 1998 was giving
| integer performance about the same as 1996-released Pentium Pros.
| And that's why RISC died: not because x86 was cheaper but because
| it was both cheaper and faster. Sometimes dramatically faster.
| The idea that RISC was somehow elegant turned out to be a myth.
| The complexity of x86 mattered less as cores grew in size and
| sophistication, but the code compression properties of CISC
| continued to benefit x86.
| panick21_ wrote:
| RISC is better if you have finite money and want to build a
| chip.
|
| The simple reality is that Intel had the Wintel monopoly and
| they had gigantic volume and absurd amounts of money to invest.
| If you compare the size of teams working on SPARC to what Intel
| invested, it's totally clear why they ended up winning.
|
| > The idea that RISC was somehow elegant turned out to be a
| myth.
|
| No, it didn't. The reality is a bunch of literal students made
| a processor that outperformed industry cores. Imagine today if
| a university said 'we made a chip that is faster than an i9'.
|
| The early RISC processors were, with a pretty small amount of
| work, incredibly competitive.
|
| So yes, it was actually amazing and revolutionary and totally
| changed computing forever.
|
| That this advantage would magically mean 'RISC will be the best
| thing ever for the rest of history' is a pretty crazy demand to
| make for it to be called a revolution.
|
| > code compression properties of CISC continued to benefit x86.
|
| Not actually that much; code density of x86 for 64-bit systems
| isn't all that amazing. It's certainly not why they won.
| jeffbee wrote:
| The moment at which it seemed like the RISC people were
| really on to something important was the moment when the size
| and complexity of the x86 frontend was really quite large
| compared to the rest of the core. Now you can't even find the
| x86 decoder on a die shot, because it's irrelevant. The
| 512x512b FMA unit is like the size of Alaska and the decoder
| is the size of Monaco. So the advantages of RISC were
| overtaken by semiconductor physics for the most part.
| panick21_ wrote:
| Again, if you give two teams $50M to develop a new processor,
| one using x86 and the other using RISC-V, I have no question
| in my mind which team would come out ahead.
| jeffbee wrote:
| That is exactly the ivory tower attitude that torpedoed
| all of the RISC workstation companies. Nobody, literally
| not one single customer cares how easy or hard it was to
| design and implement the CPU. They only care how much it
| costs and how fast it goes.
| snvzz wrote:
| >They only care how much it costs and how fast it goes.
|
| Parent is telling you: At any given development cost,
| you'll end up with a faster CPU if you go RISC.
|
| This is why almost every new ISA to meet success in the
| last three decades has been RISC.
| snvzz wrote:
| >The 512x512b FMA unit is like the size of Alaska and the
| decoder is the size of Monaco. So the advantages of RISC
| were overtaken by semiconductor physics for the most part.
|
| There still is a hardware advantage (has nothing to do with
| sizes, everything to do with complexity), but let's ignore
| that.
|
| RISC being simpler doesn't just help the hardware. It also
| helps the software, the whole stack.
|
| Extra complexity needs strong justification. RISC-V takes
| that idea seriously, and this is why it already has the
| traction it does, and is going through exponential growth.
| jeffbee wrote:
| RISC-V has many nice properties but it didn't exist 25
| years ago so what does it have to do with why _those_
| companies and their objectively inferior CPUs
| disappeared?
| hedgehog wrote:
| It seems to be more about skill and budget of the
| development teams, and those are both getting bigger. I
| think a major underlying factor is the massive increase
| of transistor budgets relative to clock speed and latency
| to memory. That pulls every architecture down the path of
| big caches, speculation, specialized functional units,
| multiprocessors, etc, that add complexity dwarfing
| anything in the front end. If I was starting fresh the
| labor to do x86 would be a handicap but on the other hand
| Intel switching to RISC-V or whatever wouldn't do
| anything for them.
| snvzz wrote:
| >The idea that RISC was somehow elegant turned out to be a
| myth.
|
| Citation needed.
|
| Still, if you're thinking about Intel's success, I can tell you
| there are two factors for it: IBM PC clones and Intel's monopoly
| on advanced fab nodes. These were enough to overcompensate for
| CISC being trash.
|
| >the code compression properties of CISC continued to benefit
| x86.
|
| x86 had good code density. AMD64 (x86-64) has really bad code
| density, dramatically worse than RV64GC.
| mst wrote:
| The expense of fab node transitions back when every chip
| manufacturer built their own fabs seems to've contributed
| significantly to the demise of a fair few architectures over
| the years.
| buescher wrote:
| It depends. Before peecee memory architectures got gud in the
| early oughts, the Octanes were sufficiently better for big
| finite element codes than Intel based machines that you'd still
| see them used for that. But it didn't last, and you're right
| that the writing was on the wall already in the mid nineties. I
| got a Pentium 90 running Slackware running some less demanding
| scientific code faster than a pretty loaded Indigo2 in that
| era.
| anthk wrote:
| >I got a Pentium 90 running Slackware running some less
| demanding scientific code faster than a pretty loaded Indigo2
| in that era.
|
| Well, FVWM and URxvt were lighter than MWM and Irix' setup
| for Winterm.
| Sohcahtoa82 wrote:
| My late dad was a huge Amiga fan back in the day. I was just a
| little kid at the time and didn't see what the big deal was.
|
| Looking back at what it was capable of though...they were doing
| 256 colors and sampled audio at a time when x86 was still pushing
| 16 colors and could only produce generated tones through the
| speaker built into the case.
|
| There was some really good music on the Amiga, too. Some of my
| favorites:
|
| Hybris theme: https://youtu.be/Siwd7b0iXOc
|
| Pioneer Plague theme: https://youtu.be/JSLcN6GBzO0?t=17
|
| Treasure Trap theme: https://youtu.be/n5h_Wu7QRpM
|
| And of course, you can't mention Amiga music without also
| mentioning Space Debris: https://youtu.be/thnXzUFJnfQ
| reaperducer wrote:
| _they were doing 256 colors and sampled audio at a time when
| x86 was still pushing 16 colors_
|
| 4,096 colors.
|
| https://en.wikipedia.org/wiki/Hold-And-Modify
| Sohcahtoa82 wrote:
| I knew it could do 4,096 colors and I played around with it
| in Deluxe Paint, but I don't recall any games that used it.
| Being a kid at the time, the games were all I cared about.
| vidarh wrote:
| Games span a bit of a range. With the copper you could
| change the palette while the screen updated, so you _could_
| do more than 256 on AGA or more than 64 on ECS Amigas,
| but even on AGA Amigas (A1200, A4000, CD32) it was rare
| for games to even reach 256 because of memory bandwidth.
| 32-64 was a more common range. Even that often relied on
| copper palette tricks, because using fewer bitplanes meant
| less memory bandwidth used.
|
| A handful of games did use HAM for up to 4096 colours, but
| mostly for static screens (there's somewhere in the region
| of half a dozen exceptions total)
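|
| (To illustrate the copper trick above with a sketch rather than
| any particular game's code: a copper list is just a table of
| WAIT/MOVE word pairs that the copper walks every frame. Here it
| rewrites palette entry COLOR00 at two scanlines, so one register
| shows several colours down the screen. The encoding follows the
| hardware manual, but the list itself is hypothetical.)
|
|     #include <stdint.h>
|
|     /* Hypothetical copper list, built as data from C.  To take
|        effect it has to live in chip RAM, with COP1LC pointing
|        at it and copper DMA enabled. */
|     static const uint16_t copperlist[] = {
|       0x2c01, 0xfffe,  /* WAIT for the beam to reach line 0x2c    */
|       0x0180, 0x0f00,  /* MOVE 0x0f00 (red)  into COLOR00 (0x180) */
|       0x8c01, 0xfffe,  /* WAIT for the beam to reach line 0x8c    */
|       0x0180, 0x000f,  /* MOVE 0x000f (blue) into COLOR00         */
|       0xffff, 0xfffe,  /* conventional end-of-list marker         */
|     };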
| markus_zhang wrote:
| As a side note, back in the 90s I dabbled in the hobby of FPS
| level design. Some tasks, for example calculating lighting for
| Quake I maps, could take a lot of computational time to complete.
| I fondly remember that people back then discussed a lot about
| purchasing powerful rigs, or even workstations, to build very
| large maps for games such as Quake and Unreal. A typical
| machine at the time IIRC only had around 32-128GB (sorry, MB) of
| RAM, which was good enough for gaming but fell short for level
| design tasks. Even opening large levels for Half-Life required
| a good machine.
| vidarh wrote:
| 32MB-128MB, not GB, presumably.
|
| Our desktop workstations in '95-'96 had 16MB. Our servers had
| 128MB and it was extravagant (and cost way too much - we should
| have made do with half that or less).
| markus_zhang wrote:
| Yeah you are absolutely right. I'll correct.
| sleepybrett wrote:
| I want to say that iD was cooking BSPs on SPARCs or SGIs at one
| point.
| markus_zhang wrote:
| They did have the big bucks :D. I'm wondering what the setup
| is for modern game designers working on something like Skyrim.
| graupel wrote:
| I spent many years working on high-end TV weather graphics on SGI
| Indigo, Onyx, and the O2 (toaster-oven shaped) boxes; they were
| remarkable for their time, and the hours and hours it took to
| render graphics made for really nice downtime at work, letting me
| say things like "sorry, can't do anything, graphics are
| rendering".
|
| The best source for hardware ended up being a local university
| surplus shop where we could get the big SGI monitors for pennies
| on the dollar.
| sherr wrote:
| In the mid-nineties I worked for Parallax Graphics in London,
| doing support for their SGI based paint and compositing
| software. One application, Matador, was heavily used in the
| video and film industry - also for TV and things like weather
| forecast graphics. Feels like ancient history. I loved working
| on my SGI and learning a bit of Irix.
| peatmoss wrote:
| What is old is new again. I have been running a query against a
| very large data lake since yesterday. The only thing is, with the
| elasticity of resources in the cloud, I have no plausible
| reason to not be doing other work :-/
| panick21_ wrote:
| If anybody has podcasts or very good video resources about the
| 80/90s computer industry I would be interested. I couldn't find
| very much, lots of bits and pieces.
| protomyth wrote:
| Look up the Computer Chronicles
| https://archive.org/details/computerchronicles
| PaulHoule wrote:
| Most of my memories of SGI machines from the 1990s are not so
| good. As I remember, SGI seemed to value looks and performance on
| paper over "it works".
|
| There was the professor who bought an SGI machine but didn't put
| enough RAM in it, plugged it into the AC power and Ethernet,
| couldn't do anything with it, and left it plugged in for a few
| years with no root password.
|
| There was the demo I attended up at Syracuse where a pair of
| identical twins from Eastern Europe were supposed to show off
| something but couldn't get it to work; Geoffrey Fox's conclusion
| was "never buy a gigabyte of cheap RAM", back when a gigabyte was
| a lot.
|
| When SGI came out with a filesystem for Linux I could never shake
| the perception that it would be a machine for incinerating your
| files.
| usefulcat wrote:
| My first job was at a company that made visual simulation
| software, and literally everyone had an SGI on their desk (the
| Indy), and there were also a few refrigerator-sized machines in
| the server room.
|
| I did development work on the Indy and a much larger 8(!) CPU
| machine daily for several years. I remember there were some
| complications related to shared libraries (which, IIRC, were a
| relatively new feature at the time), but overall I remember
| those machines working quite well. The Indy was a great daily
| driver for the time.
| tyingq wrote:
| I saw some of that happen too, but it wasn't appreciably
| different from other expensive Unix workstations in that
| respect. That is, while there were people getting actual value
| from them, there were also people buying them that didn't need
| them.
| technothrasher wrote:
| <shrug> I had an SGI Indigo back in the mid-90s, and it
| functioned fine as a Unix workstation, as well as being very
| useful for the weather satellite imagery work I was doing. It
| ran circles around the Sun machines I had access to at the
| time.
| lallysingh wrote:
| In the mid 1990s I was at a military research lab doing VR
| research. SGIs all around, working quite well. The indy on my
| desk worked well as a workstation, and the larger machines
| (High/Max impacts, Indigo2s I think) blew me away with their
| hardware-accelerated GL (IrisGL, still, I think) apps. They
| worked very well for what we bought them for, and I was sad to
| see the company get eaten away by competition that had much,
| much, much cheaper solutions to the same problems, but none of
| the pizazz or desktop UI. Intergraph on NT, mostly.
| rbranson wrote:
| I love all the nostalgia, but the post doesn't really answer the
| most interesting part of the title: why do workstations matter? I
| was really hoping there was some revelation in there!
| blihp wrote:
| They allow you to spend much less time thinking about resource
| constraints and/or performance optimization and just focus on
| what you're trying to get done and/or do more than would be
| possible with conventional systems. Workstations let you buy
| your way past many limitations.
|
| The closest example today would be people like developers, AI
| researchers, 3D designers and video editors buying high-end
| video cards (quite possibly multiple) running in Threadripper
| systems. They're paying up for GPU power and huge amounts of
| cores/RAM/IO bandwidth/whatever to either do something that
| isn't feasible on a lower end system or to complete their work
| much more quickly.
| Wistar wrote:
| This is correct. I do video and 3D with a Threadripper 3990X
| with 128GB RAM and a 3090 because I don't want to even think
| about computational constraints. It is overkill for 95% of my
| work, but for that other 5%, where I am rendering something
| arduous, it pays off.
| corysama wrote:
| Alan Kay attributes a big part of the advances of PARC to the
| custom workstations they built for themselves. They cost
| $20k(?) but ran much faster than off the shelf high-end
| machines at a time when Moore's Law was accelerating CPU speed
| dramatically. He says it let them work on machines from the
| future so they had plenty of time to make currently-impossible
| software targeting where the common machines would be when
| they finished it.
| buescher wrote:
| It also helps if you are Alan Kay or the other talents that
| were at PARC back then. What future would you create if you
| had a custom $100K (2022 dollars) workstation?
| pavlov wrote:
| The NVIDIA DGX Station A100 has a list price of $149k, I
| believe. It's a workstation that's advertised as an "AI
| data center in a box":
|
| https://www.nvidia.com/en-us/data-center/dgx-station-a100/
| buescher wrote:
| That looks like it would be an absolute hoot to
| experiment with, but I don't know what I could possibly
| do with one that would generate a return on $150K. What
| would you do?
| sbierwagen wrote:
| In some circumstances making a hedge fund model 0.001%
| better would return 10x that.
|
| John Carmack tweeted about buying one a while back. I'm
| not sure if a DGX on your desk does anything for working
| with ML at the bleeding edge, though, since those all run
| on megaclusters of A100s or TPUs.
| buescher wrote:
| What's he doing with it?
| sbierwagen wrote:
| AI stuff: https://www.facebook.com/permalink.php?story_fb
| id=2547632585...
| luckydata wrote:
| I think we tend to overestimate how "good" those people
| were. Yes they were definitely good professionals, but they
| happened to be in a very special place at a special time
| with very few constraints compared to how we work now. It
| was a lot easier for them to innovate than for any of us
| now.
| azinman2 wrote:
| They were working in a total vacuum. Computers were
| classically giant things that simply tabulated or ran
| physics simulations. To create an entire well articulated
| vision of HCI is extremely difficult and requires both
| creativity and technical competence. I would not make
| such statements that it was easier to innovate then. In
| fact, I'd say it's way easier to innovate now that so
| much exists to play with and mix and match, not to
| mention the ability to have perspective on negatives of
| assumptions previously made that can be corrected.
| retrocryptid wrote:
| well... not a TOTAL vacuum. a number of the PARC people
| (thinking of Tesler & Kay) were intimately familiar with
| Engelbart's work at SRI. When the ARC (Engelbart's
| Augmentation Research Center at SRI) was winding down and
| PARC was staffing up, the people who left first were
| supposedly those who rejected the "brittleness" of the
| expert-focused software ARC developed.
|
| It's definitely true that the cost of implementing a
| frame buffer fell well into the affordable range as they
| were moving to PARC. And politics at PARC made it easy to
| say you were developing a system for "inexpert document
| managers." They were definitely exploring new ideas about
| HCI as PC hardware was emerging. But Larry Tesler was
| pretty clear that Lisa learned what not to do from
| looking at the Alto & Star. And the Alto & Star learned
| what not to do by looking at various bits of ARC
| software. And Engelbart was adamant his team not repeat
| the UI/UX mistakes of ITS.
|
| So sure... they were trailblazers, but they had a good
| idea of where they wanted to go.
| azinman2 wrote:
| > So sure... they were trailblazers, but they had a good
| idea of where they wanted to go.
|
| And where would that be? Not so obvious. Stick in a
| random person and they'd have no idea. You could easily
| say the same thing today: do you want to avoid the mistakes
| of all previous computing, and do you know the direction
| it should take? If you do, and you're able to execute and
| change the direction of computing, you'd be a very rare
| talent indeed.
| EricE wrote:
| Ah, the old "they didn't earn that but lucked into it"
| argument.
| buescher wrote:
| Folks at PARC designed and built their own PDP-10 clone
| to get around internal politics. It's hard to
| overestimate the amount of talent concentrated there at
| the time.
|
| It always looks like all the low-hanging fruit has
| already been plucked. So, stop looking for low-hanging
| fruit.
| TheOtherHobbes wrote:
| And they did it as a warm-up before tackling something
| difficult.
| justinlloyd wrote:
| I am currently working with a hardware start-up that happens
| to have "some monies" in the bank to deliver what we need.
| And if I were asked to describe how the culture inside the
| company feels, I would say "like the early days of NeXT."
| There's money here to do what we want, there are technically
| smart guys in the room, nothing is off the table in terms of
| what we're willing to try, we have a vision of what we want
| to build, nobody is being an architecture astronaut, all of
| us have shipped product before and know what it takes.
|
| Where I am going with all this is that for what we're trying
| to build, the consumer-grade hardware to run it won't exist for
| two more years, so we're having to use really beefy
| workstations in our day-to-day work. Not quite PARC level of
| built-from-scratch customization, but not exactly cheap
| consumer grade desktops either.
| rbanffy wrote:
| A long time ago I suggested developing on Xeon Phi-based
| workstations because, in order to run well on future
| computers, you need to be able to run on lots of slow
| cores. The idea kind of still holds. These days the cores
| are quite fast and running on one or two of them gives
| acceptable performance, but if you can manage to run on all
| cores, your software will be lightning fast.
| justinlloyd wrote:
| Yes, we're very much taking a distributed, multi-threaded
| approach, but at the same time, the distributed parts are
| still local to the user.
| indigodaddy wrote:
| Are you creating an OS and/or software as well there?
| justinlloyd wrote:
| We are not creating a custom OS at this time. We have to
| be aware of the limits of what we can achieve given the
| size of our team and the desire to actually get to market
| in a timely fashion. That said, there's heavy
| customization of the OS we are using, along with some
| bare metal "OS? Where we're going we don't need no
| steenking OS" work. We're more focused on the h/w, the UI
| and UX that interfaces between the h/w and the user, and
| the graphics pipeline.
| [deleted]
| EricE wrote:
| I think an analogy to supercars is pretty relevant. They are a
| minuscule percentage of cars developed/sold but have a
| disproportionate influence on the car market overall.
|
| I'm sure there are analogies for a lot of other industries as
| well.
|
| Also - there is no cloud, just someone else's computer. Which
| is why I will never rely on something like a Chromebook, the
| web or other modern day equivalents of dumb terminals :)
| jart wrote:
| I don't want to come across as disrespectful to my elders but in
| many ways I feel that certain kinds of nostalgia like this are
| holding open source back. One of my favorite pieces of software
| is GNU Make. Having read the codebase, I get the impression that
| its maintainer might possibly be a similar spirit to the OP. The
| kind of guy who was there, during the days when computers were a
| lot more diverse. The kind of guy who still boots up his old
| Amiga every once in a while, so he can make sure GNU Make still
| works on the thing, even though the rest of us literally would
| not be able to purchase one for ourselves even if we wanted it.
|
| It's a pleasure I respect, but it's not something I'll ever be
| able to understand because they're longing for platforms that got
| pruned from the chain of direct causality that led to our current
| consensus (which I'd define more as EDVAC -> CTSS -> MULTICS/CPM
| -> SysV/DOS/x86 => Windows/Mac/Linux/BSD/Android/x86/ARM).
|
| My point is that open source projects still maintain all these
| #ifdefs to support these unobtainable platforms. Because open
| source is driven by hobbyism and passion. And people are really
| passionate about the computers they're not allowed to use at
| their jobs anymore. But all those ifdefs scare and discourage the
| rest of us.
|
| For example, here's a change I recently wrote to delete all the
| VAX/OS2/DOS/Amiga code from GNU Make and it ended up being
| 201,049 lines of deletions.
| https://github.com/jart/cosmopolitan/commit/10a766ebd07b7340... A
| lot of what I do with Cosmopolitan Libc is because it breaks my
| heart how in every single program's codebase we see this same
| pattern, and I feel like it really ought to be abstracted by the
| C library, since the root problem is all these projects are
| depending on 12 different C libraries instead of 1.
| pjmlp wrote:
| An example of holding on to old stuff is still making use, in
| 2022, of a systems programming language designed in 1972.
| jart wrote:
| I agree. That's why the Cosmopolitan Libc repository includes
| support for C++ as well as a JavaScript interpreter. It has
| Python 3. You can build Python 3 as a 5mb single file
| Actually Portable Executable that includes all its standard
| libraries! Then you put your Python script inside the
| executable using a zip editing tool and it'll run on Mac,
| Windows, Linux, name it. You can also build Actually Portable
| Lua too. More are coming soon.
| cbmuser wrote:
| Sounds like an advertisement.
| dwidget wrote:
| While I agree that having one library could be a good solution,
| I don't think all those #ifdefs are wasted. There are a lot of
| legacy tech programs that use systems way older than I ever
| imagined would still be in use. There was a minor crisis at
| an org I was working at one time where they were going to need
| to flip a multimillion dollar system because the only source of
| replacement parts was a hobbyist in his garage and for new gov
| compliance purposes that guy was going to need to become a
| cleared contractor supplier...which can be problematic if the
| person in question is an open source advocate whose main
| purpose in running this business in retirement is supplying
| enthusiasts rather than government departments or contractors.
|
| I'm sure some of those systems and ones like it make plenty of
| use out of those #ifdefs though, and it's not just a handful of
| old fogey enthusiasts cramping everyone else's style.
| Established systems can't always evolve as fast as the general
| market.
| anthk wrote:
| Hey, NetBSD and MacPPC are still used, and people still
| backport Nethack/Slashem/Frotz to those archs.
|
| Old hardware is always useful.
| jart wrote:
| All Actually Portable Executables run on NetBSD. I love
| NetBSD. I helped fix a bug in their /bin/sh. I even put the
| little orange flag on my blog.
| https://justine.lol/lambda/#binaries See also
| https://github.com/jart/cosmopolitan#support-vector
| anthk wrote:
| On non amd64 NetBSD?
| jart wrote:
| It is supported very well on AMD64 NetBSD. Perhaps that
| will be expanded in the future.
| the_only_law wrote:
| NetBSD is so cool, and I have so many machines sitting around
| I need to get running on (SGI, Alpha, Dreamcast, etc.)
|
| Sadly I've heard it can be rough on older architectures
| still. I've been told that VAX, at least, is not in the
| best of states because of usermode dependencies on
| Python. From what I was told, Python currently doesn't have a
| VAX port due to the architecture's floating-point design.
| cbmuser wrote:
| The VAX backend in GCC was recently modernized and improved
| so it could survive the cc0 removal.
|
| There was a fundraiser which I created for that purpose.
| bitwize wrote:
| I'm sorry that not all of us are as brilliant as you, and most
| of us have failed to realize that Cosmopolitan and Actually
| Portable Executables on x86 have made all other runtimes and
| even ISA targets obsolete.
|
| On behalf of the rest of the hacker community, we'll get right
| to work on blotting out the memory of anything that's not in
| jart's personal stack, the best of all possible stacks.
| jart wrote:
| Everything I do, I do for you. I don't expect you to use it
| or thank me or pay me. All I'm saying is I could have more
| impact serving the community with fewer ifdefs.
| bitwize wrote:
| And if you cut down every #ifdef in the code base, would
| you be able to stand in the winds that would blow then?
|
| It's not just GNU make. Lots of GNU software is as you
| describe, because the GNU project took on the burden of
| abstracting the mess that was interoperating across many
| _very_ different platforms -- many of which have far less
| capability than a modern or even decade-old x86-64 box --
| and so became an internal reflection of that mess. It's
| not pleasant, I wish I could chuck autotools into the
| fucking sun, but it gets GNU going on a variety of exotic
| platforms that still run and are made more pleasant by the
| presence of GNU there. This effort is not helped by you
| going in and trying to yeet all the code you personally
| have decided is obsolete and gets in your way. GNU Make
| doesn't need such a "service", it's not "held back" by
| refusing it, and if you want a build tool that runs on and
| helps you build your little inner-platform effect without
| considering anything you personally deem irrelevant, write
| your own! Maybe start with Plan 9's mk as a base, it's tiny
| and comes from a somewhat similar philosophy.
|
| Sheesh. Terry Davis believed he was important enough to be
| gangstalked by the CIA, and even he took a hobbyhorse,
| take-it-or-leave-it approach to TempleOS.
| jart wrote:
| I don't have any authority over the GNU project. The
| things I do in the Cosmopolitan Libc repo have no bearing
| on them. They're free to keep doing what they're doing.
| However I'm willing to bet that once I add ptrace()
| support, it'll be attractive enough that folks will be
| willing to consider the equally libre copy of the GNU
| Make software that I intend to distribute instead. Just
| as they'll be free to copy what I did back into their own
| den of #ifdefs if they want to compete. Competition is
| good. Open source is good. It's all done with the best
| intentions. We're going to end up with a better Make once
| I've contributed this feature.
| bitwize wrote:
| I apologize sincerely, it wasn't clear you were going the
| fork-and-hack route from your initial post.
|
| Still, do consider starting from Plan 9's mk... GNU make
| is... a hairball, even without the #ifdefs :)
| jart wrote:
| No worries friend. Not the first time I've gotten the
| Terry Davis treatment. Didn't Gandhi or someone say first
| they ignore, then laugh, then fight, and you win?
| Changing the world one line of code at a time!
| grishka wrote:
| The weird thing I keep seeing is that many C libraries still
| define their own integer types for some reason instead of just
| using the ones from stdint.h. Even new ones, that certainly
| didn't ever need to support ancient platforms and ancient
| compilers, like libopus.
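|
| (A small illustration of the point, not tied to any specific
| library; my_int16 and my_uint32 are hypothetical stand-ins for
| the per-project typedefs being described.)
|
|     #include <stdint.h>
|
|     /* What many libraries still carry around: */
|     typedef short        my_int16;   /* assumes short is 16 bits */
|     typedef unsigned int my_uint32;  /* assumes int is 32 bits   */
|
|     /* What C99 already guarantees, portably: */
|     int16_t  sample;
|     uint32_t frame_count;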
| cesarb wrote:
| > instead of just using the ones from stdint.h. Even new
| ones, that certainly didn't ever need to support ancient
| platforms and ancient compilers, like libopus.
|
| But stdint.h is from C99, and AFAIK there are non-ancient
| compilers for non-ancient platforms that _still_ don't fully
| support C99.
| uxp100 wrote:
| stdint.h is usually in the part they do support though (I
| think, in my experience, I haven't done a survey.)
| rbanffy wrote:
| Identifying the moment we need to stop supporting a platform is
| frequently non-obvious. Unisys still supports MCP (as Clearpath
| OS), VMS is supported and was ported to x86, Atos supports
| GECOS, and some people are making CP/M fit inside dedicated
| word processors. A couple months back there was a report of
| ncurses failing on Tandem NonStop OS (still supported, IIRC, by
| HPE). As long as something works, we'll never hear about all
| those wonderful exotic platforms people still use for various
| reasons. There must be a lot of PCs controlling machinery doing
| GPIO through parallel ports while emulating PDP-8's with some
| poor intern having to figure out how to make changes to that
| code.
| jart wrote:
| Here's a simple criterion I propose: is the platform
| disbanded?
|
| For example, in GNU Make's dir.c file there's a lot of stuff
| like this:
|
|     #ifndef _AMIGA
|       return dir_file_exists_p (".", name);
|     #else  /* !AMIGA */
|       return dir_file_exists_p ("", name);
|     #endif /* AMIGA */
|
| There should be a foreseeable date when we can say, "OK the
| Amiga maintainers have added a feature that lets us use '.'
| as a directory so we can now delete that #ifdef". But that
| day is guaranteed to never come, because the Amiga project is
| disbanded. So should we keep that until the heat death of the
| universe?
|
| I would propose that we instead say, if you use Amiga,
| there's great support for it in versions of GNU Make up until
| x.y.z. So if you love old Amigas, you'll be well served using
| an older version of GNU Make. I think it's straightforward.
| arrakeen wrote:
| > is the platform disbanded?
|
| amigaos had a major new release less than a year ago so i
| guess the amiga ifdefs should stay
| outworlder wrote:
| What do you mean by 'disbanded'? Should FOSS stop supporting
| a platform the moment a manufacturer discontinues it?
| cbmuser wrote:
| Why on earth do you need to hack on GNU Make in the first
| place?
|
| It's old software which has far better and faster
| replacements like Ninja.
|
| The whole design of Make itself is outdated and inefficient
| which is why tools like Ninja are much faster.
| jart wrote:
| The Cosmopolitan Libc repo uses GNU Make. It builds 672
| executables, 82 static archives, and 17,637 object files
| and runs all unit tests in under 60 seconds on a $1000
| PC. How fast would Ninja do it?
| zokula wrote:
| BlackFingolfin wrote:
| Maybe because tons and tons of software builds with GNU
| make, not with Ninja, and if you want to be able to build
| that software, you need GNU make?
|
| Also, Ninja by itself is not really a replacement for GNU
| make. Rather it's a tool one can build such a replacement
| on, so the comparison is a bit off to start with...
| chipotle_coyote wrote:
| > But that day is guaranteed to never come, because the
| Amiga project is disbanded. So should we keep that until
| the heat death of the universe?
|
| Surely there are more options than "keep that until the heat
| death of the universe" and "remove that the moment its
| platform is out of production". A more practical metric for
| free/open software, I think, would be "is there still
| someone maintaining this software on this platform": every
| major release (e.g., 5.x -> 6.0) could be a time to do a
| sanity check, documenting ports that don't have an active
| maintainer as "deprecated". At the next major release, if
| nobody's stepped up to maintain it, then it gets removed.
| (One could argue even that's too draconian, because there
| may be people still _using_ the port even if it 's not
| maintained.)
| pavlov wrote:
| This kind of thing shows why ifdefs are usually the wrong
| tool for multi-target projects.
|
| There should be a platform adapter API instead that defines
| a shared header with function names for these actions,
| multiple platform-specific implementation files, and only
| one of them gets compiled.
|
| That way you could simply ignore the existence of
| "filesys_amiga.c", and then maybe delete it 50 years from
| now.
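|
| A minimal sketch of that shape, reusing the dir.c snippet quoted
| earlier in the thread (the fs_* names and file names are
| hypothetical, not anything from GNU Make itself):
|
|     /* filesys.h -- shared adapter header, included everywhere  */
|     const char *fs_cwd_name (void);
|
|     /* filesys_posix.c -- only this file is compiled on POSIX   */
|     #include "filesys.h"
|     const char *fs_cwd_name (void) { return "."; }
|
|     /* filesys_amiga.c -- only this file is compiled on Amiga   */
|     #include "filesys.h"
|     const char *fs_cwd_name (void) { return ""; }
|
|     /* and in dir.c the call site needs no #ifdef at all:       */
|     /*   return dir_file_exists_p (fs_cwd_name (), name);       */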
|
| (I realize it's probably not realistic to do such major
| internal surgery on Make at this point.)
| dhosek wrote:
| This was one of the brilliant aspects of Knuth's WEB
| system: you could have change files that would be applied
| to the immutable source to manage ports to individual
| platforms. [1] I really wish that this sort of patching had
| spread to other programming paradigms.
|
|
| 1. It works even on somewhat actively developed code
| since the likely to require porting parts of the program
| could be clumped together in a web source file and, once
| isolated, generally saw few if any changes. I remember
| maintaining the public domain VMS change files for the
| TeX 2-3/MF 1-2 upgrades and as I recall, even with these
| significant changes to the programs, no updates for the
| change files were necessary. [2]
|
| 2. Most of the work that I did back then in maintenance
| was centered around enhancements to the VMS-specific
| features. For example, rather than having iniTeX be a
| special executable, instead, iniTeX features could be
| enabled/disabled at runtime from a single executable. [3]
| Similarly with debug code.
|
| 3. This feature appeared soon after in the web2c port of
| TeX and friends, but I think that Tom Rokicki might have
| got the idea from me (or else it was a case of great
| minds thinking alike in the late 80s).
| cbmuser wrote:
| Make is a very old codebase that you shouldn't change in
| a dramatic way anyway. It's in itself an outdated piece of
| software which has far better and more modern replacements.
|
| No need to break it for older systems.
| jjtheblunt wrote:
| > I feel that certain kinds of nostalgia like this are holding
| open source back
|
| i'm misunderstanding what the post had to do with promoting
| open source
| causi wrote:
| _Because open source is driven by hobbyism and passion. And
| people are really passionate about the computers they 're not
| allowed to use at their jobs anymore. But all those ifdefs
| scare and discourage the rest of us._
|
| Isn't this the same process you yourself referenced? There's
| nothing stopping people from forking and building leaner
| versions of these programs, but it turns out that projects with
| those passionate, nostalgic developers are more successful even
| with the support burden than that same project without them.
| That backwards-support might be a _cost_ rather than a _waste_.
| pengaru wrote:
| Scaring new talent away from spending their precious time on a
| solved problem like GNU make is a feature not a bug. Work on
| something more relevant to today's challenges.
|
| There's plenty of things "holding open source back", this isn't
| a significant one of them IMNSHO.
| rodgerd wrote:
| > this isn't a significant one of them IMNSHO.
|
| "You can't have systemd in Debian, what about kFreeBSD" "You
| can't use Rust until it supports DEC Alpha"
|
| ...there are no shortage of examples where open and free
| software is held back by hyper-niche interests, where our pet
| twenty and thirty year old, long-dead projects and processor
| architectures create absurd barriers to improve anything.
| jart wrote:
| Saying make is a solved problem is a real failure of
| imagination. I used to do a lot of work on Blaze and Bazel. I
| intend to add support for a lot of the things it does to GNU
| Make. Such as using ptrace() to make sure a build rule isn't
| touching any files that aren't declared as dependencies. I
| can't do that if our imagination is stuck in the 80's with
| all this DOS and Amiga code.
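|
| (For readers unfamiliar with the mechanism, here is a rough
| Linux/x86-64-only sketch of that kind of ptrace() tracing, not
| anything from Cosmopolitan or GNU Make; a real dependency
| checker would also read the path string out of the tracee and
| follow forked children.)
|
|     /* trace_opens.c: run a command, report its openat() calls */
|     #include <stdio.h>
|     #include <sys/ptrace.h>
|     #include <sys/syscall.h>
|     #include <sys/types.h>
|     #include <sys/user.h>
|     #include <sys/wait.h>
|     #include <unistd.h>
|
|     int main (int argc, char **argv)
|     {
|       if (argc < 2) {
|         fprintf (stderr, "usage: %s command [args]\n", argv[0]);
|         return 1;
|       }
|       pid_t pid = fork ();
|       if (pid == 0) {                       /* child: trace me, then exec */
|         ptrace (PTRACE_TRACEME, 0, 0, 0);
|         execvp (argv[1], argv + 1);
|         _exit (127);
|       }
|       int status, entering = 1;
|       waitpid (pid, &status, 0);            /* stops after the exec */
|       while (1) {
|         ptrace (PTRACE_SYSCALL, pid, 0, 0); /* run to next syscall stop */
|         waitpid (pid, &status, 0);
|         if (WIFEXITED (status))
|           break;
|         if (entering) {                     /* look only at entries */
|           struct user_regs_struct regs;
|           ptrace (PTRACE_GETREGS, pid, 0, &regs);
|           if (regs.orig_rax == SYS_openat)
|             /* regs.rsi holds the pathname pointer; compare it
|                against the rule's declared dependencies here */
|             printf ("openat(path=%#llx)\n",
|                     (unsigned long long) regs.rsi);
|         }
|         entering = !entering;
|       }
|       return 0;
|     }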
| pengaru wrote:
| > Saying builds are a solved problem is a real failure of
| imagination.
|
| Don't put words in my mouth, I said GNU make is a solved
| problem.
| BlackFingolfin wrote:
| That sentence doesn't even make sense.
| cbmuser wrote:
| People can just use other tools like Bazel or Ninja.
|
| Make works the way it's intended to work. Leave it as is.
| jart wrote:
| I wrote Bazel's system for downloading files. https://git
| hub.com/bazelbuild/bazel/commit/ed7ced0018dc5c5eb... So
| I'm sympathetic to your point of view. However some of us
| feel like people should stop reinventing Make and instead
| make Make better. That's what I'm doing. I'm adding
| ptrace() support. That's something I asked the Bazel
| folks to do for years but they felt it was a more
| important priority to have Bazel be a system for running
| other build systems like Make, embedded inside Bazel. So
| I asked myself, why don't we just use Make? It's what
| Google used to use for its mono repo for like ten years.
| yvsong wrote:
| The UI of SGI's IRIX was better than the current macOS in some
| aspects, e.g., sound effects. I wish there were more competition in
| computer UI.
| jart wrote:
| Have you seen how many desktops Linux has?
| mobilio wrote:
| Why SGI failed: https://vizworld.com/2009/04/what-led-to-the-
| fall-of-sgi-cha... https://vizworld.com/2009/04/what-led-to-the-
| fall-of-sgi-cha... https://vizworld.com/2009/04/what-led-to-the-
| fall-of-sgi-cha... https://vizworld.com/2009/04/what-led-to-the-
| fall-of-sgi-cha... https://vizworld.com/2009/04/what-led-to-the-
| fall-of-sgi-cha... https://vizworld.com/2009/05/what-led-to-the-
| fall-of-sgi-epi...
|
| It's a long story...
| panick21_ wrote:
| Podcast seems to be gone.
| oppositelock wrote:
| Oh, it's a lot longer story than that. I worked at SGI from
| just around its peak to its downfall, seeing the company
| shrink to a tenth of its size while cutting products.
|
| At the time, I was a fairly junior employee doing research in
| AGD, the advanced graphics division. I saw funny things, which
| should have led me to resign, but I didn't know better at the
| time. Starting in the late 90's, SGI was feeling competitive
| pressure from 3DFx, NVIDIA, 3DLabs, Evans and Sutherland
| (dying, but big), and they hadn't released a new graphics
| architecture in years. They were selling Infinite Reality 2's
| (which were just a clock increase over IR1), and some tired
| Impact graphics on Octanes. The O2 was long in the tooth.
| Internally, engineering was working on next gen graphics for
| both, and they were both dying of creeping featureitis. Nothing
| ever made a deadline, they kept slipping by months. The high
| end graphics pipes to replace infinite reality never shipped
| due to this, and the "VPro" graphics for Octane were fatally
| broken on a fundamental level, where fixing it would mean going
| back to the algorithmic drawing board, not just some Verilog
| tweak, basically, taping out a new chip. Why was it so broken?
| Because some engineers decided to implement a cool theory and
| were allowed to do it (no clipping, recursive rasterization,
| hilbert space memory organization).
|
| At the same time, NVIDIA was shipping the GeForce, 3DFx was
| dying, and these consumer cards processed many times more
| triangles than SGI's flagship Infinite Reality 2, which was the
| size of a refrigerator and pulled kilowatts. SGI kept saying
| that anti-aliasing is the killer feature of SGI and that this
| is why we continue to sell into the visual simulation and oil
| and gas sectors. The line rendering quality on SGI hardware was far
| better as well. However, given SGI wasn't able to ship a new
| graphics system in perhaps 6 years at that point, and NVIDIA
| was launching a new architecture every two years, the reason to
| use SGI at big money customers quickly disappeared.
|
| As for Rick Belluzzo, man, he was a buffoon. My first week at
| SGI was the week he became CEO, and in my very first allhands
| ever, someone asked something along the lines of, "We are
| hemorrhaging a lot of money, what are you going to do about it?"
| He replied with, "Yeah, we are, but HP, E&S, etc., are
| hemorrhaging a lot more and they have less in the bank, so we'll
| pick up their business". I should have quit my first week.
| unixhero wrote:
| Thank you so much for your inside story. Hilbert space memory
| organization sounds great :)
| beecafe wrote:
| Texture memory is still stored like that in modern chips
| (presuming they meant Hilbert curve organization). It's so
| that you can access 2D areas of memory but still have them
| close by in 1D layout to make it work with caching.
| anamax wrote:
| In many cases, an executive's behavior makes sense after you
| figure out what job he wants next.
| buescher wrote:
| I have no clue what hilbert space memory organization could
| possibly be - arbitrarily deep hardware support for indirect
| addressing? - but it sounds simultaneously very cool and like
| an absolutely terrible idea.
| vardump wrote:
| Nowadays all GPUs implement something similar (not
| necessarily Hilbert but maybe Morton order or similar) to
| achieve a high rate of cache hits when spatially close pixels
| are accessed.
|
| 3D graphics would have terrible performance without that
| technique.
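|
| A tiny sketch of the Morton (Z-order) variant of that layout,
| in plain C (the function names are illustrative, not from any
| GPU driver):
|
|     #include <stdint.h>
|
|     /* Spread the low 16 bits of v apart so a zero bit sits
|        between each pair of original bits. */
|     static uint32_t part1by1 (uint32_t v)
|     {
|       v &= 0x0000ffff;
|       v = (v | (v << 8)) & 0x00ff00ff;
|       v = (v | (v << 4)) & 0x0f0f0f0f;
|       v = (v | (v << 2)) & 0x33333333;
|       v = (v | (v << 1)) & 0x55555555;
|       return v;
|     }
|
|     /* Interleave x and y bits: texels that are neighbours in 2D
|        get nearby 1D addresses, so they share cache lines. */
|     static uint32_t morton2d (uint32_t x, uint32_t y)
|     {
|       return part1by1 (x) | (part1by1 (y) << 1);
|     }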
| buescher wrote:
| Got it. I was imagining something else entirely.
| oppositelock wrote:
| The framebuffer had a recursive rasterizer which followed a
| Hilbert curve through memory, the thinking being that you
| bottom out the recursion instead of performing triangle
| clipping, which was really expensive for the hardware at
| the time.
|
| The problem was that when you take some polygons which come
| close to W=0 after perspective correction, their unclipped
| coordinates get humongous and you run out of interpolator
| precision. So, imagine you draw one polygon for the sky,
| another for the ground, and the damn things Z-fight each
| other!
|
| SGI even came out with an extension to "hint" to the driver
| whether you want fast or accurate clipping on Octane. When
| set to fast, it was fast and wrong. When set to accurate,
| we did it on the CPU [1]
|
| 1 - https://www.khronos.org/registry/OpenGL/extensions/SGIX
| /SGIX...
| panick21_ wrote:
| Trying to be a seller of very high end computer
| products while also doing your own chips and graphics at the
| same time is quite the lift. And at the same time their
| market was massively attacked from the low end.
|
| The era where companies could do all that and do it
| successfully kind of ended in the late 90s. IBM survived, but
| nothing can kill them; I assume they suffered too.
|
| What do you think could have been done, going back to your
| first day, if you were CEO?
|
| I always thought that for Sun, open-sourcing Solaris, embracing
| x86, being RedHat and eventually Cloud could have been the
| winning combination.
| oppositelock wrote:
| I think some kind of discipline around releasing products
| in a timely way by cutting features would have done
| wonders. However, the kinds of computers SGI built were on
| the way out, so they couldn't have survived without moving
| in the direction that people wanted. Maybe it was a company
| whose time had come. SGI wasn't set up to compete with the
| likes of NVIDIA and Intel.
| panick21_ wrote:
| Why couldn't they compete with NVIDIA? Were they not just
| as big?
| oppositelock wrote:
| Engineering culture. SGI was not pragmatic in building
| hardware, more of an outlet for brilliant engineers to
| ship experiments.
| jacquesm wrote:
| I can see how that was your view if you came in on the
| tail end but it definitely wasn't always so. I've owned
| quite a few of them and if you had the workload they
| delivered - at a price. But for what they could do they
| would be 3 to 4 years ahead of the curve for a long time,
| and then in the space of a few short years it all went to
| pieces. Between NVIDIA and the incredible clock speed
| improvements on the x86 SGI was pretty much a walking
| zombie that did not manage to change course fast enough.
| But CPU, graphics pipeline, machine and software to go
| with it is an expensive game to play if the number of
| units is smaller than any of your competitors that have
| specialized.
|
| I'm grateful they had their day, fondly remember IRIX and
| have gotten many productive years out of SGI hardware, my
| career would definitely not have taken off the way it did
| without them, in fact the whole
| 'webcam/camarades.com/ww.com' saga would have never
| happened if the SGI Indy did not ship with a camera out
| of the box.
| digisign wrote:
| The PC market grew bottom up to be 10x the size of the
| workstation market during the 90s. Even with thinner
| margins, eventually workstation makers couldn't compete
| any longer on R&D spend.
|
| The book The Innovator's Dilemma describes the process.
| digisign wrote:
| ^meant thinner margins of PC industry.
| rbanffy wrote:
| > What do you think could have been done, going back to your
| first day, if you were CEO?
|
| Not quite sure. You correctly pointed out SGI (HP, Sun,
| everyone else in the workstation segment) was suffering
| with Windows NT eating it from below. To counter that, SGI
| would need something to compete in price. IRIX always had
| excellent multiprocessor support and, with transistors
| getting smaller, adding more CPUs could give it some
| breathing room without doing any microarchitectural
| changes. For visualization hardware the same also applies -
| more dumb hardware with wider buses on a smaller node cost
| about the same while delivering better performance. To
| survive, they needed to offer something that's different
| enough from Windows NT boxes (on x86, MIPS and Alpha back
| then) while maintaining a better cost/benefit (and
| compatibility with software already created). I'd focus on
| low-end, entry-level systems that could compete with the
| puny x86's by way of superior hardware-software
| integration. The kind of thing Apple does, where you open the
| M1-based Air and it's out of hibernation before the lid is
| fully open.
|
| > I always thought that for Sun, open-sourcing Solaris,
| embracing x86, being RedHat and eventually Cloud could have
| been the winning combination.
|
| I think embracing x86 was a huge mistake by Sun - it helped
| legitimize it as a server platform. OpenSolaris was a step
| in the right direction, however, but their entry level
| systems were all x86 and, if you are building on x86, why
| would you want to deploy on much more expensive SPARC
| hardware?
|
| Sun never even tried to make a workstation based on Niagara
| (first gen would suck, second gen not so much), and
| OpenSolaris was too little, too late - by then the ship had
| sailed and technical workstations were all x86 boxes
| running Linux.
| jacquesm wrote:
| SGI also offered x86 based machines, of all things
| running NT or WIN 2K. That was when the writing really
| was on the wall.
| panick21_ wrote:
| > IRIX always had excellent multiprocessor support and,
| with transistors getting smaller, adding more CPUs could
| give it some breathing room without doing any
| microarchitectural changes.
|
| That's kind of exactly what Sun did, and it likely gave them
| legs. They might not have made it out of the 90s
| otherwise.
|
| > I think embracing x86 was a huge mistake by Sun - it
| helped legitimize it as a server platform.
|
| x86 was simply better on performance. I think it would
| have happened anyway.
|
| > OpenSolaris was a step in the right direction, however,
| but their entry level systems were all x86 and, if you
| are building on x86, why would you want to deploy on much
| more expensive SPARC hardware?
|
| That's why I am saying they should have dropped Sparc
| already in the very early 2000s. They wasted so much money
| on machines that were casually owned by x86.
| the_only_law wrote:
| I always find the story of DEC interesting as well.
| rbanffy wrote:
| It was the pinnacle of tech tragedy to see them being
| acquired by Compaq.
|
| At least until Oracle, of all companies, acquired Sun...
| ulzeraj wrote:
| Also Itanium.
| jart wrote:
| That reads like a tabloid, the way it attacks individuals and
| t-shirts. I heard the fall of SGI summed up in one sentence
| once. It went something like, "SGI had a culture that prevented
| them from creating a computer that cost less than $50,000."
| That's probably all we need to know.
| digisign wrote:
| --> The Innovator's Dilemma
| pjmlp wrote:
| It was thanks to SGI's hosting of the C++ STL documentation (pre-
| ISO/ANSI version) that I learned my way around it.
|
| Being a graphics geek, I also spent quite some time around the
| graphics documentation.
|
| For me, one of the biggest mistakes was only making IrisGL
| available while keeping Inventor for themselves.
|
| To the subject at hand, this is one difference I find with most
| modern computers: the lack of soul that comes from a vertically
| integrated experience blending hardware and software.
| smm11 wrote:
| I never saw an SGI in "personal computer" mode until the sun was
| setting - they were always being banged on by departments, or
| rendering 24-7. I'm jealous of anyone who had one to themselves
| when they were still a power.
|
| The Amigas were something else, though, but every time I ran into
| one it was getting its lunch eaten by the nearby Mac. Only for a
| year or two, in TV production environments, did I see an
| advantage with Amiga.
|
| Now, with both among my collection, the SGI is the one I turn on
| most frequently (when the power grid can handle it).
| ido wrote:
| You were probably seeing the amiga too late in its life - it
| was only really impressive in the 80s, but it was really
| impressive in the 80s (especially when it was competing with
| 286, EGA & PC-speaker).
| dark-star wrote:
| Talking about "the cult of SGI", and then using the new logo
| instead of the old cube logo, that's blasphemy! :-D
| twmiller wrote:
| I have a hard time taking anyone seriously when they drop
| something like this: "MacOS felt a kind of dumb, and does so ever
| since" ... I mean...MacOS is just *nix these days and has been
| for 20+ years. I jump back and forth between it and linux pretty
| much all day long and I see nothing that indicates that macOS is
| any dumber than linux.
| digisign wrote:
| It never came with a standardized package manager, and many
| user tools are ancient. Newer versions won't let you turn off
| telemetry services because they are started in a read-only boot
| volume. It's pretty but pretty dumb at times.
| tombert wrote:
| I used to be in that camp until I actually _used_ a MacBook.
| For some reason I was convinced that it wasn't "real" Unix,
| unlike Linux.
|
| It was a naive perspective.
| snek_case wrote:
| It's definitely more locked down, less open than something
| like Linux or BSD and less developer-friendly (signed
| software, etc.), which takes away from the Linux/Unix hacker
| ethos IMO.
|
| I respect that Apple makes good quality hardware, but I wish
| there was an equivalent that was more developer-friendly.
| System76 is almost that but not quite.
| em-bee wrote:
| on the commandline it's a decent unix, sure, but no
| proprietary unix can measure up to linux nowadays.
|
| they all suffered from a lack of package management and old
| versions of commandline tools. you almost always had to
| manually install better tools like the GNU versions.
|
| i did use a macbook for some time, but the only reason i
| managed was that most of my work is on remote servers, so
| most of the terminals on my mac were running linux anyways.
| yet, when i switched back to a linux machine as my main
| workstation i just immediately felt better, and didn't miss
| the mac at all. and now when i use the mac i really just want
| to go back to linux.
| tombert wrote:
| I ran Linux for a decade full time, and I feel like the
| latest versions of Gnome are actually very good, but sadly
| the lack of software support is what keeps me on macOS,
| particularly for media. Final Cut Pro is, in my opinion, a
| much better video editing suite than Lightworks (the best
| editor I'm aware of on Linux), and there really isn't
| anything even comparable to ToonBoom on Linux [1].
|
| The media scene on Linux is definitely improving (Blender
| and Lightworks and Krita have gotten good) but I think it
| still has a while to go before I'm fully able to abandon my Mac
| setup. Honestly I just wish Darling would improve [2]
| enough to where I could just run everything I care about
| within Linux.
|
| [1] I actually did google and apparently there is an
| OpenToonz Snap package, so I could be wrong on this. I'll
| need to play with it.
|
| [2] No judgement to the Darling team, I realize it's a
| difficult project.
| toddm wrote:
| I have fond memories of the SGI machines - workstation and larger
| - I worked on in the 1990s and early 2000s. Octanes, O2s,
| Origins, Indigos, and so on.
|
| They were best-in-class for visualization, and when used with
| Stereographic Crystal Eyes hardware/glasses, 3D was awesome. We
| also rendered high-quality POV-Ray animations on an O2 in 1996,
| when the software was barely 5 years old!
|
| My last big computing efforts were on an SGI Origin 2000 (R12000)
| in 2002, and the allure of that machine was being able to get 32
| GB of shared RAM all to myself.
| sleepybrett wrote:
| I saw a driving simulator built with an actual car and a couple
| of reality engines driving projectors that were projecting on
| screens all around the car. It was a pretty impressive setup.
|
| Now you can probably build that out of Forza, a decent gaming
| PC, and some hobbyist electronics.
| reaperducer wrote:
| _3D was awesome_
|
| Around 1999 or 2000, I was able to see how some of the big
| energy companies in Houston were using SGI machines' 3D
| capabilities.
|
| They'd have rooms about 10 feet square with projectors hanging
| from the ceiling that would take seismic data and render it on
| the walls as colorful images of oil deposits and different
| strata of rocks and gas and water and such. Using a hand
| controller, the employees could "walk" through the earth to see
| where the deposits were and plot the best/most efficient route
| for the drilling pipes to follow.
|
| Pretty much today's VR gaming headset world. Except, without a
| headset. And this was almost a quarter of a century ago.
|
| I can't imagine what the energy companies are doing now, with
| their supercomputers and seemingly limitless budgets.
| retrocryptid wrote:
| This is going to sound weird... but I really loved my 43P. And
| now I have a flood of nostalgia about it (and AIX).
| KingOfCoders wrote:
| Had an Amiga (500/A4k40), always wanted an SGI. We were at
| several CeBITs asking SGI to sell us a machine for the 3D
| graphics, but it never happened (we bought lots of monitors at
| CeBITs though). Later worked with SGIs in the 90s at my first
| developer job <3
| unixhero wrote:
| You can emulate a machine with QEMU and run IRIX there to enjoy
| the GUI.
|
| The killer apps on the platform were the various proprietary
| high-end graphical and 3D suites. These are much better on
| modern computers now anyways.
| AlbertoGP wrote:
| From reading about it, I was under the impression that the
| emulation was painfully slow, so I did not even try it. Is it
| possible to get close to a real machine under emulation with
| a current computer?
| mst wrote:
| My favourite thing about my Indigo2 (bought second hand for
| relatively cheap in the early '00s) was that unlike all the
| x86 kit I had running, it survived power brownouts with a
| line logged to console mentioning it had happened.
|
| When the area I was living at the time was having periodic
| power issues I'd check the console any time I got home, and
| if there was a new log message I knew I'd need to bring all
| the x86 kit back up once I'd had a coffee or three.
| jasoneckert wrote:
| I remember the SGI machines well. IRIX was easily my favorite
| graphical UNIX.
|
| We used SGI Indy systems for network administration at the
| university, and SGI Octanes for niche graphical applications and
| databases, but they were always considered an expensive luxury
| for both of those use cases. Nearly every other UNIX system at
| the university back then was Sun Microsystems.
| meerita wrote:
| I was a 16-year-old kid back then, buying PC Magazine just to
| see the new models of SGI workstations. I drooled so much, and
| I remember the crazy prices back then: $30k+ for small
| workstations. I loved the style of the cases, the colors and
| the OS. Then I hit 19, started working with a Mac, and my
| desire to acquire an SGI was gone.
| Keyframe wrote:
| I still power up my Amigas and Indigo2 10k Max Impact
| occasionally. Just to think of how much that purple computer cost
| back then (I worked on them, VFX).
|
| Anyways, here's a desktop for Linux that mimics IRIX 4dwm/Motif:
| https://docs.maxxinteractive.com/ if you're into that sort of
| thing.
| geocrasher wrote:
| The movie "Hackers" prophesied that RISC was going to "change
| everything". And it did, but not in these workstations, but
| rather in smart phones, raspberry pi's and other projects that
| have made RISC viable again.
| happycube wrote:
| x86 chips from the Pentium Pro and K6 on are basically hardware
| x86->RISC recompilers. (This is a good bit of why Transmeta
| failed in the end: the last good bits of Moore's Law ensured that
| the recompiling became cheaper/more efficient in hardware.)
| pjmlp wrote:
| ARM is very CISCy in its instruction set.
| mst wrote:
| So far as I can tell, people still call ARM RISC because it's
| load/store and x86 isn't (feel free to correct me on this,
| I'm not a CPU person), and ARM's instruction proliferation
| gets glossed over.
|
| I do remember writing asm for the arm26 (arm2) chip in my
| Archimedes and that was definitely actual RISC, but obviously
| these days not so much.
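|
| To make the load/store distinction concrete, here's a tiny
| sketch in C; the instruction sequences in the comments are
| representative of what compilers typically emit for each ISA,
| not exact output from any particular compiler:
|
|     /* load_store_demo.c - one C function, two ISA styles */
|     #include <stdio.h>
|
|     int add_from_mem(const int *p, int x) {
|         return *p + x;
|     }
|
|     /* x86-64 (register-memory): the add itself may take a memory
|      * operand, e.g.
|      *     mov  eax, esi
|      *     add  eax, DWORD PTR [rdi]
|      *
|      * AArch64 (load/store): memory is only touched by load/store
|      * instructions, so the value must be loaded into a register
|      * before the add, e.g.
|      *     ldr  w8, [x0]
|      *     add  w0, w8, w1
|      */
|
|     int main(void) {
|         int v = 40;
|         printf("%d\n", add_from_mem(&v, 2));  /* prints 42 */
|         return 0;
|     }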
| Cockbrand wrote:
| As an aside, I kinda find it funny how Apple flip-flops from CISC
| (68k) to RISC (PPC) to CISC (x86) back to RISC (ARM). Let's see
| whether RISC is here to stay now.
| unixhero wrote:
| I started buying all the workstations and Amigas I could find. To
| be honest the Amigas are annoying because each and every one
| needs some kind of upgrade or repair in order to work. And then
| there is the illogical AmigaOS and Workbench, where purists are
| the only ones who truly get it.
|
| I prefer Unix workstations instead and play retro games on my
| MiSTer.
| zozbot234 wrote:
| What's so illogical about Workbench/AmigaOS? It seems very
| intuitive to me, even by modern standards.
| unixhero wrote:
| Sure, the GUI is okay, you're right. But when you're repairing
| and tinkering you need to shim in drivers and make stuff work,
| and Google-fu is not enough to find solutions in my experience.
| That's the aspect I mean.
| vidarh wrote:
| There are a handful of sites to go to if you need help
| repairing:
|
| https://amigaworld.net
|
| https://forum.amiga.org/
|
| https://amigans.net
|
| https://eab.abime.net/ (English Amiga Board)
|
| You'll be able to get help there much more easily than via
| Google.
| unixhero wrote:
| Thanks a lot for these resources. I will surely look into
| these when I get any further with the 1200 machines or the
| Vampire-accelerated 600 or something.
| blihp wrote:
| It's only illogical in retrospect now that Unix/Linux 'won'.
| Back then, every platform had its own quirky OS and hardware.
| Of the bunch, I found the Amiga running AmigaOS among the
| _least_ quirky and illogical.
| zozbot234 wrote:
| Let's be clear, those workstations were hella expensive. (The
| Amiga was not in the true workstation range, but rather more of a
| glorified home computer. Their workstation equivalents would
| probably be the stuff from NeXT.) Their closest modern equivalent
| would probably be midrange systems like whatever Oxide Computer
| is working on these days. A workstation was simply a "midrange"
| level system that happened to be equipped for use by a _single_
| person, as opposed to a shared server resource. The descendant of
| the old minicomputer, in many ways.
| sleepybrett wrote:
| I'd say when you get into a fully kitted Amiga 2000 Video Toaster
| you get into 'workstation' territory, for my admittedly personal
| definition of 'workstation'. For me a 'workstation' is a
| machine built and optimized for a task, one that primarily runs
| that task and that task only. It is sometimes the 'core hardware'
| that is interesting, but often many of the peripherals are more
| interesting. Things I consider workstations include Avid and
| other video editing systems, machines built for CAD, and yes
| many of the 'desktop' SGI machines, which generally did nothing
| but run software like Softimage all day every day.
|
| The 'workstation' largely died because general off-the-shelf
| machines became fast enough to perform those tasks almost as
| well. You now see a more open market for the peripherals that
| help 'specialize' a general-purpose computer: Wacom tablets,
| video capture devices, customized video editing controllers,
| MIDI controllers, GPUs, etc.
| jmwilson wrote:
| Yep, the closest I ever got to an SGI was drooling over their
| product brochures as a kid. The cost of a modest Indy was about
| the same as a mid-range car. It's hard to grasp as a modern PC
| user that these workstations could handle classes of problems
| that contemporary PCs could not, no matter what upgrades you
| did. Today, it would be like comparing a PC to a TPU-based (or
| similar ASIC) platform for computing.
|
| From what I've read, Oxide is making racks of servers and has
| no interest in workstations that an individual would use.
| sleepybrett wrote:
| When a game company I worked at went out of business and
| couldn't unload their aging Indigo Elans and Indys I picked
| up one of each for about a hundred bucks. I now have some
| regrets, simply because their monitors have strange
| connectors, so I keep them around and they are heavy and
| annoying to store. That said, I could probably pay off my
| initial purchase and then some by unloading one of their
| 'granite' keyboards (Alps boards, collectors love them).
| tech2 wrote:
| That 13W3 connector is the worst. I also had an Indy many
| years ago and getting an adapter together for it was a real
| challenge. These days I expect it to be somewhat simpler
| though.
| notreallyserio wrote:
| No kidding:
|
| https://daringfireball.net/linked/2019/12/17/sgi-workstation...
|
| > The Octane line's entry-level product, which comes with a
| 225-MHz R10000 MIPS processor, 128MB of memory, a 4GB hard
| drive, and a 20-inch monitor, will fall to $17,995 from
| $19,995.
|
| Really makes the M1 Ultra look affordable.
| guyzero wrote:
| That's just over $31,000 in 2022 dollars. I don't think I can
| even imagine what kind of modern desktop you could build for
| that much money.
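|
| Roughly, for anyone curious where the $31,000 figure comes from
| (the CPI values below are approximate, and I'm assuming the
| quoted $17,995 price dates from 1999):
|
|     /* octane_2022_dollars.c - rough CPI adjustment of the $17,995 Octane */
|     #include <stdio.h>
|
|     int main(void) {
|         const double price_1999 = 17995.0; /* entry-level Octane price quoted above */
|         const double cpi_1999   = 166.6;   /* approx. 1999 annual average CPI-U */
|         const double cpi_2022   = 287.5;   /* approx. March 2022 CPI-U */
|         printf("~$%.0f in 2022 dollars\n", price_1999 * cpi_2022 / cpi_1999);
|         return 0;
|     }
|
| which prints roughly $31,054 - hence "just over $31,000".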
| justinlloyd wrote:
| Well, dual Xeon SP2 CPUs, multiple RTX A5000 GPUs, 30TB of
| SSD storage, 512GB of RAM, and dual Blackmagic quad-input 4K
| capture cards can get you pretty darn close when it comes
| to your computer vision work.
| jeffbee wrote:
| https://zworkstations.com/configurations/3010617/
|
| 24x 4.5GHz cores, 96GB memory, 48TB NVMe storage, 2 giant
| GPUs, etc.
| guyzero wrote:
| That's wild, although it seems to be server parts in a
| workstation case. I guess none of Intel's "desktop" chips
| support a dual/quad CPU configuration though, so that's
| your only choice. Quad 8 TB NVMe drives are definitely one
| way to get to $30K of parts pretty quickly.
| sbierwagen wrote:
| Neither Intel nor AMD supports SMP with consumer chips. To
| go dual-processor with AMD you have to buy EPYC SKUs,
| which are several times more expensive than their
| Threadripper core-count equivalents.
| nullc wrote:
| FWIW, EPYCs sell on eBay with $/core prices much closer
| to Threadripper prices -- presumably that's closer to what
| AMD is selling them for to large companies after
| discounts.
|
| The MSRP on them is ... quite staggering though!
| Sohcahtoa82 wrote:
| I'm not even sure you _could_ build a $31,000 desktop
| computer even if you wanted to without resorting to some
| ridiculous "expensive for the sake of being expensive"
| parts. Even quad RTX 3090 Ti's would only set you back
| $8,000 if you got them at MSRP.
|
| EDIT: Just saw the other comment and I stand corrected.
| paulmd wrote:
| You can run up costs pretty much arbitrarily with big
| memory and big storage. 2TB of RAM in a workstation will
| run you at least $30k if not more (it was $40k last time
| I checked), and you can go as high as 4TB in current
| systems. And big storage and NVMe arrays, it's almost a
| matter of "how much you got?", you can really scale
| capacity arbitrarily large if you've got the cash
| (although _performance_ won't increase past a certain
| point).
|
| This was always the dumb bit with the "apple wants HOW
| MUCH for a mac pro!?!?" articles about the "$50k mac"...
| it had $40k of memory in it alone, and the "comparable"
| systems he was building maxed out at 256GB theoretical
| and 128GB actual. That's great if it works; using a lower
| spec will push costs down on _both_ sides, but it's not
| _comparable_.
| nullc wrote:
| > 2TB of RAM in a workstation will run you at least $30k
|
| The trick here is to use a board with 32 DIMM sockets --
| which requires an oddball form factor -- but it radically
| lowers the cost of reaching 2TB.
|
| But your point remains: change your target to 4TB of RAM
| (which really isn't an absurd amount of RAM) and the
| astronomical costs come back (unless you go to systems with
| 96 DIMM sockets, which have their own astronomical costs).
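|
| To put rough numbers on the socket math (the per-GB price notes
| in the comments below are rough 2022 street-price assumptions,
| not quotes): the socket count determines how dense each module
| has to be, and module density is what drives the price per GB up.
|
|     /* dimm_math.c - why socket count changes the cost of big RAM configs */
|     #include <stdio.h>
|
|     static void config(const char *label, double target_gb, int sockets) {
|         /* capacity each module must have to hit the target across all sockets */
|         printf("%-26s -> %.0f GB per DIMM across %d sockets\n",
|                label, target_gb / sockets, sockets);
|     }
|
|     int main(void) {
|         config("2TB on a 16-socket board", 2048, 16); /* 128GB LRDIMMs, pricey per GB */
|         config("2TB on a 32-socket board", 2048, 32); /* 64GB RDIMMs, much cheaper per GB */
|         config("4TB on a 32-socket board", 4096, 32); /* back to 128GB modules, costs jump */
|         return 0;
|     }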
| justinlloyd wrote:
| Top-of-the-line 512GB LRDIMM DDR4 will run you about
| $2,500 before tax if you buy name brand Samsung. I know
| this because that is what is in both of my dual Xeon
| workstations. It gets pricey when you go through Dell or
| HP of course.
| pram wrote:
| Quad RTX A6000s would be $24k and that's what would go in
| a "workstation"
| Sohcahtoa82 wrote:
| Venturing off-topic a bit here, but what exactly makes a
| "workstation" GPU? What's the difference between an RTX
| A6000 and an RTX 3090?
| jeffbee wrote:
| The A6000 has ECC memory and the 3090 does not. I think
| that's the chief differentiation between workstations and
| any other kind of desktop computer. Like a server, they
| will have ECC everywhere.
| dagw wrote:
| The main difference is you get twice the memory. If you
| don't need that, there is very little reason to get an
| A6000.
| justinlloyd wrote:
| ECC RAM, different cooling setup (blower vs side fans),
| very different thermal characteristics, 24GB or 48GB,
| more bus width usually, optimized paths for data load &
| unload, GPU interconnects for direct GPU to GPU
| communication, shareable vGPUs between VMs, GPU store &
| halt, h/w support for desktop state, GPU state hand-off
| to another machine. It isn't just a "more memory" kind of
| thing.
| pram wrote:
| In addition to the other stuff people posted you also get
| to use the certified GPU drivers. Which means they
| actually tested that the card would work 100% with
| AutoCAD or whatever
| ChuckNorris89 wrote:
| _> I'm not even sure you could build a $31,000 desktop_
|
| A decked-out Mac Pro can reach over $50,000 and it's not
| even as powerful as your 2x 3090 Ti example, but that's
| the Apple tax for you.
| EricE wrote:
| Have you tried to price out a comparable PC? If you even
| can? There aren't many that will take as much RAM as that
| $50K Mac Pro, and when you do find a PC that will, all of a
| sudden you realize there isn't much of an Apple tax at all
| for the equivalent hardware.
|
| Want to argue that Apple should have more variation in
| their offerings and price points? Sure - I heartily
| agree. But blithely tossing out a contextless $50K price
| tag as being some sort of "tax" is just silly.
| greggsy wrote:
| You can easily get well past $40k once you start adding
| some Quadro GPUs, 192GB of RAM and a few TBs of PCIe storage
| into any of the mainstream manufacturers' workstation
| products.
| nazgulsenpai wrote:
| Could easily get there with a Mac Pro:
| https://www.apple.com/shop/buy-mac/mac-pro/tower
| usefulcat wrote:
| The Indy, which predated the Octane, started much lower ($5k
| according to Wikipedia, presumably in mid-nineties dollars),
| but yeah your point very much stands.
| twoodfin wrote:
| The Indy, though, was notoriously underpowered. Very much
| the "glorified home computer" the GP described, albeit
| running MIPS.
|
| Still, sure did stand out in the MIT computer labs!
| Epiphany21 wrote:
| Indys weren't truly that slow. The problem was the base
| models were memory constrained to the point where IRIX
| could barely boot. 16MB was not enough, and IRIX 5.x had
| memory leaks that made it even worse. An Indy with 96MB+
| will run IRIX 6.5 pretty well.
| usefulcat wrote:
| That sounds right. I believe most or all developers at
| the place I worked had either 32 or 64 MB in their
| machines. At first (~1995) most were probably using IRIX
| 5.3, but by 96 or 97 I think most if not all had moved to
| 6.5.
|
| Whatever I had, I don't recall lack of memory ever being
| a problem. And the GUI was quite snappy.
| jacquesm wrote:
| The GUI was fantastic: minimal, got out of your way as
| much as possible, and used the hardware acceleration to
| great effect. IRIX 6.5 was rock solid; I used it as my
| main driver for years before switching to Linux. We also
| had some Windows boxes floating around because we
| supported a Windows binary, but that and admin were the
| only things done on those; everything was either SGI or
| Linux. I was still using my SGI keyboard two years ago,
| but it finally died.
| vidarh wrote:
| In some places SGI did a great job of giving good deals to
| computer labs. When I was at university in Oslo, there
| were rows and rows of Indys on one side of the biggest
| undergrad computer lab, and then a bunch of Suns with
| multiple monochrome Tandberg terminals hooked up on the
| other.
|
| No big surprise that the Indy side always filled up
| first, and that "everyone" soon had XEarth and similar
| running as backgrounds on the Indys... Of course
| "everyone" loved SGI and were thoroughly unimpressed with
| Sun after a semester in those labs.
| don-code wrote:
| There's a running joke about the Indy that it's the Indigo
| (its much-more-expensive brother) without the "go".
| api wrote:
| > Really makes the M1 Ultra look affordable.
|
| The amount of power you can buy today for under $1000 let
| alone under $10000 is insane compared to back then. The M1
| Ultra is not that expensive compared to mid-range
| workstations or even high-end PCs of previous eras.
| vidarh wrote:
| I used to run an e-mail service with ~2m user accounts
| '99-'01. Our storage was an IBM ESS "Shark" stocked with
| 1.5TB of drives and two RS/6000 servers as the storage
| controllers.
|
| Add on web frontends and mail exchangers, and the entire
| system was slower and had less aggregate RAM and processing
| power, and less (and slower) disk (well, SSD in my laptop)
| than my current $1500 laptop.
| paulmd wrote:
| Yeah, I don't quite get the way people sometimes reminisce
| about the hardware costs of the past. We used to have
| _consumer_ CPUs topping $1000 back when that was some
| serious money, and big-boy graphics workstations could
| easily run into the tens or hundreds of thousands.
| TheOtherHobbes wrote:
| Non-toy computers were only available to the relatively
| wealthy for nearly a decade. The original Apple II was
| the equivalent of around $5000, which certainly wasn't a
| casual purchase for most people.
|
| If you look in back-issues of Byte the prices of early
| PCs with a usable spec are eye-watering, even before
| correcting for inflation.
|
| Prices didn't start dropping to more accessible levels
| until the 90s.
| LinuxBender wrote:
| Anecdotally, a friend had a computer store that sold Amigas and
| had his entire inventory bought out by the CIA _who never paid
| him_, so they must have been powerful for something. This
| was in the late 90's. No idea what they were using them for. I
| used one to help a friend run a BBS. I could play games with
| incredible graphics whilst the BBS was running in the
| background.
| vidarh wrote:
| If it was late 90's, as much as I love Amiga, it would have
| been for niche stuff like replacing a bunch of information
| screens or something like that where they _could_ have
| replaced it with PCs but would then need to change their
| software setup. In terms of "power" the Amiga was over by
| the early 90's, even if you stuffed it full of expensive
| third party expansions. It still felt like an amazing system
| for a few years, but by the late 90's you needed to seriously
| love the system and AmigaOS to hold onto it, and even for
| many of us who did (and do) love it, it became hard to
| justify.
| mysterydip wrote:
| Could have been a case of designing a platform around the
| hardware in the early 90s, then being desperate for parts
| to keep the platform going while designing the upgrade.
| vidarh wrote:
| Maybe, in which case it'd likely still be something
| video-oriented. The Amiga was never particularly fast in
| terms of raw number-crunching. As a desktop computer it
| _felt_ fast because of the pre-emptive multitasking and
| the custom chips and the use of SCSI instead of IDE. Even
| the keyboard had its own CPU (a 6502-compatible SOC) on
| some of the models - "everything" was offloaded from the
| main CPU, and so until PCs started getting GPUs etc. it
| didn't matter _that much_ that Motorola from pretty early
| on was struggling to keep up with the x86 advances.
|
| But for video it had two major things going for it:
| Genlock, allowing cheap passthrough of a video signal and
| overlaying Amiga graphics on top of the video, and
| products like the Video Toaster that was initially built
| around the Amiga.
|
| So you could see Amigas pop up in video contexts many
| years after they were otherwise becoming obsolete, because
| of that.
| mst wrote:
| It seems entirely plausible to me that three letter
| agencies could also have done render farm type things
| like this: http://www.generationamiga.com/2020/08/30/how-
| 24-commodore-a...
|
| (I think this is a subset of your comment rather than an
| extension but Babylon 5 reference ;)
| vidarh wrote:
| That'd be Video Toasters. But NewTek ditched Amiga
| support in '95, and by the late 90's PCs or DEC Alphas would
| cream the Amigas for the render farms.
|
| Even Babylon 5 switched the render farms for seasons 4
| and 5.
|
| Not impossible someone would still want to replace
| individual systems in a render farm rather than
| upgrading, but given the potential speed gains it'd seem
| like a poor choice.
| jasonwatkinspdx wrote:
| Yeah, I had a friend in the late 90s that used an Amiga
| with Genlock to fansub anime. I wouldn't be surprised if
| the CIA had some generic media rack kit or whatever that
| did something similar.
|
| People also kept Amigas going past their prime for apps
| like Deluxe Paint.
| sleepybrett wrote:
| Hell even in the early aughts you would still see video
| toasters in use at small local television stations until
| they were finally killed by HD.
| zerohp wrote:
| Amigas were used for a lot of weird video things like
| touch-screen video kiosks. Genlock a serial-controlled
| LaserDisc player to the Amiga and put it in a cabinet
| with a serial-port touch screen.
|
| A PC could certainly replace it by 2000 but if you
| developed your content in the mid-1980's then Amiga was
| probably your solution and you needed to keep it going
| for a while.
| blihp wrote:
| A loaded up Amiga (i.e. add a CPU accelerator board, more RAM
| than most PCs could handle, specialized video processing cards
| etc) could get into the low end of workstation territory. But
| you are right that architecturally, they had more in common
| with high-end PCs than workstations of their day. The Amiga's
| main claim to fame from a hardware standpoint was its
| specialized chipset.
___________________________________________________________________
(page generated 2022-04-05 23:01 UTC)