[HN Gopher] Atari Transputer Workstation
___________________________________________________________________
Atari Transputer Workstation
Author : gscott
Score : 100 points
Date : 2021-03-17 09:26 UTC (1 day ago)
(HTM) web link (dunfield.classiccmp.org)
(TXT) w3m dump (dunfield.classiccmp.org)
| abraxas wrote:
| I always wondered if we could achieve greater success in server
| hardware by having boards with truly massive numbers of tiny CPUs,
| all with their own RAM and maybe even individual storage. Since
| most web apps are very simple at an individual session level,
| having a 1-to-1 mapping between a CPU and a session could provide
| for a great developer and user experience when building server
| apps.
| guenthert wrote:
| > I always wondered if we could achieve greater success in
| server hardware by having boards with truly massive numbers of
| tiny CPUs all with their own RAM
|
| You mean like GPGPUs?
| abraxas wrote:
| Not familiar with the term. I'll read up on it.
| rbanffy wrote:
| One day I'll design a board with a beefy network switch and a
| bunch (32, perhaps) of Octavo SiPs (an ARM core, RAM, ethernet,
| SD reader wires, GPIO in a single BGA package), put a red LED
| for each module on the edge, stack 32 of those in a translucent
| box and play with something that, at least, has the same number
| of blinking lights as a Connection Machine.
| shrubble wrote:
| Some of the Inmos folks ended up as the principals at XMOS (a
| play on the Inmos name), which still has a vaguely transputer-ish
| design.
|
| As well, there is the KRoC compiler, which allows a variant of
| the occam programming language to be run on Linux, OS X and I
| think Windows. (Mentioning this in case anyone wants to play
| with the concepts without having a transputer.)
| [deleted]
| 2sk21 wrote:
| I wrote a lot of code on a Meiko Computing Surface in Occam in
| the late 1980s! It was very hard to get even simple algorithms
| to run.
| jeffbee wrote:
| The rework on the graphics card is killing me. That it was more
| economical to patch over circuit board design flaws with manual
| soldering than to re-spin a second revision of the board
| surprises me.
| buescher wrote:
| A board spin used to be orders of magnitude more expensive in
| both time and money than it is now, and this was a very low
| volume application.
| jasomill wrote:
| That, and bodges can be easily applied to already-assembled
| boards in inventory, repair centers, and the field.
| ilaksh wrote:
| What's the difference between a transputer and the latest
| graphics cards?
|
| Also, don't recent CPUs have a lot of engineering to integrate
| many cores together efficiently? Most high-end CPUs now have 4-16
| cores.
|
| I suspect that maybe just about every computer today is kind of a
| transputer.
| jacquesm wrote:
| > What's the difference between a transputer and the latest
| graphics cards?
|
| That you could string Transputers together via their links to
| create an arbitrary topology computing fabric.
| mattowen_uk wrote:
| Looks like it runs this OS:
|
| https://en.wikipedia.org/wiki/HeliOS
| a-dub wrote:
| interesting. if i'm reading the wikipedia article about this
| right, they were essentially SoCs with on-die high speed
| networking and no shared ram? the ring architecture reminds me a
| bit of multicore x86...
| timthorn wrote:
| MicroLive had a feature on the Transputer back in the day:
| https://clp.bbcrewind.co.uk/be4d23d20200fca1b1db963376852c3f
| bitwize wrote:
| With all the talk about attaching CPUs to RAM modules, the
| transputer may well live again!
| reason-mr wrote:
| I did my Ph.D. thesis on a Sun 4/110 connected to a VME
| transputer board and from there into a larger transputer array,
| using T-800s. Amazing and way ahead of its time - what really
| killed things was a combination of the removal of financial
| support by the UK Govt, and also an unexpected increase in the
| clock frequency of single core CPUs, rendering anything which was
| not on the latest process out of date. Then the UK went into
| recession, and many good people left. Some went to the west coast
| US, others took teaching positions in places like Australia. I do
| always wonder what could have become of the UK computer industry
| in the early 90's had it been appropriately funded at the right
| time (via something like DARPA). But instead, the concentration
| went into turning London into a financial hub.
| [deleted]
| reason-mr wrote:
| And also - while we are on the subject of british parallel
| computing - have a look at this :
| https://link.springer.com/chapter/10.1007%2F3-540-17943-7_12...
| - Cobweb - wafer scale VLSI in 1987
| fanf2 wrote:
| What killed the transputer, from the technical point of view,
| was the failure of the T9000. Its target clock speed was, IIRC,
| 50MHz (it was roughly contemporary with the first Pentium) but
| Inmos had terrible problems getting it to run at more than
| about 20MHz.
|
| So companies that had been building large multiprocessors using
| transputer switched to other architectures, eg Meiko who were
| in the building next door were making machines with SPARC CPUs
| and their own interconnect.
|
| The T9 was cool, though. The transputer instruction set was a
| stack-based byte code, very dense but by the 1990s not that
| fast, because of the growing discrepancy between CPU speed and
| memory speed. So the T9 had an instruction decoder that would
| recover RISC-style ops from the stack bytecode. It was helped a
| bit because the transputer had the notion of a "workspace", a
| bit of memory (about 16 words) that a lightweight process could
| access with very short instructions - in the T9 this
| effectively became the register set. The T9 would have been a
| very early superscalar CPU.
|
| And the T9's new fast serial links used a relatively efficient
| layer 1 signalling scheme that was later reused for IEEE 1394
| FireWire.
|
| (I was an intern at Inmos between secondary school and
| university, 1993-1994, when this was happening.)
| UncleOxidant wrote:
| > unexpected increase in the clock frequency of single core
| CPUs
|
| The 90s were brutal for alternative architectures like the
| Transputer because the performance of Intel processors was
| significantly improving just about yearly. I recall a neural
| net chip startup company near where I live - they did some cool
| science Saturday presentations for the public where they
| explained how neural nets worked (this was the early 90s). But
| unfortunately, they only lasted a few years - they were only
| about 25 years ahead of their time. Now here we are in the
| 2020s and alternative architectures are sprouting like
| dandelions.
| cesaref wrote:
| I attended Bristol Uni's Computer Science department in the
| late 80s. They had a room of Sun 3s which had transputer cards,
| which were programmed in occam in a weird folding editor. It
| was clearly the future, and all programming would look like
| that one day (hint: not in the future I ended up living in).
|
| I also seem to remember seeing a demo of a Mandelbrot set being
| rendered impressively quickly in parallel on a transputer based
| machine, which I think was a cube shaped machine. A quick look
| on the web doesn't throw up any obvious hits though.
| criddell wrote:
| Could the cube shaped machine be an SGI Iris?
| fanf2 wrote:
| No, that didn't contain any transputers, but it might have
| been a Parsytec GigaCluster
|
| http://www.geekdot.com/category/hardware/transputer/parsytec...
| cesaref wrote:
| Just found reference to it on David May's page, which is
| suitably retro html for the Inmos architect who created
| occam and did all sorts of interesting things like formally
| prove their FPU implementation (before formal proofs for
| that sort of thing were common):
|
| http://people.cs.bris.ac.uk/~dave/transputer.html
|
| 'The B0042 board contains 42 transputers connected via
| their links into a 2-dimensional array. A number of them
| were built following a manufacturing error - all of these
| transputers were inserted into the packages in the wrong
| orientation so were fully functional but unsaleable. I had
| them all (around 2000) written off for engineering use and
| we built the B0042 'evaluation' boards! Many of these were
| given to Southampton University where they were assembled
| into a 1260 processor machine and used for experimental
| scientific computing. Inmos used them in a number of
| exhibitions (in a box of 10 boards - 420 processors)
| drawing Mandelbrot sets in real time!'
|
| Sounds like the machine I remember. A 420-processor machine in
| a box in the late 80s was quite something.
| youngtaff wrote:
| Bristol has many silicon design companies based around it, and
| they're often credited to Inmos having been there first.
| voldacar wrote:
| https://www.youtube.com/watch?v=W17opJa9KGY
|
| might have been this one? or perhaps the following old demo:
|
| https://www.youtube.com/watch?v=cdK3PXKvYgs
| aap_ wrote:
| I got to play with one of these two years ago for a bit. Quite a
| fun machine!
| buescher wrote:
| I remember reading about these; I didn't know Atari ever brought
| it to market.
|
| I did personally see a transputer card in a PC running a
| Mandelbrot demo in that era. EGA graphics! I don't think the
| person who had that in his office ever got it to do what he
| originally bought it for, btw.
|
| Here's a video; from 1986!
| https://www.youtube.com/watch?v=cdK3PXKvYgs
| agumonkey wrote:
| i knew about them but seeing it run is quite shocking
| alexisread wrote:
| Yes, Atari did, and without the ATW, the Falcon and Jaguar
| wouldn't have happened.
|
| Richard Miller designed the Blossom video card, was hired by
| Atari to design the Falcon (which contained a cut-down
| Blossom), and in turn hired his ex-Sinclair friends to design
| the Jaguar.
|
| https://atariage.com/forums/topic/212866-atari-sparrow-proto...
| leashless wrote:
| I saw Mandelbrot sets rendering on these in realtime in the late
| 1980s or very early 1990s. I knew I was seeing the future. Now it
| all fits on an M1 Mac.
| UncleSlacky wrote:
| I saw these too, running on a Meiko Computing Surface
| containing (IIRC) 64 Transputers when I was at university. We
| were taught Occam, but weren't allowed to touch the machine
| itself.
|
| Many years later, I found a T-800 in its storage case abandoned
| in the drawer of my new (to me) desk at a new job - so I kept
| it.
| randomifcpfan wrote:
| Transputer (and later Cell) was a bet that SMP with cache
| coherency would be too difficult to implement. Here's a long
| explanation from Linus Torvalds of why we're not using
| architectures like Transputer any more:
| https://www.realworldtech.com/forum/?threadid=200812&curpost...
| benreesman wrote:
| Microservice-by-default architecture is, IMHO, neatly rebutted
| by the same argument.
| wrs wrote:
| From the manual:
|
| "[In the future] a single processor will provide somewhere
| between a gigaflop and a teraflop of performance. Are there
| really problems which need such power?"
|
| It goes on to list problems like quantum chemistry simulations
| and weather forecasting. Turns out the answer was...running a web
| browser.
| siltpotato wrote:
| Didn't foresee the inefficiencies and overhead of modern
| programming.
| shaunxcode wrote:
| I like to believe the transputer lives on in the various flavors
| of CSP that are actively in use (GO, core.async etc.)
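The lineage is fairly direct: occam's channels and Go's unbuffered channels both give CSP rendezvous semantics, where a send blocks until a matching receive, much as a transputer link did. A small sketch in Go (the process names are illustrative):

```go
package main

import "fmt"

// square is one CSP "process": it reads ints from in and writes
// their squares to out, like an occam PROC wired up with channels.
func square(in <-chan int, out chan<- int) {
	for v := range in {
		out <- v * v
	}
	close(out)
}

func main() {
	in := make(chan int)  // unbuffered: sender and receiver
	out := make(chan int) // rendezvous, like a transputer link
	go square(in, out)
	go func() {
		for i := 1; i <= 4; i++ {
			in <- i
		}
		close(in)
	}()
	for v := range out {
		fmt.Println(v) // prints 1, 4, 9, 16
	}
}
```

Occam's PAR construct corresponds to spawning the goroutines; its ALT corresponds to Go's select statement.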
| tyingq wrote:
| The description of the transputer seems to have a lot in common
| with the Parallax Propeller, which is pretty popular.
|
| https://www.parallax.com/propeller/
| hinkley wrote:
| I feel like at this point we should be teaching History of
| Computing in CS programs.
|
| The old people cry, "This is all the same stuff that's been
| going on forever," the young cry, "This is the future, old man."
|
| The truth is somewhere in the middle.
|
| "The future is already here, it's just unevenly distributed,"
| very much describes the cycles, but I feel like in some ways this
| was more obvious 15 years ago, when PC evolution was the focus
| (nowadays it seems like mobility takes up a lot of bandwidth,
| which is a huge thing that feels small).
|
| Consider things like GPUs hitting a tipping point, where some
| feature that had been around for 4-8 years became ubiquitous
| enough that software would assume it was available - and fast. UI
| expectations ratchet up almost overnight even though the
| underlying technology has been simmering for years. This was
| quite pronounced in the 90's, but continued well into the 00's.
|
| Everyone I think can see those, but the epicycles take a bit more
| study or time. Big architectural changes are driven by cost
| inequalities in our technologies, and those cycle. Eventually the
| right somebodies get fed up and we get SSDs, or 10G Ethernet.
| Each of those makes some previously abandoned solutions viable
| again, and they sneak back in (often having to relearn old
| mistakes).
|
| This multiprocessing idea dates back almost to the very beginning
| of private sector computing. The ILLIAC IV (operational in 1975)
| was designed for 256 processing elements, but budget and hardware
| problems capped it at 64. Those processors were 64-bit, and the
| machine was connected to ARPAnet a year before the Cray 1 was
| born.
|
| Sequent revisited this idea in the late 80's, early 90's, using
| Intel x86 processor arrays.
|
| We now have a handful of programming languages that have features
| that could be useful in aggressively NUMA systems (in
| particular, Scala, Elixir, and Rust). We'll probably see single-
| box clusters coming around again.
| UncleOxidant wrote:
| Definitely should be teaching more history. I recall that
| neural nets were pretty popular in the late 80s into the mid
| 90s. I recall a local startup that was working on a specialized
| neural net chip back in the early 90s, but they couldn't keep
| up with performance/price improvements from Intel & AMD and
| folded after a few years. Now there must be dozens of companies
| doing specialized architectures for neural nets.
| brianobush wrote:
| It is much cheaper nowadays to make ASICs, which I think
| partially explains the explosive growth in NN chips.
| UncleOxidant wrote:
| Not sure I completely agree. I was in the ASIC biz back in
| the early 90s. I knew about the NN company I described
| above because they were a customer. Looking at NRE costs
| now vs then it doesn't seem all that different (considering
| inflation). (Sure, they can do a lot more gates now than
| back then)
| flyinghamster wrote:
| > Sequent revisited this idea in the late 80's, early 90's,
| using Intel x86 processor arrays.
|
| Also, there were NS32000 versions, like the Balance 8000 with
| six CPUs I remember at UIUC around 1986. I was floored by how
| effortlessly it handled having just about everyone in the CS
| class compiling their assignments the night before they were
| due. Compared to the Pyramid 90x I'd used the year before, it
| was like night and day.
|
| It took a while, but that eventually percolated down to
| everyday PCs. It's kind of stunning that I can get 64-core CPUs
| these days, and even my much more modest first-generation Ryzen
| is no slouch.
| tyingq wrote:
| _" Compared to the Pyramid 90x I'd used"_
|
| Ahh, the original OSX.
| Maursault wrote:
| > I feel like at this point we should be teaching History of
| Computing in CS programs.
|
| I am responding in trepidation, because I am certain you and
| everyone at HN must know what I am about to say.
|
| Computer Science is not the science of computers, nor is it
| remotely the history of computers. The "computer" in Computer
| Science is not a machine... it is a person, "one who computes."
| Nor is Computer Science programming, not strictly speaking,
| though programming is often among the tools utilized by a
| computer scientist. Computer Science is and only is a subset of
| Mathematics, and properly initially belongs in the Math
| Department of a university.
|
| The simplest analogy I have heard, which I think most now have,
| is that a computer is to a computer scientist what a telescope
| is to an astronomer. Astronomy is not the science of
| telescopes, nor the history of telescopes, though I would
| expect most Astronomy curriculums to include some overview of
| how telescopes work and their history, but not as some core and
| essential tract within the study. So, insofar as many machines
| were utilized to forward the pursuit of Computer Science, your
| idea has merit, so long as the focus stays on the computer
| science and not the nuts-and-bolts computer.
|
| If I am not mistaken, Computer Engineering probably doesn't
| devote much more than a brief overview to the history of the
| actual hardware. The CE undergraduate degree is overflowing as
| it is.
|
| IMO, what you are suggesting belongs in the curriculum of the
| History of Technology, which is a perfectly valid and endlessly
| fascinating pursuit.
|
| This machine is neat, and I was using computers during this
| era, so it makes my mouth water, "what if I had access to
| that?" But unless it was actually used by someone, a computer
| scientist, for and to advance actual Computer Science (and that
| _can not_ merely be programming or creating business
| applications or games, but needs to at least be efforts towards
| computational systems), it is entirely irrelevant to the field
| of Computer Science.
|
| Also, in the sense that Computer Science predates hardware by
| millennia, the History of Computing (i.e. the history of the
| activity of one who computes) is already covered in C.S.
|
| As was suggested to me years ago (and I completely agree),
| since "computer" is now an ambiguous term, Computer Science
| should change its name to avoid the all too common mistake of
| assuming CS has to do with desktops and servers. Computer
| Science is really the science of reckoning, so it should be
| called "Reckoning Science" to avoid further confusion.
| PAPPPmAc wrote:
| There's a quote from Alan Kay (who is full of delightful
| aphorisms) in a 2002 Dr. Dobb's interview - "The lack of
| interest, the disdain for history is what makes computing not-
| quite-a-field" - that is one of my favorite descriptions of the
| state of things.
|
| I'm a huge advocate for teaching History of Computing, and try
| to slip some into my classes (I teach computer engineering, but
| close enough) - maybe some year I'll manage to sell running a
| whole elective course.
| thescriptkiddie wrote:
| I strongly agree that we should be teaching a history of
| computing in CS/CE programs. I fondly remember my intro to
| computing systems class taught by a greybeard who spent the
| entire first lecture going over history and then told us to
| learn emacs or vim for homework.
| flenserboy wrote:
| Have a term where students, familiar with modern programming,
| have to deal with programming on older (80s?) machines (on
| VMs, at least). Not only will it force them to deal with
| constraints they would otherwise be unaware of, they will
| appreciate what they have available to them today much, much
| more.
| filoleg wrote:
| People underestimate how useful it would be, despite it not
| being "directly applicable".
|
| If I had to pick one class that I would call fundamental to
| my understanding of CS, it would be CS2110 (Computer
| Organization and Programming) from Georgia Tech.
|
| It started off with building stuff using logic gates, like
| ALUs. Then it moved on to other stuff. It all culminated in
| building your own simplistic CPU pipeline (in an emulator) and
| making a small game for the Game Boy Advance. Dealing with the
| hardware limitations of that handheld console, as well as
| learning some of the interesting tricks the devs had to employ
| to add stuff like parallax backgrounds, felt eye-opening.
| thaeli wrote:
| Between GPUs and increasing core counts on CPUs, in many ways
| we've had single-box clusters for a while as normal current-gen
| workstations.
| dfox wrote:
| During the last 20 years the typical x86 box went from single
| CPU, through multiple CPUs, through multiple CPUs with non-
| trivial NUMA topology to the current state where not having
| non-trivial NUMA topology is meaningful point in marketing of
| the thing. The primary reason is that large class of somewhat
| interesting workloads do not cope well with NUMA and then
| obviously running such workloads on some kind of weakly
| coupled cluster is completely impossible.
___________________________________________________________________
(page generated 2021-03-18 23:00 UTC)