[HN Gopher] Ask HN: Weirdest Computer Architecture?
___________________________________________________________________
Ask HN: Weirdest Computer Architecture?
My limited understanding of "the stack" is:
Physical substrate: Electronics
Computation theory: Turing machines
Smallest logical/physical parts: Transistors, logic gates
Largest logical/physical parts: Chips
Lowest level programming: ARM/x64 instructions
First abstractions of programming: Assembly, C compiler
Software architecture: Unix kernel, binary interfaces
User interface: Screen, mouse, keyboard, GNU
Does there exist a computer stack that changes all of these
components? Or one that at least uses electronics but substitutes
something else for Turing machines and above?
Author : bckr
Score : 71 points
Date : 2024-07-26 16:54 UTC (4 days ago)
| solardev wrote:
| Analog computers, quantum computers, light based computers, DNA
| based computers, etc.
| gjvc wrote:
| rekursiv
| gjvc wrote:
| https://en.wikipedia.org/wiki/Rekursiv
| runjake wrote:
| Here are some architectures that might interest you. Note these
| are links that lead to rabbit holes.
|
| 1. Transmeta: https://en.wikipedia.org/wiki/Transmeta
|
| 2. Cell processor: https://en.wikipedia.org/wiki/Cell_(processor)
|
| 3. VAX: https://en.wikipedia.org/wiki/VAX (Was unusual for its
| time, but many concepts have since been adopted)
|
| 4. IBM zArchitecture:
| https://en.wikipedia.org/wiki/Z/Architecture (This stuff is
| completely unlike conventional computing, particularly the "self-
| healing" features.)
|
| 5. IBM TrueNorth processor:
| https://open-neuromorphic.org/blog/truenorth-deep-dive-ibm-n...
| (Cognitive/neuromorphic computing)
| Bluestein wrote:
| > Transmeta
|
| Whatever happened to them ...
|
| They had a somewhat serious go at being the "third wheel"
| from the mid-1990s into the early 2000s. PS: I actually
| considered getting a Crusoe machine back in the day ...
| runjake wrote:
| They released a couple processors with much lower performance
| than the market expected, shut that down, started licensing
| their IP to Intel, Nvidia, and others, and then got acquired.
| em-bee wrote:
| i did get a sony picturebook with a transmeta processor. the
| problem was that as a consumer i didn't notice anything
| special about it. for transmeta to make it, they would have
| had to be cheaper, faster, or use less power to be attractive
| for consumer devices.
| sliken wrote:
| They had a great, promising plan, and Intel was focused
| entirely on the Pentium 4, which had high clocks for bragging
| rights, a long pipeline (related to the high clocks), and high
| power usage.
|
| However, between Transmeta's idea and shipping a product,
| Intel's Israel location came up with the Intel Core series:
| MUCH more energy efficient, much better performance per clock,
| and ideal for lower-power platforms like laptops.
|
| Sadly, Transmeta no longer had a big enough advantage, sales
| decreased, and I heard many of the engineers ended up at
| Nvidia, which did use some of their ideas in an Nvidia
| product.
| nrr wrote:
| For those really wanting to dig into z/Architecture:
| <https://www.ibm.com/docs/en/module_1678991624569/pdf/SA22-78...>
|
| The Wikipedia link has it as its first reference, but it's
| honestly worth linking directly here. I highly recommend trying
| to get someone to give you a TSO/E account and picking up
| HLASM.
| skissane wrote:
| I put MVS 3.8J in a Docker image:
| https://github.com/skissane/mvs38j
|
| Not TSO/E, rather just plain old TSO. Not HLASM, rather its
| predecessor Assembler F (IFOX00). Still, if you get the hang
| of the 1970s version, the 2020s version just adds stuff. And
| some of what it adds is more familiar (like Unix and Java).
| lormayna wrote:
| Why did the Cell processor not have success in AI/DL
| applications?
| crote wrote:
| It was released a decade and a half too early for that, and
| at the time it was too weird and awkward to use to stay
| relevant once CUDA caught on.
| mass_and_energy wrote:
| This. CUDA handles a lot of overhead that the dev is
| responsible for on the Cell architecture. Makes you wonder
| what PS3 games would have looked like with a CUDA-style
| abstraction of the Cell's capabilities.
| sillywalk wrote:
| I don't know the details of CUDA, so this may not be a
| good comparison, but there were efforts to abstract
| Cell's weirdness. It wasn't for the PS3, but for
| supercomputers, in particular the Roadrunner, which had
| both Opteron and Cell processors. It was called CellFS
| and was based on the 9P protocol from Plan 9. The papers
| report 10-15% overhead, which may not have worked for
| PS3 games.
|
| https://fmt.ewi.utwente.nl/media/59.pdf
|
| https://www.usenix.org/system/files/login/articles/546-mirtc...
|
| http://doc.cat-v.org/plan_9/IWP9/2007/11.highperf.lucho.pdf
| ramses0 wrote:
| Excellent summary, add "Water Computers" to the mix for
| completeness.
| https://www.ultimatetechnologiesgroup.com/blog/computer-ran-...
| PaulHoule wrote:
| I wouldn't say the VAX was unusual, even though it was a
| pathfinder: it showed what 32-bit architectures were going to
| look like. In the big picture the 386, 68040, SPARC and other
| chips since then have looked a lot like a VAX, particularly in
| how virtual memory works. There's no fundamental problem with
| getting a modern Unix to run on a VAX except for all the
| details.
|
| Z is definitely interesting from its history with the IBM 360
| and its 24-bit address space, which, around the time the VAX
| came out, got expanded to 31 bits. (24-bit micros existed in
| the 1980s, such as the 286, but I never had one that was
| straightforward to program in 24-bit mode.)
|
| https://en.wikipedia.org/wiki/IBM_System/370-XA
| skissane wrote:
| > 24 bit micros existed in the 1980s such as the 286 but I
| never had one that was straightforward to program in 24 bit
| mode
|
| To be clear, we are talking about 24-bit physical or virtual
| addressing (machines with a 24-bit data word were quite rare,
| mainly DSPs):
|
| 286's 16-bit protected mode was heavily used by Windows 3.x
| in Standard mode. And even though 386 Enhanced Mode used
| 32-bit addressing, from an application developer viewpoint it
| was largely indistinguishable from 286 protected mode, prior
| to Win32s. And then Windows NT and 95 changed all that.
|
| 286's protected mode was also heavily used by earlier DOS
| extenders, OS/2 1.x, earlier versions of NetWare and earlier
| versions of Unix variants such as Xenix. Plus 32-bit
| operating systems such as Windows 9x/Me/NT/2000/XP/etc and
| OS/2 2.x+ still used it for backward compatibility when
| running older 16-bit software (Windows 3.x and OS/2 1.x)
|
| Other widely used CPU architectures with 24-bit addressing
| included anything with a Motorola 68000 or 68010 (32-bit
| addressing was only added with the 68020 onwards, while the
| 68012 had 31-bit addressing). So that includes early versions
| of classic MacOS, AmigaOS, Atari TOS - and also Apple Lisa,
| various early Unix workstations, and umpteen random operating
| systems which ran under 68K which you may have never heard of
| (everything from CP/M-68K to OS/9 to VERSAdos to AMOS/L).
|
| ARM1 (available as an optional coprocessor for the BBC Micro)
| and ARM2 (used in the earliest RISC OS systems) were slightly
| more than 24-bit, with 26-bit addressing. And some late pre-
| XA IBM mainframes actually used 26-bit physical addressing
| despite only having 24-bit virtual addresses. Rather similar
| to how 32-bit Intel processors ended up with 36-bit physical
| addressing via PAE
| mass_and_energy wrote:
| The use of the Cell processor in the PlayStation 3 was an
| interesting choice by Sony. It was the perfect successor to the
| PS2's VU0 and VU1, so if you were a game developer coming from
| the PS2 space and were well-versed in the concept of "my
| program's job is to feed the VUs", you could scale that
| knowledge up to keep the cores of the Cell working. The trick
| seems to be in managing synchronization between them all.
| Hatrix wrote:
| microcode - https://en.wikipedia.org/wiki/Microcode
|
| memristor -
| https://www.computer.org/csdl/magazine/mi/2018/05/mmi2018050...
| mac3n wrote:
| FPGA: non-sequential programming
|
| Lightmatter: matrix multiply via optical interferometers
|
| Parametron: coupled oscillator phase logic
|
| rapid single flux quantum logic: high-speed pulse logic
|
| asynchronous logic
|
| https://en.wikipedia.org/wiki/Unconventional_computing
| mikewarot wrote:
| BitGrid is my hobby horse. It's a Cartesian grid of cells,
| each a 4-bit-in, 4-bit-out LUT (look-up table), latched in
| alternating phases to eliminate race conditions.
|
| It's the response to the observation that _most of the
| transistors in a computer are idle at any given instant_.
|
| There's a full rabbit hole's worth of advantages to this
| architecture once you really dig into it.
|
| Description https://esolangs.org/wiki/Bitgrid
|
| Emulator https://github.com/mikewarot/Bitgrid
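|
| A minimal sketch of the two-phase idea in Python (my toy, not
| the linked emulator): each cell is a 16-entry lookup table
| mapping a 4-bit neighbor input to a 4-bit latched output, and
| only half the checkerboard updates per half-step, so no signal
| can race through more than one cell at a time:
|
|     import random
|
|     W, H = 8, 8
|     # Per cell: a 16-entry table mapping 4 input bits -> 4 output bits.
|     luts = [[[random.randrange(16) for _ in range(16)]
|              for _ in range(W)] for _ in range(H)]
|     state = [[0] * W for _ in range(H)]  # latched 4-bit outputs
|
|     def neighbor_input(x, y):
|         # Each side reads the bit its neighbor drives toward us.
|         # Output nibble layout: bit0=N, bit1=E, bit2=S, bit3=W.
|         n = (state[(y - 1) % H][x] >> 2) & 1   # north cell's S bit
|         e = (state[y][(x + 1) % W] >> 3) & 1   # east cell's W bit
|         s = state[(y + 1) % H][x] & 1          # south cell's N bit
|         w = (state[y][(x - 1) % W] >> 1) & 1   # west cell's E bit
|         return n | (e << 1) | (s << 2) | (w << 3)
|
|     def half_step(phase):
|         # Only cells whose checkerboard parity matches update.
|         for y in range(H):
|             for x in range(W):
|                 if (x + y) % 2 == phase:
|                     state[y][x] = luts[y][x][neighbor_input(x, y)]
|
|     for _ in range(10):   # one full clock = two half-steps
|         half_step(0)
|         half_step(1)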
| bjconlan wrote:
| On reading this I thought "oh someone's doing the Green Arrays
| thing" but this looks like it pre-dates those CPUs by some
| time.
|
| But since, surprisingly, nobody has mentioned it yet:
| https://www.greenarraychips.com/ albeit perhaps not weird, just
| different
| mikewarot wrote:
| The GreenArrays chips are quite interesting in their own
| right. The ability to have a grid of CPUs, each working on
| part of a problem, could be used to parallelize a lot of
| things, including the execution of LLMs.
|
| There are secondary consequences of breaking computation down
| to a directed acyclic graph of binary logic operations. You
| can guarantee runtime, as you know a priori how long each
| step will take. Splitting up computation to avoid the
| complications of Amdahl's law should be fairly easy.
|
| I hope to eventually build a small array of Raspberry Pi Pico
| modules that can emulate a larger array than any one module
| can handle. Linear scaling is a given.
| MaxBarraclough wrote:
| Agreed, they belong on this list. 18-bit Forth computers,
| available today and intended for real-world use in low-power
| contexts.
|
| Website:
| https://www.greenarraychips.com/home/documents/index.php#arc...
|
| PDF with technical overview of one of their chips:
| https://www.greenarraychips.com/home/documents/greg/PB003-11...
|
| Discussed:
|
| * https://news.ycombinator.com/item?id=23142322
|
| * https://comp.lang.forth.narkive.com/y7h1mSWz/more-thoughts-o...
| jononor wrote:
| That is quite interesting. Seems quite easy and efficient to
| implement in an FPGA. Heck, one could make an ASIC for it via
| TinyTapeout - https://tinytapeout.com/
| Elfener wrote:
| https://en.wikipedia.org/wiki/Phillips_Machine
| muziq wrote:
| The Apple 'Scorpius' thing they bought the Cray in the '80s to
| emulate: RISC, multi-core, but it could put all the cores in
| lockstep to operate as pseudo-SIMD. Or, failing that, the
| 32-bit 6502 successor, the MCS65E4..
| https://web.archive.org/web/20221029042214if_/http://archive...
| 0xdeadbeer wrote:
| I heard of counter machines on Computerphile
| https://www.youtube.com/watch?v=PXN7jTNGQIw
| yen223 wrote:
| A lot of things are Turing-complete. The funniest one to me is
| PowerPoint slides.
|
| https://beza1e1.tuxen.de/articles/accidentally_turing_comple...
|
| https://gwern.net/turing-complete
| jasomill wrote:
| I prefer the x86 MOV instruction:
|
| https://web.archive.org/web/20210214020524/https://stedolan....
|
| _Removing all but the mov instruction from future iterations
| of the x86 architecture would have many advantages: the
| instruction format would be greatly simplified, the expensive
| decode unit would become much cheaper, and silicon currently
| used for complex functional units could be repurposed as even
| more cache. As long as someone else implements the compiler._
| trealira wrote:
| > As long as someone else implements the compiler.
|
| A C compiler exists already, based on LCC, and it's called
| the movfuscator.
|
| https://github.com/xoreaxeaxeax/movfuscator
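|
| The core trick from the paper, sketched in Python with a dict
| standing in for memory: compute x == y using only stores and
| loads, no compare and no branch.
|
|     def mov_equal(x, y):
|         mem = {}
|         mem[x] = 0     # mov [x], 0
|         mem[y] = 1     # mov [y], 1  (overwrites the 0 iff x == y)
|         return mem[x]  # mov r, [x] -> 1 when x == y, else 0
|
|     assert mov_equal(5, 5) == 1
|     assert mov_equal(5, 6) == 0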
| nailer wrote:
| The giant global computers that are Solana mainnet / devnet /
| testnet. The programs are compiled from Rust into (slightly
| tweaked) eBPF binaries, and state updates every 400ms, using VDFs
| to sync clocks between the leaders that are allowed to update
| state.
| ithkuil wrote:
| The CDC 6000's peripheral processors formed a barrel
| processor: https://en.m.wikipedia.org/wiki/Barrel_processor
|
| Mill CPU (so far only patent-ware, but interesting
| nevertheless): https://millcomputing.com/
| mikewarot wrote:
| I thought magnetic logic was an interesting technology when I
| first heard of it. It's never going to replace semiconductors,
| but if you want to compute on the surface of Venus, you just
| might be able to make it work there.
|
| The basic limits are the Curie point of the cores and the
| source of clock drive signals.
|
| https://en.m.wikipedia.org/wiki/Magnetic_logic
| mikewarot wrote:
| Vacuum tubes would be the perfect thing to generate the clock
| pulses, as they _can_ be made to withstand the temperatures,
| vibrations, etc. I'm thinking a nuclear reactor to provide
| heat via thermopiles might be the way to power it.
|
| However... it's unclear how thermally conductive the
| "atmosphere" there is; it might make heat engines unworkable,
| no matter how powerful.
| dann0 wrote:
| The AMULET project built asynchronous versions of ARM
| microprocessors. Maybe one could design away the clock like
| with these? https://en.wikipedia.org/wiki/AMULET_(processor)
| mikewarot wrote:
| In the case of magnetic logic, the multi-phase clock IS the
| power supply. Vacuum tubes are quite capable of operating
| for years in space, if properly designed. I assume the same
| could be done for the elevated pressures and temperatures
| on the surface of Venus. As long as you can keep the
| cathode significantly hotter than the anode, to drive
| thermionic emission in the right direction, that is.
| phyalow wrote:
| Reservoir computers;
|
| https://en.m.wikipedia.org/wiki/Reservoir_computing
| amy-petrik-214 wrote:
| There was some interesting funk in the 80s:
|
| Lisp machines: https://en.wikipedia.org/wiki/Lisp_machine
| (these were very hot in 1980s-era AI)
|
| Connection Machine:
| https://en.wikipedia.org/wiki/Connection_Machine (a gorillion
| one-bit processors in a supercluster)
|
| Let us also not forget the Itanic.
| defrost wrote:
| Setun: a three-valued (balanced ternary) logic computer
| instead of the common binary:
| https://en.wikipedia.org/wiki/Setun
|
| Not 'weird', but any architecture that doesn't have an 8-bit
| byte causes questions and discussion.
|
| E.g. the Texas Instruments DSP chip family for digital signal
| processing: they're all about deep-pipelined FFT computations
| with floats and doubles, not piddling about with 8-bit ASCII.
| There are no hardware-level bit operations to speak of, and
| the smallest addressable memory size is either 32 or 64 bits.
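|
| For flavor, here's balanced ternary (Setun's number system,
| digits -1/0/+1) in a quick Python sketch of my own; note that
| negation is just flipping every digit's sign, so no sign bit
| is needed:
|
|     def to_bt(n):
|         # Digits least-significant first, each in {-1, 0, 1}.
|         digits = []
|         while n != 0:
|             r = n % 3
|             if r == 2:       # write 2 as 3 - 1: digit -1, carry 1
|                 digits.append(-1)
|                 n = n // 3 + 1
|             else:
|                 digits.append(r)
|                 n //= 3
|         return digits or [0]
|
|     def from_bt(digits):
|         return sum(d * 3**i for i, d in enumerate(digits))
|
|     assert from_bt(to_bt(-42)) == -42
|     assert [-d for d in to_bt(42)] == to_bt(-42)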
| metaketa wrote:
| HVM, using interaction nets as an alternative to Turing-style
| computation, deserves a mention. Google: HigherOrderCompany
| dongecko wrote:
| Motorola used to have a one-bit microprocessor, the MC14500B.
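|
| For a feel of how minimal it was, here's a toy interpreter for
| a few of its 1-bit instructions (real mnemonics, simplified
| semantics; the actual chip has 16 four-bit opcodes plus the
| IEN/OEN enables and jump pins):
|
|     def run(program, inputs):
|         rr = 0          # the single 1-bit result register
|         outputs = {}
|         for op, pin in program:
|             if op == "LD":    rr = inputs[pin]      # load pin
|             elif op == "LDC": rr = inputs[pin] ^ 1  # load complement
|             elif op == "AND": rr &= inputs[pin]
|             elif op == "OR":  rr |= inputs[pin]
|             elif op == "STO": outputs[pin] = rr     # store RR
|         return outputs
|
|     # out = (a AND b) OR c
|     prog = [("LD", "a"), ("AND", "b"), ("OR", "c"), ("STO", "out")]
|     print(run(prog, {"a": 1, "b": 0, "c": 1}))  # {'out': 1}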
| prosaic-hacker wrote:
| Breadboard implementation using the MC14500b
|
| Usagi Electric 1-Bit Breadboard Computer P.01 - The MC14500 and
| 555 Clock Circuit
| https://www.youtube.com/watch?v=oPA8dHtf_6M&list=PLnw98JPyOb...
| ksherlock wrote:
| The Tinkertoy computer doesn't even use electricity.
| trealira wrote:
| The ENIAC, the first general-purpose electronic computer,
| didn't have assembly language. You programmed it by fiddling
| with patch cables and switches. Also, it didn't use binary
| integers, but decimal ones, with a 10-stage ring of vacuum
| tubes representing the digits 0-9.
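|
| Each digit lived in a 10-stage ring counter: exactly one stage
| is "on", an incoming pulse advances it one position, and the
| wrap from 9 to 0 carries into the next digit's ring. A rough
| Python sketch of that mechanism (not of ENIAC's actual
| circuitry):
|
|     class RingDigit:
|         def __init__(self):
|             self.value = 0            # which of 10 stages is lit
|         def pulse(self):
|             self.value = (self.value + 1) % 10
|             return self.value == 0    # carry out on 9 -> 0 wrap
|
|     ones, tens = RingDigit(), RingDigit()
|     for _ in range(23):               # add 23 as 23 pulses
|         if ones.pulse():
|             tens.pulse()
|     print(tens.value, ones.value)     # 2 3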
| dholm wrote:
| https://millcomputing.com/docs/
| RecycledEle wrote:
| Using piloted pneumatic valves as logic gates blew my mind.
|
| If you are looking for strangeness, microcontrollers from the
| 1990s to early 2000s had I/O ports, but every single I/O port
| was different. There was no standard that would let us (for
| example) plug in a 10-pin header and connect the same
| peripheral to any of the I/O ports on a single
| microcontroller, much less any microcontroller in the family.
| porridgeraisin wrote:
| Crab powered computer:
|
| http://phys.org/news/2012-04-scientists-crab-powered.html
|
| Yes, real crabs
| vismit2000 wrote:
| How about water computer? https://youtu.be/IxXaizglscw
| drakonka wrote:
| This reminds me of a talk I went to at the 2020 ALIFE conference,
| in which the speaker presented an infinitely scalable
| architecture called the "Movable Feast Machine". He suggested
| relinquishing hardware determinism - the hardware can give us
| wrong answers and the software has to recover, and in some cases
| the hardware may fail catastrophically. The hardware is a series
| of tiles with no CPU. Operations are local and best-effort,
| determinism not guaranteed. The software then has to reconcile
| that.
|
| It was quite a while ago and my memory is hazy tbh, but I put
| some quick notes here at the time:
| https://liza.io/alife-2020-soft-alife-with-splat-and-ulam/
| sitkack wrote:
| Dave Ackley
|
| Now working on the T2 Tile Project
| https://www.youtube.com/@T2TileProject
| theideaofcoffee wrote:
| I was hoping someone was going to mention Dave Ackley and the
| MFM. It has really burrowed down into my mind and I start to
| see applications of it even when I'm not looking for them. It
| really is a mindfuck and I wish it were something that was a
| bit more top of mind for people. I really think it will be
| useful when computation becomes even more ubiquitous than it
| is now; we'll have to think even more about failure, and make
| it a first-class citizen.
|
| Though I think it will be difficult to shift the
| better-performance-at-all-costs mindset toward something like
| this. For almost every application, you'd probably be better
| off worrying about integrity than raw speed.
| jecel wrote:
| "Computer architecture" is used in several different ways and
| that can lead to some very confusing conversations. Your proposed
| stack has some of this confusion. Some alternative terms might
| help:
|
| "computational model": finite state machine, Turing machine,
| Petri nets, data-flow, stored program (a.k.a. Von Neumann, or
| Princeton), dual memory (a.k.a. Harvard), cellular automata,
| neural networks, quantum computers, analog computers for
| differential equations
|
| "instruction set architecture": ARM, x86, RISC-V, IBM 360
|
| "instruction set style": CISC, RISC, VLIW, MOVE (a.k.a TTA -
| Transport Triggered Architecture), Vector
|
| "number of addresses": 0 (stack machine), 1 (accumulator
| machine), 2 (most CISCs), 3 (most RISCs), 4 (popular with
| sequential memory machines like Turing's ACE or the Bendix G-15)
|
| "micro-architecture": single cycle, multi-cycle, pipelines,
| super-pipelined, out-of-order
|
| "system organization": distributed memory, shared memory, non
| uniform memory, homogeneous, heterogeneous
|
| With these different dimensions for "computer architecture" you
| will have different answers for which was the weirdest one.
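|
| To make the "number of addresses" dimension concrete, here is
| d = a*b + c in 0-address (stack) style and 3-address (typical
| RISC) style, simulated in Python:
|
|     def stack_machine(a, b, c):
|         s = []
|         s.append(a)                        # PUSH a
|         s.append(b)                        # PUSH b
|         s.append(s.pop() * s.pop())        # MUL  (no operands)
|         s.append(c)                        # PUSH c
|         s.append(s.pop() + s.pop())        # ADD
|         return s.pop()
|
|     def three_address(a, b, c):
|         r = {1: a, 2: b, 3: c}             # loads
|         r[4] = r[1] * r[2]                 # MUL r4, r1, r2
|         r[5] = r[4] + r[3]                 # ADD r5, r4, r3
|         return r[5]
|
|     assert stack_machine(2, 3, 4) == three_address(2, 3, 4) == 10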
| jy14898 wrote:
| Transputer
|
| > The name, from "transistor" and "computer", was selected to
| indicate the role the individual transputers would play: numbers
| of them would be used as basic building blocks in a larger
| integrated system, just as transistors had been used in earlier
| designs.
|
| https://en.wikipedia.org/wiki/Transputer
| jy14898 wrote:
| Weird for its time, not so much today
| jacknews wrote:
| Of course there are things like the molecular mechanical
| computers proposed/popularised by Eric Drexler etc.
|
| I think Transport-triggered architecture
| (https://en.wikipedia.org/wiki/Transport_triggered_architectu...)
| is something still not fully explored.
| ranger_danger wrote:
| 9-bit bytes, 27-bit words... _middle_ endian.
|
| https://dttw.tech/posts/rJHDh3RLb
| sshine wrote:
| These aren't implemented in hardware, but they're examples of
| esoteric architectures:
|
| zk-STARK virtual machines:
|
| https://github.com/TritonVM/triton-vm
|
| https://github.com/risc0/risc0
|
| They're "just" bounded Turing machines with extra cryptography.
| The VM architectures have been optimized for certain
| cryptographic primitives so that you can prove properties of
| arbitrary programs, including the cryptographic verification
| itself. This lets you e.g. play turn-based games where you
| commit to a move/action without revealing it (cryptographic
| fog-of-war):
|
| https://www.ingonyama.com/blog/cryptographic-fog-of-war
|
| The reason why this requires a specialised architecture is that
| in order to prove something about the execution of an arbitrary
| program, you need to arithmetize the entire machine (create a set
| of equations that are true when the machine performs a valid
| step, where these equations also hold for certain derivatives of
| those steps).
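|
| A toy of the arithmetization idea (drastically simplified,
| nothing like a real STARK): a "machine" whose only legal step
| is x -> x*x + 1 over a prime field. A claimed execution trace
| is valid iff the transition equation vanishes on every
| adjacent pair; real systems prove that vanishing succinctly
| instead of re-checking each step.
|
|     P = 2**31 - 1  # prime field modulus
|
|     def step(x):
|         return (x * x + 1) % P
|
|     def constraint(x, x_next):
|         return (x_next - x * x - 1) % P  # zero iff step is valid
|
|     trace = [3]
|     for _ in range(5):
|         trace.append(step(trace[-1]))
|
|     assert all(constraint(a, b) == 0
|                for a, b in zip(trace, trace[1:]))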
| Paul-Craft wrote:
| No instruction set computer:
| https://en.wikipedia.org/wiki/No_instruction_set_computing
| elkekeer wrote:
| A multi-core Propeller processor by Parallax
| (https://en.wikipedia.org/wiki/Parallax_Propeller), in which
| the eight cores (called cogs) run in parallel and take turns
| accessing the shared hub: first the first cog gets a hub
| slot, then the second, then the third, etc.
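|
| A sketch of that rotation in Python (assumed timing, not
| cycle-accurate): the hub window visits each cog in fixed
| order, and a cog that isn't requesting when its slot comes
| around simply loses the slot.
|
|     def hub_arbiter(requests, n_cogs=8, slots=32):
|         grants = []
|         for t in range(slots):
|             cog = t % n_cogs        # the rotating hub window
|             if requests.get(cog):
|                 grants.append((t, cog))
|         return grants
|
|     print(hub_arbiter({0: True, 3: True, 7: True}))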
| Joker_vD wrote:
| IBM 1401. One of the weirdest ISAs I've ever read about, with
| basically human readable machine code thanks to BCD.
| jonjacky wrote:
| Yes, it had a variable word length - a number was a string of
| decimal digits of any length, with a terminator at the end,
| kind of like a C character string.
|
| Machine code including instructions and data was all printable
| characters, so you could punch an executable program right onto
| a card, no translation needed. You could put a card in the
| reader, press a button, and the card image would be read into
| memory and executed, no OS needed. Some useful utilities --
| list a card deck on the printer, copy a card deck to tape --
| fit on a single card.
|
| https://en.wikipedia.org/wiki/IBM_1401
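|
| A rough Python sketch of the variable-length flavor (my toy,
| not real 1401 semantics; on the real machine the word mark was
| a separate bit per character, not a character of its own):
|
|     def read_field(mem, addr):
|         # Collect digits until the word-mark terminator.
|         digits = []
|         while mem[addr] != "#":    # '#' stands in for the mark
|             digits.append(mem[addr])
|             addr += 1
|         return int("".join(digits)), addr + 1
|
|     mem = list("00042#00958#")
|     a, nxt = read_field(mem, 0)
|     b, _ = read_field(mem, nxt)
|     print(a + b)  # 1000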
| GistNoesis wrote:
| https://en.wikipedia.org/wiki/Unconventional_computing
|
| There is also soap-bubble computing, and various forms of
| annealing computing (like quantum annealing or adiabatic
| quantum computation), where you set up your computation so
| that its answer is the optimal state of a physical system you
| define.
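|
| The annealing idea in miniature (a generic simulated-annealing
| sketch, not any particular quantum scheme): encode the problem
| as an "energy" function and let a cooling random walk settle
| into a low-energy state.
|
|     import math, random
|
|     def energy(x):
|         return (x - 7) ** 2   # the "computation": find x = 7
|
|     x, temp = 0.0, 100.0
|     while temp > 1e-3:
|         cand = x + random.uniform(-1, 1)
|         delta = energy(cand) - energy(x)
|         # downhill always; uphill with Boltzmann probability
|         if delta < 0 or random.random() < math.exp(-delta / temp):
|             x = cand
|         temp *= 0.99          # cool down
|
|     print(round(x, 2))        # close to 7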
| joehosteny wrote:
| The PipeRench runtime-reconfigurable FPGA out of CMU:
|
| https://research.ece.cmu.edu/piperench/
| t312227 wrote:
| hello,
|
| great collection of interesting links - kudos to all! :=)
|
| idk ... but isn't the "general" architecture of most of our
| computers "von neumann"!?
|
| * https://en.wikipedia.org/wiki/Von_Neumann_architecture
|
| but what i miss from the various lists, is the
| "transputer"-architecture / ecosystem from INMOS - a concept of
| heavily networked arrays of small cores from the 1980ties
|
| about transputers
|
| * https://en.wikipedia.org/wiki/Transputer
|
| about INMOS
|
| * https://en.wikipedia.org/wiki/Inmos
|
| i got to take a look at a "real life" ATW - atari
| transputer workstation - back in the day at my university / CS
| department :))
|
| mainly used with the Helios operating-system
|
| * https://en.wikipedia.org/wiki/HeliOS
|
| to be programmed in occam
|
| * https://en.wikipedia.org/wiki/Occam_(programming_language)
|
| the "atari transputer workstation" ~ more or less a "smaller"
| atari mega ST as the "host node" connected to an (extendable)
| array of extension-cards containing the transputer-chips:
|
| * https://en.wikipedia.org/wiki/Atari_Transputer_Workstation
|
| just my 0.02EUR
| madflame991 wrote:
| > but isn't the "general" architecture of most of our computers
| "von neumann"!?
|
| That's something I was also curious about and it turns out
| Arduinos use the Harvard architecture. You might say Arduinos
| are not really "computers" but after a bit of googling I found
| an Apple II emulator running on an Arduino and, well, an Apple
| II is generally accepted to be a computer :)
| t312227 wrote:
| hello,
|
| if i remember it correctly, the main difference of the
| "harvard"-architecture is that it uses separate data and
| program/instruction buses ...
|
| * https://en.wikipedia.org/wiki/Harvard_architecture
|
| i think texas instruments 320x0 signal-processors used this
| architecture ... back in, you guessed it!, the 1980s.
|
| * https://en.wikipedia.org/wiki/TMS320
|
| ah, they use a modified harvard architecture :))
|
| * https://en.wikipedia.org/wiki/Modified_Harvard_architecture
|
| cheers!
| HeyLaughingBoy wrote:
| One of the most popular microcontroller series in history,
| Intel (and others) 8051, used a Harvard architecture.
| sshb wrote:
| This unconventional computing magazine came to my mind:
| http://links-series.com/links-series-special-edition-1-uncon...
|
| Compute with mushrooms, compute near black holes, etc.
| 29athrowaway wrote:
| The Soviet Union water integrator. An analog, water based
| computer for computing partial differential equations.
|
| https://en.m.wikipedia.org/wiki/Water_integrator
| dsr_ wrote:
| There are several replacements for electronic logic; some of them
| have even been built.
|
| https://en.wikipedia.org/wiki/Logic_gate#Non-electronic_logi...
| CalChris wrote:
| Intel's iAPX 432, begun in 1975. Instructions were bit-aligned;
| the machine was stack based, with 32-bit operations, segments,
| capabilities, .... It was so late and slow that the 16-bit 8086
| was created as a stopgap.
|
| https://en.wikipedia.org/wiki/Intel_iAPX_432
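|
| Bit-aligned decoding in a Python sketch (illustrative only;
| the 432's real formats ranged from 6 to 321 bits, and these
| field widths are made up):
|
|     class BitStream:
|         def __init__(self, data):
|             self.data, self.pos = data, 0   # pos is a bit offset
|         def take(self, n):
|             val = 0
|             for _ in range(n):
|                 byte, bit = divmod(self.pos, 8)
|                 val = (val << 1) | ((self.data[byte] >> (7 - bit)) & 1)
|                 self.pos += 1
|             return val
|
|     bs = BitStream(bytes([0b10110100, 0b01100000]))
|     opcode = bs.take(6)    # a 6-bit opcode...
|     operand = bs.take(5)   # ...then a 5-bit field, no byte boundary
|     print(opcode, operand)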
| wallstprog wrote:
| I thought it was a brilliant design, but it was dog-slow on
| hardware at the time. I keep hoping someone will revive the
| design on current silicon; it would be a good impedance match
| for modern languages and OSes.
| supercoffee wrote:
| I'm fascinated by the mechanical fire control computers of WW2
| battleships.
|
| https://arstechnica.com/information-technology/2020/05/gears...
| mikequinlan wrote:
| https://en.wikipedia.org/wiki/General_purpose_analog_compute...
| osigurdson wrote:
| I'm not sure what the computer architecture was, but I recall the
| engine controller for the V22 Osprey (AE1107) used odd formats
| like 11 bit floating point numbers, 7 bit ints, etc.
| CoastalCoder wrote:
| Why past tense? Does the Osprey no longer use that engine or
| computer?
| jareklupinski wrote:
| Physical Substrate: Carbon / Water / Sodium
|
| Computation Theory: Cognitive Processes
|
| Smallest parts: Neurons
|
| Largest parts: Brains
|
| Lowest level language: Proprietary
|
| First abstractions of programming: Bootstrapped / Self-learning
|
| Software architecture: Maslow's Theory of Needs
|
| User Interface: Sight, Sound
| theandrewbailey wrote:
| The big problem is that machines built using these technologies
| tend to be unreliable. Sure, they are durable, self-repairing,
| and can run for decades, but they can have a will of their own.
| While loading a program, there is a non-zero chance that the
| machine will completely ignore the program and tell you to go
| f*** yourself.
| variadix wrote:
| From the creator of Forth https://youtu.be/0PclgBd6_Zs
|
| 144 small computers in a grid that can communicate with each
| other
| AstroJetson wrote:
| Huge fan of the Burroughs Large Systems Stack Machines.
|
| https://en.wikipedia.org/wiki/Burroughs_Large_Systems
|
| They had an attached scientific processor to do vector and array
| computations.
|
| https://bitsavers.org/pdf/burroughs/BSP/BSP_Overview.pdf
___________________________________________________________________
(page generated 2024-07-30 23:01 UTC)