[HN Gopher] Stack Computers: the new wave (1989)
       ___________________________________________________________________
        
       Stack Computers: the new wave (1989)
        
       Author : JoachimS
       Score  : 60 points
       Date   : 2022-08-14 06:41 UTC (1 day ago)
        
 (HTM) web link (users.ece.cmu.edu)
 (TXT) w3m dump (users.ece.cmu.edu)
        
       | rwmj wrote:
        | The conventional view is that stack computers didn't scale
        | because they're very demanding of memory bandwidth - every
        | operation ends up reading 2 operands from memory and perhaps
        | writing one result back, which is of course exactly the wrong
        | thing because registers got much faster than RAM. But they're
        | also elegantly small. Are there any stack computers used today,
        | perhaps in systems where speed is not a problem but ultra small
        | size (and code size) is an advantage? I only see
        | https://en.wikipedia.org/wiki/ZPU_(microprocessor) (2008) - 442
        | LUTs(!).
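
        A minimal sketch of the memory-traffic point above, assuming a
        naive machine whose evaluation stack lives entirely in main
        memory (the operations and counters here are made up purely for
        illustration):

            # Toy memory-resident stack machine: every binary operation
            # costs two RAM reads and one RAM write.
            ram = [0] * 64          # stand-in for main memory
            sp = 0                  # stack pointer into ram
            reads = writes = 0

            def push(v):
                global sp, writes
                ram[sp] = v; sp += 1; writes += 1

            def pop():
                global sp, reads
                sp -= 1; reads += 1
                return ram[sp]

            def add():              # 2 reads + 1 write, all to RAM
                b, a = pop(), pop()
                push(a + b)

            # compute (1 + 2) + (3 + 4)
            push(1); push(2); add()
            push(3); push(4); add()
            add()
            print(pop(), "reads:", reads, "writes:", writes)

        A register machine would keep the operands and intermediates in
        registers and touch memory far less; real stack designs narrow
        the gap by keeping the top few cells on-die, as discussed further
        down the thread.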
        
         | dbcurtis wrote:
         | Memory bandwidth can be managed by all manner of
         | microarchitectural methods. The issue in the modern world is
         | that doing any kind of register renaming that would allow
         | issuing 4 or more instructions at a time gets convoluted very
         | quickly. If you could crack that, the code density would be
         | great. But in the end giving the compiler 32 or so
         | architectural registers and adding a renaming backend with 100+
          | microarchitectural registers is probably a good trade-off and
          | avoids gnarly long logic paths.
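
          A toy illustration of the renaming idea above (nothing here
          models a real pipeline; the 32-entry map and the unbounded
          free list are simplifications for the sketch):

              # Register renaming: each architectural destination gets a
              # fresh physical register, so false (WAW/WAR) dependences
              # vanish and independent instructions can issue together.
              from itertools import count

              phys = count()                 # endless supply of phys regs
              rename = {f"r{i}": next(phys) for i in range(32)}

              def issue(dst, src1, src2):
                  s1, s2 = rename[src1], rename[src2]  # current mappings
                  p = next(phys)                       # fresh destination
                  rename[dst] = p
                  print(f"{dst} := {src1},{src2} -> p{p} := p{s1},p{s2}")

              # both write r1, yet after renaming they are independent
              issue("r1", "r2", "r3")
              issue("r1", "r4", "r5")

          For a stack machine the equivalent step would have to rename
          implicit stack positions rather than named registers, which is
          presumably the part that gets convoluted.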
        
         | throwaway81523 wrote:
          | Look at, for example, the GA144 and similar tiny, deeply
          | embedded cores like http://bernd-paysan.de/b16.html , where the
          | stack is implemented as a register file. Those cores are
          | actually fast, but limited in capability and a nuisance to
          | program.
        
         | cmrdporcupine wrote:
         | Wouldn't a stack machine also present difficulties for branch
          | prediction and out-of-order execution techniques?
         | 
         | I have no proof of this and I am not a processor designer. Just
         | a hunch that this kind of analysis would be harder to perform
         | on a stack oriented program flow.
        
         | LAC-Tech wrote:
          | Couldn't the stack just be made of the same super-fast memory
          | that processor registers are made of?
        
           | p1necone wrote:
           | I assume part of the problem is the physical distance from
           | the CPU hard limiting speed due to the speed of light. 3d
           | stacking of memory might enable a design with a large enough
           | stack close enough to the CPU? I don't really know what I'm
           | talking about though.
        
         | amelius wrote:
         | > Which is of course exactly the wrong thing because registers
         | got much faster than RAM.
         | 
         | Why not cache the top N words then?
         | 
         | Stacks are local to a CPU, so you don't have to deal with
         | coherence.
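
          For what it's worth, a toy model of the "cache the top N words"
          idea (purely illustrative; some real stack designs keep the top
          cells in an on-chip buffer and spill/fill it in hardware):

              # Top-of-stack cache: the top N cells live in a small
              # on-chip buffer; only overflow/underflow touches memory.
              N = 4
              buf = []        # on-chip buffer, holds at most N cells
              mem = []        # spill area in main memory
              mem_ops = 0

              def push(v):
                  global mem_ops
                  if len(buf) == N:        # full: spill oldest cell
                      mem.append(buf.pop(0))
                      mem_ops += 1
                  buf.append(v)

              def pop():
                  global mem_ops
                  if not buf and mem:      # empty: refill from memory
                      buf.append(mem.pop())
                      mem_ops += 1
                  return buf.pop()

              for i in range(8):
                  push(i)
              print([pop() for _ in range(8)], "memory ops:", mem_ops)

          The hope is that ordinary expression evaluation rarely
          overflows the buffer, so most operations never touch RAM at
          all.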
        
           | nine_k wrote:
            | It would be fun to imagine an architecture that would be
            | needed for OoO / speculative execution in the presence of an
            | on-die stack (say, a few KB of SRAM).
            | 
            | Would it be harder than register renaming? What kind of data
            | structure would it take?
        
         | rahen wrote:
          | When Burroughs started experimenting with stack processors in
          | their B5000 computer, they didn't do it for code size but first
          | and foremost for the ability to use high-level languages, going
          | as far as to hide the assembly language from the user.
          | Compilers are fairly straightforward to write for such an
          | architecture, and can produce reasonably fast machine code in
          | one pass. A register-based CPU makes this considerably more
          | complex, and is indeed a more performant but less elegant
          | approach than a stack-based CPU.
          | 
          | This later inspired HP for the HP 3000 and then its RPN
          | calculators, as well as other, lesser-known machines. Nowadays
          | I think I've only seen this with the JVM (to reduce the IL
          | size) and a couple of MCUs.
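
          The one-pass point is easy to see in miniature: code generation
          for a stack machine is just a post-order walk of the
          expression, with no register allocation pass (toy example,
          hypothetical opcodes):

              # Emit stack code for an expression tree in a single pass.
              def emit(node):
                  if isinstance(node, int):
                      print("PUSH", node)
                  else:
                      op, lhs, rhs = node
                      emit(lhs)          # code for the left operand
                      emit(rhs)          # code for the right operand
                      print({"+": "ADD", "*": "MUL"}[op])

              # (1 + 2) * (3 + 4)
              emit(("*", ("+", 1, 2), ("+", 3, 4)))

          A register target needs at least one more decision per value --
          which register it lives in -- before code can be emitted, which
          is where the single pass gets harder.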
        
       | manholio wrote:
       | I wonder how soon we will have microarchitectures that are purely
       | the result of some deep learning algorithms optimizing for
       | performance and compilability of high level languages. We would
       | be able to prove them correct - that they correctly execute the
        | code and have no exploitable bugs - but we would have almost no
       | idea how exactly they achieve said performance.
        
       | buescher wrote:
       | Koopman's more recent "Better Embedded System Software" is a
       | great overview of its subject.
        
       | pohl wrote:
       | This reminds me that it's been a while since I've checked in on
       | the belt machine that Ivan Godard & friends were working on at
       | Mill Computing.
       | 
        | It looks like their website has changed a bit since I last looked:
       | https://millcomputing.com/
        
       | metalforever wrote:
       | I had this guy as a professor and he was great.
        
         | LAC-Tech wrote:
         | Would love to hear more about this.
         | 
         | The stack machines book is the only computer architecture book
         | I've read cover to cover.
        
       | bmitc wrote:
        | Does anyone know the state of the art of stack and symbolic
        | CPUs?
        | 
        | I'm reading a few books that address stack computers and also
        | symbolic CPUs, ones which can run things like Forth, Prolog, and
        | Lisp directly on the CPU (or at least more directly than via
        | abstractions sitting on top of von Neumann architectures). The
        | ideas in the books, such as those in _A High Performance
        | Architecture for Prolog_, seem important and useful, but it's
        | hard to tell whether they're no longer useful today or whether
        | they simply fell out of favor or interest.
        | 
        | With FPGAs, ASICs, RISC-V, etc., it seems like ripe territory
        | for custom CPU development, but about the only thing I know of
        | along these lines is the GreenArrays GA144 chip.
       | 
       | https://www.greenarraychips.com/
        
         | abecedarius wrote:
         | 25 years old, but http://bernd-paysan.de/4stack.html sounded
         | interesting at the time.
        
       | dang wrote:
       | Related:
       | 
       |  _Stack Computers: the new wave (1989)_ -
       | https://news.ycombinator.com/item?id=12237539 - Aug 2016 (28
       | comments)
       | 
       |  _Stack Computers: the new wave (1989)_ -
       | https://news.ycombinator.com/item?id=8657654 - Nov 2014 (3
       | comments)
       | 
       |  _Stack Computers: the new wave (1989)_ -
       | https://news.ycombinator.com/item?id=4620423 - Oct 2012 (37
       | comments)
        
       | digdugdirk wrote:
       | Not sure if this quite counts for what I'm asking, but:
       | 
       | Is there a good list of "fundamentally new" types of computing
       | paradigms over time? I'd love to see what paradigms branched off
       | of others, what reached dead ends and why, etc.
        
         | 7thaccount wrote:
          | I've seen a chart like this before. For example, APL started
          | the array family of APL, J, A+, K, Kona, etc. Algol started
          | the procedural family of Algol, Fortran, Basic, etc. Lisp
          | started the family of Lisp languages, and so on. The taxonomy
          | is convoluted and not quite that neat, though.
        
       ___________________________________________________________________
       (page generated 2022-08-15 23:01 UTC)