[HN Gopher] 60-Bit Computing (2022)
       ___________________________________________________________________
        
       60-Bit Computing (2022)
        
       Author : klelatti
       Score  : 63 points
       Date   : 2024-02-03 10:01 UTC (1 day ago)
        
 (HTM) web link (thechipletter.substack.com)
 (TXT) w3m dump (thechipletter.substack.com)
        
       | MeteorMarc wrote:
       | 60 bits must have been for historic reasons:
       | https://en.m.wikipedia.org/wiki/Sexagesimal
        
         | pekim wrote:
         | The article explains the choice of 60 bits, which was indeed
         | because of the many factors of 60.
         | 
         | "60 is a multiple of 1, 2, 3, 4, 5, and 6. Hence bytes of
         | length from 1 to 6 bits can be packed efficiently into a 60-bit
         | word without having to split a byte between one word and the
         | next. If longer bytes were needed, 60 bits would, of course, no
         | longer be ideal. With present applications, 1, 4, and 6 bits
         | are the really important cases."
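          | 
          | As a minimal sketch of that packing (in C, an illustration
          | rather than anything from the article), ten 6-bit characters
          | fit exactly into a 60-bit word held in the low bits of a
          | uint64_t:
          | 
          |     #include <stdint.h>
          |     
          |     /* Pack ten 6-bit characters into one 60-bit word
          |        (the low 60 bits of a uint64_t). */
          |     uint64_t pack60(const uint8_t chars[10]) {
          |         uint64_t word = 0;
          |         for (int i = 0; i < 10; i++)
          |             word = (word << 6) | (chars[i] & 0x3F);
          |         return word;
          |     }
          |     
          |     /* Extract character i (0 = leftmost) again. */
          |     uint8_t unpack60(uint64_t word, int i) {
          |         return (word >> (6 * (9 - i))) & 0x3F;
          |     }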
        
           | Karellen wrote:
           | > If longer bytes were needed, 60 bits would, of course, no
           | longer be ideal.
           | 
           | That seems like a weird conclusion for the engineers to come
           | to, because 60 is also a multiple of 10, 12, 15, 20, 30 and
           | (of course) 60 (the complement factors of 1, 2, 3, 4, 5 and
            | 6), so a selection of longer bytes is very obviously
            | available.
           | 
           | Given that they balked at the cost of moving up to 64-bit
           | words, I guess moving up to 420-bit computing (for those
           | 7-bit bytes) was out of the question ;-)
        
              | zamadatix wrote:
              | I think it's more a remark that, of the first half-dozen
              | byte sizes, all six fit evenly, but of the next dozen only
              | three do. That makes the divisibility not hold much value
              | in the end, particularly since six bits isn't enough to
              | hold dual-case alphanumerics and 3 types of punctuation.
        
         | flohofwoe wrote:
         | It's explained in the article. But the reason is partially the
         | same:
         | 
         | "60 is a multiple of 1, 2, 3, 4, 5 and 6. Hence bytes of length
         | 1 to 6 bits can pe packed efficiently into a 60-bit word...
        
           | drewcoo wrote:
           | > the reason is partially the same
           | 
            | We do not know the actual reasons for the Sumerian number
            | system. The best we have is post hoc claims and just-so
            | stories.
           | 
           | There is no indication that early computer science was
           | beholden to the Babylonian number system "for historical
           | reasons."
        
              | flohofwoe wrote:
              | The large number of factors seems like the reason most
              | compatible with Occam's razor in both cases. That doesn't
              | mean the computer designers copied the idea from the
              | Sumerians.
        
         | p_l wrote:
          | It was also common to use 5-bit characters (in the form of
          | some variation of Baudot coding) - the 6-bit characters were
          | essentially an early extension of it.
        
           | timbit42 wrote:
           | Which computer systems used 5-bit characters?
        
             | PaulHoule wrote:
             | https://en.wikipedia.org/wiki/Baudot_code
        
               | timbit42 wrote:
               | I don't see any computers mentioned in that article. I
               | don't consider a tele-printer to be a computer.
        
               | p_l wrote:
                | But early computers _used_ teleprinters as I/O, and that
                | drove their character sets.
        
               | timbit42 wrote:
               | Which computers?
        
               | actionfromafar wrote:
               | If you must:
               | 
               | https://dl.acm.org/doi/pdf/10.5555/1074100.1074162
        
             | p_l wrote:
              | Everything early on that used available Baudot-code
             | teletypewriters for input/output.
        
               | timbit42 wrote:
                | I'm familiar with Baudot, but teletypewriters aren't
               | computers.
        
               | monocasa wrote:
               | They were how you interfaced with them though. Either
               | directly or indirectly through paper tape.
        
               | timbit42 wrote:
               | Which computers?
        
               | monocasa wrote:
                | For one example among many, the original Bendix G-15's
                | paper tape reader (the PR-1) only supported 5-bit paper
                | tape, with the later PR-2 supporting 5- to 8-bit tape.
               | 
               | http://www.bitsavers.org/pdf/bendix/g-15/T29_PR-1_Tech_Bu
               | lle...
        
       | rahen wrote:
       | Standardization on 8-bit multiples only really occurred with the
       | IBM 360 in the late 1960s.
       | 
        | Before that, word sizes that were multiples of six bits were
        | very common, with plenty of 12-, 18- and 36-bit machines (every
        | PDP besides the 11, the CDC 160, UNIVAC, early IBMs)...
       | 
       | Some also had variable word length like the IBM 1401 and 1620.
        
         | forinti wrote:
          | There were Dutch computers, made by Electrologica, with weird
          | combinations like 27-bit registers and 15-bit address lines.
        
           | p_l wrote:
            | A lot of older computers were "bit-serial", i.e. the ALU
            | processed "bytes" bit by bit. This fit the parts available:
            | everything used to build computers was also 1-bit wide
            | (because VLSI wasn't a thing yet).
            | 
            | The MIT Whirlwind is in some ways the ancestor of all modern
            | computers because it switched to a bit-parallel
            | architecture, handling multiple bits at a time.
            | 
            | A 36-bit word length became common for scientific
            | applications because it was apparently considered enough to
            | do the equivalent of 10-digit precision in fixed-point
            | arithmetic, hence the common pattern of 36 bits among
            | American scientific computer lines.
            | 
            | Later, the common availability of 4-bit components like the
            | 74xx families meant that multiples of 4 bits started being
            | common.
            | 
            | 4 bits also let you handle calculations in BCD form, which
            | had obvious use cases in business. That's partially why the
            | S/360, which attempted to consolidate IBM's scientific and
            | business lines, went with a 32-bit word length: 8 bits
            | comfortably held 2 BCD digits and would also fit one
            | character of the then-upcoming ASCII standard. (The reason
            | EBCDIC survives in S/360 is that ASCII was not yet ready,
            | and IBM decided to extend their existing codes rather than
            | adopt an unfinished standard that could still change - and
            | they had to hardcode it in some devices back then.)
            | 
            | The final nail in the coffin of 36-bit and longer word
            | lengths was floating point becoming fast enough - and
            | arguably the infighting between the VAX and PDP-10 groups at
            | DEC, which among other things triggered the existence of the
            | BSD sockets API and its horrible legacy on networking.
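            | 
            | As a minimal sketch of what "bit-serial" means (in C, an
            | illustration, not any particular machine's logic): addition
            | is clocked through one bit at a time, with a single-bit
            | carry latch, instead of all bits at once:
            | 
            |     #include <stdint.h>
            |     
            |     /* Bit-serial addition: one bit per "cycle", full-adder
            |        logic, carry held over to the next bit. */
            |     uint64_t serial_add(uint64_t a, uint64_t b, int width) {
            |         uint64_t sum = 0;
            |         unsigned carry = 0;
            |         for (int i = 0; i < width; i++) {
            |             unsigned x = (a >> i) & 1, y = (b >> i) & 1;
            |             sum |= (uint64_t)(x ^ y ^ carry) << i;
            |             carry = (x & y) | (carry & (x ^ y));
            |         }
            |         return sum;
            |     }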
        
             | rahen wrote:
              | The first generation of computers, using vacuum tubes, was
              | indeed usually bit-serial. Some had parallel ALUs, or
              | sometimes just parallel multiplier units, but an entirely
              | parallel computer built with vacuum tubes would have been
              | very power-hungry and unreliable.
             | 
              | Some notable exceptions are ENIAC and Whirlwind I, which
              | indeed were fully parallel.
             | 
             | As for 36-bit machines, they were also popular because of
             | LISP, as a cons cell was 2x18 bits. I think this influenced
             | the design of the PDP-6 and 10.
             | 
              | And yeah, I also wish DEC hadn't killed the PDP-10 for the
              | VAX, but that's another story.
        
               | p_l wrote:
                | 36 bits for Lisp was specific to the PDP-6/10, not
                | anything else.
                | 
                | And Lisp in turn took its "memory word is a cons cell
                | because you can fit two addresses in one word" from a
                | 36-bit IBM scientific computer with 15-bit addressing.
        
               | lispm wrote:
                | The Symbolics 3600 series were also 36-bit machines.
                | The later Symbolics Ivory microprocessor was a 40-bit
                | CPU.
        
               | p_l wrote:
                | Yes, but in both cases the motivation was to have 32-bit
                | fixnums and addresses, so the word size was extended
                | with extra bits for tags. The predecessor design, the
                | CADR, used a 32-bit bus and had to stuff the tags and
                | fixnums/addresses together - resulting in a 24-bit
                | fixnum/address space, or 3 8-bit characters stuffed into
                | each word, plus 4 bits of tags. The TI Explorer later
                | updated the CADR/LMI design to use a 25-bit
                | fixnum/address and 3 mandatory tag bits.
                | 
                | For people who don't know:
                | 
                | Tag space included CDR-coding (a compression mechanism
                | for linked lists) and datatype tags, and also formed
                | part of the instructions if the word contained
                | instructions - on the 3600 and Ivory each word could
                | contain 1 or 2 instructions.
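                | 
                | A rough C sketch of the CADR-style layout described
                | above (a 24-bit fixnum/address field plus tag bits in a
                | 32-bit word; the exact field positions here are
                | illustrative, not the real CADR encoding):
                | 
                |     #include <stdint.h>
                |     
                |     #define TAG_SHIFT  24
                |     #define VAL_MASK   0x00FFFFFFu
                |     
                |     typedef uint32_t lispword;
                |     
                |     /* Tag bits live above the 24-bit value field. */
                |     unsigned tag_of(lispword w)   { return w >> TAG_SHIFT; }
                |     uint32_t value_of(lispword w) { return w & VAL_MASK; }
                |     lispword make_word(unsigned tag, uint32_t v) {
                |         return ((lispword)tag << TAG_SHIFT) | (v & VAL_MASK);
                |     }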
        
               | jhallenworld wrote:
                | Bit-serial made a lot of sense because the memory was
                | often serial too (drum memory, or the acoustic delay
                | lines used in 1960s calculators).
               | 
               | >And yeah, I also wished DEC didn't kill the PDP-10 for
               | the VAX, but that's another story.
               | 
                | The DEC-20 was 6x faster than the VAX-11/780 on at least
                | one benchmark (one that I wrote when I had access to
                | both in the 80s..). The DEC-20 was a PDP-10 implemented
                | in ECL; it was quite fast.
        
           | rahen wrote:
           | Ah, yes, there were even ternary computers, using three
           | states (1, -1 and 0) instead of two.
        
             | hnlmorg wrote:
             | In the UK we had Dekatrons that could come in a variety of
             | different bases:
             | 
             | > While most dekatrons are decimal counters, models were
             | also made to count in base-5 and base-12 for specific
             | applications.
             | 
             | Source: https://en.m.wikipedia.org/wiki/Dekatron
             | 
              | I've personally played on the Harwell Dekatron; it's a fun
              | machine:
             | 
             | https://www.tnmoc.org/first-generation-gallery
        
           | jacquesm wrote:
            | Aka Dijkstra's baby. Those machines were the predecessors of
            | our modern CPUs in many ways; a lot of concepts were
            | pioneered on them. There is even a direct line from that
            | machine, via THE and Multics, to Unix.
        
         | drewcoo wrote:
         | > Standardization on 8-bit multiples only really occurred with
         | the IBM 360 in the late 1960s.
         | 
         | It's not "standardization" when only one model from one company
         | is doing it.
         | 
         | The 8-bit byte was common in microcomputers and as they grew to
         | dominate people's programming, you could claim it became some
         | kind of de facto standard in the 70s or 80s. The de facto
         | standard was documented by a standards body in the 90s. FWIW,
         | in a CS program in the 90s, I was taught that 8-bit was most
         | common, but not "standard."
         | 
         | https://en.wikipedia.org/wiki/Byte#cite_note-ISO_IEC_2382-1_...
         | 
          | Word length was even less consistent across architectures,
          | but again, we ended up setting current definitions based on
          | microcomputer use.
        
           | TheOtherHobbes wrote:
           | Micros were 8 bits because that's the sweet spot in the trade
           | off between usefulness, complexity, and expense.
           | 
           | With 8 bits you have a reasonable instruction space, simple
           | access to a 16-bit address space, and a good platform for
           | ASCII.
           | 
            | It's a minimum viable CPU - powerful enough to get non-
            | trivial general purpose work done, easy to package into
            | VLSI, and cheap enough to sell in quantity.
           | 
           | But the design went in the opposite direction. The original
           | micro - in terms of market positioning - was the PDP-8, and
           | that was 12-bits. The 16-bit PDP-11s were a step up from the
           | 8, and the 8080 etc were a step back down.
        
             | couchand wrote:
             | Can you help me out with your last paragraph? My
             | understanding was that the PDP-8 was squarely a mini, not a
             | micro.
        
               | timbit42 wrote:
                | The DEC PDP-8 is a minicomputer, but because it was so
                | small and inexpensive, it was often used as a personal
                | computer. Microcomputers were also personal computers,
                | especially in the beginning, so perhaps there is some
                | conflation there.
        
               | PaulHoule wrote:
                | It was a much smaller machine than the PDP-11 and the
                | PDP-10. In 1981 Manchester Memorial High School had a
                | PDP-8 with a large printing terminal, two unusual video
                | terminals (a bit smarter than the common VT-100)
                | connected via "current loop", and two 8-inch floppy
                | drives for storage.
               | 
               | That machine could be brought up in single user or
               | multitasking modes: it could support 1, 2 or 3 people
               | programming BASIC or run larger applications like
               | 
               | https://en.wikipedia.org/wiki/Colossal_Cave_Adventure
               | 
               | in single-user mode on the printing terminal.
               | 
                | A bigger school, like one in Milford, NH, had a PDP-11
                | with about 20 terminals and an OS called RSTS/E that was
                | mainly used to let people log in and use BASIC the way
                | they would on a Commodore PET or TRS-80, except it was
                | timesharing; it had tape storage and some kind of
                | primitive hard drive.
               | 
               | By the late 1980s Manchester Memorial got a VAX-11 which
               | was similar to the 386 in architecture and ran an OS very
               | much like Linux. They would teach you to program in
               | PASCAL using the compiler on the command line, there were
               | maybe 8 terminals in the classroom and then another 8 or
               | so terminals in other parts of the school to run
               | administrative applications.
               | 
               | DEC never found its footing in a microprocessor world. It
               | is not difficult to make a PDP-8 on FPGA
               | 
               | https://hackaday.io/project/179357-pdp-8-fpga
               | 
               | and Digital had made single chip "microprocessor"
               | versions of the PDP-11 and the VAX-11. DEC could have
               | made a machine based on a single-chip PDP-11 that would
               | have competed with the IBM Personal Computer in some ways
               | but would have faced the problem that user space memory
               | addresses were 16-bit. For a "personal" computer you want
               | to run 1 big BASIC and not 10 little BASICs. The IBM PC
               | had a terribly hackish way to access a 640k user space
               | but developers thought it was worth the hassle.
               | 
                | Similarly, other vendors like Motorola, Intel, and
                | National Semi were trying to develop a model for
                | "32-bit" machines similar to the VAX; one can imagine
                | that Digital might have aggressively transitioned the
                | VAX line to "microcomputers". A VAX workstation would
                | have trounced an IBM PC AT. Maybe Apple would have
                | switched to VAX when
               | an IBM PC AT. Maybe Apple would have switched to VAX when
               | Motorola came apart and Intel might be remembered as a
               | RAM manufacturer.
        
               | jhallenworld wrote:
               | Bitsavers has a bunch of competitive analysis documents
               | from 1980s DEC that are interesting to read:
               | 
               | http://www.bitsavers.org/pdf/dec/competitiveAnalysis/
               | 
               | You can feel the desperation..
        
               | jhallenworld wrote:
               | OS/8 was a single user OS (like RT11), so it often was
               | used as a personal computer. It inspired CP/M.
               | 
                | 6-bit chars were popular on this machine since you could
                | fit two in one 12-bit word, and uppercase-only was a
                | thing. Strings in BASIC used 6-bit chars, but you could
                | PRINT all of ASCII (especially control characters) with
                | a special PNT() function (only usable as a PRINT
                | argument). If you look at the available OS/8 images,
                | you'll find "business BASIC" - at least one change
                | compared to regular OS/8 BASIC was support for 8-bit
                | strings.
               | 
                | Floppies used 8-bit bytes, so you wasted 4 bits out of
                | every two bytes when storing 12-bit words in image
                | files.
        
               | rahen wrote:
                | To be very pedantic, CP/M was mostly inspired by
                | TOPS-10, which also inspired OS/8, RT-11, and even
                | PC-DOS.
        
               | timbit42 wrote:
               | Hmm. My notes suggest TOPS-10 inspired Data General's
               | RDOS, which inspired CP/M.
        
               | NikkiA wrote:
               | Here's a small 'micro' PDP-8:
               | 
                | https://en.wikipedia.org/wiki/DECmate#/media/File:VT78.jpg
                | 
                | Also, the VT78's precursor was the PDP-8/f, which was
                | little bigger than an Altair - definitely in the
                | microcomputer camp.
        
           | timbit42 wrote:
           | > It's not "standardization" when only one model from one
           | company is doing it.
           | 
           | It is when that company is IBM and the model is their new
           | platform that all of their future systems will be based on.
           | They were a juggernaut back in those days, more than
           | Microsoft is today.
        
         | shrubble wrote:
         | 6 bits was used to represent alphanumeric characters prior to
         | ASCII as well.
        
         | miohtama wrote:
          | Is there any special reason why 8-bit won, or is it just a
          | compromise between 6, 8, 10, 12, etc.?
        
           | PaulHoule wrote:
            | In some sense 7 bits won, since ASCII used only 7 bits,
            | leaving the question of whether you leave the high bit zero,
            | use it for parity, pack 7-into-8 for long strings, etc.
            | (Sometimes in serial comms in the old days we really did
            | send 7-bit characters.)
           | 
            | You could pack upper- and lowercase Roman letters and
            | numerals for a total of 62 characters and then have room for
            | a period and a space, and you're full. With only uppercase
            | you can fit a decent set of symbols; see the SIXBIT encoding
           | 
           | https://rabbit.eng.miami.edu/info/decchars.html
           | 
           | that Digital used for file names back when there was a
           | computer industry on the US East Coast. If you wanted
           | something general purpose you'd need to add some kind of
           | shift to get lower case characters if not additional symbols
            | and control characters, but now the meaning of the
            | characters depends on state, which might have been seen as a
            | burden on the peripherals of the day and is certainly more
            | of a hassle to program. Note it was seen as decisive that
            | IBM's 360 used 8-bit characters that could have fit the
            | ASCII code, but they went with the 8-bit
           | 
           | https://en.wikipedia.org/wiki/EBCDIC
           | 
           | code instead because it was easier to make compatible with
           | older keypunch machines.
           | 
            | 8 bits is already looking like overkill; I think the control
            | characters were never really properly utilized in 7 bits.
            | 8 bits lets you fit some combination of extra symbols,
            | modified Roman letters used in other European languages, and
            | such. You can't cover every European language in 8 bits
            | (which is why there were different code pages), but you
            | could get most Latin variants and probably some other sets
            | like Greek, Cyrillic, even Japanese kana, in 10 bits - but
            | hardly anyone wants to use _all_ those characters at the
            | same time.
           | 
           | If you had a sixty bit computer you might also consider a
           | 5-bit code set like
           | 
           | https://en.wikipedia.org/wiki/Baudot_code
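            | 
            | For the "pack 7-into-8" option, a minimal C sketch (an
            | illustration, not a historical routine): eight 7-bit ASCII
            | characters squeeze into exactly seven 8-bit bytes by
            | streaming bits through a small accumulator:
            | 
            |     #include <stdint.h>
            |     
            |     /* Pack eight 7-bit chars into seven 8-bit bytes. */
            |     void pack7(const char in[8], uint8_t out[7]) {
            |         uint32_t acc = 0;
            |         int bits = 0, o = 0;
            |         for (int i = 0; i < 8; i++) {
            |             acc = (acc << 7) | (in[i] & 0x7F);
            |             bits += 7;
            |             while (bits >= 8) {   /* a full byte is ready */
            |                 bits -= 8;
            |                 out[o++] = (acc >> bits) & 0xFF;
            |             }
            |         }
            |     }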
        
             | radiator wrote:
             | Wow, in your last link about Baudot's code we read: _In
             | 1876, he changed from a six-bit code to a five-bit code, as
             | suggested by Carl Friedrich Gauss and Wilhelm Weber in
              | 1834_. I have seen references to Gauss's name in so many
              | places -- at some point I should learn not to be
              | surprised anymore.
        
               | jacquesm wrote:
               | The total scientific establishment in the world in those
               | days was a few hundred people at best, and the output of
               | the most prolific ones was very impressive. These people
               | were true polymaths, usually equally comfortable with the
               | state of the art of mathematics, biology, physics and
               | astronomy of their age.
        
           | Someone wrote:
            | Possibly just because it's the smallest size that supports
            | 'enough' characters for English and was the cheapest useful
            | one at a time when memory was expensive.
           | 
           | 6 bits is too few for a lower- and uppercase alphabet, digits
           | and punctuation.
           | 
            | 7 bits could have worked (see: ASCII), but I guess 8 won
            | because 7 would be an odd (no pun intended) choice and
            | because of the popularity of the VAX.
        
             | NikkiA wrote:
              | 6 could have worked too; we'd just need an uppercase
              | prefix and let the terminal handle it. In fact, given
              | normal English case rules, I'm surprised that we devoted a
              | whole extra 26 characters to uppercase when bit rate was
              | at a premium.
        
               | Someone wrote:
               | When bit rate was really at a premium, we didn't.
               | https://en.wikipedia.org/wiki/Baudot_code#Character_set:
               | 
               |  _"The cells marked as reserved for extensions (which use
               | the LS code again a second time--just after the first LS
               | code--to shift from the figures page to the letters shift
               | page) has been defined to shift into a new mode. In this
               | new mode, the letters page contains only lowercase
               | letters, but retains access to a third code page for
               | uppercase letters, either by encoding for a single letter
               | (by sending LS before that letter), or locking (with
               | FS+LS) for an unlimited number of capital letters or
               | digits before then unlocking (with a single LS) to return
               | to lowercase mode."_
               | 
               | In computing, using such 'shift' codes complicates
               | programming, though, as it makes it hard to compute
               | string length or to index into a string (similar to the
               | problem with UTF-8). Worse, if you see a given code
               | sequence in a larger sequence, you may have to go
               | arbitrarily far back in the input to figure out what it
               | means.
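                | 
                | A toy C sketch of that indexing problem (the shift codes
                | here are made up, not real Baudot): finding the k-th
                | printable character has no O(1) formula, you have to
                | scan past the shifts from the start.
                | 
                |     #include <stddef.h>
                |     #include <stdint.h>
                |     
                |     #define LS 0x1B  /* hypothetical letters shift */
                |     #define FS 0x1F  /* hypothetical figures shift */
                |     
                |     /* Stream offset of printable character k, or -1. */
                |     long index_of(const uint8_t *c, size_t n, size_t k) {
                |         for (size_t i = 0; i < n; i++) {
                |             if (c[i] == LS || c[i] == FS)
                |                 continue;       /* mode change only */
                |             if (k-- == 0)
                |                 return (long)i; /* k-th printable */
                |         }
                |         return -1;
                |     }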
        
           | timbit42 wrote:
            | 6-bit was common previously, and then ASCII was 7-bit, but
            | that's an odd number, so 8 made the most sense - and 8 is a
            | power of 2, which makes it even better than 6, which isn't.
        
           | rogerbinns wrote:
            | Binary Coded Decimal was also popular - e.g. most processors
            | included instructions for operating that way. You need 4
            | bits for a BCD digit, so a multiple of 4 bits is convenient.
           | 
           | https://en.wikipedia.org/wiki/Binary-coded_decimal
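            | 
            | A minimal sketch of that 4-bits-per-digit layout in C
            | (packed BCD, one decimal digit per nibble, so a byte holds
            | exactly two digits):
            | 
            |     #include <stdint.h>
            |     
            |     /* 0..99 -> packed BCD: tens in the high nibble. */
            |     uint8_t to_bcd(uint8_t n) {
            |         return (uint8_t)(((n / 10) << 4) | (n % 10));
            |     }
            |     
            |     /* packed BCD -> binary again. */
            |     uint8_t from_bcd(uint8_t b) {
            |         return (uint8_t)((b >> 4) * 10 + (b & 0x0F));
            |     }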
        
             | rahen wrote:
             | Some early computers even had a dual-mode (binary / BCD)
             | ALU:
             | 
             | https://en.wikipedia.org/wiki/Bull_Gamma_3
        
               | teo_zero wrote:
               | The glorious 6502 had a flag to perform all arithmetic
               | operations in BCD.
        
               | Animats wrote:
               | IBM mainframes still do. It's used by COBOL.[1]
               | 
               | [1] https://www.ibm.com/docs/en/cobol-
               | zos/6.4?topic=arithmetic-s...
        
           | Animats wrote:
           | Because of the IBM System/360 unified architecture.
           | 
           | Before the IBM System/360, there were "scientific" computers,
           | which were word-oriented and used binary arithmetic, and
           | "business" computers, which were character-oriented and used
           | decimal arithmetic. The IBM System/360 unified the product
           | lines with a standardized architecture.
        
           | o11c wrote:
            | Note that non-power-of-2 byte sizes mean it's impossible to
            | use mere bitwise operations when computing bit indices.
            | Instead, you have to use integer division (ick!).
           | 
           | There are probably lots of similar problems; this is just one
           | that I thought of while writing recent code.
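            | 
            | Concretely (a sketch, in C): with 8-bit bytes the
            | byte/offset split is a shift and a mask, while 9-bit bytes
            | force a real divide and modulo:
            | 
            |     #include <stdint.h>
            |     
            |     /* 8-bit bytes: shift and mask suffice. */
            |     void split8(uint64_t bit, uint64_t *byte, unsigned *off) {
            |         *byte = bit >> 3;            /* bit / 8 */
            |         *off  = (unsigned)(bit & 7); /* bit % 8 */
            |     }
            |     
            |     /* 9-bit bytes: no power of two, so integer division. */
            |     void split9(uint64_t bit, uint64_t *byte, unsigned *off) {
            |         *byte = bit / 9;
            |         *off  = (unsigned)(bit % 9);
            |     }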
        
         | gumby wrote:
          | Also variable _byte_ length. The 36-bit PDP-6 and the quite
          | popular PDP-10 allowed you to address a byte (of length 1-36
          | bits) in memory; incrementing that address got you the next
          | byte. This allowed for very efficient packed arrays of small
          | values that could easily be used in assembly code.
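          | 
          | A rough C model of that addressing scheme (a simplified byte
          | pointer: word address plus bit position and byte size; this is
          | an illustration, not the exact PDP-10 byte-pointer format):
          | 
          |     #include <stdint.h>
          |     
          |     /* Which word, bit position from the left of the 36-bit
          |        word, and byte size in bits. */
          |     typedef struct { uint64_t addr; unsigned pos, size; } bp;
          |     
          |     /* Load the designated byte from a 36-bit word stored in
          |        the low bits of a uint64_t. */
          |     uint64_t load_byte(const uint64_t *mem, bp p) {
          |         unsigned shift = 36 - p.pos - p.size;
          |         return (mem[p.addr] >> shift) & ((1ULL << p.size) - 1);
          |     }
          |     
          |     /* Advance to the next byte, moving to the next word when
          |        the rest of this word can't hold a full byte. */
          |     bp next_byte(bp p) {
          |         p.pos += p.size;
          |         if (p.pos + p.size > 36) { p.pos = 0; p.addr++; }
          |         return p;
          |     }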
        
           | jhallenworld wrote:
            | ASCII was often 7 bits on these machines, since you could
            | efficiently pack 5 characters into one 36-bit word.
            | 
            | But C used 9 bits for chars, for better compatibility
            | between word and char pointers.
        
             | gumby wrote:
             | That's why FTP has character and binary modes, which
             | doesn't make a difference for 8-bit machines.
        
         | kps wrote:
          | Early computer word sizes varied widely:
          | https://ed-thelen.org/comp-hist/BRL61table03.html
          | (_A Third Survey of Domestic Electronic Digital Computing
          | Systems_, 1961)
        
       | dtaht wrote:
       | When I saw the title of this post, I was thinking it was going to
       | be about 60 bit computing with 4 bit tags on the pointers. I
       | generally thought this was a good idea way back when and am glad
       | to see it re-appearing now...
       | 
       | but noooo.....
        
       | rozzie wrote:
       | Beyond being 60-bits, programming the 6400/6500/6600/6700 was
       | interesting and memorable in other ways.
       | 
       | - Ones' complement (rather than two's complement) binary
       | representation of integers, and thus the need to cope with "-0"
       | in your code. Modern programmers are surprised that there was a
       | day when "-1" had a different binary representation than today.
       | 
        | - The CPU/CPUs were not actually 'in charge' of the machine.
        | There were ten 12-bit processors called PPUs (peripheral
        | processing units) which did all I/O, and which had the unique
        | capability of executing an "Exchange Jump" instruction to do a
        | CPU task switch. In a sense, the CPUs were 'compute peripherals'
        | to the PPUs.
       | 
        | - The architecture was fascinating in terms of memory hierarchy.
        | The "central memory" used by the CPUs was augmented by a much
        | larger "extended memory" (ECS - Extended Core Storage) with
        | block transfer primitives. One could implement high-scale
        | systems (such as the one I worked on - PLATO) that smoothly
        | staged data between CM, ECS, and disk.
       | 
       | In those days, there was a necessarily-direct relationship
       | between the machine language (the bit encoding of instructions
       | for operations & registers) and the assembly language (COMPASS).
       | As a developer it was incredibly enjoyable because, in Ellen
       | Ullman's words, you felt very 'close to the machine'.
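        | 
        | (For readers who haven't met ones' complement: a small C sketch,
        | an illustration, of why "-0" exists. Negation is bitwise NOT
        | within the word, so the all-ones pattern is a second zero that
        | software has to treat as equal to +0.)
        | 
        |     #include <stdint.h>
        |     
        |     /* Ones' complement in a 60-bit word held in a uint64_t. */
        |     #define MASK60 ((1ULL << 60) - 1)
        |     
        |     /* Negation is bitwise NOT, so ~0 is "-0". */
        |     uint64_t oc_negate(uint64_t x) { return ~x & MASK60; }
        |     
        |     /* Both encodings of zero must compare equal. */
        |     int oc_is_zero(uint64_t x) {
        |         return x == 0 || x == MASK60;
        |     }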
        
         | klelatti wrote:
         | Hi! Author of this short post here. Thanks so much for this
         | comment. It's been 'on my list' to do a much longer post on
         | Control Data and Seymour Cray for quite a while. This has
         | convinced me to bump it up the list!
        
           | KerrAvon wrote:
           | I'm a subscriber and I'd be happy to read as much of that as
           | you want to write. There's not that much biographical or
           | technical history coverage of Control Data and Cray other
           | than what's in the not-very-technical Supermen book and
            | anecdotes from one or two individual engineer memoirs.
        
       | tomxor wrote:
       | > The "practical" reasons all favour the 60-bit word. The
       | advantages of the 64-bit word are of the kind that appear more
       | fundamental. The best that can be said of the 64-bit word is that
       | "practical" considerations have a habit of disappearing in time.
       | 
        | Interesting. In long-lived cases this could be considered
        | another form of "technical debt" or "impedance mismatch", but
        | one due to ephemeral requirements.
        
       ___________________________________________________________________
       (page generated 2024-02-04 23:02 UTC)