[HN Gopher] The IBM 7094 and CTSS
___________________________________________________________________
The IBM 7094 and CTSS
Author : pncnmnp
Score : 20 points
Date : 2023-02-16 17:52 UTC (5 hours ago)
(HTM) web link (www.multicians.org)
(TXT) w3m dump (www.multicians.org)
| DougMerritt wrote:
| This had a 36-bit word, as did its predecessor and the later
| DEC 10 and DEC 20 systems famously used at MIT and elsewhere,
| and I understand that people found that word size to have
| various advantages once they'd used it extensively.
|
| But how was 36 bits originally decided upon back in the 1950s?
| What were the tradeoffs that made them say "two 18-bit half
| words are just right, but two 20-bit half words are clearly
| too big", and so on?
|
| These days everyone takes power-of-two character, integer,
| and floating-point sizes for granted; I'm not talking about
| that, I'm just wondering how they looked at it back at the
| beginning.
| kps wrote:
| Early computers had _many_ different word sizes. If you look
| at the 1955 Ballistic Research Laboratory document, _A survey
| of domestic electronic digital computing systems_, you find
| word lengths ranging from a low of 10 bits to a high of 42
| decimal digits. (pp234-235; Text:
| http://ed-thelen.org/comp-hist/BRL.html Scan:
| https://babel.hathitrust.org/cgi/pt?id=wu.89037555299&view=1...
| )
|
| The influential machine to look at here, then, would be the IBM
| 701.
|
| _Edit:_ The IBM 701 patent
| <https://patents.google.com/patent/US3197624A> says, "A binary
| number of a full word of 36 bits has a precision equal to that
| of about a 10 decimal digit number". It doesn't mention
| characters, and the machine was designed for scientific
| (military) calculations, not text/data processing. The
| associated IBM 716 printer did not use a 6-bit code. However,
| this does not rule out a 6-bit character code as a design
| consideration, even if this machine didn't use one, since IBM
| did have a large business in pre-computer data processing,
| using punched-card character sets that needed at least 6
| bits. That may have driven the design of shared peripherals
| like 7-track tape (6 bits plus parity) and led to a
| multiple-of-6 word size.
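|
| (For the curious, the seventh track was just a parity bit
| computed over the six data bits. A minimal sketch in Python
| of how such a bit is derived; IBM's drives used odd parity
| for binary data and even parity for BCD, so take the
| convention here as illustrative:)
|
|     def parity_track(frame: int) -> int:
|         # Odd parity: the 7 bits written to tape always
|         # carry an odd number of ones, so even an all-zero
|         # data frame produces a detectable mark.
|         ones = bin(frame & 0b111111).count("1")
|         return 0 if ones % 2 == 1 else 1
|
|     print(parity_track(0b101001))  # 3 ones -> parity bit 0
|     print(parity_track(0b101000))  # 2 ones -> parity bit 1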
| 082349872349872 wrote:
| backwards compatibility:
| https://en.wikipedia.org/wiki/36-bit_computing#History
|
| cf https://datatracker.ietf.org/doc/html/rfc4042 (note date)
| DougMerritt wrote:
| Your link does say: "Early binary computers aimed at the same
| market therefore often used a 36-bit word length. This was
| long enough to represent positive and negative integers to an
| accuracy of ten decimal digits (35 bits would have been the
| minimum). It also allowed the storage of six alphanumeric
| characters encoded in a six-bit character code."
|
| Which helps a little, but it still raises the question: why
| ten decimal digits? Why not nine or eleven or something?
|
| Are they implying that six characters of six bits was _the_
| critical issue? If so, why not seven characters? Or five?
| Etc.
| lapsed_lisper wrote:
| If you're keen to go down the wikipedia hole,
| https://en.wikipedia.org/wiki/Six-bit_character_code and
| then https://en.wikipedia.org/wiki/BCD_(character_encoding)
| explain that IBM created a 6-bit card punch encoding for
| alphanumeric data in 1928, that this code was adopted by
| other manufacturers, and that IBM's early electronic
| computers' word sizes were based on that code. (Hazarding a
| guess, but perhaps to take advantage of existing
| manufacturing processes for card-handling hardware, or for
| compatibility with customers' existing card-handling
| equipment, teletypes, etc.)
|
| So backward compatibility is likely the most historically
| accurate answer. Fewer bits wouldn't have been compatible,
| more bits might not have been usable!
|
| (Why 6 bit codes for punch cards in 1928? Dunno. Perhaps
| merely the physical properties of paper cards and the
| hardware for reading them. This article talks about that
| stuff: https://web.archive.org/web/20120511034402/http://ww
| w.ieeegh...)
| tablespoon wrote:
| > Why 6 bit codes for punch cards in 1928?
|
| I'm guessing it was the smallest practical size to encode
| alphanumeric data, and making it bigger than it needed to
| be would have added mechanical complexity and expense.
|
| https://en.wikipedia.org/wiki/Six-bit_character_code:
| "Six bits can only encode 64 distinct characters, so
| these codes generally include only the upper-case
| letters, the numerals, some punctuation characters, and
| sometimes control characters."
|
| There was apparently a 5-bit code in use since the 1870s,
| but that was only enough for alphas:
| https://en.wikipedia.org/wiki/Baudot_code
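|
| The sizes bear that out. A quick check (plain arithmetic,
| sketched here in Python; the symbol count is just letters
| plus digits, before punctuation or controls):
|
|     import math
|
|     alphanumerics = 26 + 10      # A-Z plus 0-9 = 36 symbols
|     min_bits = math.ceil(math.log2(alphanumerics))
|     print(min_bits, 2**5, 2**6)  # 6 32 64
|
| Five bits (32 codes) can't cover even the bare 36-symbol
| alphanumeric set in one shift-free code -- Baudot only
| managed it with letters/figures shift characters -- so six
| bits is the smallest size that works.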
| DougMerritt wrote:
| IIRC, six characters was also the maximum length of global
| symbols in C on early Unix systems, possibly just because
| that's what everyone was used to on earlier systems.
|
| But note that I asked about why six _characters_, not why
| six bits per character -- though your note is perhaps
| suggestive: maybe the six-character limit, like the six-bit
| character, goes back to something established (possibly for
| mechanical reasons) in 1928?
| lapsed_lisper wrote:
| Right, good questions. Pure conjecture on my part: maybe
| it's just that 36 is the smallest integral multiple of 6
| that also had enough bits to represent integers of the
| desired width?
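|
| A quick check of that arithmetic (a sketch in Python; the
| bound is standard arithmetic, not something from the
| thread):
|
|     import math
|
|     # Magnitude bits for ten decimal digits (values up to
|     # 9,999,999,999), plus one sign bit.
|     magnitude_bits = math.ceil(math.log2(10**10))   # 34
|     signed_bits = magnitude_bits + 1                # 35
|
|     # Smallest multiple of the 6-bit character size that
|     # holds a signed ten-digit integer.
|     word = 6 * math.ceil(signed_bits / 6)
|     print(magnitude_bits, signed_bits, word)        # 34 35 36
|
| Thirty bits falls short, and 36 is the first multiple of six
| that fits, matching the Wikipedia figures quoted above.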
| AnimalMuppet wrote:
| One reason: because ten digits was enough to accurately
| calculate the differences in atomic masses, which was
| essential for atomic-weapons design. (Source: my mother
| worked on 36-bit machines back in the 1950s. This was her
| explanation of the reason for the word size.)
| tgv wrote:
| Six characters is (in)famously the maximum length of linker
| symbols on IBM systems, at least for FORTRAN. Perhaps that
| had something to do with it. And of course, there comes a
| time when you have to pick a number, so why not 6x6, which
| is also good for 10^10 integers?
| DougMerritt wrote:
| I'm asking about the _original_ thinking, not about later
| backward compatibility.
| greenyoda wrote:
| The Wikipedia article cited above has some practical
| reasons:
|
| > _Early binary computers aimed at the same market
| therefore often used a 36-bit word length. This was long
| enough to represent positive and negative integers to an
| accuracy of ten decimal digits (35 bits would have been the
| minimum). It also allowed the storage of six alphanumeric
| characters encoded in a six-bit character code._
| DougMerritt wrote:
| Coincidentally I commented on this above at the same time
| you were posting -- thanks.
| hvs wrote:
| The entirety of my knowledge about CTSS is from Hackers:
| Heroes of the Computer Revolution, which, being a book about
| a bunch of misfits who hated the cloistered culture of IBM
| computing, wasn't complimentary. ITS (the Incompatible
| Timesharing System) was also developed with help from
| Project MAC.
|
| https://en.wikipedia.org/wiki/Hackers:_Heroes_of_the_Compute...
|
| https://en.wikipedia.org/wiki/Incompatible_Timesharing_Syste...
___________________________________________________________________
(page generated 2023-02-16 23:02 UTC)