Post AXAn0MkZJsOTl9VUlk by b0rk@social.jvns.ca
 (DIR) Post #ATPWD6yzRr83erT9qS by b0rk@social.jvns.ca
       2023-03-08T15:15:41Z
       
       0 likes, 1 repeats
       
       one final computer history question: why are Intel processors little endian? Why were some other processors big endian?
       Mostly interested in pre-1980 reasons (before the internet).
       I'd love citations / links if possible -- I got a lot of guesses and speculation in replies to the last computer history question.
       
 (DIR) Post #ATWAXgHh4M8W3DLA4O by fernomac@discuss.systems
       2023-03-08T15:56:40Z
       
       0 likes, 0 repeats
       
       @b0rk if you want to go even further into history: why are English decimal numbers big endian? (Arabic is written right-to-left, Arabic numbers are little endian, they got copied wholesale into western left-to-right languages which reversed the endianness 😁)
       
 (DIR) Post #ATWAXgnx8PwNfGQvLs by b0rk@social.jvns.ca
       2023-03-08T16:01:58Z
       
       0 likes, 0 repeats
       
       @fernomac I don't understand — are you saying that Arabic numbers used to be written right-to-left? AFAIK Arabic numbers are written left-to-right today.
       
 (DIR) Post #ATWAXhT4fWplinflRo by riley@toot.cat
       2023-03-11T20:45:48Z
       
       0 likes, 1 repeats
       
       @b0rk: Most people using Arabic-derived scripts and Hindu-Arabic numerals write right-to-left, and when they write numbers, least-significant-to-most-significant. So, the most significant digits are on the left, just like in English (but the numerals themselves may have slightly different shapes).
       Why this is so has disappeared in the sands of history, but speculatively, chances are high that early adopters of Hindu-Arabic numerals used them for business records and account-keeping, so they did a lot of addition and subtraction, and then it just made sense to write the digits in the same order as they would be processed.
       At the time of conversion, most Latin-script languages' users' first numeral system would have been the Roman numeral system, and while that isn't positional, it's conventionally written with the highest-magnitude component first. So keeping the geometric digit order while adopting the Hindu-Arabic system from right-to-left scripts into a left-to-right script, effectively reversing the semantic order, probably just made sense to them as well.
       @fernomac
       
 (DIR) Post #ATWqBWdOSmfDpGbljc by kenshirriff@oldbytes.space
       2023-03-11T17:52:48Z
       
       0 likes, 0 repeats
       
       @b0rk Intel processors are little-endian because they copied the Datapoint 2200, which was a serial computer, so it needed to be little endian.
       In more detail, the Datapoint 2200 (1971) was built with TTL chips and used shift registers for memory, so it operated on one bit at a time. This forced it to be little endian, since arithmetic needs to start with the lowest bit.
       Datapoint asked Intel and Texas Instruments about replacing the board of TTL chips with a single chip. Texas Instruments built the TMX 1795, and Intel slightly later built the Intel 8008. As copies of the Datapoint 2200, they were little endian. Intel improved the 8008, creating the 8080, 8086, and x86, keeping the little-endian architecture and many other Datapoint features.
       I can give you lots of references. See my IEEE Spectrum article https://spectrum.ieee.org/the-surprising-story-of-the-first-microprocessors
       Also see the book "Datapoint: The Lost Story of the Texans Who Invented the Personal Computer."
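       A minimal C sketch of the constraint described above: bit-serial addition has to start at the least significant bit so the carry can propagate upward, which is why a machine reading its operands out of shift registers wants the low-order bits first. The 8-bit width and function name here are illustrative only, not Datapoint code.

       #include <stdint.h>
       #include <stdio.h>

       /* Bit-serial addition: one bit per step, least significant bit
        * first, carrying into the next bit -- the order in which a
        * shift-register machine sees its operands arrive. */
       static uint8_t add_bit_serial(uint8_t a, uint8_t b)
       {
           uint8_t sum = 0, carry = 0;
           for (int i = 0; i < 8; i++) {            /* bit 0 (LSB) first */
               uint8_t abit = (a >> i) & 1;
               uint8_t bbit = (b >> i) & 1;
               sum |= (uint8_t)((abit ^ bbit ^ carry) << i);
               carry = (abit & bbit) | (carry & (abit ^ bbit));
           }
           return sum;
       }

       int main(void)
       {
           printf("%u\n", (unsigned)add_bit_serial(200, 55));  /* prints 255 */
           return 0;
       }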
       
 (DIR) Post #ATWqBXDYILaTdPWe5w by stuartmarks@mastodon.social
       2023-03-11T19:20:41Z
       
       0 likes, 0 repeats
       
       @kenshirriff @b0rk Excellent; this is the first rationale I’ve seen for byte order that didn’t resort to logical hand waving (such as, “It’s more consistent if low order bits are at lower addresses.”) Are you aware of a similar rationale for big-endian order? The only thing I’m aware of is the consistency with human L-to-R reading order as described in IEN 137, but not a hardware-based rationale.
       
 (DIR) Post #ATWqBXtjlVKbkFGKqe by kenshirriff@oldbytes.space
       2023-03-11T22:58:14Z
       
       1 likes, 0 repeats
       
       @stuartmarks @b0rk If you're processing punch cards, your numbers are big-endian because that's how they appear on the card. The IBM System/360 (1964) was the first popular system with bytes. Since it was designed to support punch card data processing, it was big-endian. A lot of systems were big-endian for compatibility with the market-dominating System/360.
       
 (DIR) Post #AXAn0GMx2lw9yAPY7k by b0rk@social.jvns.ca
       2023-03-08T16:21:28Z
       
       0 likes, 0 repeats
       
       I'm also trying to figure out the reasons for this "holy war" about whether networks should be big or little endian https://www.rfc-editor.org/ien/ien137.txt
       Was the motivation mostly that people didn't want their machines to pay the cost of byte order conversion? ("my platform is little endian, so if networks are little endian too, that will be better for me")
       again, I'd love references or links if possible, not speculation. I'm really struggling to find any archives of these discussions from 1980.
       
 (DIR) Post #AXAn0H86DTeQKOTCc4 by adamshostack@infosec.exchange
       2023-03-08T17:12:08Z
       
       0 likes, 0 repeats
       
       @b0rk @huitema might have some recollections/pointers.
       
 (DIR) Post #AXAn0HhC6ziw5EtEJc by huitema@social.secret-wg.org
       2023-03-08T18:07:16Z
       
       1 likes, 0 repeats
       
       @adamshostack @b0rk Danny Cohen's paper tells it all. CPUs and languages can be either big endian (ibm 360, English), little endian (intel 8086, Arabic) or baroque (pdp 11, German).  For networking standards, you have to pick just one, you certainly don't want baroque, and thus "network order" is "big endian". And of course the name picked by Danny Cohen is based on Gulliver's Travels by Jonathan Swift, and refers to ways of eating eggs...
       
 (DIR) Post #AXAn0IPVSFAYIfccNs by huitema@social.secret-wg.org
       2023-03-08T18:13:34Z
       
       0 likes, 0 repeats
       
       @adamshostack @b0rk Of course, different network protocols can pick different endianess. There was actually a proposal to pick little endian logic for QUIC, because most of the code runs on little endian architectures (x86, arm). But tradition won.
       
 (DIR) Post #AXAn0J1RBDViCJMuVU by SteveBellovin@mastodon.lawprofs.org
       2023-03-08T18:31:16Z
       
       0 likes, 0 repeats
       
       @huitema @adamshostack @b0rk I would add one thing to Christian's excellent answer: proper host programs generally don't know or care about endianness. They call, e.g., ntohl() and htonl()—network to host long and host to network long—to do appropriate order conversions. If the host machine is bigendian, they're identity functions—but you use them anyway for the sake of portable code.
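       A small C sketch of the pattern Steve describes, assuming a 32-bit field being written into and read out of a packet buffer; the helper names put_u32/get_u32 are made up for the example.

       #include <stdint.h>
       #include <string.h>
       #include <arpa/inet.h>   /* htonl(), ntohl() */

       /* Sender: convert host order to network (big-endian) order before
        * copying the value into the outgoing buffer. */
       void put_u32(unsigned char *buf, uint32_t value)
       {
           uint32_t net = htonl(value);
           memcpy(buf, &net, sizeof net);
       }

       /* Receiver: copy out of the buffer, then convert back to host order.
        * On a bigendian host both calls are identity functions, but the
        * code stays portable either way. */
       uint32_t get_u32(const unsigned char *buf)
       {
           uint32_t net;
           memcpy(&net, buf, sizeof net);
           return ntohl(net);
       }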
       
 (DIR) Post #AXAn0JnIJHn8ajl86K by danmcd@hostux.social
       2023-03-08T18:35:10Z
       
       0 likes, 0 repeats
       
       @SteveBellovin @huitema @adamshostack @b0rk Proper use of ntoh/hton definitely does this.
       More than 30 years ago Sean O'Malley said in a talk that systems people should never do typing, and the Compiler Cavalry should save us all. I agree with him, and mentioned it indirectly in this blog post from 2009: https://kebe.com/blog/?p=434
       
 (DIR) Post #AXAn0Ke78u2hEYTJQm by b0rk@social.jvns.ca
       2023-03-08T18:38:42Z
       
       0 likes, 0 repeats
       
       @danmcd @SteveBellovin @huitema @adamshostack that all makes sense. Still trying to figure out an answer to my original question though -- why was the choice between big & little endian a "holy war" in 1980? Why were people fighting about it? What were the stakes?
       
 (DIR) Post #AXAn0LM4VTCjQt2Pwm by adamshostack@infosec.exchange
       2023-03-08T19:08:34Z
       
       0 likes, 0 repeats
       
       @b0rk @danmcd @SteveBellovin @huitema the computers that had to translate were slowed down by it
       
 (DIR) Post #AXAn0M2FycwrXim6hU by SteveBellovin@mastodon.lawprofs.org
       2023-03-08T19:10:47Z
       
       0 likes, 0 repeats
       
       @adamshostack @b0rk @danmcd @huitema That plus strongly-held opinions on the proper host architecture. Me—I’ve never understood why anyone would like littleendian designs, but most of my early experience was on IBM systems which were all bigendian. (No, this is *not* an invitation to “educate” me…)
       
 (DIR) Post #AXAn0MkZJsOTl9VUlk by b0rk@social.jvns.ca
       2023-03-08T21:09:46Z
       
       0 likes, 0 repeats
       
       @SteveBellovin I'm really curious about why people have such strongly held beliefs -- do little endian designs negatively affect you somehow? (how?) (I have no personal stake in this, I just don't understand why people care either way)
       
 (DIR) Post #AXAn0NSsf7q5yaEsq0 by SteveBellovin@mastodon.lawprofs.org
       2023-03-08T21:51:26Z
       
       0 likes, 0 repeats
       
       @b0rk To me, there are two big issues. One is, as @danmcd mentioned (https://hostux.social/@danmcd/109989837059338898), observability: it’s easier when reading packet dumps to see things in bigendian order. (We used to call this the NUXI problem: UNIX written in the wrong byte order.) The second point is more subtle and more debatable: architectures work better if they’re “clean”. Making QUIC littleendian would have been a disaster, because there are no existing primitives akin to htonl() for writing portable code.
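       A short C illustration of the observability point: dumping the bytes of a 32-bit value as stored natively on a littleendian machine versus as stored in network order. The value 0x0A0B0C0D is arbitrary, chosen so each byte is easy to spot.

       #include <stdint.h>
       #include <stdio.h>
       #include <string.h>
       #include <arpa/inet.h>

       int main(void)
       {
           uint32_t value = 0x0A0B0C0D;
           uint32_t net = htonl(value);
           unsigned char native[4], network[4];

           memcpy(native, &value, 4);    /* host byte order */
           memcpy(network, &net, 4);     /* network (big-endian) order */

           /* On an x86 (littleendian) host this prints "0d 0c 0b 0a" */
           for (int i = 0; i < 4; i++) printf("%02x ", native[i]);
           printf("\n");

           /* ...and this prints "0a 0b 0c 0d", the way you'd read the
            * number in a packet dump. */
           for (int i = 0; i < 4; i++) printf("%02x ", network[i]);
           printf("\n");
           return 0;
       }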
       
 (DIR) Post #AXAn0QD2STwwUJqm3s by huitema@social.secret-wg.org
       2023-03-08T22:56:09Z
       
       1 likes, 0 repeats
       
       @SteveBellovin @b0rk @danmcd Lots of early protocol choices were made based on CPU cost. Look for example at the use of an exponential average with coefficient 1/8 for smoothing RTT measurements -- with 1/8 chosen because it can be computed as >>3. People were counting the number of instructions when evaluating specifications!
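       For concreteness, a sketch of that kind of update in C (roughly the TCP-style smoothed-RTT computation with gain 1/8); the variable names are illustrative, not taken from any particular implementation.

       #include <stdint.h>

       /* Exponentially weighted moving average with gain 1/8:
        *     srtt = srtt + (sample - srtt) / 8
        * On a two's-complement machine the divide-by-8 is a single
        * arithmetic shift right by 3, which is why 1/8 was such a
        * cheap coefficient to pick. */
       static int32_t srtt;   /* smoothed RTT, same units as the samples */

       void rtt_update(int32_t sample)
       {
           srtt += (sample - srtt) >> 3;
       }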